typo in training log documentation (#2452)
Corrected a typo in the training log documentation, changing "seemded" to "seemed". This improves the readability and professionalism of the document.

Signed-off-by: CharlesCNorton <135471798+CharlesCNorton@users.noreply.github.com>
parent ea4c5546d8
commit ce4dc2e789
@@ -247,7 +247,7 @@ We trained multiple [GPT-J models](https://huggingface.co/EleutherAI/gpt-j-6b) w
We release the checkpoint after epoch 1.
-Using Atlas, we extracted the embeddings of each point in the dataset and calculated the loss per sequence. We then uploaded [this to Atlas](https://atlas.nomic.ai/map/gpt4all-j-post-epoch-1-embeddings) and noticed that the higher loss items seem to cluster. On further inspection, the highest density clusters seemded to be of prompt/response pairs that asked for creative-like generations such as `Generate a story about ...` ![](figs/clustering_overfit.png)
+Using Atlas, we extracted the embeddings of each point in the dataset and calculated the loss per sequence. We then uploaded [this to Atlas](https://atlas.nomic.ai/map/gpt4all-j-post-epoch-1-embeddings) and noticed that the higher loss items seem to cluster. On further inspection, the highest density clusters seemed to be of prompt/response pairs that asked for creative-like generations such as `Generate a story about ...` ![](figs/clustering_overfit.png)
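The paragraph touched by this diff describes computing a loss per sequence and attaching it to embeddings uploaded to Atlas. For readers reproducing that analysis, below is a minimal sketch of per-sequence loss for a causal LM. The model name comes from the training log; `texts` and the embedding file are illustrative stand-ins, and the `atlas.map_embeddings` call reflects the nomic Python client as it existed around the time of this commit (newer releases expose `atlas.map_data`), so check the current client docs before relying on it.

```python
import numpy as np
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EleutherAI/gpt-j-6b"  # model referenced in the training log

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # GPT-J ships without a pad token


@torch.no_grad()
def per_sequence_loss(texts: list[str]) -> list[float]:
    """Mean next-token cross-entropy per sequence, ignoring padding."""
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    logits = model(**batch).logits
    # Shift so the logits at position t are scored against the token at t+1.
    shift_logits = logits[:, :-1, :]
    shift_labels = batch["input_ids"][:, 1:]
    mask = batch["attention_mask"][:, 1:].float()
    token_loss = F.cross_entropy(
        shift_logits.transpose(1, 2),  # (batch, vocab, seq), as cross_entropy expects
        shift_labels,
        reduction="none",
    )
    # Average token losses over real (non-padding) positions only.
    return ((token_loss * mask).sum(dim=1) / mask.sum(dim=1)).tolist()


# Illustrative usage: attach per-sequence losses to precomputed embeddings
# and map them, mirroring the loss-cluster inspection in the training log.
texts = ["Generate a story about a dragon.", "What is the capital of France?"]
losses = per_sequence_loss(texts)

from nomic import atlas  # client API circa this commit; may differ in newer releases

embeddings = np.load("sequence_embeddings.npy")  # assumed precomputed, one row per text
atlas.map_embeddings(
    embeddings=embeddings,
    data=[{"text": t, "loss": l} for t, l in zip(texts, losses)],
)
```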