Update TRAINING_LOG.md

Zach Nussbaum 2023-03-28 13:24:05 -07:00 committed by GitHub
parent d55df6b254
commit f7b6263749


@@ -231,3 +231,7 @@ We additionally train a full model
| Warmup Steps | 100 |
Taking inspiration from [the Alpaca Repo](https://github.com/tatsu-lab/stanford_alpaca), we scale the learning rate roughly by `sqrt(k)`, where `k` is the factor by which the batch size increased; Alpaca used a batch size of 128 and a learning rate of 2e-5.
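
As a minimal sketch of this scaling rule, assuming Alpaca's batch size of 128 and learning rate of 2e-5 as the reference point (the `new_batch_size` value below is a hypothetical placeholder, not the batch size we actually used):

```python
import math

# Reference hyperparameters from the Alpaca repo, as cited above.
alpaca_batch_size = 128
alpaca_lr = 2e-5

# Hypothetical larger batch size used for illustration only.
new_batch_size = 1024

# k is the factor by which the batch size increased.
k = new_batch_size / alpaca_batch_size

# Scale the learning rate by sqrt(k).
scaled_lr = alpaca_lr * math.sqrt(k)
print(f"k = {k:.1f}, scaled learning rate = {scaled_lr:.2e}")
```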
Comparing our LoRA model to the [Alpaca LoRa](https://huggingface.co/tloen/alpaca-lora-7b), our model achieves lower perplexity. Training for 3 epochs performed best, both on perplexity and on qualitative examples.
We tried training a full model using the parameters above, but found that the model overfit during the second epoch.