Mirror of https://github.com/nomic-ai/gpt4all.git (synced 2024-10-01 01:06:10 -04:00)
Update README.md
This commit is contained in:
parent 5264da9dda
commit 975f924ded
@@ -8,7 +8,14 @@
# Reproducibility
To reproduce our trained assistant model with LoRA, do the following:
You can find trained LoRA model weights at:
- gpt4all-lora https://huggingface.co/nomic-ai/vicuna-lora-multi-turn
We are not distributing the LLaMA 7B checkpoint that these LoRA weights need to be applied to.
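As a rough sketch (not a documented recipe from this repo), the LoRA weights can be applied on top of a locally obtained LLaMA 7B checkpoint with Hugging Face `transformers` and `peft`; the local path below is a placeholder and assumes the checkpoint has already been converted to the Hugging Face format:

```python
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

# Placeholder path: your own LLaMA 7B checkpoint, converted to HF format.
base_path = "path/to/llama-7b-hf"
base = LlamaForCausalLM.from_pretrained(base_path)
tokenizer = LlamaTokenizer.from_pretrained(base_path)

# Apply the gpt4all-lora adapter weights on top of the base model.
model = PeftModel.from_pretrained(base, "nomic-ai/vicuna-lora-multi-turn")

prompt = "Explain what a LoRA adapter is."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```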
To reproduce our LoRA training run, do the following:
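The actual steps follow in the Setup section below. Purely as an illustration of what a LoRA fine-tuning run looks like, here is a minimal sketch using the `peft` library; the dataset path and every hyperparameter are placeholders, not the settings used for gpt4all-lora:

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    DataCollatorForLanguageModeling,
    LlamaForCausalLM,
    LlamaTokenizer,
    Trainer,
    TrainingArguments,
)

base_path = "path/to/llama-7b-hf"  # placeholder LLaMA 7B checkpoint in HF format
base = LlamaForCausalLM.from_pretrained(base_path)
tokenizer = LlamaTokenizer.from_pretrained(base_path)
tokenizer.pad_token = tokenizer.eos_token

# Illustrative LoRA adapter configuration (values are assumptions).
lora_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)

# Placeholder text dataset; the real run uses the repo's assistant data.
data = load_dataset("text", data_files={"train": "train.txt"})["train"]
data = data.map(
    lambda x: tokenizer(x["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=4,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # saves only the adapter weights
```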
## Setup