<h1 align="center">GPT4All</h1>
<p align="center">Demo, data and code to train an assistant-style large language model</p>
<p align="center">
<a href="https://s3.amazonaws.com/static.nomic.ai/gpt4all/2023_GPT4All_Technical_Report.pdf">:green_book: Technical Report</a>
</p>
2023-03-28 15:55:45 -04:00
![gpt4all-lora-demo](https://user-images.githubusercontent.com/13879686/228352356-de66ca7a-df70-474e-b929-2e3656165051.gif)
# Try it yourself
You can download pre-compiled LLaMa C++ interactive chat binaries here (see the example run below):

- [OSX Executable](https://s3.amazonaws.com/static.nomic.ai/gpt4all/models/gpt4all-lora-quantized-OSX-m1)
- [Intel/Windows]()

and the quantized model here:

- [gpt4all-quantized]()
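
Once downloaded, starting a chat session looks roughly like the following. This is a minimal sketch that assumes the macOS binary and the model file sit in your current directory; the exact filenames and model-lookup behavior depend on what the links above serve, so adjust to match what you downloaded:

```bash
# Assumption: binary and quantized model downloaded into the current directory.
chmod +x gpt4all-lora-quantized-OSX-m1   # make the downloaded binary executable
./gpt4all-lora-quantized-OSX-m1          # starts the interactive chat in your terminal
```
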
# Reproducibility
You can find the trained LoRA model weights at:
- [gpt4all-lora](https://huggingface.co/nomic-ai/gpt4all-lora)
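
If you want a local copy of the adapter weights, one option is to clone the Hugging Face model repo with git-lfs; this is a sketch of standard Hub usage, not a step this repository requires:

```bash
# Model repos on the Hub are git repos; the large weight files arrive via git-lfs.
git lfs install
git clone https://huggingface.co/nomic-ai/gpt4all-lora
```
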
We are not distributing a LLaMa 7B checkpoint.
You can reproduce our trained model by doing the following:
## Setup
Clone the repo:

`git clone --recurse-submodules git@github.com:nomic-ai/gpt4all.git`

`git submodule update --init`
Set up the environment:

```bash
python -m pip install -r requirements.txt
cd transformers
pip install -e .
cd ../peft
pip install -e .
```
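
Note that the `transformers` and `peft` installs above are editable installs into whatever Python environment is active, so you may want to have created and activated a virtual environment beforehand. A minimal sketch using the standard venv module (optional; an assumption about your workflow, not a repo requirement):

```bash
# Create and activate an isolated environment for the installs above.
python -m venv .venv
source .venv/bin/activate
```
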
## Generate
```bash
python generate.py --config configs/generate/generate.yaml --prompt "Write a script to reverse a string in Python"
```
## Train
```bash
accelerate launch --dynamo_backend=inductor --num_processes=8 --num_machines=1 --machine_rank=0 --deepspeed_multinode_launcher standard --mixed_precision=bf16 --use_deepspeed --deepspeed_config_file=configs/deepspeed/ds_config.json train.py --config configs/train/finetune-7b.yaml
```
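
The command above assumes a single machine with 8 GPUs (`--num_processes=8`) and a DeepSpeed config at `configs/deepspeed/ds_config.json`. On different hardware the main knob is `--num_processes`; for example, a 2-GPU variant might look like this (an untested sketch; batch sizes in the train config may also need adjusting):

```bash
# Same launcher as above, scaled to a hypothetical 2-GPU single machine.
accelerate launch --dynamo_backend=inductor --num_processes=2 --num_machines=1 --machine_rank=0 --deepspeed_multinode_launcher standard --mixed_precision=bf16 --use_deepspeed --deepspeed_config_file=configs/deepspeed/ds_config.json train.py --config configs/train/finetune-7b.yaml
```
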
If you utilize this repository, models, or data in a downstream project, please consider citing it with:
```
@misc{gpt4all,
  author = {Yuvanesh Anand and Zach Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar},
  title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/nomic-ai/gpt4all}},
}
```