## 🦙🌲🤏 Alpaca (Low-Rank Edition)
**The code in this repo is not yet fully tested. I'm still retraining the model with the outputs included; once that's done, the code in `generate.py` should be fully functional.**
This repository contains code for reproducing the [Stanford Alpaca results](https://github.com/tatsu-lab/stanford_alpaca#data-release).
Until Jason Phang's [LLaMA implementation](https://github.com/huggingface/transformers/pull/21955) is merged upstream, users will need to install `transformers` from his fork.
For fine-tuning we use [PEFT](https://github.com/huggingface/peft) to train low-rank approximations over the LLaMA foundation model.
Also included is code to download the foundation model from the Huggingface model hub.
(Only run this code if you have permission from Meta Platforms Inc.!)
Once I've finished running the finetuning code myself, I'll put the LoRA on the Hub as well, and the code in `generate.py` should work as expected.
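For reference, this is roughly what PEFT's low-rank wrapping looks like. It's a minimal sketch; the hyperparameter values and `target_modules` names here are illustrative, not necessarily the ones `finetune.py` uses:

```python
# Minimal sketch of wrapping a loaded LLaMA model with LoRA adapters via PEFT.
# All values below are illustrative -- check finetune.py for the real settings.
from peft import LoraConfig, get_peft_model

config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)  # `model` is the loaded LLaMA model
model.print_trainable_parameters()     # only the small adapter weights train
```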
### Setup
1. Install dependencies, including **zphang's `transformers` fork**:
```
pip install -q datasets loralib sentencepiece
pip install -q git+https://github.com/zphang/transformers@llama_push
pip install -q git+https://github.com/huggingface/peft.git
```
2. [Install bitsandbytes from source](https://github.com/TimDettmers/bitsandbytes/blob/main/compile_from_source.md)
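If you want to confirm the installs worked, a quick sanity check (a sketch; the class names below are the ones used in zphang's `llama_push` branch at the time of writing):

```python
# Sanity check: these imports fail if any of the installs above went wrong.
import torch
import bitsandbytes  # noqa: F401 -- an error here usually means a bad source build
from peft import PeftModel  # noqa: F401
from transformers import LLaMAForCausalLM, LLaMATokenizer  # noqa: F401

assert torch.cuda.is_available(), "8-bit loading via bitsandbytes needs a CUDA GPU"
```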
### Inference (`generate.py`)
This file loads the `decapoda-research/llama-7b-hf` base model from the Huggingface model hub along with the LoRA weights from `tloen/alpaca-lora-7b`, then runs inference on a specified input. Treat it as example code for using the model, and modify it as needed.
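The core loading-and-generation logic looks roughly like the sketch below (not the file itself; class names are those in the `transformers` fork, and the generation parameters are illustrative):

```python
import torch
from peft import PeftModel
from transformers import GenerationConfig, LLaMAForCausalLM, LLaMATokenizer

tokenizer = LLaMATokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = LLaMAForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,   # requires bitsandbytes
    device_map="auto",
)
# Attach the low-rank adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(model, "tloen/alpaca-lora-7b")

# (The full script also formats the input with Alpaca's instruction prompt template.)
inputs = tokenizer("Tell me about alpacas.", return_tensors="pt").to("cuda")
with torch.no_grad():
    out = model.generate(
        **inputs,
        generation_config=GenerationConfig(temperature=0.7, top_p=0.75, num_beams=4),
        max_new_tokens=128,
    )
print(tokenizer.decode(out[0], skip_special_tokens=True))
```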
### Training (`finetune.py`)
Under construction. If you're impatient, note that this file contains a set of hardcoded hyperparameters you should feel free to modify.
PRs adapting this code to multi-GPU setups and larger models are always welcome.
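For a sense of the file's overall shape, here's a rough sketch using the standard `transformers` Trainer; it assumes the PEFT-wrapped `model` and `tokenizer` from the snippets above plus a tokenized `data` set, and every value is a placeholder rather than the hardcoded one:

```python
import transformers

# Placeholder hyperparameters -- see finetune.py for the actual hardcoded values.
MICRO_BATCH_SIZE = 4   # per-device batch size
BATCH_SIZE = 128       # effective batch size via gradient accumulation
EPOCHS = 3
LEARNING_RATE = 3e-4

trainer = transformers.Trainer(
    model=model,         # the PEFT-wrapped LLaMA model from the sketch above
    train_dataset=data,  # tokenized Alpaca instruction data
    args=transformers.TrainingArguments(
        per_device_train_batch_size=MICRO_BATCH_SIZE,
        gradient_accumulation_steps=BATCH_SIZE // MICRO_BATCH_SIZE,
        num_train_epochs=EPOCHS,
        learning_rate=LEARNING_RATE,
        fp16=True,
        output_dir="lora-alpaca",
    ),
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-alpaca")  # writes only the small adapter weights
```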
### To do
- [ ] Hyperparameter tuning
- [ ] Documentation for notebook
- [ ] Support for `13b`, `30b`, `65b`
- [ ] Train a version that doesn't waste tokens on the prompt header
- [ ] Inference CLI and evaluation
- [ ] Better disclaimers about why using LLaMA without permission is very bad!