# 🦙🌲🤏 Alpaca (Low-Rank Edition)

**Instruct-tune LLaMA on consumer hardware**

The code in this repo is not yet fully tested. I'm still retraining the model with the outputs included.

This repository contains code for reproducing the Stanford Alpaca results using low-rank adaptation (LoRA). Because LLaMA support has not yet been merged into `transformers`, users will need to install a fork of it (see Setup below).

## Setup

1. Install dependencies, including zphang's `transformers` fork with LLaMA support:

   ```bash
   pip install -q datasets accelerate loralib sentencepiece
   pip install -q git+https://github.com/zphang/transformers@llama_push
   pip install -q git+https://github.com/huggingface/peft.git
   ```

2. Install `bitsandbytes` from source, then run the sanity check after this list.
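
Once both steps are done, a quick import check can confirm the environment. The LLaMA class names below are the ones zphang's fork exposed at the time of writing; treat them as an assumption and verify against the fork itself:

```python
# Sanity check: confirm the forked transformers, peft, and the source-built
# bitsandbytes all import. LLaMAForCausalLM / LLaMATokenizer are the class
# names from zphang's fork (assumption; adjust if they have been renamed).
import bitsandbytes  # noqa: F401
from peft import PeftModel  # noqa: F401
from transformers import LLaMAForCausalLM, LLaMATokenizer  # noqa: F401

print("Environment looks OK.")
```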

## Inference

See `generate.py`. This script loads the `decapoda-research/llama-7b-hf` model from the Hugging Face Hub along with the LoRA weights from `tloen/alpaca-lora-7b`, then runs inference on a specified input. Treat it as example code for using the model, and modify it as needed.
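
For orientation, here is a condensed sketch of what such a script does. It is not a drop-in replacement for `generate.py` (which also handles details like the Alpaca-style prompt header); the class names assume zphang's fork, and the sampling settings are illustrative assumptions:

```python
# Condensed inference sketch: load the 8-bit base model, overlay the LoRA
# adapter, and generate from a raw input string.
import torch
from peft import PeftModel
from transformers import GenerationConfig, LLaMAForCausalLM, LLaMATokenizer

tokenizer = LLaMATokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = LLaMAForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,          # int8 weights via bitsandbytes
    torch_dtype=torch.float16,
    device_map="auto",
)
# Overlay the Alpaca LoRA adapter on the frozen base model.
model = PeftModel.from_pretrained(model, "tloen/alpaca-lora-7b")
model.eval()

inputs = tokenizer("Tell me about alpacas.", return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        input_ids=inputs["input_ids"],
        generation_config=GenerationConfig(temperature=0.1, top_p=0.75, num_beams=4),
        max_new_tokens=128,
    )
print(tokenizer.decode(output[0], skip_special_tokens=True))
```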

## Training

Under construction.
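
In the meantime, `finetune.py` presumably follows the standard PEFT LoRA recipe: freeze the base model and train small rank-decomposition matrices injected into the attention layers. A rough sketch under that assumption (the rank, alpha, and target modules below are illustrative, not necessarily the repo's settings):

```python
# Rough LoRA fine-tuning sketch using peft (illustrative hyperparameters;
# not necessarily what finetune.py ships with).
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training
from transformers import LLaMAForCausalLM

model = LLaMAForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf", load_in_8bit=True, device_map="auto"
)
model = prepare_model_for_int8_training(model)  # cast norms, enable input grads

config = LoraConfig(
    r=8,                                  # adapter rank (assumption)
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections (assumption)
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the LoRA matrices are trainable
# ...then train with a standard transformers Trainer on alpaca_data.json.
```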

## To do

- Hyperparameter tuning
- Documentation for notebook
- Support for 13b, 30b, 65b
- Train a version that doesn't waste tokens on the prompt header
- Inference CLI and evaluation
- Better disclaimers about why using LLaMA without permission is very bad!