README formatting

commit 6707775517
parent 8f7447ea01
@@ -1,8 +1,8 @@
-# alpaca-lora (WIP)
+## alpaca-lora (WIP)

 This repository contains code for reproducing the [Stanford Alpaca results](https://github.com/tatsu-lab/stanford_alpaca#data-release). Users will need to be ready to fork `transformers`.

-# Setup
+### Setup

 1. Install dependencies (**install zphang's transformers fork**)
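For orientation, the setup step in the hunk above boils down to swapping the stock `transformers` for zphang's fork and pulling `peft` from source. A minimal sketch follows; only the `peft` line is quoted verbatim in the hunk header below, and the fork location and uninstall step are assumptions:

```bash
# Sketch of the README's dependency install (step 1 above).
pip uninstall -y transformers                                 # drop the stock build first (assumed step)
pip install -q git+https://github.com/zphang/transformers     # zphang's LLaMA fork (assumed URL)
pip install -q git+https://github.com/huggingface/peft.git    # quoted in the hunk header below
```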
@@ -16,11 +16,11 @@ pip install -q git+https://github.com/huggingface/peft.git
 2. [Install bitsandbytes from source](https://github.com/TimDettmers/bitsandbytes/blob/main/compile_from_source.md)


-# Inference
+### Inference

 See `generate.py`. This file reads the `decapoda-research/llama-7b-hf` model from the Huggingface model hub and the LoRA weights from `tloen/alpaca-lora-7b`, and runs inference on a specified input. Users should treat this as example code for the use of the model, and modify it as needed.


-# Training
+### Training

 Under construction.
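To make the `generate.py` description above concrete, here is a minimal sketch of that inference flow: load the base model, layer the LoRA weights on top, and generate. It uses today's `transformers`/`peft` class names rather than the fork this commit targets; only the two model IDs come from the README, and the prompt and generation settings are illustrative.

```python
# Minimal sketch of the generate.py flow described above. The model IDs
# come from the README; class names use the current transformers/peft
# APIs, not the zphang fork, and the prompt/settings are illustrative.
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)
# Layer the Alpaca LoRA weights on top of the base model.
model = PeftModel.from_pretrained(model, "tloen/alpaca-lora-7b")
model.eval()

prompt = "Tell me about alpacas."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```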