Mirror of https://github.com/oobabooga/text-generation-webui.git, synced 2024-10-01 01:26:03 -04:00
Update LLaMA-model.md
parent 038fa3eb39
commit f5c36cca40
@@ -3,7 +3,7 @@ LLaMA is a Large Language Model developed by Meta AI.
 
 It was trained on more tokens than previous models. The result is that the smallest version with 7 billion parameters has similar performance to GPT-3 with 175 billion parameters.
 
-This guide will cover usage through the official `transformers` implementation. For 4-bit mode, head over to [GPTQ models (4 bit mode)](https://github.com/oobabooga/text-generation-webui/wiki/GPTQ-models-(4-bit-mode)).
+This guide will cover usage through the official `transformers` implementation. For 4-bit mode, head over to [GPTQ models (4 bit mode)](GPTQ-models-(4-bit-mode).md).
 
 ## Getting the weights
 
@@ -42,4 +42,4 @@ python convert_llama_weights_to_hf.py --input_dir /path/to/LLaMA --model_size 7B
 
 ```python
 python server.py --model llama-7b
-```
+```
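For context on the document being edited above: the page walks through converting the original LLaMA weights with `convert_llama_weights_to_hf.py` and then loading them through the official `transformers` implementation. Below is a minimal sketch of that loading step. It is not part of the commit; the checkpoint path `/tmp/outputs/llama-7b` is simply the `--output_dir` shown in the hunk header, and the prompt and generation settings are illustrative assumptions.

```python
# Not part of the commit above: a minimal sketch of loading a LLaMA checkpoint
# converted to the Hugging Face format, assuming it was written to the
# --output_dir used in the hunk header (/tmp/outputs/llama-7b).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "/tmp/outputs/llama-7b"  # output of convert_llama_weights_to_hf.py

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir)

# Generate a short continuation to confirm the converted weights load correctly.
inputs = tokenizer("LLaMA is a large language model", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```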