Minor cleanup

oobabooga 2023-06-09 00:30:22 -03:00
parent 92b45cb3f5
commit c6552785af
2 changed files with 7 additions and 7 deletions


@@ -28,17 +28,15 @@ Once downloaded, it will be automatically applied to **every** `LlamaForCausalLM
```
pip install protobuf==3.20.1
```
-2. Use the script below to convert the model in `.pth` format that you, a fellow academic, downloaded using Meta's official link:
+2. Use the script below to convert the model in `.pth` format that you, a fellow academic, downloaded using Meta's official link.
### Convert LLaMA to HuggingFace format
-If you have `transformers` installed in place
+If you have `transformers` installed in place:
```
python -m transformers.models.llama.convert_llama_weights_to_hf --input_dir /path/to/LLaMA --model_size 7B --output_dir /tmp/outputs/llama-7b
```
-Otherwise download script [convert_llama_weights_to_hf.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py)
+Otherwise download [convert_llama_weights_to_hf.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py) first and run:
```
python convert_llama_weights_to_hf.py --input_dir /path/to/LLaMA --model_size 7B --output_dir /tmp/outputs/llama-7b
```

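Once either command finishes, the output directory is a standard Hugging Face checkpoint. As a minimal sketch (not part of this commit) of verifying the result, assuming the converted weights landed in `/tmp/outputs/llama-7b` as in the commands above, using the `LlamaForCausalLM`/`LlamaTokenizer` classes from `transformers`:

```
# Sketch: load the converted checkpoint and run a short generation.
# The path matches the --output_dir used above; the prompt is illustrative.
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("/tmp/outputs/llama-7b")
model = LlamaForCausalLM.from_pretrained("/tmp/outputs/llama-7b")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
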

@@ -5,12 +5,14 @@ from threading import Thread
from extensions.api.util import build_parameters, try_start_cloudflared
from modules import shared
from modules.chat import generate_chat_reply
-from modules.text_generation import encode, generate_reply, stop_everything_event
-from modules.models import load_model, unload_model
from modules.LoRA import add_lora_to_model
+from modules.models import load_model, unload_model
+from modules.text_generation import (encode, generate_reply,
+                                     stop_everything_event)
from modules.utils import get_available_models
from server import get_model_specific_settings, update_model_parameters

def get_model_info():
    return {
        'model_name': shared.model_name,
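
For context, `get_model_info()` builds the payload for the API's model-info response. A minimal client-side sketch, with the caveat that the endpoint URL and response keys below are assumptions for illustration and do not appear in this diff:

```
# Sketch: fetch the model info over HTTP. The URL and the response
# shape are assumptions; adjust to the actual API routes.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:5000/api/v1/model") as resp:
    info = json.load(resp)

print(info)  # e.g. {'model_name': ...}
```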