Mirror of https://github.com/oobabooga/text-generation-webui.git, synced 2024-10-01 01:26:03 -04:00.
Commit 7618f3fe8c: This works on a 4GB card now:

```
python server.py --model llama-7b-hf --gptq-bits 4 --gptq-pre-layer 20
```
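Roughly speaking, `--gptq-pre-layer 20` places only the first 20 transformer layers on the GPU and leaves the rest on the CPU, which is what lets the 4-bit llama-7b fit in 4 GB of VRAM. The sketch below is only a rough illustration of that partial-placement idea; the `TinyBlock` module and layer counts are made up for the example, and this is not the actual GPTQ_loader.py logic.

```
# Illustrative sketch only: put the first `pre_layer` blocks on the GPU and keep
# the remainder on the CPU, trading generation speed for a smaller VRAM footprint.
# TinyBlock is a stand-in module, not a real llama layer.
import torch
import torch.nn as nn


class TinyBlock(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x):
        return torch.relu(self.linear(x))


def place_layers(num_layers=32, pre_layer=20):
    layers = nn.ModuleList(TinyBlock() for _ in range(num_layers))
    gpu = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    for i, layer in enumerate(layers):
        # Blocks below the pre_layer cutoff go to the GPU; the rest stay on the CPU.
        layer.to(gpu if i < pre_layer else "cpu")
    return layers


if __name__ == "__main__":
    blocks = place_layers(num_layers=32, pre_layer=20)
    on_gpu = sum(next(b.parameters()).device.type == "cuda" for b in blocks)
    print(f"{on_gpu}/{len(blocks)} blocks on GPU")
```

Raising the pre-layer count speeds up generation at the cost of more VRAM; lowering it does the opposite.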
callbacks.py
chat.py
deepspeed_parameters.py
extensions.py
GPTQ_loader.py
html_generator.py
LoRA.py
models.py
RWKV.py
shared.py
text_generation.py
ui.py