Mirror of https://github.com/oobabooga/text-generation-webui.git, synced 2024-10-01 01:26:03 -04:00
Add AutoGPTQ wheels to requirements.txt
This commit is contained in:
parent f344ccdddb · commit d0aca83b53
@@ -18,7 +18,7 @@ There are two ways of loading GPTQ models in the web UI at the moment:
 * supports more models
 * standardized (no need to guess any parameter)
 * is a proper Python library
-* no wheels are presently available so it requires manual compilation
+* ~no wheels are presently available so it requires manual compilation~
 * supports loading both triton and cuda models
 
 For creating new quantizations, I recommend using AutoGPTQ: https://github.com/PanQiWei/AutoGPTQ
@@ -175,7 +175,7 @@ python server.py --model llama-7b-4bit-128g --listen --lora tloen_alpaca-lora-7b
 
 ### Installation
 
-To load a model quantized with AutoGPTQ in the web UI, you need to first manually install the AutoGPTQ library:
+No additional steps are necessary as AutoGPTQ is already in the `requirements.txt` for the webui. If you still want or need to install it manually for whatever reason, these are the commands:
 
 ```
 conda activate textgen
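The hunk above truncates the manual-install block right after `conda activate textgen`, so the remaining commands are not shown in this diff. As a minimal sketch of one way to do the manual install, the same prebuilt 0.2.0 wheels this commit adds to `requirements.txt` (CUDA 11.8, Python 3.10) can be installed directly; the wheel URLs below are taken from the commit itself, the rest is illustrative:

```
# Hedged sketch: manually install AutoGPTQ from the prebuilt wheels
# referenced in this commit, instead of relying on requirements.txt.
conda activate textgen

# Linux:
pip install https://github.com/PanQiWei/AutoGPTQ/releases/download/v0.2.0/auto_gptq-0.2.0+cu118-cp310-cp310-linux_x86_64.whl

# Windows:
pip install https://github.com/PanQiWei/AutoGPTQ/releases/download/v0.2.0/auto_gptq-0.2.0+cu118-cp310-cp310-win_amd64.whl
```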
@@ -21,3 +21,5 @@ bitsandbytes==0.39.0; platform_system != "Windows"
 https://github.com/jllllll/bitsandbytes-windows-webui/raw/main/bitsandbytes-0.39.0-py3-none-any.whl; platform_system == "Windows"
 llama-cpp-python==0.1.56; platform_system != "Windows"
 https://github.com/abetlen/llama-cpp-python/releases/download/v0.1.56/llama_cpp_python-0.1.56-cp310-cp310-win_amd64.whl; platform_system == "Windows"
+https://github.com/PanQiWei/AutoGPTQ/releases/download/v0.2.0/auto_gptq-0.2.0+cu118-cp310-cp310-win_amd64.whl; platform_system == "Windows"
+https://github.com/PanQiWei/AutoGPTQ/releases/download/v0.2.0/auto_gptq-0.2.0+cu118-cp310-cp310-linux_x86_64.whl; platform_system != "Windows"
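The `; platform_system == "Windows"` suffixes in these lines are standard PEP 508 environment markers: pip evaluates them at install time, so each platform pulls only its matching wheel. A quick way to see how a marker resolves on the current machine (this assumes the `packaging` library is installed, e.g. via `pip install packaging`; the marker string is just an example):

```
# Install everything; lines whose marker evaluates to false are skipped.
pip install -r requirements.txt

# Inspect a marker directly with the packaging library:
python -c 'from packaging.markers import Marker; print(Marker("platform_system == \"Windows\"").evaluate())'
# prints True on Windows, False elsewhere
```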