# Text generation web UI

A gradio web UI for running Large Language Models such as LLaMA, GPT-J, OPT, and GALACTICA, including llama.cpp (GGML) models.

Its goal is to become the [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) of text generation.

|![Image1](https://github.com/oobabooga/screenshots/raw/main/qa.png) | ![Image2](https://github.com/oobabooga/screenshots/raw/main/cai3.png) |
|:---:|:---:|
|![Image3](https://github.com/oobabooga/screenshots/raw/main/gpt4chan.png) | ![Image4](https://github.com/oobabooga/screenshots/raw/main/galactica.png) |

## Features

* 3 interface modes: default, notebook, and chat
* Multiple model backends: [transformers](https://github.com/huggingface/transformers), [llama.cpp](https://github.com/ggerganov/llama.cpp), [ExLlama](https://github.com/turboderp/exllama), [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ), [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa), [ctransformers](https://github.com/marella/ctransformers)
* Dropdown menu for quickly switching between different models
* LoRA: load and unload LoRAs on the fly, train a new LoRA
* Precise instruction templates for chat mode, including Llama 2, Alpaca, Vicuna, WizardLM, StableLM, and many others
* [Multimodal pipelines, including LLaVA and MiniGPT-4](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/multimodal)
* 8-bit and 4-bit inference through bitsandbytes
* CPU mode for transformers models
* [DeepSpeed ZeRO-3 inference](docs/DeepSpeed.md)
* [Extensions](docs/Extensions.md)
* [Custom chat characters](docs/Chat-mode.md)
* Very efficient text streaming
* Markdown output with LaTeX rendering, to use for instance with [GALACTICA](https://github.com/paperswithcode/galai)
* Nice HTML output for GPT-4chan
* API, including endpoints for websocket streaming ([see the examples](https://github.com/oobabooga/text-generation-webui/blob/main/api-examples))

To learn how to use the various features, check out the documentation: https://github.com/oobabooga/text-generation-webui/tree/main/docs

## Installation

### One-click installers

| Windows | Linux | macOS | WSL |
|---------|-------|-------|-----|
| [oobabooga-windows.zip](https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga_windows.zip) | [oobabooga-linux.zip](https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga_linux.zip) | [oobabooga-macos.zip](https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga_macos.zip) | [oobabooga-wsl.zip](https://github.com/oobabooga/text-generation-webui/releases/download/installers/oobabooga_wsl.zip) |

Just download the zip above, extract it, and double-click on "start". The web UI and all its dependencies will be installed in the same folder.

* The source code is here: https://github.com/oobabooga/one-click-installers
* There is no need to run the installers as admin.
* AMD GPUs are not supported on Windows.
* Huge thanks to [@jllllll](https://github.com/jllllll), [@ClayShoaf](https://github.com/ClayShoaf), and [@xNul](https://github.com/xNul) for their contributions to these installers.

### Manual installation using Conda

Recommended if you have some experience with the command line.

#### 0. Install Conda

https://docs.conda.io/en/latest/miniconda.html

On Linux or WSL, it can be automatically installed with these two commands:

```
curl -sL "https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh" > "Miniconda3.sh"
bash Miniconda3.sh
```

Source: https://educe-ubc.github.io/conda.html

#### 1. Create a new conda environment

```
conda create -n textgen python=3.10.9
conda activate textgen
```

#### 2. Install PyTorch

| System | GPU | Command |
|--------|-----|---------|
| Linux/WSL | NVIDIA | `pip3 install torch torchvision torchaudio` |
| Linux/WSL | CPU only | `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu` |
| Linux | AMD | `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.4.2` |
| macOS + MPS | Any | `pip3 install torch torchvision torchaudio` |
| Windows | NVIDIA | `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117` |
| Windows | CPU only | `pip3 install torch torchvision torchaudio` |

The up-to-date commands can be found here: https://pytorch.org/get-started/locally/.

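As a quick sanity check after installing, you can confirm the PyTorch version and whether it sees your GPU (this should print `True` on a working NVIDIA or ROCm setup):

```
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```
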
#### 2.1 Special instructions

* MacOS users: https://github.com/oobabooga/text-generation-webui/pull/393
* AMD users: https://rentry.org/eq3hg

#### 3. Install the web UI

```
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt
```

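If the installation succeeded, you can already list the available command-line flags (documented under "Starting the web UI" below):

```
python server.py --help
```
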
#### bitsandbytes

bitsandbytes >= 0.39 may not work on older NVIDIA GPUs. In that case, to use `--load-in-8bit`, you may have to downgrade like this:

* Linux: `pip install bitsandbytes==0.38.1`
* Windows: `pip install https://github.com/jllllll/bitsandbytes-windows-webui/raw/main/bitsandbytes-0.38.1-py3-none-any.whl`

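If you are unsure which GPU generation you have, you can print its CUDA compute capability with PyTorch (a general PyTorch utility, not specific to this project; requires a working CUDA install):

```
python -c "import torch; print(torch.cuda.get_device_capability())"
```
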
### Alternative: Docker

```
ln -s docker/{Dockerfile,docker-compose.yml,.dockerignore} .
cp docker/.env.example .env
# Edit .env and set TORCH_CUDA_ARCH_LIST based on your GPU model
docker compose up --build
```

* You need to have docker compose v2.17 or higher installed. See [this guide](https://github.com/oobabooga/text-generation-webui/blob/main/docs/Docker.md) for instructions.
* For additional docker files, check out [this repository](https://github.com/Atinoda/text-generation-webui-docker).

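To verify that your installation meets the v2.17 requirement mentioned above:

```
docker compose version
```
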
### Updating the requirements

From time to time, the `requirements.txt` changes. To update, use this command:

```
conda activate textgen
cd text-generation-webui
pip install -r requirements.txt --upgrade
```

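Note that the command above only upgrades Python packages; if you installed from a git clone, pull the latest code first so that the updated `requirements.txt` is actually present (a suggested flow, not an official script):

```
cd text-generation-webui
git pull
pip install -r requirements.txt --upgrade
```
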
## Downloading models

Models should be placed inside the `models/` folder.

[Hugging Face](https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads) is the main place to download models. These are some examples:

* [Pythia](https://huggingface.co/models?sort=downloads&search=eleutherai%2Fpythia+deduped)
* [OPT](https://huggingface.co/models?search=facebook/opt)
* [GALACTICA](https://huggingface.co/models?search=facebook/galactica)
* [GPT-J 6B](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main)

You can automatically download a model from HF using the script `download-model.py`:

```
python download-model.py organization/model
```

For example:

```
python download-model.py facebook/opt-1.3b
```

To download a protected model, set the environment variables `HF_USER` and `HF_PASS` to your Hugging Face username and password (or [User Access Token](https://huggingface.co/settings/tokens)). The model's terms must first be accepted on the HF website.

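For example, to download a gated model after accepting its terms on the website (the username, token, and model name below are placeholders):

```
# Placeholder credentials and model name; substitute your own.
export HF_USER=your-username
export HF_PASS=hf_your_access_token
python download-model.py meta-llama/Llama-2-7b-hf
```
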
#### GGML models

You can drop these directly into the `models/` folder, making sure that the file name contains `ggml` somewhere and ends in `.bin`.

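For example, a file with a typical community naming scheme satisfies both conditions (the exact name is illustrative):

```
models/llama-2-7b-chat.ggmlv3.q4_K_M.bin
```
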
#### GPT-4chan

<details>
<summary>
Instructions
</summary>

[GPT-4chan](https://huggingface.co/ykilcher/gpt-4chan) has been taken down from Hugging Face, so you need to download it elsewhere. You have two options:

* Torrent: [16-bit](https://archive.org/details/gpt4chan_model_float16) / [32-bit](https://archive.org/details/gpt4chan_model)
* Direct download: [16-bit](https://theswissbay.ch/pdf/_notpdf_/gpt4chan_model_float16/) / [32-bit](https://theswissbay.ch/pdf/_notpdf_/gpt4chan_model/)

The 32-bit version is only relevant if you intend to run the model in CPU mode. Otherwise, you should use the 16-bit version.

After downloading the model, follow these steps:

1. Place the files under `models/gpt4chan_model_float16` or `models/gpt4chan_model`.
2. Place GPT-J 6B's config.json file in that same folder: [config.json](https://huggingface.co/EleutherAI/gpt-j-6B/raw/main/config.json).
3. Download GPT-J 6B's tokenizer files (they will be automatically detected when you attempt to load GPT-4chan):

```
python download-model.py EleutherAI/gpt-j-6B --text-only
```

When you load this model in default or notebook modes, the "HTML" tab will show the generated text in 4chan format.

</details>

## Starting the web UI

```
conda activate textgen
cd text-generation-webui
python server.py
```

Then browse to `http://localhost:7860/?__theme=dark`.

Optionally, you can use the command-line flags documented in the tables below.

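For example, the following command preloads a model, makes the UI reachable on your local network, and enables the API extension:

```
# Hypothetical model folder name inside models/
python server.py --model facebook_opt-1.3b --listen --api
```
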
#### Basic settings

| Flag | Description |
|--------------------------------------------|-------------|
| `-h`, `--help` | Show this help message and exit. |
| `--multi-user` | Multi-user mode. Chat histories are not saved or automatically loaded. WARNING: this is highly experimental. |
| `--character CHARACTER` | The name of the character to load in chat mode by default. |
| `--model MODEL` | Name of the model to load by default. |
| `--lora LORA [LORA ...]` | The list of LoRAs to load. If you want to load more than one LoRA, write the names separated by spaces. |
| `--model-dir MODEL_DIR` | Path to the directory with all the models. |
| `--lora-dir LORA_DIR` | Path to the directory with all the LoRAs. |
| `--model-menu` | Show a model menu in the terminal when the web UI is first launched. |
| `--settings SETTINGS_FILE` | Load the default interface settings from this yaml file. See `settings-template.yaml` for an example. If you create a file called `settings.yaml`, this file will be loaded by default without the need to use the `--settings` flag. |
| `--extensions EXTENSIONS [EXTENSIONS ...]` | The list of extensions to load. If you want to load more than one extension, write the names separated by spaces. |
| `--verbose` | Print the prompts to the terminal. |

#### Model loader

| Flag | Description |
|--------------------------------------------|-------------|
| `--loader LOADER` | Choose the model loader manually; otherwise, it will be autodetected. Valid options: transformers, autogptq, gptq-for-llama, exllama, exllama_hf, llamacpp, rwkv, ctransformers |

#### Accelerate/transformers

| Flag | Description |
|---------------------------------------------|-------------|
| `--cpu` | Use the CPU to generate text. Warning: training on CPU is extremely slow. |
| `--auto-devices` | Automatically split the model across the available GPU(s) and CPU. |
| `--gpu-memory GPU_MEMORY [GPU_MEMORY ...]` | Maximum GPU memory in GiB to be allocated per GPU. Example: `--gpu-memory 10` for a single GPU, `--gpu-memory 10 5` for two GPUs. You can also set values in MiB like `--gpu-memory 3500MiB`. |
| `--cpu-memory CPU_MEMORY` | Maximum CPU memory in GiB to allocate for offloaded weights. The same unit formats as above are accepted. |
| `--disk` | If the model is too large for your GPU(s) and CPU combined, send the remaining layers to the disk. |
| `--disk-cache-dir DISK_CACHE_DIR` | Directory to save the disk cache to. Defaults to `cache/`. |
| `--load-in-8bit` | Load the model with 8-bit precision (using bitsandbytes). |
| `--bf16` | Load the model with bfloat16 precision. Requires an NVIDIA Ampere GPU. |
| `--no-cache` | Set `use_cache` to False while generating text. This reduces VRAM usage a bit at a performance cost. |
| `--xformers` | Use xformers' memory-efficient attention. This should increase your tokens/s. |
| `--sdp-attention` | Use torch 2.0's SDP attention. |
| `--trust-remote-code` | Set `trust_remote_code=True` while loading a model. Necessary for ChatGLM and Falcon. |

#### Accelerate 4-bit

⚠️ Requires a minimum compute capability of 7.0 on Windows at the moment.

| Flag | Description |
|---------------------------------------------|-------------|
| `--load-in-4bit` | Load the model with 4-bit precision (using bitsandbytes). |
| `--compute_dtype COMPUTE_DTYPE` | Compute dtype for 4-bit precision. Valid options: bfloat16, float16, float32. |
| `--quant_type QUANT_TYPE` | Quantization type for 4-bit precision. Valid options: nf4, fp4. |
| `--use_double_quant` | Use double quantization for 4-bit precision. |

#### GGML (for llama.cpp and ctransformers)

| Flag | Description |
|-------------|-------------|
| `--threads` | Number of threads to use. |
| `--n_batch` | Maximum number of prompt tokens to batch together when calling llama_eval. |
| `--n-gpu-layers N_GPU_LAYERS` | Number of layers to offload to the GPU. Only works if llama-cpp-python was compiled with BLAS. Set this to 1000000000 to offload all layers to the GPU. |
| `--n_ctx N_CTX` | Size of the prompt context. |

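Putting these together, a sketch of loading a GGML model with partial GPU offloading and a 4096-token context (the file name, layer count, and thread count are illustrative):

```
# Illustrative file name and parameter values
python server.py --model llama-2-7b-chat.ggmlv3.q4_K_M.bin --n-gpu-layers 35 --n_ctx 4096 --threads 8
```
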
#### llama.cpp

| Flag | Description |
|-------------|-------------|
| `--no-mmap` | Prevent mmap from being used. |
| `--mlock` | Force the system to keep the model in RAM. |
| `--cache-capacity CACHE_CAPACITY` | Maximum cache capacity. Examples: 2000MiB, 2GiB. When provided without units, bytes will be assumed. |
| `--llama_cpp_seed SEED` | Seed for llama-cpp models. Default 0 (random). |
| `--n_gqa N_GQA` | Grouped-query attention. Must be set to 8 for Llama 2 70B. |
| `--rms_norm_eps RMS_NORM_EPS` | RMS norm epsilon. 5e-6 is a good value for Llama 2 models. |
| `--cpu` | Use the CPU version of llama-cpp-python instead of the GPU-accelerated version. |

#### ctransformers

| Flag | Description |
|-------------|-------------|
| `--model_type MODEL_TYPE` | Model type of the pre-quantized model. Currently gpt2, gptj, gpt_neox, falcon, llama, mpt, gpt_bigcode, dolly-v2, and replit are supported. |

#### AutoGPTQ

| Flag | Description |
|------------------|-------------|
| `--triton` | Use Triton. |
| `--no_inject_fused_attention` | Disable the use of fused attention, which will use less VRAM at the cost of slower inference. |
| `--no_inject_fused_mlp` | Triton mode only: disable the use of fused MLP, which will use less VRAM at the cost of slower inference. |
| `--no_use_cuda_fp16` | This can make models faster on some systems. |
| `--desc_act` | For models that don't have a quantize_config.json, this parameter is used to define whether to set desc_act or not in BaseQuantizeConfig. |

#### ExLlama

| Flag | Description |
|------------------|-------------|
| `--gpu-split` | Comma-separated list of VRAM (in GB) to use per GPU device for model layers, e.g. `20,7,7`. |
| `--max_seq_len MAX_SEQ_LEN` | Maximum sequence length. |

#### GPTQ-for-LLaMa

| Flag | Description |
|---------------------------|-------------|
| `--wbits WBITS` | Load a pre-quantized model with the specified precision in bits. 2, 3, 4, and 8 are supported. |
| `--model_type MODEL_TYPE` | Model type of the pre-quantized model. Currently LLaMA, OPT, and GPT-J are supported. |
| `--groupsize GROUPSIZE` | Group size. |
| `--pre_layer PRE_LAYER [PRE_LAYER ...]` | The number of layers to allocate to the GPU. Setting this parameter enables CPU offloading for 4-bit models. For multi-GPU setups, write the numbers separated by spaces, e.g. `--pre_layer 30 60`. |
| `--checkpoint CHECKPOINT` | The path to the quantized checkpoint file. If not specified, it will be automatically detected. |
| `--monkey-patch` | Apply the monkey patch for using LoRAs with quantized models. |

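A typical invocation for a 4-bit model quantized with group size 128 looks like this (the model folder name is hypothetical):

```
# Hypothetical 4-bit model folder inside models/
python server.py --model llama-7b-4bit-128g --wbits 4 --groupsize 128
```
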
#### DeepSpeed

| Flag | Description |
|---------------------------------------|-------------|
| `--deepspeed` | Enable the use of DeepSpeed ZeRO-3 for inference via the Transformers integration. |
| `--nvme-offload-dir NVME_OFFLOAD_DIR` | DeepSpeed: Directory to use for ZeRO-3 NVME offloading. |
| `--local_rank LOCAL_RANK` | DeepSpeed: Optional argument for distributed setups. |

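Note that DeepSpeed inference is launched through the `deepspeed` launcher rather than plain `python` — see the [DeepSpeed guide](docs/DeepSpeed.md) for details. A minimal sketch:

```
deepspeed --num_gpus=1 server.py --deepspeed
```
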
#### RWKV

| Flag | Description |
|---------------------------------|-------------|
| `--rwkv-strategy RWKV_STRATEGY` | RWKV: The strategy to use while loading the model. Examples: "cpu fp32", "cuda fp16", "cuda fp16i8". |
| `--rwkv-cuda-on` | RWKV: Compile the CUDA kernel for better performance. |

#### RoPE (for llama.cpp, ExLlama, and transformers)

| Flag | Description |
|------------------|-------------|
| `--alpha_value ALPHA_VALUE` | Positional embeddings alpha factor for NTK RoPE scaling. Use either this or `compress_pos_emb`, not both. |
| `--compress_pos_emb COMPRESS_POS_EMB` | Positional embeddings compression factor. Should typically be set to max_seq_len / 2048. |

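For example, to run an ExLlama model at a 4096-token context, the rule above gives a compression factor of 4096 / 2048 = 2 (a sketch; the loader and model choice are up to you):

```
python server.py --loader exllama --max_seq_len 4096 --compress_pos_emb 2
```
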
#### Gradio

| Flag | Description |
|---------------------------------------|-------------|
| `--listen` | Make the web UI reachable from your local network. |
| `--listen-host LISTEN_HOST` | The hostname that the server will use. |
| `--listen-port LISTEN_PORT` | The listening port that the server will use. |
| `--share` | Create a public URL. This is useful for running the web UI on Google Colab or similar. |
| `--auto-launch` | Open the web UI in the default browser upon launch. |
| `--gradio-auth USER:PWD` | Set Gradio authentication in the format "username:password". To define multiple credentials, comma-delimit them like "u1:p1,u2:p2,u3:p3". |
| `--gradio-auth-path GRADIO_AUTH_PATH` | Set the Gradio authentication file path. The file should contain one or more user:password pairs in this format: "u1:p1,u2:p2,u3:p3". |
| `--ssl-keyfile SSL_KEYFILE` | The path to the SSL certificate key file. |
| `--ssl-certfile SSL_CERTFILE` | The path to the SSL certificate file. |

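For example, to serve the UI on your local network behind a password (placeholder credentials):

```
# Placeholder credentials; substitute your own.
python server.py --listen --gradio-auth admin:change-me
```
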
#### API

| Flag | Description |
|---------------------------------------|-------------|
| `--api` | Enable the API extension. |
| `--public-api` | Create a public URL for the API using Cloudflare. |
| `--public-api-id PUBLIC_API_ID` | Tunnel ID for a named Cloudflare Tunnel. Use together with the `--public-api` option. |
| `--api-blocking-port BLOCKING_PORT` | The listening port for the blocking API. |
| `--api-streaming-port STREAMING_PORT` | The listening port for the streaming API. |

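For example, to enable the API on explicit ports (the port numbers are arbitrary examples; see the [API examples](https://github.com/oobabooga/text-generation-webui/blob/main/api-examples) for client code):

```
python server.py --api --api-blocking-port 5000 --api-streaming-port 5005
```
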
#### Multimodal

| Flag | Description |
|---------------------------------------|-------------|
| `--multimodal-pipeline PIPELINE` | The multimodal pipeline to use. Examples: `llava-7b`, `llava-13b`. |

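A sketch of launching with a LLaVA pipeline; this assumes the multimodal extension is loaded alongside it, as described on the extension page linked under Features:

```
# Assumes the multimodal extension is installed and a compatible model is loaded
python server.py --extensions multimodal --multimodal-pipeline llava-7b
```
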
## Presets

Inference settings presets can be created under `presets/` as yaml files. These files are detected automatically at startup.

The presets that are included by default are the result of a contest that received 7215 votes. More details can be found [here](https://github.com/oobabooga/oobabooga.github.io/blob/main/arena/results.md).

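As a minimal sketch, a custom preset could be created like this — the field names below are an assumption, so mirror the bundled files under `presets/` for the authoritative keys:

```
# Field names assumed to mirror the UI sampling parameters; verify
# against the bundled presets before relying on this.
cat > presets/MyPreset.yaml << 'EOF'
temperature: 0.7
top_p: 0.9
repetition_penalty: 1.15
EOF
```
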
## Contributing

If you would like to contribute to the project, check out the [Contributing guidelines](https://github.com/oobabooga/text-generation-webui/wiki/Contributing-guidelines).

## Community

* Subreddit: https://www.reddit.com/r/oobaboogazz/
* Discord: https://discord.gg/jwZCF2dPQN