# llama.cpp
llama.cpp is the best backend in two important scenarios:
- You don't have a GPU.
- You want to run a model that doesn't fit into your GPU.
## Setting up the models

### Pre-converted

Download the ggml model directly into your `text-generation-webui/models` folder, making sure that its name contains `ggml` somewhere and ends in `.bin`. It's a single file. `q4_K_M` quantization is recommended.
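For example, a pre-converted file can be fetched with `wget` straight into the models folder. This is only a sketch: the Hugging Face repository and filename below are illustrative placeholders, so substitute the ggml model you actually want.

```sh
cd text-generation-webui
# Hypothetical example: repo and filename are placeholders for a real ggml release
wget -P models https://huggingface.co/TheBloke/Llama-2-7B-GGML/resolve/main/llama-2-7b.ggmlv3.q4_K_M.bin
```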
### Convert Llama yourself

Follow the instructions in the llama.cpp README to generate a ggml file: https://github.com/ggerganov/llama.cpp#prepare-data--run
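As a rough sketch of that process (the paths and quantization type are illustrative; defer to the README for the exact steps of the llama.cpp version you are using):

```sh
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
# Convert the original Llama weights to a ggml f16 file, then quantize to q4_K_M
python3 convert.py /path/to/llama-7b/
./quantize /path/to/llama-7b/ggml-model-f16.bin /path/to/llama-7b/ggml-model-q4_K_M.bin q4_K_M
```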
## GPU acceleration

Enabled with the `--n-gpu-layers` parameter. See the example launch command below.

- If you have enough VRAM, use a high number like `--n-gpu-layers 1000` to offload all layers to the GPU.
- Otherwise, start with a low number like `--n-gpu-layers 10` and gradually increase it until you run out of memory.
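For example, a launch command that offloads every layer might look like this (the model filename is illustrative):

```sh
python server.py --model llama-2-7b.ggmlv3.q4_K_M.bin --n-gpu-layers 1000
```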
This feature works out of the box for NVIDIA GPUs. For other GPUs, you need to uninstall `llama-cpp-python` with `pip uninstall -y llama-cpp-python` and then recompile it using the commands here: https://pypi.org/project/llama-cpp-python/
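For instance, a rebuild with CLBlast (OpenCL) support for a non-NVIDIA GPU would look roughly like the following; the exact CMake flag can vary between llama-cpp-python versions, so check the page above first:

```sh
pip uninstall -y llama-cpp-python
# Assumes the LLAMA_CLBLAST build option offered by llama.cpp at the time of writing
CMAKE_ARGS="-DLLAMA_CLBLAST=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir
```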
## macOS

For macOS, these are the commands:

```sh
pip uninstall -y llama-cpp-python
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir
```