Update llama.cpp.md

oobabooga 2023-07-25 15:33:16 -07:00
parent 863d2f118f
commit d3abe7caa8


@@ -24,7 +24,7 @@ Enabled with the `--n-gpu-layers` parameter.
* If you have enough VRAM, use a high number like `--n-gpu-layers 1000` to offload all layers to the GPU.
* Otherwise, start with a low number like `--n-gpu-layers 10` and then gradually increase it until you run out of memory.
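For illustration only (not part of this commit), a minimal launch sketch with all layers offloaded; the `server.py` entry point and the `--model` flag are assumptions about the webui, and the model path is a placeholder:
```
# Hedged sketch: offload every layer to the GPU; lower the value if you run out of VRAM.
# server.py and --model are assumed here; only --n-gpu-layers comes from this document.
python server.py --model <path-to-ggml-model> --n-gpu-layers 1000
```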
-This feature works out of the box for NVIDIA GPUs. For other GPUs, you need to uninstall `llama-cpp-python` with
+This feature works out of the box for NVIDIA GPUs on Linux (amd64) or Windows. For other GPUs, you need to uninstall `llama-cpp-python` with
```
pip uninstall -y llama-cpp-python