Mirror of https://github.com/oobabooga/text-generation-webui.git
Add note about --no-fused_mlp ignoring --gpu-memory (#1301)
parent b57ffc2ec9
commit 3961f49524
@@ -238,7 +238,7 @@ Optionally, you can use the following command-line flags:
 | `--pre_layer PRE_LAYER` | GPTQ: The number of layers to allocate to the GPU. Setting this parameter enables CPU offloading for 4-bit models. |
 | `--no-quant_attn` | GPTQ: Disable quant attention for triton. If you encounter incoherent results try disabling this. |
 | `--no-warmup_autotune` | GPTQ: Disable warmup autotune for triton. |
-| `--no-fused_mlp` | GPTQ: Disable fused mlp for triton. If you encounter "Unexpected mma -> mma layout conversion" try disabling this. |
+| `--no-fused_mlp` | GPTQ: Disable fused mlp for triton. If you encounter "Unexpected mma -> mma layout conversion" try disabling this. Disabling may also help model splitting for multi-gpu setups. |
 | `--monkey-patch` | GPTQ: Apply the monkey patch for using LoRAs with quantized models. |

 #### FlexGen
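For context, a minimal invocation sketch of how the note added here plays out in practice. The model name and per-GPU memory caps below are placeholders, not values from this commit; the flags themselves (`--wbits`, `--groupsize`, `--gpu-memory`, `--no-fused_mlp`) come from the same README flag table:

```
# Sketch: load a 4-bit GPTQ model split across two GPUs.
# Per the note added in this commit, fused MLP can ignore --gpu-memory,
# so --no-fused_mlp is passed to make the per-GPU caps take effect.
python server.py --model MODEL_NAME \
    --wbits 4 --groupsize 128 \
    --gpu-memory 10 10 \
    --no-fused_mlp
```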