Update README

oobabooga 2023-01-15 22:34:56 -03:00
parent ebf4d5f506
commit a3bb18002a


@@ -131,7 +131,7 @@ Optionally, you can use the following command-line flags:
| `--cpu` | Use the CPU to generate text.|
| `--auto-devices` | Automatically split the model across the available GPU(s) and CPU.|
| `--load-in-8bit` | Load the model with 8-bit precision.|
-| `----max-gpu-memory MAX_GPU_MEMORY` | Maximum memory in GiB to allocate to the GPU while loading the model. This is useful if get out of memory errors while trying to generate text. Must be an integer number. |
+| `--max-gpu-memory MAX_GPU_MEMORY` | Maximum memory in GiB to allocate to the GPU while loading the model. This is useful if get out of memory errors while trying to generate text. Must be an integer number. |
| `--no-listen` | Make the webui unreachable from your local network.|
| `--settings-file SETTINGS_FILE` | Load default interface settings from this json file. See settings-template.json for an example.|
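
For context, a minimal sketch of how these flags might be combined on the command line. The entry-point script name `server.py` is an assumption and is not part of this diff; only the flags themselves come from the table above.

```sh
# Assumed entry point: server.py (not shown in this diff).
# Split the model across GPU and CPU, capping GPU allocation at 10 GiB
# (an integer value, per the --max-gpu-memory description).
python server.py --auto-devices --max-gpu-memory 10

# CPU-only generation, keeping the web UI off the local network.
python server.py --cpu --no-listen
```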