Use the following Python code to download a GPT4All model and generate a response
to a prompt.
**Download Note:**
By default, models are stored in `~/.cache/gpt4all/` (you can override this with the `model_path` argument). If the file already exists, the download is skipped.
The Python bindings support the following ggml architectures: `gptj`, `llama`, `mpt`. See the API reference for more details.
## Best Practices
There are two methods for interfacing with the underlying language model: `chat_completion()` and `generate()`. `chat_completion()` formats user-provided message dictionaries into a prompt template (see the API documentation for details and options); it usually produces much better results and is the approach we recommend. You can also prompt the model with `generate()`, which passes the raw input string to the model unmodified.