mirror of
https://github.com/nomic-ai/gpt4all.git
synced 2024-10-01 01:06:10 -04:00
Update gpt4all_chat.md (#1050)
* Update gpt4all_chat.md

Cleaned up and made the sideloading part more readable, also moved the Replit architecture to the supported ones. (+ renamed all "ggML" to "GGML" because who calls it "ggML"??)

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>

* Removed the prefixing part

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>

* Bump version

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

---------

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
This commit is contained in:
parent
985d3bbfa4
commit
b8464073b8
@@ -5,17 +5,17 @@ The [GPT4All Chat Client](https://gpt4all.io) lets you easily interact with any
 It is optimized to run 7-13B parameter LLMs on the CPUs of any computer running OSX/Windows/Linux.
 
 ## Running LLMs on CPU
 
-The GPT4All Chat UI supports models from all newer versions of `ggML`, `llama.cpp` including the `LLaMA`, `MPT` and `GPT-J` architectures. The `falcon` and `replit` architectures will soon also be supported.
+The GPT4All Chat UI supports models from all newer versions of `GGML` `llama.cpp`, including the `LLaMA`, `MPT`, `replit` and `GPT-J` architectures. The `falcon` architecture will soon also be supported.
 
 GPT4All maintains an official list of recommended models located in [models.json](https://github.com/nomic-ai/gpt4all/blob/main/gpt4all-chat/metadata/models.json). You can pull request new models to it, and if accepted they will show up in the official download dialog.
 
-#### Sideloading any ggML model
+#### Sideloading any GGML model
 
 If a model is compatible with the gpt4all-backend, you can sideload it into GPT4All Chat by:
 
-1. Downloading your model in ggML format. It should be a 3-8 GB file similar to the ones [here](https://huggingface.co/TheBloke/Samantha-7B-GGML/tree/main).
-2. Identifying your GPT4All Chat downloads folder. This is the path listed at the bottom of the download dialog.
-3. Prefixing your downloaded model with the string `ggml-` and placing it into the GPT4All Chat downloads folder.
-4. Restarting your chat app. Your model should appear in the download dialog.
+1. Downloading your model in GGML format. It should be a 3-8 GB file similar to the ones [here](https://huggingface.co/TheBloke/Samantha-7B-GGML/tree/main).
+2. Identifying your GPT4All model downloads folder. This is the path listed at the bottom of the downloads dialog (three lines in the top left > Downloads).
+3. Placing your downloaded model inside GPT4All's model downloads folder.
+4. Restarting your GPT4All app. Your model should appear in the model selection list.
 
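The new sideloading steps amount to copying the model file into the chat client's downloads folder and restarting the app. A minimal sketch in Python, assuming a hypothetical helper name and illustrative paths (the real folder is the one shown at the bottom of the Downloads dialog):

```python
# Sketch of the sideloading steps above; `sideload_model` is an illustrative
# helper, not part of GPT4All. Uses only the standard library.
import shutil
from pathlib import Path

def sideload_model(model_path: str, downloads_folder: str) -> Path:
    """Copy a downloaded GGML model file into the GPT4All downloads folder."""
    src = Path(model_path)                      # step 1: the downloaded .bin file
    dest_dir = Path(downloads_folder)           # step 2: the folder from the Downloads dialog
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / src.name                  # step 3: place the file there as-is
    shutil.copy2(src, dest)
    return dest                                 # step 4: restart the app to see it listed
```

After a restart, the copied model should appear in the model selection list, as the steps above describe.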
 ## Plugins
 
 GPT4All Chat Plugins allow you to expand the capabilities of Local LLMs.
@@ -61,7 +61,7 @@ copy_prebuilt_C_lib(SRC_CLIB_DIRECtORY,
 
 setup(
     name=package_name,
-    version="0.3.5",
+    version="0.3.6",
     description="Python bindings for GPT4All",
     author="Richard Guo",
     author_email="richard@nomic.ai",
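The hunk above bumps the Python bindings version from 0.3.5 to 0.3.6. As a hedged stdlib-only sketch (the helper name is illustrative, not part of the gpt4all package), installed bindings can report the version set in `setup()` at runtime:

```python
# Illustrative helper: look up the installed version of a distribution,
# e.g. the version string declared in the bindings' setup() call.
from importlib import metadata

def bindings_version(package: str = "gpt4all"):
    """Return the installed version string for a package, or None if absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None
```

With the bindings installed at the version in this commit, `bindings_version()` would return `"0.3.6"`; if the package is not installed, it returns `None`.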