Update gpt4all_chat.md (#1050)

* Update gpt4all_chat.md

Cleaned up and made the sideloading part more readable; also moved the Replit architecture to the supported ones. (+ renamed all "ggML" to "GGML" because who calls it "ggML"??)

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>

* Removed the prefixing part

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>

* Bump version

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

---------

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
This commit is contained in:
AMOGUS 2023-06-27 17:49:45 +03:00 committed by GitHub
parent 985d3bbfa4
commit b8464073b8
2 changed files with 7 additions and 7 deletions


@@ -5,17 +5,17 @@ The [GPT4All Chat Client](https://gpt4all.io) lets you easily interact with any
It is optimized to run 7-13B parameter LLMs on the CPUs of any computer running OSX/Windows/Linux.
## Running LLMs on CPU
The GPT4All Chat UI supports models from all newer versions of `ggML`, `llama.cpp` including the `LLaMA`, `MPT` and `GPT-J` architectures. The `falcon` and `replit` architectures will soon also be supported.
The GPT4All Chat UI supports models from all newer versions of `GGML`/`llama.cpp`, including the `LLaMA`, `MPT`, `replit` and `GPT-J` architectures. The `falcon` architecture will soon also be supported.
GPT4All maintains an official list of recommended models located in [models.json](https://github.com/nomic-ai/gpt4all/blob/main/gpt4all-chat/metadata/models.json). You can pull request new models to it and if accepted they will show up in the official download dialog.
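The official list is plain JSON, so it is easy to inspect programmatically. A minimal sketch, assuming a models.json-style schema (the field names `name` and `filename` are taken to match the official list; the entries below are hypothetical):

```python
import json

# Hypothetical snippet mirroring the models.json schema; only "name" and
# "filename" are relied on here, and these entries are made up for illustration.
MODELS_JSON = """
[
  {"name": "Example 7B",  "filename": "ggml-example-7b.bin"},
  {"name": "Example 13B", "filename": "ggml-example-13b.bin"}
]
"""

def list_model_filenames(raw: str) -> list[str]:
    """Return the downloadable filenames from a models.json-style document."""
    return [entry["filename"] for entry in json.loads(raw)]

print(list_model_filenames(MODELS_JSON))
```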
#### Sideloading any ggML model
#### Sideloading any GGML model
If a model is compatible with the gpt4all-backend, you can sideload it into GPT4All Chat by:
1. Downloading your model in ggML format. It should be a 3-8 GB file similar to the ones [here](https://huggingface.co/TheBloke/Samantha-7B-GGML/tree/main).
2. Identifying your GPT4All Chat downloads folder. This is the path listed at the bottom of the download dialog.
3. Prefixing your downloaded model with string `ggml-` and placing it into the GPT4All Chat downloads folder.
4. Restarting your chat app. Your model should appear in the download dialog.
1. Downloading your model in GGML format. It should be a 3-8 GB file similar to the ones [here](https://huggingface.co/TheBloke/Samantha-7B-GGML/tree/main).
2. Identifying your GPT4All model downloads folder. This is the path listed at the bottom of the downloads dialog (three lines in the top left > Downloads).
3. Placing your downloaded model inside GPT4All's model downloads folder.
4. Restarting your GPT4All app. Your model should appear in the model selection list.
## Plugins
GPT4All Chat Plugins allow you to expand the capabilities of Local LLMs.


@@ -61,7 +61,7 @@ copy_prebuilt_C_lib(SRC_CLIB_DIRECtORY,
setup(
name=package_name,
version="0.3.5",
version="0.3.6",
description="Python bindings for GPT4All",
author="Richard Guo",
author_email="richard@nomic.ai",