Mirror of https://github.com/nomic-ai/gpt4all.git (synced 2024-10-01 01:06:10 -04:00)
Update README.md of gpt4all-chat
### What changed:

Updates the README.md of gpt4all-chat:
- updates the feature list
- removes the feature wish-list
- removes any mention of GPT-J, since support for it was removed in https://github.com/nomic-ai/gpt4all/pull/2676

### Motivation for pull-request:

1. Remove a potential cause of duplication and conflicts between the documentation on the website and the wiki.
2. The README should be future-proof, so handle the wish-list and roadmap via the issue tracker instead.

Signed-off-by: ThiloteE <73715071+ThiloteE@users.noreply.github.com>
This commit is contained in:
parent
56d5a23001
commit
66bef4fe04
@@ -16,27 +16,17 @@ One click installers for macOS, Linux, and Windows at https://gpt4all.io
 ## Features
 
 * Cross-platform (Linux, Windows, MacOSX)
-* Fast CPU based inference using ggml for GPT-J based models
 * The UI is made to look and feel like you've come to expect from a chatty gpt
 * Check for updates so you can always stay fresh with latest models
 * Easy to install with precompiled binaries available for all three major desktop platforms
 * Multi-modal - Ability to load more than one model and switch between them
-* Supports both llama.cpp and gptj.cpp style models
-* Model downloader in GUI featuring many popular open source models
-* Settings dialog to change temp, top_p, top_k, threads, etc
-* Copy your conversation to clipboard
-* Check for updates to get the very latest GUI
-
-## Feature wishlist
-
 * Multi-chat - a list of current and past chats and the ability to save/delete/export and switch between
-* Text to speech - have the AI response with voice
-* Speech to text - give the prompt with your voice
-* Plugin support for langchain other developer tools
-* chat gui headless operation mode
-* Advanced settings for changing temperature, topk, etc. (DONE)
-* * Improve the accessibility of the installer for screen reader users
-* YOUR IDEA HERE
+* Supports models that are supported by llama.cpp
+* Model downloader in GUI featuring many popular open source models
+* Settings dialog to change temp, top_p, min_p, top_k, threads, etc
+* Copy your conversation to clipboard
+* RAG via LocalDocs feature
+* Check for updates to get the very latest GUI
 
 ## Building and running
@@ -44,14 +34,7 @@ One click installers for macOS, Linux, and Windows at https://gpt4all.io
 ## Getting the latest
 
-If you've already checked out the source code and/or built the program make sure when you do a git fetch to get the latest changes and that you also do ```git submodule update --init --recursive``` to update the submodules.
+If you've already checked out the source code and/or built the program make sure when you do a git fetch to get the latest changes and that you also do `git submodule update --init --recursive` to update the submodules. (If you ever run into trouble, deinitializing via `git submodule deinit -f .` and then initializing again via `git submodule update --init --recursive` fixes most issues)
-
-## Manual download of models
-
-* You can find a 'Model Explorer' on the official website where you can manually download models that we support: https://gpt4all.io/index.html
-
-## Terminal Only Interface with no Qt dependency
-
-Check out https://github.com/kuvaus/LlamaGPTJ-chat which is using the llmodel backend so it is compliant with our ecosystem and all models downloaded above should work with it.
-
 ## Contributing
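The "Getting the latest" paragraph above describes a git submodule workflow. A minimal sketch of that workflow, using a hypothetical scratch repository as a stand-in for an existing gpt4all checkout (the scratch setup is an assumption for illustration, not part of the project's instructions):

```shell
set -e
# Hypothetical scratch repo standing in for an existing gpt4all checkout.
repo="$(mktemp -d)"
git init -q "$repo"
cd "$repo"

# In a real checkout, fetch upstream changes first:
git fetch --all --quiet || true  # no remotes exist in the scratch repo

# Then bring submodules in sync with the checked-out revision:
git submodule update --init --recursive

# If submodules end up in a broken state, deinitialize and re-init them:
git submodule deinit -f . || true  # fails harmlessly when there are none
git submodule update --init --recursive
```

Running `git submodule update --init --recursive` after every fetch keeps nested submodules pinned to the commits the superproject expects.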
@@ -59,6 +42,4 @@ Check out https://github.com/kuvaus/LlamaGPTJ-chat which is using the llmodel ba
 
 ## License
 
-The source code of this chat interface is currently under a MIT license. The underlying GPT4All-j model is released under non-restrictive open-source Apache 2 License.
+The source code of this chat interface is currently under a MIT license.
 
-The GPT4All-J license allows for users to use generated outputs as they see fit. Users take responsibility for ensuring their content meets applicable requirements for publication in a given context or region.