# gpt4all-chat
Cross-platform Qt-based GUI for GPT4All versions with GPT-J as the base
model. NOTE: The model seen in the screenshot is actually a preview of a
new training run for GPT4All based on GPT-J. The GPT4All project is busy
at work getting ready to release this model, including installers for all
three major OSes. In the meantime, you can try this UI out with the original
GPT-J model by following the build instructions below.
![image](https://user-images.githubusercontent.com/50458173/231464085-da9edff6-a593-410e-8f38-7513f75c8aab.png)
## Install
One-click installers for macOS, Linux, and Windows at https://gpt4all.io
## Features
* Cross-platform (Linux, Windows, macOS)
* Fast CPU-based inference using ggml for GPT-J-based models
* The UI is designed to look and feel like what you've come to expect from a chat-style GPT
* Check for updates so you can always stay current with the latest models
* Easy to install with precompiled binaries available for all three major desktop platforms
* Multi-model - Ability to load more than one model and switch between them
* Supports both llama.cpp and gptj.cpp style models
* Model downloader in GUI featuring many popular open source models
* Settings dialog to change temp, top_p, top_k, threads, etc
* Copy your conversation to clipboard
* Check for updates to get the very latest GUI
## Feature wishlist
* Multi-chat - a list of current and past chats and the ability to save/delete/export and switch between them
* Text to speech - have the AI respond with voice
* Speech to text - give the prompt with your voice
* Python bindings
* Typescript bindings
* Plugin support for langchain and other developer tools
* Save your prompt/responses to disk
* Upload prompt/response manually/automatically to nomic.ai to aid future training runs
* Syntax highlighting support for programming languages, etc.
* REST API with a built-in web server in the chat GUI itself, with a headless operation mode as well
* Advanced settings for changing temperature, top_k, etc. (DONE)
* YOUR IDEA HERE
## Building and running
* Follow the visual instructions on the [build_and_run](build_and_run.md) page
## Getting the latest
If you've already checked out the source code and/or built the program, make sure that when you do a `git fetch` to get the latest changes, you also run `git submodule update --init --recursive` to update the submodules.
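
For example, updating an existing checkout might look like this (a minimal sketch, assuming `origin` points at your GPT4All clone and `main` is its default branch):

```shell
# Fetch and merge the latest changes into your local branch
git fetch origin
git pull origin main   # 'main' assumed as the default branch
# Bring the git submodules in line with the checked-out revision
git submodule update --init --recursive
```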
## Manual download of models
* You can find a 'Model Explorer' on the official website where you can manually download models that we support: https://gpt4all.io/index.html
## Terminal Only Interface with no Qt dependency
Check out https://github.com/kuvaus/LlamaGPTJ-chat, which uses the llmodel backend, so it is compatible with our ecosystem and all models downloaded above should work with it.
## Contributing
* Pull requests welcome. See the feature wish list for ideas :)
## License
The source code of this chat interface is currently under an MIT license. The underlying GPT4All-J model is released under the non-restrictive, open-source Apache 2 license.

The GPT4All-J license allows users to use generated outputs as they see fit. Users take responsibility for ensuring their content meets applicable requirements for publication in a given context or region.