gpt4all/gpt4all-chat
Latest commit: 855fd22417 by Jared Van Bortel, 2024-05-02 09:30:36 -04:00
localdocs: load model before checking which model is loaded (#2284)
Fixes "WARNING: Request to generate sync embeddings for non-local model invalid"; also fixes an inverted assertion.
cmake fix references to old backend implementations 2023-10-30 10:37:05 -04:00
flatpak-manifest appdata: update software description 2023-10-05 10:12:43 -04:00
hnswlib LocalDocs version 2 with text embeddings. 2023-11-17 11:59:31 -05:00
icons chat: temporarily revert some UI changes before next release (#2234) 2024-04-18 14:52:29 -04:00
metadata chat: add release notes for v2.7.4 and bump version (#2269) 2024-04-26 12:55:54 -04:00
qml improve mixpanel usage statistics (#2238) 2024-04-25 13:16:52 -04:00
build_and_run.md add fedora command for QT and related packages (#1871) 2024-01-24 18:00:49 -05:00
chat.cpp improve mixpanel usage statistics (#2238) 2024-04-25 13:16:52 -04:00
chat.h improve mixpanel usage statistics (#2238) 2024-04-25 13:16:52 -04:00
chatapi.cpp feat: Add support for Mistral API models (#2053) 2024-03-13 18:23:57 -04:00
chatapi.h feat: Add support for Mistral API models (#2053) 2024-03-13 18:23:57 -04:00
chatlistmodel.cpp Some cleanps 2024-01-03 08:41:40 -06:00
chatlistmodel.h Fix for issue #2080 where the GUI appears to hang when a chat with a large 2024-03-06 16:52:17 -06:00
chatllm.cpp mixpanel: improved GPU device statistics (plus GPU sort order fix) (#2297) 2024-05-01 16:15:48 -04:00
chatllm.h improve mixpanel usage statistics (#2238) 2024-04-25 13:16:52 -04:00
chatmodel.h Restore state from text if necessary. 2023-10-11 09:16:02 -04:00
CMakeLists.txt chat: add release notes for v2.7.4 and bump version (#2269) 2024-04-26 12:55:54 -04:00
database.cpp mixpanel: fix doc_collections_total of localdocs_startup (#2270) 2024-04-26 14:05:47 -04:00
database.h improve mixpanel usage statistics (#2238) 2024-04-25 13:16:52 -04:00
download.cpp improve mixpanel usage statistics (#2238) 2024-04-25 13:16:52 -04:00
download.h improve mixpanel usage statistics (#2238) 2024-04-25 13:16:52 -04:00
embeddings.cpp Add Nomic Embed model for atlas with localdocs. 2024-01-31 22:22:08 -05:00
embeddings.h LocalDocs version 2 with text embeddings. 2023-11-17 11:59:31 -05:00
embllm.cpp localdocs: load model before checking which model is loaded (#2284) 2024-05-02 09:30:36 -04:00
embllm.h implement local Nomic Embed via llama.cpp (#2086) 2024-03-13 18:09:24 -04:00
LICENSE Moving everything to subdir for monorepo merge. 2023-05-10 10:26:55 -04:00
llm.cpp improve mixpanel usage statistics (#2238) 2024-04-25 13:16:52 -04:00
llm.h Don't show the download button if we are not connected to an online network. 2024-02-07 09:40:49 -06:00
localdocs.cpp improve mixpanel usage statistics (#2238) 2024-04-25 13:16:52 -04:00
localdocs.h improve mixpanel usage statistics (#2238) 2024-04-25 13:16:52 -04:00
localdocsmodel.cpp improve mixpanel usage statistics (#2238) 2024-04-25 13:16:52 -04:00
localdocsmodel.h improve mixpanel usage statistics (#2238) 2024-04-25 13:16:52 -04:00
logger.cpp Logger should also output to stderr 2023-06-01 14:15:11 -04:00
logger.h Implemented logging mechanism (#785) 2023-06-01 16:50:42 +02:00
main.cpp chat: join ChatLLM threads without calling destructors (#2043) 2024-03-06 16:42:59 -05:00
main.qml chat: temporarily revert some UI changes before next release (#2234) 2024-04-18 14:52:29 -04:00
modellist.cpp localdocs: small but important fixes to local docs (#2236) 2024-04-18 14:51:13 -04:00
modellist.h implement local Nomic Embed via llama.cpp (#2086) 2024-03-13 18:09:24 -04:00
mysettings.cpp improve mixpanel usage statistics (#2238) 2024-04-25 13:16:52 -04:00
mysettings.h improve mixpanel usage statistics (#2238) 2024-04-25 13:16:52 -04:00
network.cpp mixpanel: fix opt-out events after #2238 (#2296) 2024-05-01 12:08:40 -04:00
network.h mixpanel: fix opt-out events after #2238 (#2296) 2024-05-01 12:08:40 -04:00
README.md Updated chat wishlist (#1351) 2023-10-12 14:01:44 -04:00
responsetext.cpp responsetext : fix markdown code block trimming (#2232) 2024-04-18 14:50:32 -04:00
responsetext.h Fix bugs with the context link text for localdocs to make the context links 2024-04-15 14:30:26 -05:00
server.cpp feat: added api server port setting 2024-03-09 09:26:40 -06:00
server.h Start working on more thread safety and model load error handling. 2023-06-20 14:39:22 -03:00

gpt4all-chat

Cross-platform, Qt-based GUI for GPT4All versions that use GPT-J as the base model. NOTE: The model seen in the screenshot is actually a preview of a new training run for GPT4All based on GPT-J. The GPT4All project is hard at work getting ready to release this model, including installers for all three major OSes. In the meantime, you can try this UI out with the original GPT-J model by following the build instructions below.

[Screenshot of the GPT4All Chat UI]

Install

One-click installers for macOS, Linux, and Windows are available at https://gpt4all.io

Features

  • Cross-platform (Linux, Windows, macOS)
  • Fast CPU-based inference using ggml for GPT-J-based models
  • A UI designed to look and feel like the chat experience you've come to expect from ChatGPT-style assistants
  • Check for updates so you can always stay current with the latest models
  • Easy to install with precompiled binaries available for all three major desktop platforms
  • Multi-model - ability to load more than one model and switch between them
  • Supports both llama.cpp and gptj.cpp style models
  • Model downloader in GUI featuring many popular open source models
  • Settings dialog to change temperature, top_p, top_k, thread count, etc.
  • Copy your conversation to clipboard
  • Check for updates to get the very latest GUI

Feature wishlist

  • Multi-chat - a list of current and past chats and the ability to save/delete/export and switch between them
  • Text to speech - have the AI respond with voice
  • Speech to text - give the prompt with your voice
  • Plugin support for LangChain and other developer tools
  • Headless operation mode for the chat GUI
  • Advanced settings for changing temperature, top_k, etc. (DONE)
  • Improve the accessibility of the installer for screen reader users
  • YOUR IDEA HERE

Building and running

Getting the latest

If you have already checked out the source code and/or built the program, make sure that when you do a git fetch to get the latest changes, you also do git submodule update --init --recursive to update the submodules.
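
As a concrete sketch (run from the root of an existing checkout, assuming your remote is already configured), that sequence looks like:

    git fetch
    git submodule update --init --recursive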

Manual download of models

Terminal-only interface with no Qt dependency

Check out https://github.com/kuvaus/LlamaGPTJ-chat, which uses the llmodel backend and is therefore compatible with our ecosystem; all of the models downloaded above should work with it.

Contributing

  • Pull requests welcome. See the feature wish list for ideas :)

License

The source code of this chat interface is currently under an MIT license. The underlying GPT4All-J model is released under the non-restrictive, open-source Apache 2.0 license.

The GPT4All-J license allows users to use generated outputs as they see fit. Users take responsibility for ensuring their content meets applicable requirements for publication in a given context or region.