gpt4all/gpt4all-chat
Jared Van Bortel 88d85be0f9
chat: fix build on Windows and Nomic Embed path on macOS (#2467)
* chat: remove unused oscompat source files

These files are no longer needed now that the hnswlib index is gone.
This fixes an issue with the Windows build as there was a compilation
error in oscompat.cpp.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* llm: fix pragma to be recognized by MSVC

Replaces this MSVC warning:
C:\msys64\home\Jared\gpt4all\gpt4all-chat\llm.cpp(53,21): warning C4081: expected '('; found 'string'

With this:
C:\msys64\home\Jared\gpt4all\gpt4all-chat\llm.cpp : warning : offline installer build will not check for updates!
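
As a rough illustration (a hedged sketch, not the actual llm.cpp diff): MSVC only accepts the parenthesized form of the message pragma, while the bare-string form commonly used with GCC/Clang is what triggers C4081.

```cpp
// Hypothetical example; the real message text and location in llm.cpp may differ.

// Rejected by MSVC with C4081 ("expected '('; found 'string'"):
//     #pragma message "offline installer build will not check for updates!"

// Accepted by MSVC, GCC, and Clang alike:
#pragma message("offline installer build will not check for updates!")

int main() { return 0; }
```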

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* usearch: fork usearch to fix `CreateFile` build error

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* dlhandle: fix incorrect assertion on Windows

SetErrorMode returns the previous value of the error mode flags, not an
indicator of success.
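
A minimal sketch of the pitfall, using a hypothetical helper rather than the actual dlhandle code:

```cpp
#include <windows.h>

// SetErrorMode() returns the *previous* error-mode flags, which may legitimately
// be 0, so its return value says nothing about success or failure.
static void suppressErrorDialogs()
{
    // Wrong: can fire spuriously when the previous mode happens to be 0.
    //     assert(SetErrorMode(SEM_FAILCRITICALERRORS) != 0);

    // Right: keep the previous mode so it can be restored later if needed.
    UINT oldMode = SetErrorMode(SEM_FAILCRITICALERRORS);
    (void)oldMode; // restore later with SetErrorMode(oldMode) if desired
}
```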

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* llamamodel: fix UB in LLamaModel::embedInternal

It is undefined behavior to increment an STL iterator past the end of
the container. Use offsets to do the math instead.
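
A minimal sketch of the offset-based pattern, with a hypothetical chunking helper standing in for the actual embedInternal logic:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Process `tokens` in chunks of at most `chunkSize` elements.
// Writing `it + chunkSize` on the last partial chunk would move an iterator past
// end(), which is UB; clamping offsets with std::min keeps every bound in range.
void processInChunks(const std::vector<int> &tokens, std::size_t chunkSize)
{
    for (std::size_t begin = 0; begin < tokens.size(); begin += chunkSize) {
        std::size_t end = std::min(begin + chunkSize, tokens.size());
        std::vector<int> chunk(tokens.begin() + begin, tokens.begin() + end);
        (void)chunk; // operate on tokens[begin, end) here
    }
}
```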

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* cmake: install embedding model to bundle's Resources dir on macOS

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* ci: fix macOS build by explicitly installing Rosetta

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

---------

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-06-25 17:22:51 -04:00
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| cmake | installer script: fix detection of macOS on newer QtIFW (#2361) | 2024-05-17 12:28:46 -04:00 |
| flatpak-manifest | Update appdata.xml (#2307) | 2024-05-09 12:51:38 -04:00 |
| icons | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| metadata | Add a latest news markdown file for future version. | 2024-06-24 13:57:12 -04:00 |
| qml | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| resources | chat: fix window icon on Windows (#2321) | 2024-05-09 13:42:46 -04:00 |
| usearch@22cfa3bd00 | chat: fix build on Windows and Nomic Embed path on macOS (#2467) | 2024-06-25 17:22:51 -04:00 |
| build_and_run.md | support the llama.cpp CUDA backend (#2310) | 2024-05-15 15:27:50 -04:00 |
| chat.cpp | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| chat.h | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| chatapi.cpp | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| chatapi.h | chat: don't use incomplete types with signals/slots/Q_INVOKABLE (#2408) | 2024-06-06 11:59:28 -04:00 |
| chatlistmodel.cpp | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| chatlistmodel.h | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| chatllm.cpp | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| chatllm.h | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| chatmodel.h | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| CMakeLists.txt | chat: fix build on Windows and Nomic Embed path on macOS (#2467) | 2024-06-25 17:22:51 -04:00 |
| database.cpp | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| database.h | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| download.cpp | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| download.h | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| embllm.cpp | chat: fix build on Windows and Nomic Embed path on macOS (#2467) | 2024-06-25 17:22:51 -04:00 |
| embllm.h | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| LICENSE | Moving everything to subdir for monorepo merge. | 2023-05-10 10:26:55 -04:00 |
| llm.cpp | chat: fix build on Windows and Nomic Embed path on macOS (#2467) | 2024-06-25 17:22:51 -04:00 |
| llm.h | chat: fix #includes with include-what-you-use (#2401) | 2024-06-04 14:47:11 -04:00 |
| localdocs.cpp | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| localdocs.h | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| localdocsmodel.cpp | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| localdocsmodel.h | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| logger.cpp | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| logger.h | chat: fix #includes with include-what-you-use (#2401) | 2024-06-04 14:47:11 -04:00 |
| main.cpp | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| main.qml | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| modellist.cpp | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| modellist.h | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| mysettings.cpp | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| mysettings.h | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| network.cpp | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| network.h | chat: don't use incomplete types with signals/slots/Q_INVOKABLE (#2408) | 2024-06-06 11:59:28 -04:00 |
| qa_checklist.md | Update qa_checklist.md | 2024-06-25 13:50:19 -04:00 |
| README.md | Updated chat wishlist (#1351) | 2023-10-12 14:01:44 -04:00 |
| responsetext.cpp | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| responsetext.h | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| server.cpp | chat: major UI redesign for v3.0.0 (#2396) | 2024-06-24 18:49:23 -04:00 |
| server.h | chat: don't use incomplete types with signals/slots/Q_INVOKABLE (#2408) | 2024-06-06 11:59:28 -04:00 |

gpt4all-chat

Cross-platform, Qt-based GUI for GPT4All versions that use GPT-J as the base model. NOTE: The model seen in the screenshot is actually a preview of a new training run for GPT4All based on GPT-J. The GPT4All project is busy at work getting ready to release this model, including installers for all three major operating systems. In the meantime, you can try this UI out with the original GPT-J model by following the build instructions below.

[Screenshot of the GPT4All chat UI]

Install

One-click installers for macOS, Linux, and Windows at https://gpt4all.io

Features

  • Cross-platform (Linux, Windows, macOS)
  • Fast CPU-based inference using ggml for GPT-J-based models
  • A UI with the look and feel you've come to expect from a chat-style GPT
  • Check for updates so you always stay current with the latest models
  • Easy to install, with precompiled binaries available for all three major desktop platforms
  • Multi-model - ability to load more than one model and switch between them
  • Supports both llama.cpp- and gptj.cpp-style models
  • Model downloader in the GUI featuring many popular open-source models
  • Settings dialog to change temperature, top_p, top_k, thread count, etc.
  • Copy your conversation to the clipboard
  • Check for updates to get the very latest GUI

Feature wishlist

  • Multi-chat - a list of current and past chats and the ability to save/delete/export and switch between them
  • Text to speech - have the AI respond with voice
  • Speech to text - give the prompt with your voice
  • Plugin support for LangChain and other developer tools
  • Headless operation mode for the chat GUI
  • Advanced settings for changing temperature, top_k, etc. (DONE)
    • Improve the accessibility of the installer for screen reader users
  • YOUR IDEA HERE

Building and running

Getting the latest

If you've already checked out the source code and/or built the program, make sure that when you do a git fetch to get the latest changes, you also run git submodule update --init --recursive to update the submodules.
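
For reference, the update steps described above are:

```sh
git fetch                                 # get the latest changes
git submodule update --init --recursive   # bring the submodules up to date
```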

Manual download of models

Terminal Only Interface with no Qt dependency

Check out https://github.com/kuvaus/LlamaGPTJ-chat, which uses the llmodel backend and is therefore compatible with our ecosystem; all of the models downloadable above should work with it.

Contributing

  • Pull requests welcome. See the feature wish list for ideas :)

License

The source code of this chat interface is currently under an MIT license. The underlying GPT4All-J model is released under the non-restrictive, open-source Apache 2.0 license.

The GPT4All-J license allows users to use generated outputs as they see fit. Users take responsibility for ensuring their content meets applicable requirements for publication in a given context or region.