
GPT4All

Open-source assistant-style large language models that run locally on your CPU

New: Now with Nomic Vulkan Universal GPU support. Learn more.

GPT4All Website

GPT4All Documentation

Discord

🦜🔗 Official Langchain Backend

GPT4All is made possible by our compute partner Paperspace.

Run on an M1 macOS Device (not sped up!)

GPT4All: An ecosystem of open-source on-edge large language models.

GPT4All is an ecosystem to train and deploy powerful and customized large language models that run locally on consumer-grade CPUs. Note that your CPU needs to support AVX or AVX2 instructions.
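
If you are not sure whether your machine qualifies, one quick check on Linux is to look for the avx or avx2 flags in /proc/cpuinfo. A minimal sketch, assuming a Linux system (this helper is illustrative and not part of GPT4All):

    import platform

    def cpu_supports_avx() -> bool:
        # Parse the CPU flag list from /proc/cpuinfo (Linux only).
        if platform.system() != "Linux":
            raise NotImplementedError("this sketch only handles Linux")
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    flags = line.split(":", 1)[1].split()
                    return "avx" in flags or "avx2" in flags
        return False

    print("AVX/AVX2 supported:", cpu_supports_avx())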

Learn more in the documentation.

The goal is simple: be the best instruction-tuned, assistant-style language model that any person or enterprise can freely use, distribute, and build on.

A GPT4All model is a 3 GB to 8 GB file that you can download and plug into the GPT4All open-source ecosystem software. Nomic AI supports and maintains this software ecosystem to ensure quality and security, and spearheads the effort to let any person or enterprise easily train and deploy their own on-edge large language models.
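
As a minimal sketch of what "download and plug in" looks like from code, here is the Python bindings' basic flow (assuming the gpt4all package is installed from pip; the model filename is an illustrative example and is fetched automatically on first use):

    from gpt4all import GPT4All

    # Illustrative model name; see the GPT4All Website for the current model list.
    model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # downloads the model file on first run
    print(model.generate("The capital of France is", max_tokens=32))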

Chat Client

Run any GPT4All model natively on your home desktop with the auto-updating desktop chat client. See the GPT4All Website for a full list of open-source models you can run with this powerful desktop application.

Direct Installer Links:

Find the most up-to-date information on the GPT4All Website.

Building and Running the Chat Client

  • Follow the visual instructions on the chat client build_and_run page

Bindings
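
The bindings let applications drive the same local models from code. Below is a sketch of a multi-turn exchange through the Python bindings' chat-session API (the model name is an assumption; check the bindings documentation for the exact interface in your version):

    from gpt4all import GPT4All

    model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # illustrative model name
    # chat_session() keeps conversation state so follow-up prompts have context.
    with model.chat_session():
        print(model.generate("What is an on-edge language model?", max_tokens=128))
        print(model.generate("Give one advantage of running one locally.", max_tokens=128))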

Integrations
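
As one example, the Langchain integration wraps a locally downloaded model file as a standard LLM object. A minimal sketch, assuming langchain is installed and a GGUF model already exists at the path shown (the path is illustrative, and the import location may differ across Langchain versions):

    from langchain.llms import GPT4All

    llm = GPT4All(model="./models/orca-mini-3b-gguf2-q4_0.gguf")  # illustrative local path
    print(llm("Once upon a time, "))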

Contributing

GPT4All welcomes contributions, involvement, and discussion from the open-source community! Please see CONTRIBUTING.md and follow the issue, bug-report, and PR markdown templates.

Check in on the project Discord, with the project owners, or through existing issues and PRs to avoid duplicating work. Please make sure to tag all of the above with relevant project identifiers, or your contribution could get lost. Example tags: backend, bindings, python-bindings, documentation, etc.

Technical Reports

📗 Technical Report 3: GPT4All Snoozy and Groovy

📗 Technical Report 2: GPT4All-J

📗 Technical Report 1: GPT4All

Star History

Star History Chart

Citation

If you use this repository, its models, or its data in a downstream project, please consider citing it with:

@misc{gpt4all,
  author = {Yuvanesh Anand and Zach Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar},
  title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/nomic-ai/gpt4all}},
}