diff --git a/README.md b/README.md
index 63a341eb..9e73eb6f 100644
--- a/README.md
+++ b/README.md
@@ -1,83 +1,64 @@
-Open-source large language models that run locally on your CPU and nearly any GPU
-
+Privacy-oriented software for chatting with large language models that run on your own computer.
-GPT4All Models • GPT4All Documentation • Discord
+Official Website • Documentation • Discord
-
-🦜️🔗 Official Langchain Backend
+Official Download Links: Windows — macOS — Ubuntu
-
-Sign up for updates and news
+NEW: Subscribe to our mailing list for updates and news!
-GPT4All is made possible by our compute partner Paperspace.
-
+
-Run on an M1 macOS Device (not sped up!)
+Run on an M2 MacBook Pro (not sped up!)
-## GPT4All: An ecosystem of open-source on-edge large language models.
-GPT4All is an ecosystem to run **powerful** and **customized** large language models that work locally on consumer grade CPUs and any GPU. Note that your CPU needs to support [AVX or AVX2 instructions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions).
+## About GPT4All
+
+GPT4All is an ecosystem to run **powerful** and **customized** large language models that work locally on consumer grade CPUs and NVIDIA and AMD GPUs. Note that your CPU needs to support [AVX instructions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions). Learn more in the [documentation](https://docs.gpt4all.io).
 
-A GPT4All model is a 3GB - 8GB file that you can download and plug into the GPT4All open-source ecosystem software. **Nomic AI** supports and maintains this software ecosystem to enforce quality and security alongside spearheading the effort to allow any person or enterprise to easily train and deploy their own on-edge large language models.
+A GPT4All model is a 3GB - 8GB file that you can download and plug into the GPT4All software. **Nomic AI** supports and maintains this software ecosystem to enforce quality and security alongside spearheading the effort to allow any person or enterprise to easily deploy their own on-edge large language models.
 
-### What's New ([Issue Tracker](https://github.com/orgs/nomic-ai/projects/2))
-- [Latest Release ](https://github.com/nomic-ai/gpt4all/releases)
+
+### What's New
+
 - **October 19th, 2023**: GGUF Support Launches with Support for:
     - Mistral 7b base model, an updated model gallery on [gpt4all.io](https://gpt4all.io), several new local code models including Rift Coder v1.5
     - [Nomic Vulkan](https://blog.nomic.ai/posts/gpt4all-gpu-inference-with-vulkan) support for Q4\_0 and Q4\_1 quantizations in GGUF.
     - Offline build support for running old versions of the GPT4All Local LLM Chat Client.
-- **September 18th, 2023**: [Nomic Vulkan](https://blog.nomic.ai/posts/gpt4all-gpu-inference-with-vulkan) launches supporting local LLM inference on AMD, Intel, Samsung, Qualcomm and NVIDIA GPUs.
-- **August 15th, 2023**: GPT4All API launches allowing inference of local LLMs from docker containers.
-- **July 2023**: Stable support for LocalDocs, a GPT4All Plugin that allows you to privately and locally chat with your data.
+- **September 18th, 2023**: [Nomic Vulkan](https://blog.nomic.ai/posts/gpt4all-gpu-inference-with-vulkan) launches supporting local LLM inference on NVIDIA and AMD GPUs.
+- **July 2023**: Stable support for LocalDocs, a feature that allows you to privately and locally chat with your data.
+- **June 28th, 2023**: Docker-based API server launches allowing inference of local LLMs from an OpenAI-compatible HTTP endpoint.
 
-### Chat Client
-Run any GPT4All model natively on your home desktop with the auto-updating desktop chat client. See GPT4All Website for a full list of open-source models you can run with this powerful desktop application.
+### Building From Source
 
-Direct Installer Links:
+* Follow the instructions [here](gpt4all-chat/build_and_run.md) to build the GPT4All Chat UI from source.
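The About section of this README notes that your CPU needs to support AVX instructions. One way to verify that on Linux is to look at the `flags` line of `/proc/cpuinfo`; the sketch below only shows the parsing step, and the `has_avx` helper is hypothetical, not part of GPT4All:

```python
def has_avx(cpuinfo_text: str) -> bool:
    """Return True if a 'flags' line (as found in /proc/cpuinfo) lists avx."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags") and ":" in line:
            # The flag names follow the colon, separated by spaces.
            if "avx" in line.split(":", 1)[1].split():
                return True
    return False

# On a real system you would pass open("/proc/cpuinfo").read() instead.
sample = "processor\t: 0\nflags\t\t: fpu vme de sse sse2 avx avx2"
print(has_avx(sample))  # prints True
```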
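The June 28th entry above describes a Docker-based API server exposing an OpenAI-compatible HTTP endpoint. A minimal sketch of such a request body, following the OpenAI completions schema; the route (`/v1/completions`), port 4891, model name, and helper function here are assumptions for illustration, not taken from this README:

```python
import json

def build_completion_request(prompt, model="mistral-7b", max_tokens=64):
    """Build the JSON body for a POST to e.g. http://localhost:4891/v1/completions."""
    return {
        "model": model,          # name of the locally loaded model (assumed)
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }

body = build_completion_request("Why is the sky blue?")
print(json.dumps(body, indent=2))
```

Because the endpoint mirrors the OpenAI schema, existing OpenAI client code can usually be pointed at the local server by changing only the base URL.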
-* [macOS](https://gpt4all.io/installers/gpt4all-installer-darwin.dmg)
-
-* [Windows](https://gpt4all.io/installers/gpt4all-installer-win64.exe)
-
-* [Ubuntu](https://gpt4all.io/installers/gpt4all-installer-linux.run)
-
-Find the most up-to-date information on the [GPT4All Website](https://gpt4all.io/)
-
-### Chat Client building and running
-
-* Follow the visual instructions on the chat client [build_and_run](gpt4all-chat/build_and_run.md) page
 
 ### Bindings
 
-* :snake: Official Python Bindings [![Downloads](https://static.pepy.tech/badge/gpt4all/week)](https://pepy.tech/project/gpt4all)
-* :computer: Official Typescript Bindings
-* :computer: Official GoLang Bindings
-* :computer: Official C# Bindings
-* :computer: Official Java Bindings
+* :snake: Official Python Bindings [![Downloads](https://static.pepy.tech/badge/gpt4all/week)](https://pepy.tech/project/gpt4all)
+* :computer: Typescript Bindings
+
 ### Integrations
 
-* 🗃️ [Weaviate Vector Database](https://github.com/weaviate/weaviate) - [module docs](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules/text2vec-gpt4all)
+* :parrot::link: [Langchain](https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html)
+* :card_file_box: [Weaviate Vector Database](https://github.com/weaviate/weaviate) - [module docs](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules/text2vec-gpt4all)
+
 ## Contributing
 GPT4All welcomes contributions, involvement, and discussion from the open source community!
@@ -139,6 +120,7 @@ Each item should have an issue link below.
 - [ ] First class documentation
 - [ ] Improving developer use and quality of server mode (e.g. support larger batches)
+
 ## Technical Reports
@@ -153,6 +135,7 @@ Each item should have an issue link below.
 :green_book: Technical Report 1: GPT4All
+
 ## Citation
 If you utilize this repository, models or data in a downstream project, please consider citing it with: