<h1 align="center">GPT4All</h1>
<p align="center">Open-source large language models that run locally on your CPU and nearly any GPU</p>
<p align="center">
<a href="https://gpt4all.io">GPT4All Website and Models</a> • <a href="https://docs.gpt4all.io">GPT4All Documentation</a> • <a href="https://discord.gg/mGZE39AS3e">Discord</a>
</p>
<p align="center">
<a href="https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html">🦜️🔗 Official Langchain Backend</a>
</p>
<p align="center">
GPT4All is made possible by our compute partner <a href="https://www.paperspace.com/">Paperspace</a>.
</p>
<p align="center">
<a href="https://www.phorm.ai/query?projectId=755eecd3-24ad-49cc-abf4-0ab84caacf63"><img src="https://img.shields.io/badge/Phorm-Ask_AI-%23F2777A.svg?&logo=data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNSIgaGVpZ2h0PSI0IiBmaWxsPSJub25lIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPgogIDxwYXRoIGQ9Ik00LjQzIDEuODgyYTEuNDQgMS40NCAwIDAgMS0uMDk4LjQyNmMtLjA1LjEyMy0uMTE1LjIzLS4xOTIuMzIyLS4wNzUuMDktLjE2LjE2NS0uMjU1LjIyNmExLjM1MyAxLjM1MyAwIDAgMS0uNTk1LjIxMmMtLjA5OS4wMTItLjE5Mi4wMTQtLjI3OS4wMDZsLTEuNTkzLS4xNHYtLjQwNmgxLjY1OGMuMDkuMDAxLjE3LS4xNjkuMjQ2LS4xOTFhLjYwMy42MDMgMCAwIDAgLjItLjEwNi41MjkuNTI5IDAgMCAwIC4xMzgtLjE3LjY1NC42NTQgMCAwIDAgLjA2NS0uMjRsLjAyOC0uMzJhLjkzLjkzIDAgMCAwLS4wMzYtLjI0OS41NjcuNTY3IDAgMCAwLS4xMDMtLjIuNTAyLjUwMiAwIDAgMC0uMTY4LS4xMzguNjA4LjYwOCAwIDAgMC0uMjQtLjA2N0wyLjQzNy43MjkgMS42MjUuNjcxYS4zMjIuMzIyIDAgMCAwLS4yMzIuMDU4LjM3NS4zNzUgMCAwIDAtLjExNi4yMzJsLS4xMTYgMS40NS0uMDU4LjY5Ny0uMDU4Ljc1NEwuNzA1IDRsLS4zNTctLjA3OUwuNjAyLjkwNkMuNjE3LjcyNi42NjMuNTc0LjczOS40NTRhLjk1OC45NTggMCAwIDEgLjI3NC0uMjg1Ljk3MS45NzEgMCAwIDEgLjMzNy0uMTRjLjExOS0uMDI2LjIyNy0uMDM0LjMyNS0uMDI2TDMuMjMyLjE2Yy4xNTkuMDE0LjMzNi4wMy40NTkuMDgyYTEuMTczIDEuMTczIDAgMCAxIC41NDUuNDQ3Yy4wNi4wOTQuMTA5LjE5Mi4xNDQuMjkzYTEuMzkyIDEuMzkyIDAgMCAxIC4wNzguNThsLS4wMjkuMzJaIiBmaWxsPSIjRjI3NzdBIi8+CiAgPHBhdGggZD0iTTQuMDgyIDIuMDA3YTEuNDU1IDEuNDU1IDAgMCAxLS4wOTguNDI3Yy0uMDUuMTI0LS4xMTQuMjMyLS4xOTIuMzI0YTEuMTMgMS4xMyAwIDAgMS0uMjU0LjIyNyAxLjM1MyAxLjM1MyAwIDAgMS0uNTk1LjIxNGMtLjEuMDEyLS4xOTMuMDE0LS4yOC4wMDZsLTEuNTYtLjEwOC4wMzQtLjQwNi4wMy0uMzQ4IDEuNTU5LjE1NGMuMDkgMCAuMTczLS4wMS4yNDgtLjAzM2EuNjAzLjYwMyAwIDAgMCAuMi0uMTA2LjUzMi41MzIgMCAwIDAgLjEzOS0uMTcyLjY2LjY2IDAgMCAwIC4wNjQtLjI0MWwuMDI5LS4zMjFhLjk0Ljk0IDAgMCAwLS4wMzYtLjI1LjU3LjU3IDAgMCAwLS4xMDMtLjIwMi41MDIuNTAyIDAgMCAwLS4xNjgtLjEzOC42MDUuNjA1IDAgMCAwLS4yNC0uMDY3TDEuMjczLjgyN2MtLjA5NC0uMDA4LS4xNjguMDEtLjIyMS4wNTUtLjA1My4wNDUtLjA4NC4xMTQtLjA5Mi4yMDZMLjcwNSA0IDAgMy45MzhsLjI1NS0yLjkxMUExLjAxIDEuMDEgMCAwIDEgLjM5My41NzIuOTYyLjk2MiAwIDAgMSAuNjY2LjI4NmEuOTcuOTcgMCAwIDEgLjMzOC0uMTRDMS4xMjIuMTIgMS4yMy4xMSAxLjMyOC4xMTlsMS41OTMuMTRjLjE2LjAxNC4zLjA0Ny40MjMuMWExLjE3IDEuMTcgMCAwIDEgLjU0NS40NDhjLjA2MS4wOTUuMTA5LjE5My4xNDQuMjk1YTEuNDA2IDEuNDA2IDAgMCAxIC4wNzcuNTgzbC0uMDI4LjMyMloiIGZpbGw9IndoaXRlIi8+CiAgPHBhdGggZD0iTTQuMDgyIDIuMDA3YTEuNDU1IDEuNDU1IDAgMCAxLS4wOTguNDI3Yy0uMDUuMTI0LS4xMTQuMjMyLS4xOTIuMzI0YTEuMTMgMS4xMyAwIDAgMS0uMjU0LjIyNyAxLjM1MyAxLjM1MyAwIDAgMS0uNTk1LjIxNGMtLjEuMDEyLS4xOTMuMDE0LS4yOC4wMDZsLTEuNTYtLjEwOC4wMzQtLjQwNi4wMy0uMzQ4IDEuNTU5LjE1NGMuMDkgMCAuMTczLS4wMS4yNDgtLjAzM2EuNjAzLjYwMyAwIDAgMCAuMi0uMTA2LjUzMi41MzIgMCAwIDAgLjEzOS0uMTcyLjY2LjY2IDAgMCAwIC4wNjQtLjI0MWwuMDI5LS4zMjFhLjk0Ljk0IDAgMCAwLS4wMzYtLjI1LjU3LjU3IDAgMCAwLS4xMDMtLjIwMi41MDIuNTAyIDAgMCAwLS4xNjgtLjEzOC42MDUuNjA1IDAgMCAwLS4yNC0uMDY3TDEuMjczLjgyN2MtLjA5NC0uMDA4LS4xNjguMDEtLjIyMS4wNTUtLjA1My4wNDUtLjA4NC4xMTQtLjA5Mi4yMDZMLjcwNSA0IDAgMy45MzhsLjI1NS0yLjkxMUExLjAxIDEuMDEgMCAwIDEgLjM5My41NzIuOTYyLjk2MiAwIDAgMSAuNjY2LjI4NmEuOTcuOTcgMCAwIDEgLjMzOC0uMTRDMS4xMjIuMTIgMS4yMy4xMSAxLjMyOC4xMTlsMS41OTMuMTRjLjE2LjAxNC4zLjA0Ny40MjMuMWExLjE3IDEuMTcgMCAwIDEgLjU0NS40NDhjLjA2MS4wOTUuMTA5LjE5My4xNDQuMjk1YTEuNDA2IDEuNDA2IDAgMCAxIC4wNzcuNTgzbC0uMDI4LjMyMloiIGZpbGw9IndoaXRlIi8+Cjwvc3ZnPgo=" alt="phorm.ai"></a>
</p>
<p align="center">
<img width="600" height="365" src="https://user-images.githubusercontent.com/13879686/231876409-e3de1934-93bb-4b4b-9013-b491a969ebbc.gif">
</p>
<p align="center">
Run on an M1 macOS Device (not sped up!)
</p>
## GPT4All: An ecosystem of open-source on-edge large language models.
GPT4All is an ecosystem for running **powerful** and **customized** large language models locally on consumer-grade CPUs and nearly any GPU. Note that your CPU needs to support [AVX or AVX2 instructions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions).
Learn more in the [documentation](https://docs.gpt4all.io).
A GPT4All model is a 3 GB to 8 GB file that you can download and plug into the GPT4All open-source ecosystem software. **Nomic AI** supports and maintains this software ecosystem to ensure quality and security, and spearheads the effort to allow any person or enterprise to easily train and deploy their own on-edge large language models.
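For example, with the official Python bindings installed (`pip install gpt4all`), downloading and running a model locally can look roughly like the sketch below. The model filename is only an illustration; substitute any GGUF model listed in the model gallery.

```python
# Minimal sketch using the official Python bindings (pip install gpt4all).
# The model filename below is illustrative; substitute any GGUF model from the gallery.
from gpt4all import GPT4All

# Downloads the model file on first use, then loads it for local inference.
model = GPT4All("mistral-7b-openorca.gguf2.Q4_0.gguf")

with model.chat_session():
    reply = model.generate("Name three uses for a locally running LLM.", max_tokens=256)
    print(reply)
```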
### What's New ([Issue Tracker](https://github.com/orgs/nomic-ai/projects/2))
- [Latest Release](https://github.com/nomic-ai/gpt4all/releases)
- **October 19th, 2023**: GGUF support launches, including:
  - The Mistral 7B base model, an updated model gallery on [gpt4all.io](https://gpt4all.io), and several new local code models, including Rift Coder v1.5
  - [Nomic Vulkan](https://blog.nomic.ai/posts/gpt4all-gpu-inference-with-vulkan) support for Q4\_0 and Q4\_1 quantizations in GGUF.
  - Offline build support for running old versions of the GPT4All Local LLM Chat Client.
- **September 18th, 2023**: [Nomic Vulkan](https://blog.nomic.ai/posts/gpt4all-gpu-inference-with-vulkan) launches, supporting local LLM inference on AMD, Intel, Samsung, Qualcomm, and NVIDIA GPUs (see the device-selection sketch after this list).
- **August 15th, 2023**: GPT4All API launches, allowing inference of local LLMs from Docker containers.
- **July 2023**: Stable support for LocalDocs, a GPT4All plugin that allows you to chat with your data privately and locally.
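As a rough illustration of the GPU inference mentioned above (assuming a recent version of the Python bindings whose constructor accepts a `device` argument and a Vulkan-capable GPU; the model filename is again only an example), device selection can be sketched as follows:

```python
# Sketch: request GPU inference via the Python bindings, falling back to CPU.
# Assumes a gpt4all version whose constructor accepts a `device` argument and
# a Vulkan-capable GPU; the model filename is an example, not a requirement.
from gpt4all import GPT4All

MODEL = "mistral-7b-openorca.gguf2.Q4_0.gguf"

try:
    model = GPT4All(MODEL, device="gpu")   # use any available Vulkan-capable GPU
except Exception:
    model = GPT4All(MODEL, device="cpu")   # fall back to CPU inference

print(model.generate("Explain what Vulkan is in one sentence.", max_tokens=128))
```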
### Chat Client
Run any GPT4All model natively on your home desktop with the auto-updating desktop chat client. See the <a href="https://gpt4all.io">GPT4All Website</a> for a full list of open-source models you can run with this powerful desktop application.
Direct Installer Links:
* [macOS](https://gpt4all.io/installers/gpt4all-installer-darwin.dmg)
* [Windows](https://gpt4all.io/installers/gpt4all-installer-win64.exe)
* [Ubuntu](https://gpt4all.io/installers/gpt4all-installer-linux.run)
Find the most up-to-date information on the [GPT4All Website](https://gpt4all.io/).
### Chat Client building and running
* Follow the visual instructions on the chat client [build_and_run](gpt4all-chat/build_and_run.md) page
### Bindings
* <a href="https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python/README.md">:snake: Official Python Bindings</a> [![Downloads](https://static.pepy.tech/badge/gpt4all/week)](https://pepy.tech/project/gpt4all)
* <a href="https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/typescript">:computer: Official Typescript Bindings</a>
* <a href="https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/golang">:computer: Official GoLang Bindings</a>
* <a href="https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/csharp">:computer: Official C# Bindings</a>
* <a href="https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/java">:computer: Official Java Bindings</a>
### Integrations
* 🗃️ [Weaviate Vector Database](https://github.com/weaviate/weaviate) - [module docs](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules/text2vec-gpt4all)
## Contributing
GPT4All welcomes contributions, involvement, and discussion from the open source community!
Please see CONTRIBUTING.md and follow the issues, bug reports, and PR markdown templates.
Check in on the project Discord, with the project owners, or through existing issues/PRs to avoid duplicate work.
Please make sure to tag all of the above with relevant project identifiers, or your contribution could get lost.
Example tags: `backend`, `bindings`, `python-bindings`, `documentation`, etc.
## GPT4All 2024 Roadmap
To contribute to the development of any of the below roadmap items, make or find the corresponding issue and cross-reference the [in-progress task](https://github.com/orgs/nomic-ai/projects/2/views/1).
Each item should have an issue link below.
- Chat UI Language Localization (localize the UI into users' native languages)
  - [ ] Chinese
  - [ ] German
  - [ ] French
  - [ ] Portuguese
  - [ ] Your native language here.
- UI Redesign: an internal effort at Nomic to improve the UI/UX of GPT4All for all users.
  - [ ] Design a new user interface and gather community feedback
  - [ ] Implement the new user interface and experience.
- Installer and Update Improvements
  - [ ] Seamless native installation and update process on macOS
  - [ ] Seamless native installation and update process on Windows
  - [ ] Seamless native installation and update process on Linux
- Model discoverability improvements:
  - [x] Support Hugging Face model discoverability
  - [ ] Support Nomic-hosted model discoverability
- LocalDocs (towards a local Perplexity)
  - Multilingual LocalDocs Support
    - [ ] Create a multilingual experience
    - [ ] Incorporate a multilingual embedding model
    - [ ] Specify a preferred multilingual LLM for LocalDocs
  - Improved RAG techniques
    - [ ] Query augmentation and rewriting
    - [ ] Improved chunking and text extraction from arbitrary modalities
    - [ ] Custom PDF extractor beyond the Qt default (charts, tables, text)
    - [ ] Faster indexing and local exact search with v1.5 Hamming embeddings and reranking (skip ANN index construction!)
  - Support queries like 'summarize X document'
  - Multimodal LocalDocs support with Nomic Embed
  - Nomic Dataset Integration with real-time LocalDocs
    - [ ] Include an option to export private LocalDocs collections to Nomic Atlas for debugging data/chat quality
    - [ ] Allow optional sharing of LocalDocs collections between users.
    - [ ] Allow importing a LocalDocs collection from an Atlas Dataset
  - Chat with a live version of Wikipedia, chat with PubMed, chat with the latest snapshot of world news.
- First-class Multilingual LLM Support
  - [ ] Recommend and set a default LLM for German
  - [ ] Recommend and set a default LLM for English
  - [ ] Recommend and set a default LLM for Chinese
  - [ ] Recommend and set a default LLM for Spanish
- Server Mode improvements
  - Improved UI and new requested features:
    - [ ] Fix outstanding bugs and feature requests around networking configurations.
    - [ ] Support Nomic Embed inferencing
    - [ ] First-class documentation
    - [ ] Improve developer usability and quality of server mode (e.g. support larger batches)
## Technical Reports
<p align="center">
<a href="https://gpt4all.io/reports/GPT4All_Technical_Report_3.pdf">:green_book: Technical Report 3: GPT4All Snoozy and Groovy </a>
</p>
<p align="center">
<a href="https://static.nomic.ai/gpt4all/2023_GPT4All-J_Technical_Report_2.pdf">:green_book: Technical Report 2: GPT4All-J </a>
</p>
<p align="center">
<a href="https://s3.amazonaws.com/static.nomic.ai/gpt4all/2023_GPT4All_Technical_Report.pdf">:green_book: Technical Report 1: GPT4All</a>
</p>
## Citation
If you utilize this repository, models, or data in a downstream project, please consider citing it with:
```
@misc{gpt4all,
  author = {Yuvanesh Anand and Zach Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar},
  title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/nomic-ai/gpt4all}},
}
```