# Frequently Asked Questions

## Models

### Which language models are supported?
Our backend supports models with a llama.cpp implementation that have been uploaded to HuggingFace.
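For example, you can browse the curated model list from the Python SDK. This is a minimal sketch; the exact metadata keys returned by `list_models()` are an assumption and may differ by version:

```python
from gpt4all import GPT4All

# Print the models curated in GPT4All's official model list.
# NOTE: the "filename" metadata key is an assumption; inspect the
# returned dicts to see the fields your installed version provides.
for model in GPT4All.list_models():
    print(model["filename"])
```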
### Which embedding models are supported?
The following embedding models can be used within the application and with the `Embed4All` class from the `gpt4all` Python library. The default context length of the GGUF files is 2048, but it can be extended.
| Name | Initializing with `Embed4All` | Context Length | Embedding Length | File Size |
|---|---|---|---|---|
| SBert | `emb = Embed4All("all-MiniLM-L6-v2.gguf2.f16.gguf")` | 512 | 384 | 44 MiB |
| Nomic Embed v1 | `emb = Embed4All("nomic-embed-text-v1.f16.gguf")` | 2048 | 768 | 262 MiB |
| Nomic Embed v1.5 | `emb = Embed4All("nomic-embed-text-v1.5.f16.gguf")` | 2048 | 64-768 | 262 MiB |
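For example, a minimal sketch of embedding text with `Embed4All`, using the SBert model from the table above (the model file is downloaded on first use if it is not already present):

```python
from gpt4all import Embed4All

# Load the SBert embedding model and embed a sentence.
embedder = Embed4All("all-MiniLM-L6-v2.gguf2.f16.gguf")
embedding = embedder.embed("The quick brown fox jumps over the lazy dog")
print(len(embedding))  # 384, the embedding length listed for SBert
```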
## Software

### What software do I need?
All you need is to install GPT4All on your Windows, Mac, or Linux computer.
### Which SDK languages are supported?
Our SDK is in Python for usability, but it consists of light bindings around llama.cpp implementations that we contribute to for efficiency and accessibility on everyday computers.
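For example, a minimal sketch of generating text with the Python SDK (the model filename below is illustrative; any supported GGUF model works):

```python
from gpt4all import GPT4All

# Load a model (downloaded on first use) and generate a completion.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")
with model.chat_session():
    print(model.generate("Why is the sky blue?", max_tokens=256))
```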
### Is there an API?
Yes, you can run your model in server mode with our OpenAI-compatible API, which you can configure in Settings.
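For example, a sketch of calling the local server with Python's `requests` library. The port (4891) and model name below are assumptions; use whatever you have configured in Settings:

```python
import requests

# Call the local OpenAI-compatible endpoint exposed by server mode.
# ASSUMPTION: the server listens on port 4891; adjust to your Settings.
response = requests.post(
    "http://localhost:4891/v1/chat/completions",
    json={
        "model": "Llama 3 8B Instruct",  # illustrative model name
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 50,
    },
)
print(response.json()["choices"][0]["message"]["content"])
```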
### Can I monitor a GPT4All deployment?
Yes, GPT4All integrates with OpenLIT so you can deploy LLMs with user interactions and hardware usage automatically monitored for full observability.
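For example, a minimal sketch of instrumenting the Python SDK with OpenLIT (the OTLP endpoint below is an assumption; point it at your own collector):

```python
import openlit
from gpt4all import GPT4All

# Initialize OpenLIT before making model calls so they are traced.
# ASSUMPTION: an OTLP collector is listening at this endpoint.
openlit.init(otlp_endpoint="http://127.0.0.1:4318")

model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")
model.generate("Hello")  # this interaction is now monitored
```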
### Is there a command line interface (CLI)?
Yes, we have a lightweight CLI built on the Python client. We welcome further contributions!
## Hardware

### What hardware do I need?
GPT4All can run on CPU, Metal (Apple Silicon M1+), and GPU.
### What are the system requirements?
Your CPU must support AVX or AVX2 instructions, and you need enough RAM to load a model into memory. If the