# Python GPT4All

This package contains a set of Python bindings around the `llmodel` C-API.

Package on PyPI: https://pypi.org/project/gpt4all/

## Documentation

https://docs.gpt4all.io/gpt4all_python.html

## Installation

```
pip install gpt4all
```

## Local Build Instructions

**NOTE**: If you are doing this on a Windows machine, you must build the GPT4All backend using the [MinGW64](https://www.mingw-w64.org/) compiler.

1. Setup `llmodel`

```
git clone --recurse-submodules git@github.com:nomic-ai/gpt4all.git
cd gpt4all/gpt4all-backend/
mkdir build
cd build
cmake ..
cmake --build . --parallel # optionally append: --config Release
```

Confirm that `libllmodel.*` exists in `gpt4all-backend/build`.

2. Setup Python package

```
cd ../../gpt4all-bindings/python
pip3 install -e .
```
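Once both steps finish, you can sanity-check the build from Python. A minimal sketch, assuming the repository was cloned into the current directory (adjust `build_dir` to your checkout):

```python
from pathlib import Path

# Assumed path to the backend build directory -- adjust to your checkout.
build_dir = Path("gpt4all/gpt4all-backend/build")

# The bindings load a shared library named libllmodel with a
# platform-specific extension (.so on Linux, .dylib on macOS, .dll on Windows).
matches = sorted(build_dir.glob("libllmodel.*"))
if matches:
    print("Found backend library:", matches[0].name)
else:
    print("No libllmodel.* found in", build_dir)
```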

## Usage

Test it out! In a Python script or console:

```python
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin")
output = model.generate("The capital of France is ", max_tokens=3)
print(output)
```

GPU Usage

```python
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin", device='gpu') # device='amd', device='intel'
output = model.generate("The capital of France is ", max_tokens=3)
print(output)
```

## Troubleshooting a Local Build

- If you're on Windows and have compiled with a MinGW toolchain, you might run into an error like:

```
FileNotFoundError: Could not find module '<...>\gpt4all-bindings\python\gpt4all\llmodel_DO_NOT_MODIFY\build\libllmodel.dll'
(or one of its dependencies). Try using the full path with constructor syntax.
```

The key phrase in this case is _"or one of its dependencies"_. The Python interpreter you're using probably doesn't see the MinGW runtime dependencies. At the moment, the following three are required: `libgcc_s_seh-1.dll`, `libstdc++-6.dll`, and `libwinpthread-1.dll`. You should copy them from MinGW into a folder where Python will see them, preferably next to `libllmodel.dll`.
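As a rough sketch of that copy step (both paths are assumptions; substitute your own MinGW installation and checkout locations):

```python
import shutil
from pathlib import Path

# Assumed locations -- adjust to your MinGW installation and checkout.
mingw_bin = Path("C:/mingw64/bin")
target_dir = Path("gpt4all/gpt4all-bindings/python/gpt4all/llmodel_DO_NOT_MODIFY/build")

# The three MinGW runtime DLLs the bindings currently need at load time.
runtime_dlls = ["libgcc_s_seh-1.dll", "libstdc++-6.dll", "libwinpthread-1.dll"]

for name in runtime_dlls:
    src = mingw_bin / name
    if src.exists():
        shutil.copy2(src, target_dir / name)  # place next to libllmodel.dll
    else:
        print(f"Could not find {name} in {mingw_bin}")
```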

- Note regarding the Microsoft toolchain: Compiling with MSVC is possible, but not the official way to go about it at the moment. MSVC doesn't produce DLLs with a `lib` prefix, which the bindings expect. You'd have to amend that yourself.
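One way to amend it is to copy each DLL to a `lib`-prefixed name after the build. A hypothetical helper (the build output path varies with your generator and configuration):

```python
import shutil
from pathlib import Path

def add_lib_prefix(build_dir: Path) -> None:
    """Copy each DLL in build_dir to a lib-prefixed name the bindings expect."""
    for dll in build_dir.glob("*.dll"):
        if not dll.name.startswith("lib"):
            shutil.copy2(dll, dll.with_name("lib" + dll.name))

# Example (hypothetical path for an MSVC Release build):
# add_lib_prefix(Path("gpt4all/gpt4all-backend/build/Release"))
```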