
# GPT4All REST API

This directory contains the source code to build and run Docker images that serve a FastAPI app for inference from GPT4All models. The API matches the OpenAI API spec.

## Tutorial

### Starting the app

First, build the FastAPI Docker image. You only need to do this on the initial build or when you add new dependencies to the `requirements.txt` file:

```shell
DOCKER_BUILDKIT=1 docker build -t gpt4all_api --progress plain -f gpt4all_api/Dockerfile.buildkit .
```

Then, start the backend with:

```shell
docker compose up --build
```

### Spinning up your app

Run `docker compose up` to spin up the backend. Monitor the logs for errors in case you forgot to set a required environment variable.
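With the backend up, you can exercise the completions endpoint directly. The sketch below builds an OpenAI-style request against `http://localhost:80/v1/completions`; the `/v1` path prefix and the model name `ggml-gpt4all-j` are assumptions here, so substitute whatever your deployment actually serves:

```python
import json
from urllib.request import Request, urlopen

BASE_URL = "http://localhost:80/v1"  # assumed OpenAI-style /v1 prefix
MODEL = "ggml-gpt4all-j"             # hypothetical model name; use your loaded model

def completion_request(prompt: str, max_tokens: int = 64) -> Request:
    """Build a POST request for the completions endpoint, OpenAI-style."""
    payload = {"model": MODEL, "prompt": prompt, "max_tokens": max_tokens}
    return Request(
        f"{BASE_URL}/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# With the backend running, send it and read the reply:
# resp = json.load(urlopen(completion_request("Hello")))
# print(resp["choices"][0]["text"])
```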

## Development

Run

```shell
docker compose up --build
```

and edit files in the `api` directory. The API will hot-reload on changes.

You can run the unit tests with:

```shell
make test
```

## Viewing API documentation

Once the FastAPI app is started, you can access its interactive documentation and test the endpoints by going to:

```
localhost:80/docs
```

This documentation should match the OpenAI OpenAPI spec located at https://github.com/openai/openai-openapi/blob/master/openapi.yaml.
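Because responses follow the same spec, client code written against OpenAI's response shape should work unchanged. A minimal sketch of pulling the assistant reply out of a chat completion response; the sample response below is hand-written to match the spec's shape, not captured from a live server:

```python
# A chat completion response shaped per the OpenAI OpenAPI spec
# (hand-written sample, not output from a running gpt4all_api server).
sample_response = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello there!"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 9, "completion_tokens": 3, "total_tokens": 12},
}

def first_reply(response: dict) -> str:
    """Extract the assistant message from a chat.completion response."""
    return response["choices"][0]["message"]["content"]

print(first_reply(sample_response))  # -> Hello there!
```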