mirror of
https://github.com/nomic-ai/gpt4all.git
GPT4All Chat Client
The GPT4All Chat Client lets you easily interact with any local large language model. It is optimized to run 7-13B parameter LLMs on the CPUs of computers running macOS, Windows, or Linux.
GPT4All Chat Server Mode
GPT4All Chat comes with a built-in server mode that lets you programmatically interact with any supported local LLM through a familiar, OpenAI-compatible HTTP API. You can find the API documentation here.
Enabling server mode in the chat client spins up an HTTP server on localhost port 4891 (the reverse of 1984). You can enable the web server via GPT4All Chat > Settings > Enable web server.
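Because this is a plain HTTP endpoint, you can also reach it without the openai library. A minimal sketch using only the Python standard library, assuming the server is enabled on port 4891 and the model name matches one loaded in your chat client:

```python
import json
import urllib.request
import urllib.error

# The chat client's server exposes an OpenAI-style completions endpoint.
url = "http://localhost:4891/v1/completions"
payload = {
    "model": "gpt4all-j-v1.3-groovy",  # assumed model name; use whichever model your client has loaded
    "prompt": "Who is Michael Jordan?",
    "max_tokens": 50,
    "temperature": 0.28,
}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(json.load(resp))
except urllib.error.URLError as err:
    # Reached when the chat client's web server is not running.
    print(f"Could not reach the server: {err}")
```

The request body mirrors the parameters passed to `openai.Completion.create` in the example below; the openai library simply wraps this same POST for you.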
Begin using local LLMs in your AI-powered apps by changing a single line of code: the base path for requests.
```python
import openai

openai.api_base = "http://localhost:4891/v1"
# openai.api_base = "https://api.openai.com/v1"

openai.api_key = "not needed for a local LLM"

# Set up the prompt and other parameters for the API request
prompt = "Who is Michael Jordan?"

# model = "gpt-3.5-turbo"
# model = "mpt-7b-chat"
model = "gpt4all-j-v1.3-groovy"

# Make the API request
response = openai.Completion.create(
    model=model,
    prompt=prompt,
    max_tokens=50,
    temperature=0.28,
    top_p=0.95,
    n=1,
    echo=True,
    stream=False
)

# Print the generated completion
print(response)
```
which gives the following response:
```json
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "text": "Who is Michael Jordan?\nMichael Jordan is a former professional basketball player who played for the Chicago Bulls in the NBA. He was born on December 30, 1963, and retired from playing basketball in 1998."
    }
  ],
  "created": 1684260896,
  "id": "foobarbaz",
  "model": "gpt4all-j-v1.3-groovy",
  "object": "text_completion",
  "usage": {
    "completion_tokens": 35,
    "prompt_tokens": 39,
    "total_tokens": 74
  }
}
```
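In practice you usually want just the generated text and the token counts rather than the whole object. A short sketch that parses the sample response above and pulls out those fields:

```python
import json

# The sample response from the server, as shown above (raw string so the
# JSON "\n" escape survives intact).
raw = r'''
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "text": "Who is Michael Jordan?\nMichael Jordan is a former professional basketball player who played for the Chicago Bulls in the NBA. He was born on December 30, 1963, and retired from playing basketball in 1998."
    }
  ],
  "created": 1684260896,
  "id": "foobarbaz",
  "model": "gpt4all-j-v1.3-groovy",
  "object": "text_completion",
  "usage": {
    "completion_tokens": 35,
    "prompt_tokens": 39,
    "total_tokens": 74
  }
}
'''
response = json.loads(raw)

# The first (and, with n=1, only) choice holds the generated text.
completion_text = response["choices"][0]["text"]
total_tokens = response["usage"]["total_tokens"]

print(completion_text)
print(f"tokens used: {total_tokens}")
```

Note that because the request set `echo=True`, the returned text includes the original prompt followed by the model's continuation.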