Mirror of https://github.com/nomic-ai/gpt4all.git (synced 2024-10-01 01:06:10 -04:00)
Chat Client Documentation (#596)
* GPT4All Chat Client Documentation
* Updated documentation wording
This commit is contained in:
parent 3cb6dd7a66
commit 17de7f0529
54 gpt4all-bindings/python/docs/gpt4all_chat.md (new file)
@@ -0,0 +1,54 @@
# GPT4All Chat Client
The [GPT4All Chat Client](https://gpt4all.io) lets you easily interact with any local large language model.
It is optimized to run 7-13B parameter LLMs on the CPUs of any computer running OSX/Windows/Linux.
## GPT4All Chat Server Mode
GPT4All Chat comes with a built-in server mode allowing you to programmatically interact
with any supported local LLM through a *very familiar* HTTP API. You can find the API documentation [here](https://platform.openai.com/docs/api-reference/completions).
Enabling server mode in the chat client will spin up an HTTP server running on `localhost` port
`4891` (the reverse of 1984).
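
Because server mode exposes a plain HTTP endpoint, you can also call it directly, without the `openai` client library. Below is a minimal sketch using the `requests` package; the model name, prompt, and sampling parameters are illustrative assumptions, and the chat client must have the named model loaded:

```python
import requests

# A minimal sketch of a raw HTTP request against the local server.
# Assumes server mode is enabled and the named model is loaded in the
# chat client; the endpoint mirrors the OpenAI completions API.
resp = requests.post(
    "http://localhost:4891/v1/completions",
    json={
        "model": "gpt4all-j-v1.3-groovy",  # whichever model the client has loaded
        "prompt": "Who is Michael Jordan?",
        "max_tokens": 50,
        "temperature": 0.28,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json())
```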
You can begin using local LLMs in your AI-powered apps by changing a single line of code: the base path for requests.
```python
import os
import openai

openai.api_base = "http://localhost:4891/v1"
# openai.api_base = "https://api.openai.com/v1"

# Read the OpenAI API key from an environment variable
openai_api_key = os.environ.get("OPENAI_API_KEY")
if not openai_api_key:
    raise ValueError("Please set the 'OPENAI_API_KEY' environment variable.")
openai.api_key = openai_api_key

# Set up the prompt and other parameters for the API request
prompt = "Who is Michael Jordan?"

# model = "gpt-3.5-turbo"
# model = "mpt-7b-chat"
model = "gpt4all-j-v1.3-groovy"

# Make the API request
response = openai.Completion.create(
    model=model,
    prompt=prompt,
    max_tokens=50,
    temperature=0.28,
    top_p=0.95,
    n=1,
    echo=True,
    stream=False,
)

# Print the generated completion
print(response)
```
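
Since the response mirrors the OpenAI completion object, the generated text can be pulled out of the `choices` list. A minimal sketch, continuing from the request above:

```python
# The completion object mirrors the OpenAI response format: the generated
# text lives in the first entry of the `choices` list. With echo=True it
# includes the prompt followed by the model's completion.
print(response.choices[0].text)
```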
@@ -31,4 +31,4 @@ def main():
    model = GPT4All()
    for i in range(10):
        model.generate.call()
```
@@ -9,9 +9,11 @@ use_directory_urls: false
 
 nav:
 - 'index.md'
-- 'gpt4all_modal.md'
+- 'gpt4all_chat.md'
+- 'Tutorials':
+  - 'gpt4all_modal.md'
 - 'API Reference':
   - 'gpt4all_api.md'
   - 'gpt4all_python.md'
 
 theme:
   name: material