Commit Graph

11 Commits

Author SHA1 Message Date
cebtenzzre
37b007603a
bindings: replace references to GGMLv3 models with GGUF (#1547) 2023-10-22 11:58:28 -04:00
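The commit above switches the bindings' references from GGMLv3 model files to GGUF. A minimal sketch of loading a GGUF model with the Python bindings after this change (the filename is a hypothetical placeholder, not a model shipped by the project):

    from gpt4all import GPT4All

    # "example-model.Q4_0.gguf" is a placeholder name for any GGUF model file.
    model = GPT4All("example-model.Q4_0.gguf")
    print(model.generate("Why is the sky blue?", max_tokens=64))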
Adam Treat
ea66669cef Switch to new models2.json for new gguf release and bump our version to 2.5.0. 2023-10-05 18:16:19 -04:00
Cosmic Snow
55f96aacc6 Move FAQ entries to general FAQ and adjust, plus minor improvements 2023-07-31 01:34:06 +02:00
Cosmic Snow
19d6460282 Extend & Update Python documentation
- Expand Quickstart
  - Add Examples & Explanations:
    - Info on generation parameters
    - Model folder examples
    - Templates
    - Introspection with logging
    - Notes on allow_download=False
    - Interrupting generation (response callback)
    - FAQ
2023-07-31 01:34:06 +02:00
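Two of the topics listed in the documentation commit above lend themselves to a short sketch: using a local model folder with allow_download=False, and introspection with logging. This assumes the bindings emit their assembled prompts through Python's standard logging module, as the docs entry describes; the model filename and folder are placeholders:

    import logging

    from gpt4all import GPT4All

    # Debug logging exposes what the bindings send to the model.
    logging.basicConfig(level=logging.DEBUG)

    # Use an already-downloaded model from a local folder; allow_download=False
    # prevents any network access, so the file must already exist there.
    model = GPT4All(
        model_name="ggml-example-model.bin",  # placeholder filename
        model_path="/path/to/models",         # placeholder folder
        allow_download=False,
    )
    print(model.generate("Name three colors."))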
385olt
b4dbbd1485
Python bindings: Custom callbacks, chat session improvement, refactoring (#1145)
* Added the following features:
  1) prompt_model now uses the positional callback argument to return the response tokens.
  2) Because of prompt_model's callback argument, prompt_model_streaming now only manages the queue and threading, which reduces code duplication.
  3) Added an optional verbose argument to prompt_model that prints the prompt passed to the model.
  4) Chat sessions can now have a header, i.e. an instruction placed before the transcript of the conversation. The header is set when the chat session context is created.
  5) The generate function now accepts an optional callback.
  6) When streaming within a chat session, the user no longer needs to save the assistant's messages manually; this is done automatically.

* added _empty_response_callback so I don't have to check if callback is None

* added docs

* now if the callback stops generation, the last token is ignored

* fixed type hints, reimplemented chat session header as a system prompt, minor refactoring, docs: removed section about manual update of chat session for streaming

* forgot to add some type hints!

* keep the model config, taken from models.json when downloads are allowed, in the GPT4All class

* During chat sessions, the model-specific systemPrompt and promptTemplate are applied.

* implemented the changes

* Fixed typing. Now the user can set a prompt template that will be applied even outside of a chat session. The template can also have multiple placeholders that can be filled by passing a dictionary to the generate function

* reversed some changes concerning the prompt templates and their functionality

* fixed some type hints, changed list[float] to List[Float]

* fixed type hints, changed List[Float] to List[float]

* fix typo in the comment: Pepare => Prepare

---------

Signed-off-by: 385olt <385olt@gmail.com>
2023-07-19 18:36:49 -04:00
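The PR above (#1145) adds response callbacks and reworks the chat-session header into a system prompt. A minimal sketch of both, assuming the (token_id, token_string) -> bool callback signature described in this PR and a placeholder model filename:

    from gpt4all import GPT4All

    model = GPT4All("ggml-example-model.bin")  # placeholder filename

    # A response callback receives (token_id, token_string) and returns a bool;
    # returning False stops generation (per this PR, the final token is then ignored).
    def stop_on_newline(token_id: int, token_string: str) -> bool:
        return "\n" not in token_string

    # The chat-session "header" from this PR is expressed as a system prompt.
    with model.chat_session(system_prompt="You are a terse assistant."):
        reply = model.generate("Say hello.", callback=stop_on_newline)
        print(reply)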
Adam Treat
f543affa9a Add better docs and threading support to bert. 2023-07-14 14:14:22 -04:00
Adam Treat
0c0a4f2c22 Add the docs. 2023-07-14 10:48:18 -04:00
Andriy Mulyar
01bd3d6802
Python chat streaming (#1127)
* Support streaming in chat session

* Uncommented tests
2023-07-03 12:59:39 -04:00
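A minimal sketch of the streaming support added in #1127, assuming generate() returns an iterator of tokens when streaming=True (the model filename is a placeholder):

    from gpt4all import GPT4All

    model = GPT4All("ggml-example-model.bin")  # placeholder filename

    # With streaming=True, generate() yields tokens as they are produced
    # instead of returning the full response at once.
    with model.chat_session():
        for token in model.generate("Tell me a short joke.", streaming=True):
            print(token, end="", flush=True)
        print()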
Andriy Mulyar
46a0762bd5
Python Bindings: Improved unit tests, documentation and unification of API (#1090)
* Makefiles, black, isort

* Black and isort

* unit tests and generation method

* chat context provider

* context does not reset

* Current state

* Fixup

* Python bindings with unit tests

* GPT4All Python Bindings: chat contexts, tests

* New python bindings and backend fixes

* Black and Isort

* Documentation error

* preserved n_predict for backwards compat with langchain

---------

Co-authored-by: Adam Treat <treat.adam@gmail.com>
2023-06-30 16:02:02 -04:00
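The API unification in #1090 kept n_predict as an alias for LangChain compatibility. A sketch of the two spellings, assuming max_tokens is the newer name for the token limit and n_predict the preserved alias (model filename is a placeholder):

    from gpt4all import GPT4All

    model = GPT4All("ggml-example-model.bin")  # placeholder filename

    # Newer name for the token limit.
    print(model.generate("Hello!", max_tokens=32))
    # Older name, kept for backwards compatibility (e.g. existing LangChain code).
    print(model.generate("Hello!", n_predict=32))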
Richard Guo
213e033540
GPT4All Updated Docs and FAQ (#632)
* working on docs

* more doc organization

* faq

* some reformatting
2023-05-18 16:07:57 -04:00
Andriy Mulyar
17de7f0529
Chat Client Documentation (#596)
* GPT4All Chat Client Documentation

* Updated documentation wording
2023-05-16 12:46:31 -04:00