Commit Graph

1487 Commits

Author SHA1 Message Date
Andriy Mulyar
36f7fb5848
Update README.md with download statistics
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-08-17 16:56:28 -04:00
Jacob Nguyen
b43eec0e2c
fix ts tests on windows (#1342)
* fix ts tests on windows

* fix cleanup

* fix tests

* hold on c sharp workflows

* fix: downloadModel does not mkdirp
2023-08-17 10:32:08 -04:00
Adam Treat
a63093554f Remove older models that rely upon soon to be no longer supported quantization formats. 2023-08-15 13:19:41 -04:00
Andriy Mulyar
a9668eb2e4 Added optional top_p and top_k 2023-08-15 12:06:49 -04:00
Adam Treat
2c0ee50dce Add starcoder 7b. 2023-08-15 09:27:55 -04:00
Jacob Nguyen
4e55940edf
feat(typescript)/dynamic template (#1287) (#1326)
* feat(typescript)/dynamic template (#1287)

* remove packaged yarn

* prompt templates update wip

* prompt template update

* system prompt template, update types, remove embed promises, cleanup

* support both snakecased and camelcased prompt context

* fix #1277 libbert, libfalcon and libreplit libs not being moved into the right folder after build

* added support for modelConfigFile param, allowing the user to specify a local file instead of downloading the remote models.json. added a warning message if code fails to load a model config. included prompt context docs by amogus.

* snakecase warning, put logic for loading local models.json into listModels, added constant for the default remote model list url, test improvements, simpler hasOwnProperty call

* add DEFAULT_PROMPT_CONTEXT, export new constants

* add md5sum testcase and fix constants export

* update types

* throw if attempting to list models without a source

* rebuild docs

* fix download logging undefined url, toFixed typo, pass config filesize in for future progress report

* added overload with union types

* bump to 2.2.0, remove alpha

* code speling

---------

Co-authored-by: Andreas Obersteiner <8959303+iimez@users.noreply.github.com>
2023-08-14 12:45:45 -04:00
Elin Angelov
4d855afe97
Update README.md (#1260)
* Update README.md

Signed-off-by: Elin Angelov <me@zetxx.eu>

* Update README.md

Signed-off-by: Elin Angelov <me@zetxx.eu>

* Update README.md

Signed-off-by: Elin Angelov <me@zetxx.eu>

* Changed wording a tiny bit again

Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>

* Added missing space

Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>

---------

Signed-off-by: Elin Angelov <me@zetxx.eu>
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
2023-08-11 14:14:53 -04:00
cosmic-snow
af6fe5fbb5 Update gpt4all_faq.md
- minor oversight: there are now six supported architectures
- LLAMA -> LLaMA (for v1)
- note about Llama 2 and link to license
- limit some of the paragraphs to 150 chars


Signed-off-by: cosmic-snow <134004613+cosmic-snow@users.noreply.github.com>
2023-08-10 23:56:54 +02:00
Victor Tsaran
ca8baa294b
Updated README.md with a wishlist idea (#1315)
Signed-off-by: Victor Tsaran <vtsaran@yahoo.com>
2023-08-10 11:27:09 -04:00
David Okpare
889c8d1758
Add embeddings endpoint for gpt4all-api (#1314)
* Add embeddings endpoint

* Add test for embedding endpoint
2023-08-10 10:43:07 -04:00
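
A rough sketch of calling the new embeddings endpoint from a client. The route, port, payload fields, and response shape below are assumptions based on the OpenAI-style API that gpt4all-api mirrors, not confirmed by this entry; check the server code for the authoritative contract.

```python
import requests

# Hypothetical request against the gpt4all-api embeddings endpoint.
# URL, model name, and field names are assumptions for illustration only.
response = requests.post(
    "http://localhost:4891/v1/embeddings",
    json={
        "model": "ggml-all-MiniLM-L6-v2-f16",          # assumed embedding model id
        "input": "The quick brown fox jumps over the lazy dog",
    },
    timeout=30,
)
response.raise_for_status()
# Assumed OpenAI-style response: {"data": [{"embedding": [...]}], ...}
embedding = response.json()["data"][0]["embedding"]
print(len(embedding))
```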
Cosmic Snow
108d950874 Fix Windows unable to load models on older Windows builds
- Replace high-level IsProcessorFeaturePresent
- Reintroduce low-level compiler intrinsics implementation
2023-08-09 09:27:43 +02:00
Lakshay Kansal
0f2bb506a8
font size changer and updates (#1322) 2023-08-07 13:54:13 -04:00
Akarshan Biswas
c449b71b56
Add LLaMA2 7B model to model.json. (#1296)
* Add LLaMA2 7B model to model.json.

---------

Signed-off-by: Akarshan Biswas <akarshan.biswas@gmail.com>
2023-08-02 16:58:14 +02:00
Lakshay Kansal
cbdcde8b75
scrollbar fixed for main chat and chat drawer (#1301) 2023-07-31 12:18:38 -04:00
Lakshay Kansal
3d2db76070
fixed issue of text color changing for code blocks in light mode (#1299) 2023-07-31 12:18:19 -04:00
Cosmic Snow
55f96aacc6 Move FAQ entries to general FAQ and adjust, plus minor improvements 2023-07-31 01:34:06 +02:00
Cosmic Snow
e56f977b67 Move Chat GUI out of the Bindings group in the docs navigation. 2023-07-31 01:34:06 +02:00
Cosmic Snow
e285ce91da black & isort
2023-07-31 01:34:06 +02:00
Cosmic Snow
19d6460282 Extend & Update Python documentation
- Expand Quickstart
  - Add Examples & Explanations:
    - Info on generation parameters
    - Model folder examples
    - Templates
    - Introspection with logging
    - Notes on allow_download=False
    - Interrupting generation (response callback)
    - FAQ
2023-07-31 01:34:06 +02:00
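
A minimal sketch of two of the documented topics: pointing the bindings at a local model folder and setting allow_download=False. The model filename and folder below are hypothetical; the constructor keywords are assumed to match the Python bindings of this era.

```python
from gpt4all import GPT4All

# Assumed: the model file already exists locally. With allow_download=False the
# bindings should not fetch models.json or the model; loading fails if it's missing.
model = GPT4All(
    model_name="ggml-model-gpt4all-falcon-q4_0.bin",  # hypothetical local file
    model_path="/path/to/models",                      # folder to search instead of the default
    allow_download=False,
)

print(model.generate("Name three uses of a paperclip.", max_tokens=64))
```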
Cosmic Snow
83ad6b42c4 Add build hint to Python Readme
- The CMake build can be told to run in Release mode
2023-07-31 01:34:06 +02:00
385olt
3ed6d176a5
Python bindings: unicode decoding (#1281)
* rewrote the unicode decoding using the structure of multi-byte unicode symbols.
2023-07-30 11:29:51 -07:00
Zach Nussbaum
91a32c0e84
ci: pin (#1292) 2023-07-28 17:00:56 -04:00
Andriy Mulyar
39acbc8378
Python version bump
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-07-27 12:19:23 -04:00
Aaron Miller
b9e2553995
remove trailing comma from models json (#1284) 2023-07-27 09:14:33 -07:00
Adam Treat
09a143228c New release notes and bump version. 2023-07-27 11:48:16 -04:00
Lakshay Kansal
fc1af4a234 light mode vs dark mode 2023-07-27 09:31:55 -04:00
Adam Treat
6d03b3e500 Add starcoder support. 2023-07-27 09:15:16 -04:00
Adam Treat
397f3ba2d7 Add a little size to the monospace font. 2023-07-27 09:15:16 -04:00
Jacob Nguyen
0e866a0e8f
Refactor(typescript)/error handling (#1283)
* actually display error if it occurs while instantiating

* bump version
2023-07-26 20:06:16 -07:00
Jacob Nguyen
9100b2ef6f
fix continue_config.yml (#1270)
* fix continue_config.yml

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* fix continue_config.yml

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

---------

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
2023-07-25 17:24:19 -04:00
Andriy Mulyar
14f4b522d5
Allow you to monitor GPT4All-API with Sentry (#1271) 2023-07-25 12:47:41 -04:00
Jacob Nguyen
545c23b4bd
typescript: fix final bugs and polishing, circle ci documentation (#960)
* fix: esm and cjs compatibility

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* Update prebuild.js

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* fix gpt4all.js

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* Fix compile for windows and linux again. PLEASE DON'T REVERT THIS

* version bump

* polish up spec and build scripts

* lock file refresh

* fix: proper resource closing and error handling

* check to make sure libPath is not null

* add msvc build script and update readme requirements

* python workflows in circleci

* dummy python change

* no need for main

* second hold for pypi deploy

* let me deploy pls

* bring back when condition

* Typo, ignore list  (#967)

Fix typo in javadoc,
Add word to ignore list for codespellrc

---------

Co-authored-by: felix <felix@zaslavskiy.net>

* llmodel: change tokenToString to not use string_view (#968)

fixes a definite use-after-free and likely avoids some other
potential ones. A std::string converts to a std::string_view
automatically, but as soon as the std::string in question goes out of
scope it is freed and the string_view points at freed memory. This is
*mostly* fine when returning a reference to the tokenizer's internal
vocab table, but it is, imo, too easy to return a reference to a
dynamically constructed string this way, as replit is doing (and
unfortunately needs to do to convert the internal whitespace
replacement symbol back to a space)

* Initial Library Loader for .NET Bindings / Update bindings to support newest changes (#763)

* Initial Library Loader

* Load library as part of Model factory

* Dynamically search and find the dlls

* Update tests to use locally built runtimes

* Fix dylib loading, add macos runtime support for sample/tests

* Bypass automatic loading by default.

* Only set CMAKE_OSX_ARCHITECTURES if not already set, allow cross-compile

* Switch Loading again

* Update build scripts for mac/linux

* Update bindings to support newest breaking changes

* Fix build

* Use llmodel for Windows

* Actually, it does need to be libllmodel

* Name

* Remove TFMs, bypass loading by default

* Fix script

* Delete mac script

---------

Co-authored-by: Tim Miller <innerlogic4321@ghmail.com>

* bump llama.cpp mainline to latest (#964)

* fix prompt context so it's preserved in class

* update setup.py

* metal replit (#931)

metal+replit

makes replit work with Metal and removes its use of `mem_per_token`
in favor of fixed size scratch buffers (closer to llama.cpp)

* update documentation scripts and generation to include readme.md

* update readme and documentation for source

* begin tests, import jest, fix listModels export

* fix typo

* chore: update spec

* fix: finally, reduced potential of empty string

* chore: add stub for createTokenSream

* refactor: protecting resources properly

* add basic jest tests

* update

* update readme

* refactor: namespace the res variable

* circleci integration to automatically build docs

* add starter docs

* typo

* more circle ci typo

* forgot to add nodejs circle ci orb

* fix circle ci

* feat: @iimez verify download and fix prebuild script

* fix: oops, option name wrong

* fix: gpt4all utils not emitting docs

* chore: fix up scripts

* fix: update docs and typings for md5 sum

* fix: macos compilation

* some refactoring

* Update index.cc

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* update readme and enable exceptions on mac

* circle ci progress

* basic embedding with sbert (not tested & cpp side only)

* fix circle ci

* fix circle ci

* update circle ci script

* bruh

* fix again

* fix

* fixed required workflows

* fix ci

* fix pwd

* fix pwd

* update ci

* revert

* fix

* prevent rebuild

* remove noop

* Update continue_config.yml

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* Update binding.gyp

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* fix fs not found

* remove cpp 20 standard

* fix warnings, safer way to calculate arrsize

* readd build backend

* basic embeddings and yarn test

* fix circle ci

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

Update continue_config.yml

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

fix macos paths

update readme and roadmap

split up spec

update readme

check for url in modelsjson

update docs and inline stuff

update yarn configuration and readme

update readme

readd npm publish script

add exceptions

bruh one space broke the yaml

codespell

oops forgot to add runtimes folder

bump version

try code snippet https://support.circleci.com/hc/en-us/articles/8325075309339-How-to-install-NPM-on-Windows-images

add fallback for unknown architectures

attached to wrong workspace

hopefully fix

moving everything under backend to persist

should work now

* Update README.md

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

---------

Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: Richard Guo <richardg7890@gmail.com>
Co-authored-by: Felix Zaslavskiy <felix.zaslavskiy@gmail.com>
Co-authored-by: felix <felix@zaslavskiy.net>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: Tim Miller <drasticactions@users.noreply.github.com>
Co-authored-by: Tim Miller <innerlogic4321@ghmail.com>
2023-07-25 11:46:40 -04:00
Zach Nussbaum
b3f84c56e7
fix: don't pass around the same dict object (#1264) 2023-07-24 15:28:12 -04:00
Andriy Mulyar
41f640577c
Update setup.py (#1263)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-07-24 14:25:04 -04:00
cosmic-snow
6431d46776
Fix models not getting downloaded in Python bindings (#1262)
- custom callbacks & session improvements PR (v1.0.6) had one too many checks
- remove the problematic config['url'] check
- add a crude test
- fixes #1261
2023-07-24 12:57:06 -04:00
Andriy Mulyar
2befff83d6 top_p error in gpt4all-api 2023-07-24 12:01:37 -04:00
Andriy Mulyar
3d10110314 Moved model check into cpu only paths 2023-07-24 11:34:50 -04:00
Zach Nussbaum
8aba2c9009
GPU Inference Server (#1112)
* feat: local inference server

* fix: source to use bash + vars

* chore: isort and black

* fix: make file + inference mode

* chore: logging

* refactor: remove old links

* fix: add new env vars

* feat: hf inference server

* refactor: remove old links

* test: batch and single response

* chore: black + isort

* separate gpu and cpu dockerfiles

* moved gpu to separate dockerfile

* Fixed test endpoints

* Edits to API. server won't start due to failed instantiation error

* Method signature

* fix: gpu_infer

* tests: fix tests

---------

Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-07-21 15:13:29 -04:00
Andriy Mulyar
58f0fcab57
Added health endpoint
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-07-20 21:23:29 -04:00
385olt
b4dbbd1485
Python bindings: Custom callbacks, chat session improvement, refactoring (#1145)
* Added the following features:
  1) prompt_model now uses the positional argument callback to return the response tokens.
  2) Because prompt_model takes the callback, prompt_model_streaming now only manages the queue and threading, which reduces code duplication.
  3) Added an optional verbose argument to prompt_model which prints out the prompt that is passed to the model.
  4) Chat sessions can now have a header, i.e. an instruction before the transcript of the conversation. The header is set at the creation of the chat session context.
  5) The generate function now accepts an optional callback.
  6) When streaming within a chat session, the user no longer needs to save the assistant's messages manually; this is done automatically.
  (A minimal usage sketch follows this entry.)

* added _empty_response_callback so I don't have to check if callback is None

* added docs

* now if the callback stops generation, the last token is ignored

* fixed type hints, reimplemented chat session header as a system prompt, minor refactoring, docs: removed section about manual update of chat session for streaming

* forgot to add some type hints!

* keep the model's config, taken from models.json when downloads are allowed, in the GPT4All class

* During chat sessions, the model-specific systemPrompt and promptTemplate are applied.

* implemented the changes

* Fixed typing. Now the user can set a prompt template that will be applied even outside of a chat session. The template can also have multiple placeholders that can be filled by passing a dictionary to the generate function

* reversed some changes concerning the prompt templates and their functionality

* fixed some type hints, changed list[float] to List[Float]

* fixed type hints, changed List[Float] to List[float]

* fix typo in the comment: Pepare => Prepare

---------

Signed-off-by: 385olt <385olt@gmail.com>
2023-07-19 18:36:49 -04:00
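
A minimal sketch of the surface described above, assuming the Python bindings of this release: a (token_id, token_string) -> bool response callback passed to generate, and a chat session whose header is a system prompt. The model filename is hypothetical and the exact signatures should be checked against the bindings docs.

```python
from gpt4all import GPT4All

model = GPT4All("ggml-model-gpt4all-falcon-q4_0.bin")  # hypothetical model file

# Assumed callback signature: (token_id, token_string) -> bool. Returning False
# stops generation; per the entry above, the last token is then ignored.
def print_and_continue(token_id: int, token_string: str) -> bool:
    print(token_string, end="", flush=True)
    return True

# Chat session with a system prompt ("header"). Assistant replies are kept in
# the session automatically, so the caller does not track them by hand.
with model.chat_session(system_prompt="You are a terse assistant."):
    model.generate("What is a callback?", callback=print_and_continue)
    model.generate("Give one example in Python.", callback=print_and_continue)
```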
AMOGUS
5f0aaf8bdb python binding's TopP also needs some love
Changed the Python binding's TopP from 0.1 to 0.4

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>
2023-07-19 10:36:23 -04:00
AMOGUS
4974ae917c Update default TopP to 0.4
TopP 0.1 was found to be somewhat too aggressive, so a more moderate default of 0.4 would be better suited for general use.

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>
2023-07-19 10:36:23 -04:00
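
For context, top_p controls nucleus sampling: tokens are drawn only from the smallest set whose cumulative probability exceeds top_p, so 0.1 is quite restrictive and 0.4 noticeably more permissive. A sketch of overriding it per call, assuming the generate() keyword of the Python bindings; the model filename is hypothetical.

```python
from gpt4all import GPT4All

model = GPT4All("ggml-model-gpt4all-falcon-q4_0.bin")  # hypothetical model file

# The new default of 0.4 applies when top_p is omitted; either value can still
# be passed explicitly for a single call.
conservative = model.generate("Describe the ocean.", top_p=0.1, max_tokens=80)
default_like = model.generate("Describe the ocean.", top_p=0.4, max_tokens=80)
```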
cosmic-snow
63849d9afc Add AVX/AVX2 requirement to main README.md
Signed-off-by: cosmic-snow <134004613+cosmic-snow@users.noreply.github.com>
2023-07-19 13:05:42 +02:00
cosmic-snow
2d02c65177
Handle edge cases when generating embeddings (#1215)
* Handle edge cases when generating embeddings
* Improve Python handling & add llmodel_c.h note
- In the Python bindings fail fast with a ValueError when text is empty
- Advise other bindings authors to do likewise in llmodel_c.h
2023-07-17 13:21:03 -07:00
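
A sketch of the Python-side behaviour described above, assuming the Embed4All helper from the bindings; the exception message is illustrative only.

```python
from gpt4all import Embed4All

embedder = Embed4All()  # loads the default embedding model

vector = embedder.embed("hello world")
print(len(vector))

# Per the entry above, empty input now fails fast with a ValueError instead of
# producing a meaningless embedding.
try:
    embedder.embed("")
except ValueError as exc:
    print(f"rejected empty text: {exc}")
```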
Felix Zaslavskiy
1e74171a7b
Java binding - Improve error check before loading Model file (#1206)
* Java binding - Add check that the Model file is readable.

* add todo for java binding.

---------

Co-authored-by: Feliks Zaslavskiy <feliks.zaslavskiy@optum.com>
Co-authored-by: felix <felix@zaslavskiy.net>
2023-07-15 18:07:42 -04:00
Andriy Mulyar
cfd70b69fc
Update gpt4all_python_embedding.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-07-14 14:54:56 -04:00
Andriy Mulyar
306105e62f
Update gpt4all_python_embedding.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-07-14 14:54:36 -04:00
Andriy Mulyar
89e277bb3c
Update gpt4all_python_embedding.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-07-14 14:30:14 -04:00
Adam Treat
f543affa9a Add better docs and threading support to bert. 2023-07-14 14:14:22 -04:00
Lakshay Kansal
6c8669cad3 highlighting rules for html and php and latex 2023-07-14 11:36:01 -04:00