* make sure encoding is identity for Range requests
* use a .part file for partial downloads
* verify using file size and MD5 from models3.json
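A minimal Python sketch of the verification flow described above, assuming a `requests`-style HTTP client; the actual change lives in the chat application's downloader, and the URL, size, and MD5 here stand in for fields from models3.json:
```python
import hashlib
import os
import requests  # assumed dependency for this sketch

def download_verified(url: str, dest: str, expected_size: int, expected_md5: str) -> None:
    part = dest + ".part"  # partial downloads live in a .part file
    offset = os.path.getsize(part) if os.path.exists(part) else 0
    headers = {
        "Range": f"bytes={offset}-",
        "Accept-Encoding": "identity",  # compression would break byte-offset resume
    }
    with requests.get(url, headers=headers, stream=True) as r:
        r.raise_for_status()
        with open(part, "ab") as f:
            for chunk in r.iter_content(chunk_size=1 << 20):
                f.write(chunk)
    if os.path.getsize(part) != expected_size:
        raise IOError("size mismatch vs. models3.json")
    md5 = hashlib.md5()
    with open(part, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)
    if md5.hexdigest() != expected_md5:
        raise IOError("MD5 mismatch vs. models3.json")
    os.rename(part, dest)  # promote the .part file only after it verifies
```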
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
* chat: fix non-AVX CPU detection on Windows
* bindings: throw exception instead of logging to console
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Key changes:
* honor empty system prompt argument
* current_chat_session is now read-only and defaults to None
* deprecate fallback prompt template for unknown models
* fix mistakes from #2086
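A hedged sketch of the resulting Python API (the model filename is hypothetical):
```python
from gpt4all import GPT4All

model = GPT4All("rift-coder-v0-7b-q4_0.gguf")  # hypothetical local model
print(model.current_chat_session)  # defaults to None outside any session

with model.chat_session(system_prompt=""):  # an empty system prompt is now honored as-is
    model.generate("Hello")
    print(model.current_chat_session)  # the transcript accumulated so far

# model.current_chat_session = []  # would now fail: the property is read-only
```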
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Signed-off-by: Tare Ebelo <75279482+TareHimself@users.noreply.github.com>
Signed-off-by: jacob <jacoobes@sern.dev>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: jacob <jacoobes@sern.dev>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
Also dynamically limit the GPU layers and context length fields to the maximum supported by the model.
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
The only reason to use as_file is to support copying a file from a
frozen package. We don't currently support this anyway, and as_file
isn't supported until Python 3.9, so get rid of it.
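For context, the pattern being removed looks roughly like this (the resource name is hypothetical; both calls require Python 3.9+):
```python
from importlib.resources import as_file, files  # both added in Python 3.9

# as_file materializes a packaged resource at a real filesystem path, which only
# matters when the package is frozen into an archive; a plain path suffices otherwise
with as_file(files("gpt4all").joinpath("models.json")) as path:  # hypothetical resource
    data = path.read_text()
```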
Fixes #1605
* disable llama.cpp logging unless GPT4ALL_VERBOSE_LLAMACPP envvar is
nonempty
* make verbose flag for retrieve_model default false (but also be
overridable via gpt4all constructor)
should be able to run a basic test:
```python
import gpt4all
model = gpt4all.GPT4All('/Users/aaron/Downloads/rift-coder-v0-7b-q4_0.gguf')
print(model.generate('def fib(n):'))
```
and see no non-model output when successful
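To bring the llama.cpp logs back, something like this should work (the environment variable name comes from the change above; passing `verbose` through the constructor is the override it mentions):
```python
import os
os.environ["GPT4ALL_VERBOSE_LLAMACPP"] = "1"  # any nonempty value re-enables llama.cpp logging

import gpt4all  # import after setting the variable so the backend sees it
model = gpt4all.GPT4All('/Users/aaron/Downloads/rift-coder-v0-7b-q4_0.gguf', verbose=True)
```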
Running `git clone --recurse-submodules git@github.com:nomic-ai/gpt4all.git`
returns `Permission denied (publickey)` as shown below:
```
git clone --recurse-submodules git@github.com:nomic-ai/gpt4all.git
Cloning into gpt4all...
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.
```
This change replaces `git@github.com:nomic-ai/gpt4all.git` with
`https://github.com/nomic-ai/gpt4all.git` which runs without permission issues.
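For reference, the working HTTPS form of the command:
```
git clone --recurse-submodules https://github.com/nomic-ai/gpt4all.git
```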
resolves nomic-ai/gpt4all#8, resolves nomic-ai/gpt4all#49
* feat(typescript)/dynamic template (#1287)
* remove packaged yarn
* prompt templates update wip
* prompt template update
* system prompt template, update types, remove embed promises, cleanup
* support both snakecased and camelcased prompt context (see the sketch after this list)
* fix #1277 libbert, libfalcon and libreplit libs not being moved into the right folder after build
* added support for modelConfigFile param, allowing the user to specify a local file instead of downloading the remote models.json. added a warning message if code fails to load a model config. included prompt context docs by amogus.
* snakecase warning, put logic for loading local models.json into listModels, added constant for the default remote model list url, test improvements, simpler hasOwnProperty call
* add DEFAULT_PROMPT_CONTEXT, export new constants
* add md5sum testcase and fix constants export
* update types
* throw if attempting to list models without a source
* rebuild docs
* fix download logging undefined url, toFixed typo, pass config filesize in for future progress report
* added overload with union types
* bump to 2.2.0, remove alpha
* fix code spelling
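The snake/camel prompt-context item above amounts to key normalization; a language-agnostic sketch in Python (the binding itself is TypeScript, and the helper name here is hypothetical):
```python
import re

def snake_case_keys(ctx: dict) -> dict:
    """Accept camelCase keys (e.g. topK) by normalizing them to snake_case (top_k)."""
    def to_snake(name: str) -> str:
        return re.sub(r"(?<=[a-z0-9])([A-Z])", lambda m: "_" + m.group(1).lower(), name)
    return {to_snake(k): v for k, v in ctx.items()}

print(snake_case_keys({"topK": 40, "nPredict": 128, "temp": 0.7}))
# -> {'top_k': 40, 'n_predict': 128, 'temp': 0.7}
```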
---------
Co-authored-by: Andreas Obersteiner <8959303+iimez@users.noreply.github.com>
- minor oversight: there are now six supported architectures
- LLAMA -> LLaMA (for v1)
- note about Llama 2 and link to license
- limit some of the paragraphs to 150 chars
Signed-off-by: cosmic-snow <134004613+cosmic-snow@users.noreply.github.com>
* fix: esm and cjs compatibility
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* Update prebuild.js
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* fix gpt4all.js
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* Fix compile for Windows and Linux again. PLEASE DON'T REVERT THIS!
* version bump
* polish up spec and build scripts
* lock file refresh
* fix: proper resource closing and error handling
* check to make sure libPath is not null
* add msvc build script and update readme requirements
* python workflows in circleci
* dummy python change
* no need for main
* second hold for pypi deploy
* let me deploy pls
* bring back when condition
* Typo, ignore list (#967)
Fix typo in javadoc,
Add word to ignore list for codespellrc
---------
Co-authored-by: felix <felix@zaslavskiy.net>
* llmodel: change tokenToString to not use string_view (#968)
fixes a definite use-after-free and likely avoids some other potential
ones. A std::string converts to a std::string_view automatically, but as
soon as the std::string in question goes out of scope it is freed,
leaving the string_view pointing at freed memory. This is *mostly* fine
when returning a reference to the tokenizer's internal vocab table, but
it's, imo, too easy to return a reference to a dynamically constructed
string this way, as replit is doing (and unfortunately needs to do, to
convert the internal whitespace-replacement symbol back to a space)
* Initial Library Loader for .NET Bindings / Update bindings to support newest changes (#763)
* Initial Library Loader
* Load library as part of Model factory
* Dynamically search and find the dlls
* Update tests to use locally built runtimes
* Fix dylib loading, add macos runtime support for sample/tests
* Bypass automatic loading by default.
* Only set CMAKE_OSX_ARCHITECTURES if not already set, allow cross-compile
* Switch Loading again
* Update build scripts for mac/linux
* Update bindings to support newest breaking changes
* Fix build
* Use llmodel for Windows
* Actually, it does need to be libllmodel
* Name
* Remove TFMs, bypass loading by default
* Fix script
* Delete mac script
---------
Co-authored-by: Tim Miller <innerlogic4321@ghmail.com>
* bump llama.cpp mainline to latest (#964)
* fix prompt context so it's preserved in class
* update setup.py
* metal replit (#931)
Makes Replit work with Metal and removes its use of `mem_per_token`
in favor of fixed-size scratch buffers (closer to llama.cpp)
* update documentation scripts and generation to include readme.md
* update readme and documentation for source
* begin tests, import jest, fix listModels export
* fix typo
* chore: update spec
* fix: finally, reduced potential of empty string
* chore: add stub for createTokenStream
* refactor: protecting resources properly
* add basic jest tests
* update
* update readme
* refactor: namespace the res variable
* circleci integration to automatically build docs
* add starter docs
* typo
* more circle ci typo
* forgot to add nodejs circle ci orb
* fix circle ci
* feat: @iimez verify download and fix prebuild script
* fix: oops, option name wrong
* fix: gpt4all utils not emitting docs
* chore: fix up scripts
* fix: update docs and typings for md5 sum
* fix: macos compilation
* some refactoring
* Update index.cc
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* update readme and enable exceptions on mac
* circle ci progress
* basic embedding with sbert (not tested & cpp side only)
* fix circle ci
* fix circle ci
* update circle ci script
* bruh
* fix again
* fix
* fixed required workflows
* fix ci
* fix pwd
* fix pwd
* update ci
* revert
* fix
* prevent rebuild
* remove noop
* Update continue_config.yml
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* Update binding.gyp
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* fix fs not found
* remove cpp 20 standard
* fix warnings, safer way to calculate arrsize
* readd build backend
* basic embeddings and yarn test
* fix circle ci
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Update continue_config.yml
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
fix macos paths
update readme and roadmap
split up spec
update readme
check for url in models.json
update docs and inline stuff
update yarn configuration and readme
update readme
readd npm publish script
add exceptions
bruh one space broke the yaml
codespell
oops forgot to add runtimes folder
bump version
try code snippet https://support.circleci.com/hc/en-us/articles/8325075309339-How-to-install-NPM-on-Windows-images
add fallback for unknown architectures
attached to wrong workspace
hopefully fix
moving everything under backend to persist
should work now
* Update README.md
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
---------
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: Richard Guo <richardg7890@gmail.com>
Co-authored-by: Felix Zaslavskiy <felix.zaslavskiy@gmail.com>
Co-authored-by: felix <felix@zaslavskiy.net>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: Tim Miller <drasticactions@users.noreply.github.com>
Co-authored-by: Tim Miller <innerlogic4321@ghmail.com>
- custom callbacks & session improvements PR (v1.0.6) had one too many checks
- remove the problematic config['url'] check
- add a crude test
- fixes #1261
* Added the following features:
  1. prompt_model now uses its positional callback argument to return the response tokens.
  2. Thanks to prompt_model's callback argument, prompt_model_streaming now only manages the queue and threading, which reduces code duplication.
  3. Added an optional verbose argument to prompt_model that prints the prompt passed to the model.
  4. Chat sessions can now have a header, i.e. an instruction preceding the transcript of the conversation; the header is set when the chat session context is created.
  5. The generate function now accepts an optional callback.
  6. When streaming within a chat session, the user no longer needs to save the assistant's messages manually; this is done automatically. (A usage sketch follows this list.)
* added _empty_response_callback so I don't have to check if callback is None
* added docs
* now if the callback stops generation, the last token is ignored
* fixed type hints, reimplemented chat session header as a system prompt, minor refactoring, docs: removed section about manual update of chat session for streaming
* forgot to add some type hints!
* keep the model's config (taken from models.json when downloading is allowed) in the GPT4All class
* During chat sessions, the model-specific systemPrompt and promptTemplate are applied.
* implemented the changes
* Fixed typing. Now the user can set a prompt template that will be applied even outside of a chat session. The template can also have multiple placeholders that can be filled by passing a dictionary to the generate function
* reversed some changes concerning the prompt templates and their functionality
* fixed some type hints, changed list[float] to List[Float]
* fixed type hints, changed List[Float] to List[float]
* fix typo in the comment: Pepare => Prepare
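A hedged usage sketch of the feature list above (the model filename is hypothetical; the callback follows the `(token_id, response) -> bool` convention, where returning False stops generation):
```python
from gpt4all import GPT4All

model = GPT4All("rift-coder-v0-7b-q4_0.gguf")  # hypothetical local model

def stop_after_200_chars(token_id: int, response: str) -> bool:
    # returning False stops generation; per the fix above, the last token is then ignored
    stop_after_200_chars.total = getattr(stop_after_200_chars, "total", 0) + len(response)
    return stop_after_200_chars.total < 200

# the header (system prompt) is fixed when the chat session context is created
with model.chat_session(system_prompt="You are a terse assistant."):
    for token in model.generate("Hello!", streaming=True):
        print(token, end="")
    # assistant messages were recorded in the session automatically while streaming

# generate also accepts the optional callback directly
model.generate("Write a haiku.", callback=stop_after_200_chars)
```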
---------
Signed-off-by: 385olt <385olt@gmail.com>
* Handle edge cases when generating embeddings
* Improve Python handling & add llmodel_c.h note
- In the Python bindings, fail fast with a ValueError when text is empty (sketched below)
- Advise other binding authors to do likewise in llmodel_c.h
* python: do not mutate locals()
* python: fix (some) typing complaints
* python: queue sentinel need not be a str
* python: make long inference tests opt in
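A sketch of the fail-fast behavior mentioned above, assuming the bindings' Embed4All helper:
```python
from gpt4all import Embed4All

embedder = Embed4All()  # downloads the default embedding model on first use
try:
    embedder.embed("")  # empty text is an edge case the backend can't embed
except ValueError as exc:
    print("rejected:", exc)
```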
* Makefiles, black, isort
* Black and isort
* unit tests and generation method
* chat context provider
* context does not reset
* Current state
* Fixup
* Python bindings with unit tests
* GPT4All Python Bindings: chat contexts, tests
* New python bindings and backend fixes
* Black and Isort
* Documentation error
* preserved n_predict for backwards compat with langchain
---------
Co-authored-by: Adam Treat <treat.adam@gmail.com>