- the bindings API changed in 057b9, but the CLI was not updated
- change 'std_passthrough' param to the renamed 'streaming'
- remove '_cli_override_response_callback' as it breaks and is no longer needed
- bump version to 0.3.4
* Initial Library Loader
* Load library as part of Model factory
* Dynamically search and find the dlls
* Update tests to use locally built runtimes
* Fix dylib loading, add macos runtime support for sample/tests
* Bypass automatic loading by default.
* Only set CMAKE_OSX_ARCHITECTURES if not already set, allow cross-compile
* Switch Loading again
* Update build scripts for mac/linux
* Update bindings to support newest breaking changes
* Fix build
* Use llmodel for Windows
* Actually, it does need to be libllmodel
* Name
* Remove TFMs, bypass loading by default
* Fix script
* Delete mac script
---------
Co-authored-by: Tim Miller <innerlogic4321@ghmail.com>
* fix typo so padding can be accessed
* Small cleanups for settings dialog.
* Fix the build.
* localdocs
* Fixup the rescan. Fix debug output.
* Add remove folder implementation.
* Remove this signal as unnecessary for now.
* Cleanup of the database, better chunking, better matching.
* Add new reverse prompt for new localdocs context feature.
* Add a new muted text color.
* Turn off the debugging messages by default.
* Add prompt processing and localdocs to the busy indicator in UI.
* Specify a large number of suffixes we will search for now.
* Add a collection list to support a UI.
* Add a localdocs tab.
* Start fleshing out the localdocs ui.
* Begin implementing the localdocs ui in earnest.
* Clean up the settings dialog for localdocs a bit.
* Add more of the UI for selecting collections for chats.
* Complete the settings for localdocs.
* Adds the collections to serialize and implement references for localdocs.
* Store the references separately so they are not sent to datalake.
* Add context link to references.
* Don't use the full path in reference text.
* Various fixes to remove unnecessary warnings.
* Add a newline
* ignore rider and vscode dirs
* create test project and basic model loading tests
* make sample print usage and be cleaner
* Get the backend as well as the client building/working with msvc.
* Libraries named differently on msvc.
* Bump the version number.
* This time remember to bump the version right after a release.
* rm redundant json
* More precise condition
* Nicer handling of missing model directory.
Correct exception message.
* Log where the model was found
* Concise model matching
* reduce nesting, better error reporting
* convert to f-strings
* less magic number
* 1. Clean up the interrupted download
2. Use with-syntax
* Redundant else
* Do not ignore explicitly passed 4 threads
* Correct return type
* Add optional verbosity
* Correct indentation of the multiline error message
* one function to append .bin suffix
* hotfix default verbose option
* export hidden types and fix prompt() type
* tiny typo (#739)
* Update README.md (#738)
* Update README.md
fix golang gpt4all import path
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
* Update README.md
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
---------
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
* fix(training instructions): model repo name (#728)
Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>
* C# Bindings - Prompt formatting (#712)
* Added support for custom prompt formatting
* more docs added
* bump version
* clean up cc files and revert things
* LocalDocs documentation initial (#761)
* LocalDocs documentation initial
* Improved localdocs documentation (#762)
* Improved localdocs documentation
* Improved localdocs documentation
* Improved localdocs documentation
* Improved localdocs documentation
* New tokenizer implementation for MPT and GPT-J
Improves output quality by making these tokenizers more closely
match the behavior of the huggingface `tokenizers` based BPE
tokenizers these models were trained with.
Featuring:
* Fixed unicode handling (via ICU)
* Fixed BPE token merge handling
* Complete added vocabulary handling
* buf_ref.into() can be const now
* add tokenizer readme w/ instructions for convert script
* Revert "add tokenizer readme w/ instructions for convert script"
This reverts commit 9c15d1f83e.
* Revert "buf_ref.into() can be const now"
This reverts commit 840e011b75.
* Revert "New tokenizer implementation for MPT and GPT-J"
This reverts commit ee3469ba6c.
* Fix remove model from model download for regular models.
* Fixed formatting of localdocs docs (#770)
* construct and return the correct response when the request is a chat completion
* chore: update typings to keep consistent with python api
* progress, updating createCompletion to mirror py api
* update spec, unfinished backend
* prebuild binaries for package distribution using prebuildify/node-gyp-build
* Get rid of blocking behavior for regenerate response.
* Add a label to the model loading visual indicator.
* Use the new MyButton for the regenerate response button.
* Add a hover and pressed to the visual indication of MyButton.
* Fix wording of this accessible description.
* Some color and theme enhancements to make the UI contrast a bit better.
* Make the comboboxes align in UI.
* chore: update namespace and fix prompt bug
* fix linux build
* add roadmap
* Fix offset of prompt/response icons for smaller text.
* Dlopen backend 5 (#779)
Major change to the backend that allows for pluggable versions of llama.cpp/ggml. This was squash-merged from dlopen_backend_5, where the history is preserved.
* Add a custom busy indicator to further align look and feel across platforms.
* Draw the indicator for combobox to ensure it looks the same on all platforms.
* Fix warning.
* Use the proper text color for sending messages.
* Fixup the plus new chat button.
* Make all the toolbuttons highlight on hover.
* Advanced avxonly autodetection (#744)
* Advanced avxonly requirement detection
* chore: support llama version >= 3 and ggml default
* Dlopen better implementation management (Version 2)
* Add fixme's and clean up a bit.
* Documentation improvements on LocalDocs (#790)
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* typo
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Adapt code
* Makefile changes (WIP to test)
* Debug
* Adapt makefile
* Style
* Implemented logging mechanism (#785)
* Cleaned up implementation management (#787)
* Cleaned up implementation management
* Initialize LLModel::m_implementation to nullptr
* llmodel.h: Moved dlhandle fwd declare above LLModel class
* Fix compile
* Fixed double-free in LLModel::Implementation destructor
* Allow user to specify custom search path via $GPT4ALL_IMPLEMENTATIONS_PATH (#789)
* Drop leftover include
* Add ldl in gpt4all.go for dynamic linking (#797)
* Logger should also output to stderr
* Fix MSVC Build, Update C# Binding Scripts
* Update gpt4all_chat.md (#800)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* C# Bindings - improved logging (#714)
* added optional support for .NET logging
* bump version and add missing alpha suffix
* avoid creating additional namespace for extensions
* prefer NullLogger/NullLoggerFactory over null-conditional ILogger to avoid errors
---------
Signed-off-by: mvenditto <venditto.matteo@gmail.com>
* Make localdocs work with server mode.
* Better name for database results.
* Fix for stale references after we regenerate.
* Don't hardcode these.
* Fix bug with resetting context with chatgpt model.
* Trying to shrink the copy+paste code and do more code sharing between backend model impl.
* Remove this as it is no longer useful.
* Try and fix build on mac.
* Fix mac build again.
* Add models/release.json to github repo to allow PRs
* Fixed spelling error in models.json
to make CI happy
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* updated bindings code for updated C api
* load all model libs
* model creation is failing... debugging
* load libs correctly
* fixed finding model libs
* cleanup
* cleanup
* more cleanup
* small typo fix
* updated binding.gyp
* Fixed model type for GPT-J (#815)
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* Fixed tons of warnings and clazy findings (#811)
* Some tweaks to UI to make window resizing smooth and flow nicely.
* Min constraints on about dialog.
* Prevent flashing of white on resize.
* Actually use the theme dark color for window background.
* Add the ability to change the directory via text field not just 'browse' button.
* add scripts to build dlls
* markdown doc gen
* add scripts, nearly done moving breaking changes
* merge with main
* oops, fixed comment
* more meaningful name
* leave for testing
* Only default mlock on macOS where swap seems to be a problem
Repeating the change that once was done in https://github.com/nomic-ai/gpt4all/pull/663 but then was overridden by 9c6c09cbd2
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
* Add a collection immediately and show a placeholder + busy indicator in localdocs settings.
* some tweaks to optional types and defaults
* mingw script for windows compilation
* Update README.md
huggingface -> Hugging Face
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
* Backend prompt dedup (#822)
* Deduplicated prompt() function code
* Better error handling when the model fails to load.
* We no longer have an avx_only repository and better error handling for minimum hardware requirements. (#833)
* Update build_and_run.md (#834)
Signed-off-by: AT <manyoso@users.noreply.github.com>
* Trying out a new feature to download directly from huggingface.
* Try again with the url.
* Allow for download of models hosted on third party hosts.
* Fix up for newer models on reset context. This prevents the model from totally failing after a reset context.
* Update to latest llama.cpp
* Remove older models that are not as popular. (#837)
* Remove older models that are not as popular.
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update models.json (#838)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* feat: finally compiled on windows (MSVC) goadman
* update README and spec and promisify createCompletion
* update d.ts
* Make installers work with mac/windows for big backend change.
* Need this so the linux installer packages it as a dependency.
* Try and fix mac.
* Fix compile on mac.
* These need to be installed for them to be packaged and work for both mac and windows.
* Fix installers for windows and linux.
* Fix symbol resolution on windows.
* updated pypi version
* Release notes for version 2.4.5 (#853)
* Update README.md (#854)
Signed-off-by: AT <manyoso@users.noreply.github.com>
* Documentation for model sideloading (#851)
* Documentation for model sideloading
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Speculative fix for windows llama models with installer.
* Revert "Speculative fix for windows llama models with installer."
This reverts commit add725d1eb.
* Revert "Fix bug with resetting context with chatgpt model." (#859)
This reverts commit e0dcf6a14f.
* Fix llama models on linux and windows.
* Bump the version.
* New release notes
* Set thread counts after loading model (#836)
* Update gpt4all_faq.md (#861)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Supports downloading officially supported models not hosted on gpt4all R2
* Replit Model (#713)
* porting over replit code model to gpt4all
* replaced memory with kv_self struct
* continuing debug
* welp it built but lot of sus things
* working model loading and somewhat working generate.. need to format response?
* revert back to semi working version
* finally got rid of weird formatting
* figured out problem is with python bindings - this is good to go for testing
* addressing PR feedback
* output refactor
* fixed prompt response collection
* cleanup
* addressing PR comments
* building replit backend with new ggmlver code
* chatllm replit and clean python files
* cleanup
* updated replit to match new llmodel api
* match llmodel api and change size_t to Token
* resolve PR comments
* replit model commit comment
* Synced llama.cpp.cmake with upstream (#887)
* Fix for windows.
* fix: build script
* Revert "Synced llama.cpp.cmake with upstream (#887)"
This reverts commit 5c5e10c1f5.
* Update README.md (#906)
Add PyPI link and add clickable, more specific link to documentation
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
* Update CollectionsDialog.qml (#856)
Phrasing for localdocs
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* sampling: remove incorrect offset for n_vocab (#900)
no effect, but avoids a *potential* bug later if we use actualVocabSize, which
applies when a model has a larger embedding tensor / number of output logits
than actually trained tokens, to allow room for adding extras in fine-tuning.
Presently all of our models have had "placeholder" tokens in the vocab, so this
hasn't broken anything, but if the sizes did differ we would want the
equivalent of `logits[:actualVocabSize]` (the start point is unchanged), not
`logits[-actualVocabSize:]` (which is what the offset produced).
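The slicing distinction can be illustrated with a small sketch; the sizes and values here are made up for illustration and do not reflect any real model:

```python
import numpy as np

# Hypothetical sizes: a model may allocate more output logits (n_vocab)
# than the number of tokens it was actually trained on, leaving padding
# slots at the *end* of the vocab for tokens added during fine-tuning.
n_vocab = 8
actual_vocab_size = 6

logits = np.arange(n_vocab, dtype=np.float32)  # [0.0, 1.0, ..., 7.0]

# Correct: keep the first actual_vocab_size logits; token id i still
# lines up with logits[i] because the start point is unchanged.
trained = logits[:actual_vocab_size]

# Incorrect: taking the *last* actual_vocab_size logits shifts the
# start point and misaligns every token id with its logit.
shifted = logits[-actual_vocab_size:]

print(trained.tolist())  # [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
print(shifted.tolist())  # [2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
```

Because all current models pad the vocab with placeholder tokens, both slices happen to cover the same trained tokens today; the distinction only matters if the sizes ever diverge.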
* non-llama: explicitly greedy sampling for temp<=0 (#901)
copied directly from llama.cpp; without this, temp=0.0 just scales all
the logits to infinity and gives bad output
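A minimal numpy sketch of the idea, with hypothetical names (not the actual backend code), showing why temp <= 0 needs an explicit greedy path:

```python
import numpy as np

def sample_token(logits: np.ndarray, temp: float) -> int:
    """Temperature sampling with a greedy fallback.

    Dividing logits by temp == 0 would scale them all to +/- infinity
    and break the softmax; argmax is the correct limit of temp -> 0.
    """
    if temp <= 0.0:
        return int(np.argmax(logits))   # greedy: highest logit wins
    scaled = logits / temp
    scaled -= scaled.max()              # subtract max for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()                # softmax over the scaled logits
    return int(np.random.choice(len(probs), p=probs))

print(sample_token(np.array([0.1, 5.0, 0.2]), 0.0))  # 1 (greedy argmax)
```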
* work on thread safety and cleaning up, adding object option
* chore: cleanup tests and spec
* refactor for object based startup
* more docs
* Circleci builds for Linux, Windows, and macOS for gpt4all-chat.
* more docs
* Synced llama.cpp.cmake with upstream
* add lock file to ignore codespell
* Move usage in Python bindings readme to own section (#907)
Have own section for short usage example, as it is not specific to local build
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
* Always sync for circleci.
* update models json with replit model
* Forgot to bump.
* Change the default values for generation in GUI
* Removed double-static from variables in replit.cpp
The anonymous namespace already makes it static.
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* Generator in Python Bindings - streaming yields tokens at a time (#895)
* generator method
* cleanup
* bump version number for clarity
* added replace in decode to avoid a UnicodeDecodeError
* revert back to _build_prompt
* Do auto detection by default in C++ API
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* remove comment
* add comments for index.h
* chore: add new models and edit ignore files and documentation
* llama on Metal (#885)
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
* Revert "llama on Metal (#885)"
This reverts commit b59ce1c6e7.
* add more readme stuff and debug info
* spell
* Metal+LLama take two (#929)
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
* add prebuilts for windows
* Add new solution for context links that does not force regular markdown (#938)
in responses, which is disruptive to code completions.
* add prettier
* split out non llm related methods into util.js, add listModels method
* add prebuild script for creating all platforms bindings at once
* check in prebuild linux/so libs and allow distribution of napi prebuilds
* apply autoformatter
* move constants in config.js, add loadModel and retrieveModel methods
* Clean up the context links a bit.
* Don't interfere with selection.
* Add code blocks and python syntax highlighting.
* Spelling error.
* Add c++/c highlighting support.
* Fix some bugs with bash syntax and add some C23 keywords.
* Bugfixes for prompt syntax highlighting.
* Try and fix a false positive from codespell.
* When recalculating context we can't erase the BOS.
* Fix Windows MSVC AVX builds
- bug introduced in 557c82b5ed
- currently getting: `warning C5102: ignoring invalid command-line macro definition '/arch:AVX2'`
- solution is to use `_options(...)` not `_definitions(...)`
* remove .so unneeded path
---------
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Signed-off-by: mvenditto <venditto.matteo@gmail.com>
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: Justin Wang <justinwang46@gmail.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: redthing1 <redthing1@alt.icu>
Co-authored-by: Konstantin Gukov <gukkos@gmail.com>
Co-authored-by: Richard Guo <richardg7890@gmail.com>
Co-authored-by: Joseph Mearman <joseph@mearman.co.uk>
Co-authored-by: Nandakumar <nandagunasekaran@gmail.com>
Co-authored-by: Chase McDougall <chasemcdougall@hotmail.com>
Co-authored-by: mvenditto <venditto.matteo@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: FoivosC <christoulakis.foivos@adlittle.com>
Co-authored-by: limez <limez@protonmail.com>
Co-authored-by: AT <manyoso@users.noreply.github.com>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
Co-authored-by: niansa <anton-sa@web.de>
Co-authored-by: mudler <mudler@mocaccino.org>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Tim Miller <innerlogic4321@gmail.com>
Co-authored-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: pingpongching <golololologol02@gmail.com>
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: Cosmic Snow <cosmic-snow@mailfence.com>
* chore: boilerplate, refactor in future
* chore: boilerplate
* feat: can compile successfully
* document .gyp file
* add src, test and fix gyp
* progress on prompting and some helper methods
* add destructor and basic prompting work, prepare download function
* download function done
* download function edits and adding documentation
* fix bindings memory issue and add tests and specs
* add more documentation and readme
* add npmignore
* Update README.md
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* Update package.json - redundant scripts
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
---------
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Fix occurrences of the prompt callback being incorrectly specified, or
the response callback's prototype being incorrectly used in its place.
Signed-off-by: Juuso Alasuutari <juuso.alasuutari@gmail.com>