typescript: publish alpha on npm and lots of cleanup, documentation, and more (#913)
* fix typo so padding can be accessed
* Small cleanups for settings dialog.
* Fix the build.
* localdocs
* Fixup the rescan. Fix debug output.
* Add remove folder implementation.
* Remove this signal as unnecessary for now.
* Cleanup of the database, better chunking, better matching.
* Add new reverse prompt for new localdocs context feature.
* Add a new muted text color.
* Turn off the debugging messages by default.
* Add prompt processing and localdocs to the busy indicator in UI.
* Specify a large number of suffixes we will search for now.
* Add a collection list to support a UI.
* Add a localdocs tab.
* Start fleshing out the localdocs ui.
* Begin implementing the localdocs ui in earnest.
* Clean up the settings dialog for localdocs a bit.
* Add more of the UI for selecting collections for chats.
* Complete the settings for localdocs.
* Adds the collections to serialize and implement references for localdocs.
* Store the references separately so they are not sent to datalake.
* Add context link to references.
* Don't use the full path in reference text.
* Various fixes to remove unnecessary warnings.
* Add a newline
* ignore rider and vscode dirs
* create test project and basic model loading tests
* make sample print usage and cleaner
* Get the backend as well as the client building/working with msvc.
* Libraries named differently on msvc.
* Bump the version number.
* This time remember to bump the version right after a release.
* rm redundant json
* More precise condition
* Nicer handling of missing model directory.
Correct exception message.
* Log where the model was found
* Concise model matching
* reduce nesting, better error reporting
* convert to f-strings
* less magic number
* 1. Cleanup the interrupted download
2. with-syntax
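A minimal sketch of the two fixes above (cleanup of an interrupted download, done via with-syntax); the function name and chunk protocol here are hypothetical, not the repo's actual API:

```python
import os

def download_to(path, chunks):
    """Write chunks to path; remove the partial file if the download is interrupted."""
    try:
        # with-syntax guarantees the file handle is closed even on error
        with open(path, "wb") as f:
            for chunk in chunks:
                f.write(chunk)
    except Exception:
        # clean up the interrupted download: never leave a partial file behind
        if os.path.exists(path):
            os.remove(path)
        raise
```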
* Redundant else
* Do not ignore explicitly passed 4 threads
* Correct return type
* Add optional verbosity
* Correct indentation of the multiline error message
* one function to append .bin suffix
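The single suffix-appending function might look like this (a hedged sketch; the real helper's name and location are not shown in this log):

```python
def with_bin_suffix(name):
    """Append the .bin suffix exactly once, leaving already-suffixed names unchanged."""
    return name if name.endswith(".bin") else name + ".bin"
```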
* hotfix default verbose option
* export hidden types and fix prompt() type
* tiny typo (#739)
* Update README.md (#738)
* Update README.md
fix golang gpt4all import path
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
* Update README.md
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
---------
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
* fix(training instructions): model repo name (#728)
Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>
* C# Bindings - Prompt formatting (#712)
* Added support for custom prompt formatting
* more docs added
* bump version
* clean up cc files and revert things
* LocalDocs documentation initial (#761)
* LocalDocs documentation initial
* Improved localdocs documentation (#762)
* Improved localdocs documentation
* Improved localdocs documentation
* Improved localdocs documentation
* Improved localdocs documentation
* New tokenizer implementation for MPT and GPT-J
Improves output quality by making these tokenizers more closely
match the behavior of the huggingface `tokenizers` based BPE
tokenizers these models were trained with.
Featuring:
* Fixed unicode handling (via ICU)
* Fixed BPE token merge handling
* Complete added vocabulary handling
* buf_ref.into() can be const now
* add tokenizer readme w/ instructions for convert script
* Revert "add tokenizer readme w/ instructions for convert script"
This reverts commit 9c15d1f83ee2f9387126cf4892cd94f39bdbff5e.
* Revert "buf_ref.into() can be const now"
This reverts commit 840e011b75fb77f761f288a75b4b2a86358dcb9e.
* Revert "New tokenizer implementation for MPT and GPT-J"
This reverts commit ee3469ba6c6d5f51a1c5fb9c6ec96eff3f4075e3.
* Fix remove model from model download for regular models.
* Fixed formatting of localdocs docs (#770)
* construct and return the correct response when the request is a chat completion
* chore: update typings to keep consistent with python api
* progress, updating createCompletion to mirror py api
* update spec, unfinished backend
* prebuild binaries for package distribution using prebuildify/node-gyp-build
* Get rid of blocking behavior for regenerate response.
* Add a label to the model loading visual indicator.
* Use the new MyButton for the regenerate response button.
* Add a hover and pressed to the visual indication of MyButton.
* Fix wording of this accessible description.
* Some color and theme enhancements to make the UI contrast a bit better.
* Make the comboboxes align in UI.
* chore: update namespace and fix prompt bug
* fix linux build
* add roadmap
* Fix offset of prompt/response icons for smaller text.
* Dlopen backend 5 (#779)
Major change to the backend that allows for pluggable versions of llama.cpp/ggml. This was squash-merged from dlopen_backend_5, where the history is preserved.
* Add a custom busy indicator to further align look and feel across platforms.
* Draw the indicator for combobox to ensure it looks the same on all platforms.
* Fix warning.
* Use the proper text color for sending messages.
* Fixup the plus new chat button.
* Make all the toolbuttons highlight on hover.
* Advanced avxonly autodetection (#744)
* Advanced avxonly requirement detection
* chore: support llamaversion >= 3 and ggml default
* Dlopen better implementation management (Version 2)
* Add fixme's and clean up a bit.
* Documentation improvements on LocalDocs (#790)
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* typo
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Adapt code
* Makefile changes (WIP to test)
* Debug
* Adapt makefile
* Style
* Implemented logging mechanism (#785)
* Cleaned up implementation management (#787)
* Cleaned up implementation management
* Initialize LLModel::m_implementation to nullptr
* llmodel.h: Moved dlhandle fwd declare above LLModel class
* Fix compile
* Fixed double-free in LLModel::Implementation destructor
* Allow user to specify custom search path via $GPT4ALL_IMPLEMENTATIONS_PATH (#789)
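The $GPT4ALL_IMPLEMENTATIONS_PATH lookup described above could be sketched like this (the actual implementation is in the C++ backend; this Python version is illustrative, assuming a path-list search with the env var taking precedence):

```python
import os

def implementation_search_paths(default_dir):
    """Build the implementation-library search list; entries from
    $GPT4ALL_IMPLEMENTATIONS_PATH (os.pathsep-separated) come first,
    followed by the built-in default directory."""
    paths = []
    custom = os.environ.get("GPT4ALL_IMPLEMENTATIONS_PATH", "")
    for entry in custom.split(os.pathsep):
        if entry:  # skip empty segments from an unset or empty variable
            paths.append(entry)
    paths.append(default_dir)
    return paths
```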
* Drop leftover include
* Add ldl in gpt4all.go for dynamic linking (#797)
* Logger should also output to stderr
* Fix MSVC Build, Update C# Binding Scripts
* Update gpt4all_chat.md (#800)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* C# Bindings - improved logging (#714)
* added optional support for .NET logging
* bump version and add missing alpha suffix
* avoid creating additional namespace for extensions
* prefer NullLogger/NullLoggerFactory over null-conditional ILogger to avoid errors
---------
Signed-off-by: mvenditto <venditto.matteo@gmail.com>
* Make localdocs work with server mode.
* Better name for database results.
* Fix for stale references after we regenerate.
* Don't hardcode these.
* Fix bug with resetting context with chatgpt model.
* Trying to shrink the copy+paste code and do more code sharing between backend model impl.
* Remove this as it is no longer useful.
* Try and fix build on mac.
* Fix mac build again.
* Add models/release.json to github repo to allow PRs
* Fixed spelling error in models.json
to make CI happy
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* updated bindings code for updated C api
* load all model libs
* model creation is failing... debugging
* load libs correctly
* fixed finding model libs
* cleanup
* cleanup
* more cleanup
* small typo fix
* updated binding.gyp
* Fixed model type for GPT-J (#815)
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* Fixed tons of warnings and clazy findings (#811)
* Some tweaks to UI to make window resizing smooth and flow nicely.
* Min constraints on about dialog.
* Prevent flashing of white on resize.
* Actually use the theme dark color for window background.
* Add the ability to change the directory via text field not just 'browse' button.
* add scripts to build dlls
* markdown doc gen
* add scripts, nearly done moving breaking changes
* merge with main
* oops, fixed comment
* more meaningful name
* leave for testing
* Only default mlock on macOS where swap seems to be a problem
Repeating the change that was once made in https://github.com/nomic-ai/gpt4all/pull/663 but then was overridden by https://github.com/nomic-ai/gpt4all/commit/9c6c09cbd21a91773e724bd6ddff6084747af000
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
* Add a collection immediately and show a placeholder + busy indicator in localdocs settings.
* some tweaks to optional types and defaults
* mingw script for windows compilation
* Update README.md
huggingface -> Hugging Face
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
* Backend prompt dedup (#822)
* Deduplicated prompt() function code
* Better error handling when the model fails to load.
* We no longer have an avx_only repository and better error handling for minimum hardware requirements. (#833)
* Update build_and_run.md (#834)
Signed-off-by: AT <manyoso@users.noreply.github.com>
* Trying out a new feature to download directly from huggingface.
* Try again with the url.
* Allow for download of models hosted on third party hosts.
* Fix up for newer models on reset context. This prevents the model from totally failing after a reset context.
* Update to latest llama.cpp
* Remove older models that are not as popular. (#837)
* Remove older models that are not as popular.
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update models.json (#838)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* feat: finally compiled on windows (MSVC) goadman
* update README and spec and promisify createCompletion
* update d.ts
* Make installers work with mac/windows for big backend change.
* Need this so the linux installer packages it as a dependency.
* Try and fix mac.
* Fix compile on mac.
* These need to be installed for them to be packaged and work for both mac and windows.
* Fix installers for windows and linux.
* Fix symbol resolution on windows.
* updated pypi version
* Release notes for version 2.4.5 (#853)
* Update README.md (#854)
Signed-off-by: AT <manyoso@users.noreply.github.com>
* Documentation for model sideloading (#851)
* Documentation for model sideloading
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Speculative fix for windows llama models with installer.
* Revert "Speculative fix for windows llama models with installer."
This reverts commit add725d1ebef2391c6c74f86898ae0afda4d3337.
* Revert "Fix bug with resetting context with chatgpt model." (#859)
This reverts commit e0dcf6a14f89134987fa63cdb33a40305885921a.
* Fix llama models on linux and windows.
* Bump the version.
* New release notes
* Set thread counts after loading model (#836)
* Update gpt4all_faq.md (#861)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Supports downloading officially supported models not hosted on gpt4all R2
* Replit Model (#713)
* porting over replit code model to gpt4all
* replaced memory with kv_self struct
* continuing debug
* welp it built but lot of sus things
* working model loading and somewhat working generate.. need to format response?
* revert back to semi working version
* finally got rid of weird formatting
* figured out problem is with python bindings - this is good to go for testing
* addressing PR feedback
* output refactor
* fixed prompt response collection
* cleanup
* addressing PR comments
* building replit backend with new ggmlver code
* chatllm replit and clean python files
* cleanup
* updated replit to match new llmodel api
* match llmodel api and change size_t to Token
* resolve PR comments
* replit model commit comment
* Synced llama.cpp.cmake with upstream (#887)
* Fix for windows.
* fix: build script
* Revert "Synced llama.cpp.cmake with upstream (#887)"
This reverts commit 5c5e10c1f5ac03f9dbab4cc4d8c5bb02d286b46f.
* Update README.md (#906)
Add PyPI link and add clickable, more specific link to documentation
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
* Update CollectionsDialog.qml (#856)
Phrasing for localdocs
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* sampling: remove incorrect offset for n_vocab (#900)
no effect, but avoids a *potential* bug later if we use
actualVocabSize, which applies when a model has a larger embedding
tensor / number of output logits than actually trained tokens, to
allow room for adding extras in finetuning. Presently all of our
models have had "placeholder" tokens in the vocab, so this hasn't
broken anything, but if the sizes did differ we would want the
equivalent of `logits[:actualVocabSize]` (the start point is
unchanged), not `logits[-actualVocabSize:]` (which is what the
removed offset computed).
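The slice distinction can be sketched in Python (only the slice semantics are shown; `actual_vocab_size` and the logit values are illustrative):

```python
def sampling_window(logits, actual_vocab_size):
    """Keep only the trained vocabulary's logits.

    Placeholder rows added for finetuning headroom sit at the END of the
    logit vector, so the correct slice keeps the start point unchanged;
    taking the last actual_vocab_size entries instead would shift every
    token id whenever len(logits) > actual_vocab_size."""
    return logits[:actual_vocab_size]
```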
* non-llama: explicitly greedy sampling for temp<=0 (#901)
copied directly from llama.cpp - without this temp=0.0 will just
scale all the logits to infinity and give bad output
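The guard described above amounts to the following (a simplified Python sketch of the idea, not the llama.cpp C++ sampler; the stable-softmax branch is a stand-in for the full top-k/top-p pipeline):

```python
import math
import random

def sample_token(logits, temp):
    """Explicitly greedy when temp <= 0: dividing by a non-positive
    temperature would blow the scaled logits up toward infinity and
    corrupt sampling, so argmax is used instead."""
    if temp <= 0:
        return logits.index(max(logits))  # greedy argmax
    scaled = [l / temp for l in logits]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]  # numerically stable softmax weights
    return random.choices(range(len(logits)), weights=weights)[0]
```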
* work on thread safety and cleaning up, adding object option
* chore: cleanup tests and spec
* refactor for object based startup
* more docs
* Circleci builds for Linux, Windows, and macOS for gpt4all-chat.
* more docs
* Synced llama.cpp.cmake with upstream
* add lock file to ignore codespell
* Move usage in Python bindings readme to own section (#907)
Have own section for short usage example, as it is not specific to local build
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
* Always sync for circleci.
* update models json with replit model
* Forgot to bump.
* Change the default values for generation in GUI
* Removed double-static from variables in replit.cpp
The anonymous namespace already makes it static.
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* Generator in Python Bindings - streaming yields tokens at a time (#895)
* generator method
* cleanup
* bump version number for clarity
* added replace in decode to avoid UnicodeDecodeError
* revert back to _build_prompt
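The decode-with-replace fix in the streaming generator could look like this (a hedged sketch, assuming each yielded token arrives as a raw byte buffer; real bindings may instead buffer bytes until a complete sequence is available):

```python
def stream_decode(token_buffers):
    """Yield text for each raw token buffer as it arrives.

    A multi-byte UTF-8 sequence can be split across token boundaries, so
    decoding each buffer independently may hit invalid bytes; errors="replace"
    substitutes U+FFFD instead of raising UnicodeDecodeError."""
    for buf in token_buffers:
        yield buf.decode("utf-8", errors="replace")
```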
* Do auto detection by default in C++ API
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* remove comment
* add comments for index.h
* chore: add new models and edit ignore files and documentation
* llama on Metal (#885)
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
* Revert "llama on Metal (#885)"
This reverts commit b59ce1c6e70645d13c687b46c116a75906b1fbc9.
* add more readme stuff and debug info
* spell
* Metal+LLama take two (#929)
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
* add prebuilts for windows
* Add new solution for context links that does not force regular markdown (#938)
in responses, which is disruptive to code completions.
* add prettier
* split out non llm related methods into util.js, add listModels method
* add prebuild script for creating all platforms bindings at once
* check in prebuild linux/so libs and allow distribution of napi prebuilds
* apply autoformatter
* move constants in config.js, add loadModel and retrieveModel methods
* Clean up the context links a bit.
* Don't interfere with selection.
* Add code blocks and python syntax highlighting.
* Spelling error.
* Add c++/c highlighting support.
* Fix some bugs with bash syntax and add some C23 keywords.
* Bugfixes for prompt syntax highlighting.
* Try and fix a false positive from codespell.
* When recalculating context we can't erase the BOS.
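One way to express the BOS constraint above (an illustrative sketch only; the actual shrink policy in the backend may keep a different number of recent tokens):

```python
def shrink_context(tokens, n_ctx):
    """Drop the oldest tokens when the window overflows, but never
    erase the BOS at position 0: recalculation needs it to survive."""
    if len(tokens) <= n_ctx:
        return list(tokens)
    return [tokens[0]] + list(tokens[-(n_ctx - 1):])
```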
* Fix Windows MSVC AVX builds
- bug introduced in 557c82b5eddb4120340b837a8bdeeeca2a82eac3
- currently getting: `warning C5102: ignoring invalid command-line macro definition '/arch:AVX2'`
- solution is to use `_options(...)` not `_definitions(...)`
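A minimal CMake sketch of that fix (where the flag is attached is illustrative; the repo's actual CMakeLists may use target-scoped commands instead):

```cmake
if(MSVC)
  # /arch:AVX2 is a compiler flag, not a preprocessor macro: passing it
  # through add_compile_definitions() makes MSVC emit warning C5102
  # ("ignoring invalid command-line macro definition '/arch:AVX2'").
  add_compile_options(/arch:AVX2)
  # add_compile_definitions(/arch:AVX2)   # wrong: treated as a -D macro
endif()
```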
* remove .so unneeded path
---------
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Signed-off-by: mvenditto <venditto.matteo@gmail.com>
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: Justin Wang <justinwang46@gmail.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: redthing1 <redthing1@alt.icu>
Co-authored-by: Konstantin Gukov <gukkos@gmail.com>
Co-authored-by: Richard Guo <richardg7890@gmail.com>
Co-authored-by: Joseph Mearman <joseph@mearman.co.uk>
Co-authored-by: Nandakumar <nandagunasekaran@gmail.com>
Co-authored-by: Chase McDougall <chasemcdougall@hotmail.com>
Co-authored-by: mvenditto <venditto.matteo@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: FoivosC <christoulakis.foivos@adlittle.com>
Co-authored-by: limez <limez@protonmail.com>
Co-authored-by: AT <manyoso@users.noreply.github.com>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
Co-authored-by: niansa <anton-sa@web.de>
Co-authored-by: mudler <mudler@mocaccino.org>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Tim Miller <innerlogic4321@gmail.com>
Co-authored-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: pingpongching <golololologol02@gmail.com>
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: Cosmic Snow <cosmic-snow@mailfence.com>
2023-06-12 15:00:20 -04:00
typescript: publish alpha on npm and lots of cleanup, documentation, and more (#913)
* fix typo so padding can be accessed
* Small cleanups for settings dialog.
* Fix the build.
* localdocs
* Fixup the rescan. Fix debug output.
* Add remove folder implementation.
* Remove this signal as unnecessary for now.
* Cleanup of the database, better chunking, better matching.
* Add new reverse prompt for new localdocs context feature.
* Add a new muted text color.
* Turn off the debugging messages by default.
* Add prompt processing and localdocs to the busy indicator in UI.
* Specify a large number of suffixes we will search for now.
* Add a collection list to support a UI.
* Add a localdocs tab.
* Start fleshing out the localdocs ui.
* Begin implementing the localdocs ui in earnest.
* Clean up the settings dialog for localdocs a bit.
* Add more of the UI for selecting collections for chats.
* Complete the settings for localdocs.
* Adds the collections to serialize and implement references for localdocs.
* Store the references separately so they are not sent to datalake.
* Add context link to references.
* Don't use the full path in reference text.
* Various fixes to remove unnecessary warnings.
* Add a newline
* ignore rider and vscode dirs
* create test project and basic model loading tests
* make sample print usage and cleaner
* Get the backend as well as the client building/working with msvc.
* Libraries named differently on msvc.
* Bump the version number.
* This time remember to bump the version right after a release.
* rm redundant json
* More precise condition
* Nicer handling of missing model directory.
Correct exception message.
* Log where the model was found
* Concise model matching
* reduce nesting, better error reporting
* convert to f-strings
* less magic number
* 1. Cleanup the interrupted download
2. with-syntax
* Redundant else
* Do not ignore explicitly passed 4 threads
* Correct return type
* Add optional verbosity
* Correct indentation of the multiline error message
* one funcion to append .bin suffix
* hotfix default verbose optioin
* export hidden types and fix prompt() type
* tiny typo (#739)
* Update README.md (#738)
* Update README.md
fix golang gpt4all import path
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
* Update README.md
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
---------
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
* fix(training instructions): model repo name (#728)
Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>
* C# Bindings - Prompt formatting (#712)
* Added support for custom prompt formatting
* more docs added
* bump version
* clean up cc files and revert things
* LocalDocs documentation initial (#761)
* LocalDocs documentation initial
* Improved localdocs documentation (#762)
* Improved localdocs documentation
* Improved localdocs documentation
* Improved localdocs documentation
* Improved localdocs documentation
* New tokenizer implementation for MPT and GPT-J
Improves output quality by making these tokenizers more closely
match the behavior of the huggingface `tokenizers` based BPE
tokenizers these models were trained with.
Featuring:
* Fixed unicode handling (via ICU)
* Fixed BPE token merge handling
* Complete added vocabulary handling
* buf_ref.into() can be const now
* add tokenizer readme w/ instructions for convert script
* Revert "add tokenizer readme w/ instructions for convert script"
This reverts commit 9c15d1f83ee2f9387126cf4892cd94f39bdbff5e.
* Revert "buf_ref.into() can be const now"
This reverts commit 840e011b75fb77f761f288a75b4b2a86358dcb9e.
* Revert "New tokenizer implementation for MPT and GPT-J"
This reverts commit ee3469ba6c6d5f51a1c5fb9c6ec96eff3f4075e3.
* Fix remove model from model download for regular models.
* Fixed formatting of localdocs docs (#770)
* construct and return the correct reponse when the request is a chat completion
* chore: update typings to keep consistent with python api
* progress, updating createCompletion to mirror py api
* update spec, unfinished backend
* prebuild binaries for package distribution using prebuildify/node-gyp-build
* Get rid of blocking behavior for regenerate response.
* Add a label to the model loading visual indicator.
* Use the new MyButton for the regenerate response button.
* Add a hover and pressed to the visual indication of MyButton.
* Fix wording of this accessible description.
* Some color and theme enhancements to make the UI contrast a bit better.
* Make the comboboxes align in UI.
* chore: update namespace and fix prompt bug
* fix linux build
* add roadmap
* Fix offset of prompt/response icons for smaller text.
* Dlopen backend 5 (#779)
Major change to the backend that allows for pluggable versions of llama.cpp/ggml. This was squashed merged from dlopen_backend_5 where the history is preserved.
* Add a custom busy indicator to further align look and feel across platforms.
* Draw the indicator for combobox to ensure it looks the same on all platforms.
* Fix warning.
* Use the proper text color for sending messages.
* Fixup the plus new chat button.
* Make all the toolbuttons highlight on hover.
* Advanced avxonly autodetection (#744)
* Advanced avxonly requirement detection
* chore: support llamaversion >= 3 and ggml default
* Dlopen better implementation management (Version 2)
* Add fixme's and clean up a bit.
* Documentation improvements on LocalDocs (#790)
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* typo
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Adapt code
* Makefile changes (WIP to test)
* Debug
* Adapt makefile
* Style
* Implemented logging mechanism (#785)
* Cleaned up implementation management (#787)
* Cleaned up implementation management
* Initialize LLModel::m_implementation to nullptr
* llmodel.h: Moved dlhandle fwd declare above LLModel class
* Fix compile
* Fixed double-free in LLModel::Implementation destructor
* Allow user to specify custom search path via $GPT4ALL_IMPLEMENTATIONS_PATH (#789)
* Drop leftover include
* Add ldl in gpt4all.go for dynamic linking (#797)
* Logger should also output to stderr
* Fix MSVC Build, Update C# Binding Scripts
* Update gpt4all_chat.md (#800)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* C# Bindings - improved logging (#714)
* added optional support for .NET logging
* bump version and add missing alpha suffix
* avoid creating additional namespace for extensions
* prefer NullLogger/NullLoggerFactory over null-conditional ILogger to avoid errors
---------
Signed-off-by: mvenditto <venditto.matteo@gmail.com>
* Make localdocs work with server mode.
* Better name for database results.
* Fix for stale references after we regenerate.
* Don't hardcode these.
* Fix bug with resetting context with chatgpt model.
* Trying to shrink the copy+paste code and do more code sharing between backend model impl.
* Remove this as it is no longer useful.
* Try and fix build on mac.
* Fix mac build again.
* Add models/release.json to github repo to allow PRs
* Fixed spelling error in models.json
to make CI happy
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* updated bindings code for updated C api
* load all model libs
* model creation is failing... debugging
* load libs correctly
* fixed finding model libs
* cleanup
* cleanup
* more cleanup
* small typo fix
* updated binding.gyp
* Fixed model type for GPT-J (#815)
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* Fixed tons of warnings and clazy findings (#811)
* Some tweaks to UI to make window resizing smooth and flow nicely.
* Min constraints on about dialog.
* Prevent flashing of white on resize.
* Actually use the theme dark color for window background.
* Add the ability to change the directory via text field not just 'browse' button.
* add scripts to build dlls
* markdown doc gen
* add scripts, nearly done moving breaking changes
* merge with main
* oops, fixed comment
* more meaningful name
* leave for testing
* Only default mlock on macOS where swap seems to be a problem
Repeating the change that once was done in https://github.com/nomic-ai/gpt4all/pull/663 but then was overriden by https://github.com/nomic-ai/gpt4all/commit/9c6c09cbd21a91773e724bd6ddff6084747af000
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
* Add a collection immediately and show a placeholder + busy indicator in localdocs settings.
* some tweaks to optional types and defaults
* mingw script for windows compilation
* Update README.md
huggingface -> Hugging Face
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
* Backend prompt dedup (#822)
* Deduplicated prompt() function code
* Better error handling when the model fails to load.
* We no longer have an avx_only repository and better error handling for minimum hardware requirements. (#833)
* Update build_and_run.md (#834)
Signed-off-by: AT <manyoso@users.noreply.github.com>
* Trying out a new feature to download directly from huggingface.
* Try again with the url.
* Allow for download of models hosted on third party hosts.
* Fix up for newer models on reset context. This fixes the model from totally failing after a reset context.
* Update to latest llama.cpp
* Remove older models that are not as popular. (#837)
* Remove older models that are not as popular.
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update models.json (#838)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* feat: finalyl compiled on windows (MSVC) goadman
* update README and spec and promisfy createCompletion
* update d.ts
* Make installers work with mac/windows for big backend change.
* Need this so the linux installer packages it as a dependency.
* Try and fix mac.
* Fix compile on mac.
* These need to be installed for them to be packaged and work for both mac and windows.
* Fix installers for windows and linux.
* Fix symbol resolution on windows.
* updated pypi version
* Release notes for version 2.4.5 (#853)
* Update README.md (#854)
Signed-off-by: AT <manyoso@users.noreply.github.com>
* Documentation for model sideloading (#851)
* Documentation for model sideloading
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Speculative fix for windows llama models with installer.
* Revert "Speculative fix for windows llama models with installer."
This reverts commit add725d1ebef2391c6c74f86898ae0afda4d3337.
* Revert "Fix bug with resetting context with chatgpt model." (#859)
This reverts commit e0dcf6a14f89134987fa63cdb33a40305885921a.
* Fix llama models on linux and windows.
* Bump the version.
* New release notes
* Set thread counts after loading model (#836)
* Update gpt4all_faq.md (#861)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Supports downloading officially supported models not hosted on gpt4all R2
* Replit Model (#713)
* porting over replit code model to gpt4all
* replaced memory with kv_self struct
* continuing debug
* welp it built but lot of sus things
* working model loading and somewhat working generate.. need to format response?
* revert back to semi working version
* finally got rid of weird formatting
* figured out problem is with python bindings - this is good to go for testing
* addressing PR feedback
* output refactor
* fixed prompt reponse collection
* cleanup
* addressing PR comments
* building replit backend with new ggmlver code
* chatllm replit and clean python files
* cleanup
* updated replit to match new llmodel api
* match llmodel api and change size_t to Token
* resolve PR comments
* replit model commit comment
* Synced llama.cpp.cmake with upstream (#887)
* Fix for windows.
* fix: build script
* Revert "Synced llama.cpp.cmake with upstream (#887)"
This reverts commit 5c5e10c1f5ac03f9dbab4cc4d8c5bb02d286b46f.
* Update README.md (#906)
Add PyPI link and add clickable, more specific link to documentation
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
* Update CollectionsDialog.qml (#856)
Phrasing for localdocs
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* sampling: remove incorrect offset for n_vocab (#900)
no effect, but avoids a *potential* bug later if we use
actualVocabSize - which is for when a model has a larger
embedding tensor/# of output logits than actually trained token
to allow room for adding extras in finetuning - presently all of our
models have had "placeholder" tokens in the vocab so this hasn't broken
anything, but if the sizes did differ we want the equivalent of
`logits[actualVocabSize:]` (the start point is unchanged), not
`logits[-actualVocabSize:]` (this.)
* non-llama: explicitly greedy sampling for temp<=0 (#901)
copied directly from llama.cpp - without this temp=0.0 will just
scale all the logits to infinity and give bad output
* work on thread safety and cleaning up, adding object option
* chore: cleanup tests and spec
* refactor for object based startup
* more docs
* Circleci builds for Linux, Windows, and macOS for gpt4all-chat.
* more docs
* Synced llama.cpp.cmake with upstream
* add lock file to ignore codespell
* Move usage in Python bindings readme to own section (#907)
Have own section for short usage example, as it is not specific to local build
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
* Always sync for circleci.
* update models json with replit model
* Forgot to bump.
* Change the default values for generation in GUI
* Removed double-static from variables in replit.cpp
The anonymous namespace already makes it static.
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* Generator in Python Bindings - streaming yields tokens at a time (#895)
* generator method
* cleanup
* bump version number for clarity
* added replace in decode to avoid unicodedecode exception
* revert back to _build_prompt
* Do auto detection by default in C++ API
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
* remove comment
* add comments for index.h
* chore: add new models and edit ignore files and documentation
* llama on Metal (#885)
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
* Revert "llama on Metal (#885)"
This reverts commit b59ce1c6e70645d13c687b46c116a75906b1fbc9.
* add more readme stuff and debug info
* spell
* Metal+LLama take two (#929)
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
* add prebuilts for windows
* Add new solution for context links that does not force regular markdown (#938)
in responses, which is disruptive to code completions.
* add prettier
* split out non llm related methods into util.js, add listModels method
* add prebuild script for creating all platforms bindings at once
* check in prebuild linux/so libs and allow distribution of napi prebuilds
* apply autoformatter
* move constants in config.js, add loadModel and retrieveModel methods
* Clean up the context links a bit.
* Don't interfere with selection.
* Add code blocks and python syntax highlighting.
* Spelling error.
* Add c++/c highlighting support.
* Fix some bugs with bash syntax and add some C23 keywords.
* Bugfixes for prompt syntax highlighting.
* Try and fix a false positive from codespell.
* When recalculating context we can't erase the BOS.
* Fix Windows MSVC AVX builds
- bug introduced in 557c82b5eddb4120340b837a8bdeeeca2a82eac3
- currently getting: `warning C5102: ignoring invalid command-line macro definition '/arch:AVX2'`
- solution is to use `_options(...)` not `_definitions(...)`
* remove .so unneeded path
---------
Signed-off-by: Nandakumar <nandagunasekaran@gmail.com>
Signed-off-by: Chase McDougall <chasemcdougall@hotmail.com>
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Signed-off-by: mvenditto <venditto.matteo@gmail.com>
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: Justin Wang <justinwang46@gmail.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: redthing1 <redthing1@alt.icu>
Co-authored-by: Konstantin Gukov <gukkos@gmail.com>
Co-authored-by: Richard Guo <richardg7890@gmail.com>
Co-authored-by: Joseph Mearman <joseph@mearman.co.uk>
Co-authored-by: Nandakumar <nandagunasekaran@gmail.com>
Co-authored-by: Chase McDougall <chasemcdougall@hotmail.com>
Co-authored-by: mvenditto <venditto.matteo@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: FoivosC <christoulakis.foivos@adlittle.com>
Co-authored-by: limez <limez@protonmail.com>
Co-authored-by: AT <manyoso@users.noreply.github.com>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
Co-authored-by: niansa <anton-sa@web.de>
Co-authored-by: mudler <mudler@mocaccino.org>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Tim Miller <innerlogic4321@gmail.com>
Co-authored-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: Claudius Ellsel <claudius.ellsel@live.de>
Co-authored-by: pingpongching <golololologol02@gmail.com>
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: Cosmic Snow <cosmic-snow@mailfence.com>
2023-06-12 15:00:20 -04:00
PromptWorker::PromptWorker(Napi::Env env, PromptWorkerConfig config)
    : promise(Napi::Promise::Deferred::New(env)), _config(config), AsyncWorker(env)
{
    if (_config.hasResponseCallback)
    {
        _responseCallbackFn = Napi::ThreadSafeFunction::New(config.responseCallback.Env(), config.responseCallback,
                                                           "PromptWorker", 0, 1, this);
    }

    if (_config.hasPromptCallback)
    {
        _promptCallbackFn = Napi::ThreadSafeFunction::New(config.promptCallback.Env(), config.promptCallback,
                                                         "PromptWorker", 0, 1, this);
    }
}

PromptWorker::~PromptWorker()
{
    if (_config.hasResponseCallback)
    {
        _responseCallbackFn.Release();
    }
    if (_config.hasPromptCallback)
    {
        _promptCallbackFn.Release();
    }
}

void PromptWorker::Execute()
{
    _config.mutex->lock();

    LLModelWrapper *wrapper = reinterpret_cast<LLModelWrapper *>(_config.model);

    auto ctx = &_config.context;

    if (size_t(ctx->n_past) < wrapper->promptContext.tokens.size())
        wrapper->promptContext.tokens.resize(ctx->n_past);

    // Copy the C prompt context
    wrapper->promptContext.n_past = ctx->n_past;
    wrapper->promptContext.n_ctx = ctx->n_ctx;
    wrapper->promptContext.n_predict = ctx->n_predict;
    wrapper->promptContext.top_k = ctx->top_k;
    wrapper->promptContext.top_p = ctx->top_p;
    wrapper->promptContext.temp = ctx->temp;
    wrapper->promptContext.n_batch = ctx->n_batch;
    wrapper->promptContext.repeat_penalty = ctx->repeat_penalty;
    wrapper->promptContext.repeat_last_n = ctx->repeat_last_n;
    wrapper->promptContext.contextErase = ctx->context_erase;

    // Call the C++ prompt method
    wrapper->llModel->prompt(
        _config.prompt, _config.promptTemplate, [this](int32_t token_id) { return PromptCallback(token_id); },
        [this](int32_t token_id, const std::string token) { return ResponseCallback(token_id, token); },
        [](bool isRecalculating) { return isRecalculating; }, wrapper->promptContext, _config.special,
        _config.fakeReply);

    // Update the C context by giving it access to the wrapper's raw pointers
    // to std::vector data, which involves no copies
    ctx->logits = wrapper->promptContext.logits.data();
    ctx->logits_size = wrapper->promptContext.logits.size();
    ctx->tokens = wrapper->promptContext.tokens.data();
    ctx->tokens_size = wrapper->promptContext.tokens.size();

    // Update the rest of the C prompt context
    ctx->n_past = wrapper->promptContext.n_past;
    ctx->n_ctx = wrapper->promptContext.n_ctx;
    ctx->n_predict = wrapper->promptContext.n_predict;
    ctx->top_k = wrapper->promptContext.top_k;
    ctx->top_p = wrapper->promptContext.top_p;
    ctx->temp = wrapper->promptContext.temp;
    ctx->n_batch = wrapper->promptContext.n_batch;
    ctx->repeat_penalty = wrapper->promptContext.repeat_penalty;
    ctx->repeat_last_n = wrapper->promptContext.repeat_last_n;
    ctx->context_erase = wrapper->promptContext.contextErase;

    _config.mutex->unlock();
}

void PromptWorker::OnOK()
{
    Napi::Object returnValue = Napi::Object::New(Env());
    returnValue.Set("text", result);
    returnValue.Set("nPast", _config.context.n_past);
    promise.Resolve(returnValue);
    delete _config.fakeReply;
}

void PromptWorker::OnError(const Napi::Error &e)
{
    delete _config.fakeReply;
    promise.Reject(e.Value());
}

Napi::Promise PromptWorker::GetPromise()
{
    return promise.Promise();
}

bool PromptWorker::ResponseCallback(int32_t token_id, const std::string token)
{
    if (token_id == -1)
    {
        return false;
    }

    if (!_config.hasResponseCallback)
    {
        return true;
    }

    result += token;

    std::promise<bool> promise;

    auto info = new ResponseCallbackData();
    info->tokenId = token_id;
    info->token = token;

    auto future = promise.get_future();

    auto status = _responseCallbackFn.BlockingCall(
        info, [&promise](Napi::Env env, Napi::Function jsCallback, ResponseCallbackData *value) {
            try
            {
                // Transform native data into JS data, passing it to the provided
                // `jsCallback` -- the TSFN's JavaScript function.
                auto token_id = Napi::Number::New(env, value->tokenId);
                auto token = Napi::String::New(env, value->token);
                auto jsResult = jsCallback.Call({token_id, token}).ToBoolean();
                promise.set_value(jsResult);
            }
            catch (const Napi::Error &e)
            {
                std::cerr << "Error in onResponseToken callback: " << e.what() << std::endl;
                promise.set_value(false);
            }

            delete value;
        });
    if (status != napi_ok)
    {
        Napi::Error::Fatal("PromptWorkerResponseCallback", "Napi::ThreadSafeFunction::BlockingCall() failed");
    }

    return future.get();
}

bool PromptWorker::RecalculateCallback(bool isRecalculating)
{
    return isRecalculating;
}

bool PromptWorker::PromptCallback(int32_t token_id)
{
    if (!_config.hasPromptCallback)
    {
        return true;
    }

    std::promise<bool> promise;

    auto info = new PromptCallbackData();
    info->tokenId = token_id;

    auto future = promise.get_future();

    auto status = _promptCallbackFn.BlockingCall(
        info, [&promise](Napi::Env env, Napi::Function jsCallback, PromptCallbackData *value) {
            try
            {
                // Transform native data into JS data, passing it to the provided
                // `jsCallback` -- the TSFN's JavaScript function.
                auto token_id = Napi::Number::New(env, value->tokenId);
                auto jsResult = jsCallback.Call({token_id}).ToBoolean();
                promise.set_value(jsResult);
            }
            catch (const Napi::Error &e)
            {
                std::cerr << "Error in onPromptToken callback: " << e.what() << std::endl;
                promise.set_value(false);
            }
            delete value;
        });
    if (status != napi_ok)
    {
        Napi::Error::Fatal("PromptWorkerPromptCallback", "Napi::ThreadSafeFunction::BlockingCall() failed");
    }

    return future.get();
}