* Don't stop generating at end of context
* Use llama_kv_cache ops to shift context (see the sketch below)
* Fix and improve reverse prompt detection
* Replace prompt recalc callback with a flag to disallow context shift
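A minimal sketch of the context shift, assuming the llama_kv_cache_seq_rm / llama_kv_cache_seq_add calls from llama.h (older llama.cpp revisions name the second one llama_kv_cache_seq_shift); the helper and its parameters are illustrative, not the actual backend code:

```cpp
#include "llama.h"

// Illustrative context shift: drop the oldest tokens past a protected prefix of
// n_keep tokens, then slide the remaining KV cache entries back so generation
// can continue once n_past reaches the context limit.
static void shift_context(llama_context *ctx, int &n_past, int n_keep)
{
    const int n_discard = (n_past - n_keep) / 2;                            // discard half of the shiftable window
    llama_kv_cache_seq_rm (ctx, 0, n_keep, n_keep + n_discard);             // erase the oldest tokens
    llama_kv_cache_seq_add(ctx, 0, n_keep + n_discard, n_past, -n_discard); // shift the rest back by n_discard
    n_past -= n_discard;                                                    // positions are contiguous again
}
```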
This change updates the UI to allow dynamic changes of language and locale at
runtime. None of the language translations are finished or in releasable shape
yet, so it also adds a new build option that enables/disables the feature. By
default, no translations are built as part of a release.
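Roughly, a runtime language switch in a Qt application looks like the sketch below; the installTranslator()/removeTranslator() calls are standard Qt, while the resource path, file naming, and the setLanguage() helper are assumptions rather than the actual GPT4All code:

```cpp
#include <QCoreApplication>
#include <QString>
#include <QTranslator>

static QTranslator *s_current = nullptr; // translator currently installed, if any

// Hypothetical helper: load the .qm file for the requested locale and swap it
// in for the previously installed translator.
bool setLanguage(const QString &locale) // e.g. "zh_CN"
{
    auto *t = new QTranslator(QCoreApplication::instance());
    if (!t->load(QStringLiteral(":/i18n/gpt4all_") + locale)) { // resource path is made up
        delete t;
        return false;
    }
    if (s_current)
        QCoreApplication::removeTranslator(s_current);
    QCoreApplication::installTranslator(t);
    s_current = t;
    // A QML UI also needs QQmlEngine::retranslate() so that qsTr() bindings refresh.
    return true;
}
```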
Signed-off-by: Adam Treat <treat.adam@gmail.com>
* Replace concatenated strings with argument substitution, remove translation markers from some strings that are not meant to be translated, and add Chinese (see the sketch below).
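For illustration only (the message text is made up): concatenation produces sentence fragments that translators cannot reorder, whereas a single string with a %1 placeholder is translated as a whole.

```cpp
#include <QObject>
#include <QString>

// Fragmented form: "Loaded " and " successfully" are translated separately and
// cannot be reordered for languages with a different word order.
QString labelBefore(const QString &model) { return QObject::tr("Loaded ") + model + QObject::tr(" successfully"); }

// Argument form: one translatable string with a placeholder.
QString labelAfter(const QString &model)  { return QObject::tr("Loaded %1 successfully").arg(model); }
```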
Signed-off-by: Adam Treat <treat.adam@gmail.com>
* Users can configure the prompts and when they appear
* Also make the name generation prompt configurable
Signed-off-by: Adam Treat <treat.adam@gmail.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
* Style and align combobox popups with a rounded border.
* Convert this menu to use the new style as well.
Signed-off-by: Adam Treat <treat.adam@gmail.com>
Adds a workflow that signs Windows installers with an EV certificate from Azure Key Vault via AzureSignTool
Adds CMake logic to sign Windows binaries as they are processed
Installs .NET 8, as required by AzureSignTool
Signed-off-by: John Parent <john.parent@kitware.com>
* Use the same font size for code blocks as we do for the rest of the chat text.
* Add a conversation tray after discussion with Vincent and Andriy and gathering
feedback from some other users. This brings the reset-context feature back as a
recycle button and restores the copy-chat feature to the app for v3.0.0.
Signed-off-by: Adam Treat <treat.adam@gmail.com>
Adds verification functionality to the codesign script
Adds the context required for Xcode to perform the signing
Adds an install-time check and signing for all binaries
Adds instructions allowing macdeployqt to sign the finalized app bundle
Signed-off-by: John Parent <john.parent@kitware.com>
* Correctly displays inline code blocks with both syntax highlighting and
markdown enabled at the same time
* Adds a context menu item for toggling markdown on and off, which essentially
turns all text processing on or off
* Uses QTextDocument::MarkdownNoHTML to handle markdown in QTextDocument,
which allows HTML tags to be displayed as normal text, but unfortunately does
not render markdown tables as tables (see the sketch below)
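Roughly, the rendering path reduces to a call like the one below; setChatItemText() is a hypothetical wrapper, while QTextDocument::setMarkdown() and the MarkdownNoHTML feature are real Qt API:

```cpp
#include <QString>
#include <QTextDocument>

// Illustrative wrapper: parse the chat text as markdown without interpreting
// embedded HTML, so tags show up as ordinary text (per the note above). The
// trade-off is that the GitHub table extension is unavailable, so markdown
// tables are not rendered as tables.
void setChatItemText(QTextDocument *doc, const QString &text)
{
    doc->setMarkdown(text, QTextDocument::MarkdownNoHTML);
}
```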
Signed-off-by: Adam Treat <treat.adam@gmail.com>
* chat: remove unused oscompat source files
These files are no longer needed now that the hnswlib index is gone.
This fixes an issue with the Windows build as there was a compilation
error in oscompat.cpp.
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
* llm: fix pragma to be recognized by MSVC
Replaces this MSVC warning:
C:\msys64\home\Jared\gpt4all\gpt4all-chat\llm.cpp(53,21): warning C4081: expected '('; found 'string'
With this:
C:\msys64\home\Jared\gpt4all\gpt4all-chat\llm.cpp : warning : offline installer build will not check for updates!
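The C4081 warning suggests the pragma was written in the GCC/Clang form without parentheses; the parenthesized form below is accepted by MSVC as well as GCC and Clang (message text copied from the warning above):

```cpp
// Not recognized by MSVC (triggers C4081: expected '('; found 'string'):
//   #pragma message "offline installer build will not check for updates!"

// Portable form, understood by MSVC, GCC, and Clang:
#pragma message("offline installer build will not check for updates!")
```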
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
* usearch: fork usearch to fix `CreateFile` build error
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
* dlhandle: fix incorrect assertion on Windows
SetErrorMode returns the previous value of the error mode flags, not an
indicator of success.
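A generic illustration of the Win32 contract, not the dlhandle code itself: the return value is only the previous error mode, so it is useful for restoring state later rather than for checking success.

```cpp
#include <windows.h>

// SetErrorMode returns the *previous* error-mode flags. A zero return simply
// means no flags were set before, so asserting on it misfires.
void withoutErrorDialogs()
{
    UINT prev = SetErrorMode(SEM_FAILCRITICALERRORS);
    // ... work that might otherwise pop up critical-error dialogs ...
    SetErrorMode(prev); // restore whatever mode was active before
}
```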
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
* llamamodel: fix UB in LLamaModel::embedInternal
It is undefined behavior to increment an STL iterator past the end of
the container. Use offsets to do the math instead.
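A generic illustration of the fix (the process() consumer is a placeholder): stepping an iterator forward by a fixed stride can move it past end(), which is undefined behavior even if it is never dereferenced, whereas a clamped offset keeps the arithmetic inside the container.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

void processInChunks(const std::vector<int> &tokens, std::size_t chunkSize)
{
    for (std::size_t off = 0; off < tokens.size(); off += chunkSize) {
        // Clamp the chunk length so 'last' never points past end().
        const std::size_t len = std::min(chunkSize, tokens.size() - off);
        auto first = tokens.begin() + static_cast<std::ptrdiff_t>(off);
        auto last  = first + static_cast<std::ptrdiff_t>(len);
        // process(first, last); // hypothetical consumer of [first, last)
        (void)first; (void)last; // silence unused warnings in this sketch
    }
}
```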
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
* cmake: install embedding model to bundle's Resources dir on macOS
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
* ci: fix macOS build by explicitly installing Rosetta
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
---------
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
* rebase onto llama.cpp commit ggerganov/llama.cpp@d46dbc76f
* support for CUDA backend (enabled by default)
* partial support for Occam's Vulkan backend (disabled by default)
* partial support for HIP/ROCm backend (disabled by default)
* sync llama.cpp.cmake with upstream llama.cpp CMakeLists.txt
* changes to GPT4All backend, bindings, and chat UI to handle choice of llama.cpp backend (Kompute or CUDA)
* ship CUDA runtime with installed version
* make device selection in the UI on macOS actually do something
* model whitelist: remove dbrx, mamba, persimmon, plamo; add internlm and starcoder2
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
* chat: fix window icon on Windows
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
* chat: remove redundant copy of macOS app icon
This has been redundant since PR #2180.
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
---------
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
* chat: revert PR #2187
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
* chat: revert PR #2148
This reverts commit f571e7e450.
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
---------
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
This commit allows changing the install path during the CMake configure step using the CMAKE_INSTALL_PREFIX variable. If the variable is not set, it still defaults to ${CMAKE_BINARY_DIR}/install.
Signed-off-by: Tim453 <50015765+Tim453@users.noreply.github.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Signed-off-by: Cédric Sazos <cedric.sazos@tutanota.com>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>