Commit Graph

40 Commits

Author SHA1 Message Date
AT
b89314df96
Change to a whitelist for released translations. (#2830)
- Change to a whitelist for released translations.
- Added changelog entry.
- Bump the version for translation release.

Signed-off-by: Adam Treat <treat.adam@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2024-08-12 11:00:49 -04:00
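As an illustration of the whitelist idea (the function, list name, and locales below are assumptions, not the project's actual code), the release build can consult an explicit list of locales that are considered ready:

```cpp
// Minimal sketch, not the project's actual code: only translations on an
// explicit whitelist of release-ready locales are exposed to the user.
#include <QString>
#include <QStringList>

static const QStringList kReleasedTranslations = {
    QStringLiteral("en_US"),   // locale names here are illustrative
    QStringLiteral("zh_CN"),
};

bool isTranslationReleased(const QString &locale)
{
    return kReleasedTranslations.contains(locale);
}
```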
Jared Van Bortel
6957706af7
chat: fix crash at startup due to missing en_US translation (#2816)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-08-08 18:44:15 -04:00
Jared Van Bortel
d59b1331f9
chat: translation tweaks (#2797)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-08-08 13:41:47 -04:00
AT
d7f7c36bb3
Fix settings translations (#2690)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-07-19 14:28:54 -04:00
AT
88a206ab93
settings: use enums for ChatTheme/FontSize, translate choices (#2667)
Also change SuggestionMode to work the same way.

Signed-off-by: Adam Treat <treat.adam@gmail.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2024-07-16 16:12:44 -04:00
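A minimal sketch of the pattern (class and enum names are assumptions, not necessarily the commit's actual code): the choice is stored as a typed enum registered with Q_ENUM, and the user-visible label goes through tr() so it is translated like any other UI string.

```cpp
// Illustrative sketch only; names are assumptions.
#include <QObject>
#include <QString>

class SettingsEnums : public QObject
{
    Q_OBJECT
public:
    enum class ChatTheme { Light, Dark, LegacyDark };
    Q_ENUM(ChatTheme)   // makes the enum visible to QML and the meta-object system

    // The label is produced through tr(), so lupdate picks it up and the
    // combo-box choices participate in translation.
    static QString chatThemeLabel(ChatTheme theme)
    {
        switch (theme) {
        case ChatTheme::Light:      return tr("Light");
        case ChatTheme::Dark:       return tr("Dark");
        case ChatTheme::LegacyDark: return tr("Legacy Dark");
        }
        return {};
    }
};
```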
Adam Treat
4996824ab1 Replace hyphens with underscores and fix build.
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-07-12 17:18:01 -04:00
AT
d515ad3b18
Feature: dynamic changes of language and locale at runtime issue #2644 (#2659)
This change updates the UI to allow dynamic changes of language and
locale at runtime. None of the language translations are finished or in
releasable shape yet, so it also adds a build option that enables or
disables the feature. By default, no translations are currently built
as part of a release.

Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-07-12 16:14:58 -04:00
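A sketch of the usual Qt mechanism for this kind of runtime switch, under assumed file names and not necessarily how the commit implements it: install a QTranslator for the selected locale and ask the QML engine to retranslate.

```cpp
// Sketch only: swap the application translator when the user picks a new locale.
#include <QCoreApplication>
#include <QQmlEngine>
#include <QString>
#include <QTranslator>

void switchLanguage(QQmlEngine *engine, QTranslator *&current, const QString &locale)
{
    if (current) {
        QCoreApplication::removeTranslator(current);
        delete current;
        current = nullptr;
    }
    auto *translator = new QTranslator;
    // The resource path is hypothetical; load() fails cleanly if the .qm is absent.
    if (translator->load(QStringLiteral(":/i18n/gpt4all_%1.qm").arg(locale))) {
        QCoreApplication::installTranslator(translator);
        current = translator;
    } else {
        delete translator;
    }
    engine->retranslate();   // refreshes all qsTr() bindings in loaded QML
}
```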
AT
66bc04aa8e
chat: generate follow-up questions after response (#2634)
* users can configure the prompt and when the questions appear (see the sketch after this entry)
* also make the name generation prompt configurable

Signed-off-by: Adam Treat <treat.adam@gmail.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2024-07-10 15:45:20 -04:00
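A hedged sketch of the configurable-prompt idea; the settings key and default text are made up for illustration and are not the commit's actual values:

```cpp
// Sketch only: read the follow-up question prompt from settings instead of
// hard-coding it; key name and default are assumptions.
#include <QSettings>
#include <QString>

QString suggestionPrompt()
{
    QSettings settings;
    return settings.value(
        QStringLiteral("suggestionPrompt"),
        QStringLiteral("Suggest three brief follow-up questions for this conversation."))
        .toString();
}
```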
AT
37dbd56153
Latest rc5 fixes (#2492)
* Slightly enlarge the new conversation tray.

* Add themeable code syntax highlighting.

* Increase the default context chunk size for LocalDocs.

Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-06-30 19:15:01 -04:00
Jared Van Bortel
2c8d634b5b
UI and embedding device changes for GPT4All v3.0.0-rc3 (#2477)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-06-28 12:57:57 -04:00
AT
9273b49b62
chat: major UI redesign for v3.0.0 (#2396)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2024-06-24 18:49:23 -04:00
Jared Van Bortel
41c9013fa4
chat: don't use incomplete types with signals/slots/Q_INVOKABLE (#2408)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-06-06 11:59:28 -04:00
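The underlying rule, illustrated with hypothetical names: Qt's meta-object system may need to copy and register argument and return types that appear in signals, slots, or Q_INVOKABLE methods, so those types must be fully defined where the class is declared; a bare forward declaration is not enough.

```cpp
// Illustrative only; ChatItem and its header are assumptions.
#include <QObject>
#include "chatitem.h"   // full definition needed here, not just "class ChatItem;"

class ChatModel : public QObject
{
    Q_OBJECT
public:
    Q_INVOKABLE ChatItem itemAt(int index) const;   // by-value return: complete type required

signals:
    void itemAdded(const ChatItem &item);   // queued connections still copy the argument
};
```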
Jared Van Bortel
d3d777bc51
chat: fix #includes with include-what-you-use (#2401)
Also use qGuiApp instead of qApp.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-06-04 14:47:11 -04:00
Jared Van Bortel
d2a99d9bc6
support the llama.cpp CUDA backend (#2310)
* rebase onto llama.cpp commit ggerganov/llama.cpp@d46dbc76f
* support for CUDA backend (enabled by default)
* partial support for Occam's Vulkan backend (disabled by default)
* partial support for HIP/ROCm backend (disabled by default)
* sync llama.cpp.cmake with upstream llama.cpp CMakeLists.txt
* changes to GPT4All backend, bindings, and chat UI to handle choice of llama.cpp backend (Kompute or CUDA)
* ship CUDA runtime with installed version
* make device selection in the UI on macOS actually do something
* model whitelist: remove dbrx, mamba, persimmon, plamo; add internlm and starcoder2

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-05-15 15:27:50 -04:00
Jared Van Bortel
c622921894
improve mixpanel usage statistics (#2238)
Other changes:
- Always display the first start dialog if privacy options are unset (e.g. if the user closed GPT4All without selecting them)
- LocalDocs scanQueue is now always deferred (see the sketch after this entry)
- Fix a potential crash in magic_match
- LocalDocs indexing is now started after the first start dialog is dismissed so usage stats are included

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-04-25 13:16:52 -04:00
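"Deferred" in the scanQueue bullet means the call is queued through the event loop rather than executed inline; a minimal Qt sketch of that pattern, with the object and slot names assumed:

```cpp
// Sketch only: queue the LocalDocs scan so it runs on a later event-loop
// iteration instead of synchronously at the call site.
#include <QMetaObject>
#include <QObject>

void requestScan(QObject *database)   // 'database' stands in for the LocalDocs worker
{
    QMetaObject::invokeMethod(database, "scanQueue", Qt::QueuedConnection);
}
```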
Adam Treat
59f99b7f21 Minor fixes to server port feature.
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-03-09 10:32:53 -05:00
Daniel Alencar
fe653d1489 feat: added api server port setting 2024-03-09 09:26:40 -06:00
Adam Treat
83c76be68a Model discovery.
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-03-05 11:31:47 -05:00
chrisbarrera
f8b1069a1c
add min_p sampling parameter (#2014)
Signed-off-by: Christopher Barrera <cb@arda.tx.rr.com>
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
2024-02-24 17:51:34 -05:00
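For reference, a worked sketch of what min_p sampling does, independent of the actual GPT4All or llama.cpp code: candidates whose probability falls below min_p times the most likely token's probability are discarded before the final sample is drawn.

```cpp
// Illustrative implementation of the min_p filter, not the project's code.
#include <algorithm>
#include <vector>

struct TokenProb { int id; float p; };   // token id and its normalized probability

void applyMinP(std::vector<TokenProb> &candidates, float minP)
{
    if (candidates.empty() || minP <= 0.0f)
        return;
    const float maxP = std::max_element(candidates.begin(), candidates.end(),
        [](const TokenProb &a, const TokenProb &b) { return a.p < b.p; })->p;
    const float threshold = minP * maxP;   // e.g. min_p = 0.05 keeps tokens within 5% of the best
    candidates.erase(
        std::remove_if(candidates.begin(), candidates.end(),
            [threshold](const TokenProb &t) { return t.p < threshold; }),
        candidates.end());
}
```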
Adam Treat
fa0a2129dc Don't try to detect model load errors on startup.
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-02-21 10:15:20 -06:00
Jared Van Bortel
061d1969f8
expose n_gpu_layers parameter of llama.cpp (#1890)
Also dynamically limit the GPU layers and context length fields to the maximum supported by the model.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-01-31 14:17:44 -05:00
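A sketch of where that parameter lands in the llama.cpp C API of that period (error handling and the surrounding settings plumbing omitted; the helper name is an assumption):

```cpp
// Sketch only: pass the configured GPU layer count through to llama.cpp.
#include <llama.h>

llama_model *loadWithGpuLayers(const char *modelPath, int nGpuLayers)
{
    llama_model_params params = llama_model_default_params();
    params.n_gpu_layers = nGpuLayers;   // 0 keeps everything on the CPU
    return llama_load_model_from_file(modelPath, params);
}
```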
Jared Van Bortel
d1c56b8b28
Implement configurable context length (#1749) 2023-12-16 17:58:15 -05:00
Jared Van Bortel
3acbef14b7
fix AVX support by removing direct linking to AVX2 libs (#1750) 2023-12-13 12:11:09 -05:00
Adam Treat
908aec27fe Always save chats to disk, but save them as text by default. This also changes
the UI behavior to always open a 'New Chat' and set it as current, instead
of setting a restored chat as current. This improves usability by not requiring
the user to wait if they want to start chatting immediately.
2023-10-12 07:52:11 -04:00
Adam Treat
045f6e6cdc Link against ggml in the binary so we can get the available devices without loading a model. 2023-09-15 14:45:25 -04:00
Adam Treat
8f99dca70f Bring the Vulkan backend to the GUI. 2023-09-13 11:26:10 -04:00
Lakshay Kansal
0f2bb506a8
font size changer and updates (#1322) 2023-08-07 13:54:13 -04:00
Lakshay Kansal
fc1af4a234 light mode vs dark mode 2023-07-27 09:31:55 -04:00
Adam Treat
18dbfddcb3 Fix default thread setting. 2023-07-11 13:07:41 -04:00
Adam Treat
88bbe30952 Provide a guardrail for OOM errors. 2023-07-11 12:09:33 -04:00
Adam Treat
99cd555743 Provide some guardrails for thread count. 2023-07-10 17:29:51 -04:00
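A sketch of the kind of guardrail this refers to (the fallback default is an assumption): clamp whatever the user enters to the range the machine can sensibly run.

```cpp
// Sketch only: keep the thread-count setting within a sane range.
#include <QThread>
#include <algorithm>

int clampedThreadCount(int requested)
{
    const int maxThreads = std::max(1, QThread::idealThreadCount());
    if (requested <= 0)
        return std::min(4, maxThreads);   // fallback default; actual value is an assumption
    return std::clamp(requested, 1, maxThreads);
}
```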
Adam Treat
58d6f40f50 Fix broken installs. 2023-07-09 11:50:44 -04:00
Adam Treat
eab92a9d73 Fix typo and add new show references setting to localdocs. 2023-07-05 19:41:23 -04:00
Adam Treat
6d9cdf228c Huge change that completely revamps the settings dialog and implements
per-model settings, as well as the ability to clone a model into a "character."
This also implements system prompts and includes quite a few bugfixes; for
instance, it fixes ChatGPT.
2023-07-05 15:51:42 -04:00
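A hedged sketch of the per-model settings idea (group and key names are assumptions, not the actual implementation): each model gets its own settings group, and cloning a model into a "character" amounts to copying that group under a new name.

```cpp
// Sketch only: per-model settings live under a per-model group in QSettings.
#include <QSettings>
#include <QString>

double modelTemperature(const QString &modelName)
{
    QSettings settings;
    settings.beginGroup(QStringLiteral("model-") + modelName);   // group name assumed
    const double t = settings.value(QStringLiteral("temperature"), 0.7).toDouble();
    settings.endGroup();
    return t;
}
```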
Adam Treat
7f252b4970 This completes the work of consolidating all user-changeable settings onto the new settings object. 2023-06-29 00:44:48 -03:00
Adam Treat
285aa50b60 Consolidate generation and application settings on the new settings object. 2023-06-28 20:36:43 -03:00
Adam Treat
a8baa4da52 The sync for save should be after. 2023-06-28 20:11:24 -03:00
Adam Treat
705b480d72 Start moving toward a single authoritative class for all settings. This
is necessary to get rid of technical debt before we drastically increase
the complexity of settings by adding per-model settings, mirostat, and
other fun things. Right now the settings are divided between QML and C++,
with convenience methods for settings sync and so on scattered across
other singletons. This change consolidates all the settings logic into a
single class with a single API for both C++ and QML.
2023-06-28 20:11:24 -03:00
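A sketch of the pattern being described (names are illustrative, not the commit's actual class): one QObject singleton owns every user-visible setting, exposing it to C++ through getters and setters and to QML through notifiable properties.

```cpp
// Sketch only: a single authoritative settings object shared by C++ and QML.
#include <QObject>
#include <QSettings>

class ChatSettings : public QObject
{
    Q_OBJECT
    Q_PROPERTY(int threadCount READ threadCount WRITE setThreadCount NOTIFY threadCountChanged)
public:
    static ChatSettings *globalInstance()
    {
        static ChatSettings instance;
        return &instance;
    }

    int threadCount() const { return m_settings.value("threadCount", 4).toInt(); }
    void setThreadCount(int count)
    {
        if (count == threadCount())
            return;
        m_settings.setValue("threadCount", count);
        emit threadCountChanged();
    }

signals:
    void threadCountChanged();

private:
    QSettings m_settings;
};
```

The same instance can then be registered with the QML engine (for example via qmlRegisterSingletonInstance), so QML bindings and C++ callers read and write the same values.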
Adam Treat
42c0a6673a Don't persist the force metal setting. 2023-06-27 14:23:56 -03:00
Adam Treat
267601d670 Enable the force metal setting. 2023-06-27 14:23:56 -03:00