Commit Graph

414 Commits

AT
abd4703c79 Update gpt4all-chat/embllm.cpp
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-11-17 11:59:31 -05:00
AT
4b413a60e4 Update gpt4all-chat/embeddings.cpp
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-11-17 11:59:31 -05:00
AT
17b346dfe7 Update gpt4all-chat/embeddings.cpp
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-11-17 11:59:31 -05:00
AT
71e37816cc Update gpt4all-chat/qml/ModelDownloaderDialog.qml
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-11-17 11:59:31 -05:00
Adam Treat
371e2a5cbc LocalDocs version 2 with text embeddings. 2023-11-17 11:59:31 -05:00
Adam Treat
bc88271520 Bump version to v2.5.3 and release notes. 2023-10-30 11:15:12 -04:00
cebtenzzre
5508e43466 build_and_run: clarify which additional Qt libs are needed
Signed-off-by: cebtenzzre <cebtenzzre@gmail.com>
2023-10-30 10:37:32 -04:00
cebtenzzre
79a5522931 fix references to old backend implementations 2023-10-30 10:37:05 -04:00
Adam Treat
f529d55380 Move this logic to QML. 2023-10-30 09:57:21 -04:00
Adam Treat
5c0d077f74 Remove leading whitespace in responses. 2023-10-28 16:53:42 -04:00
Adam Treat
131cfcdeae Don't regenerate the name for deserialized chats. 2023-10-28 16:41:23 -04:00
Adam Treat
dc2e7d6e9b Don't start recalculating context immediately upon switching to a new chat,
but rather wait until the first prompt. This allows users to switch between
chats quickly and to delete chats more easily.

Fixes issue #1545
2023-10-28 16:41:23 -04:00
Adam Treat
89a59e7f99 Bump version and add release notes for 2.5.1 2023-10-24 13:13:04 -04:00
cebtenzzre
f5dd74bcf0
models2.json: add tokenizer merges to mpt-7b-chat model (#1563) 2023-10-24 12:43:49 -04:00
cebtenzzre
e90263c23f
make scripts executable (#1555) 2023-10-24 09:28:21 -04:00
cebtenzzre
c25dc51935 chat: fix syntax error in main.qml 2023-10-21 21:22:37 -07:00
Victor Tsaran
721d854095
chat: improve accessibility fields (#1532)
Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
2023-10-21 10:38:46 -04:00
Adam Treat
9e99cf937a Add release notes for 2.5.0 and bump the version. 2023-10-19 16:25:55 -04:00
cebtenzzre
245c5ce5ea
update default model URLs (#1538) 2023-10-19 15:25:37 -04:00
cebtenzzre
4338e72a51
MPT: use upstream llama.cpp implementation (#1515) 2023-10-19 15:25:17 -04:00
cebtenzzre
fd3014016b
docs: clarify Vulkan dep in build instructions for bindings (#1525) 2023-10-18 12:09:52 -04:00
cebtenzzre
ac33bafb91
docs: improve build_and_run.md (#1524) 2023-10-18 11:37:28 -04:00
Aaron Miller
10f9b49313 update mini-orca 3b to gguf2, license
Signed-off-by: Aaron Miller <apage43@ninjawhale.com>
2023-10-12 14:57:07 -04:00
niansa/tuxifan
a35f1ab784
Updated chat wishlist (#1351) 2023-10-12 14:01:44 -04:00
Adam Treat
908aec27fe Always save chats to disk, but save them as text by default. This also changes
the UI behavior to always open a 'New Chat' and set it as current instead
of setting a restored chat as current. This improves usability by not requiring
the user to wait if they want to immediately start chatting.
2023-10-12 07:52:11 -04:00
cebtenzzre
04499d1c7d
chatllm: do not write uninitialized data to stream (#1486) 2023-10-11 11:31:34 -04:00
Adam Treat
f0742c22f4 Restore state from text if necessary. 2023-10-11 09:16:02 -04:00
Adam Treat
35f9cdb70a Do not delete saved chats if we fail to serialize properly. 2023-10-11 09:16:02 -04:00
cebtenzzre
9fb135e020
cmake: install the GPT-J plugin (#1487) 2023-10-10 15:50:03 -04:00
Aaron Miller
3c25d81759 make codespell happy 2023-10-10 12:00:06 -04:00
Jan Philipp Harries
4f0cee9330 added EM German Mistral Model 2023-10-10 11:44:43 -04:00
Adam Treat
56c0d2898d Update the language here to avoid misunderstanding. 2023-10-06 14:38:42 -04:00
Adam Treat
b2cd3bdb3f Fix crasher with an empty string for prompt template. 2023-10-06 12:44:53 -04:00
Cebtenzzre
5fe685427a chat: clearer CPU fallback messages 2023-10-06 11:35:14 -04:00
Aaron Miller
9325075f80 fix stray comma in models2.json
Signed-off-by: Aaron Miller <apage43@ninjawhale.com>
2023-10-05 18:32:23 -04:00
Adam Treat
f028f67c68 Add starcoder, rift and sbert to our models2.json. 2023-10-05 18:16:19 -04:00
Adam Treat
4528f73479 Reorder and refresh our models2.json. 2023-10-05 18:16:19 -04:00
Cebtenzzre
1534df3e9f backend: do not use Vulkan with non-LLaMA models 2023-10-05 18:16:19 -04:00
Cebtenzzre
672cb850f9 differentiate between init failure and unsupported models 2023-10-05 18:16:19 -04:00
Cebtenzzre
a5b93cf095 more accurate fallback descriptions 2023-10-05 18:16:19 -04:00
Cebtenzzre
75deee9adb chat: make sure to clear fallback reason on success 2023-10-05 18:16:19 -04:00
Cebtenzzre
2eb83b9f2a chat: report reason for fallback to CPU 2023-10-05 18:16:19 -04:00
Adam Treat
ea66669cef Switch to new models2.json for new gguf release and bump our version to
2.5.0.
2023-10-05 18:16:19 -04:00
Adam Treat
12f943e966 Fix regenerate button to be deterministic and bump the llama version to the latest we have for gguf. 2023-10-05 18:16:19 -04:00
Cebtenzzre
a49a1dcdf4 chatllm: grammar fix 2023-10-05 18:16:19 -04:00
Cebtenzzre
31b20f093a modellist: fix the system prompt 2023-10-05 18:16:19 -04:00
Cebtenzzre
8f3abb37ca fix references to removed model types 2023-10-05 18:16:19 -04:00
Adam Treat
d90d003a1d Latest rebase on llama.cpp with gguf support. 2023-10-05 18:16:19 -04:00
Akarshan Biswas
5f3d739205 appdata: update software description 2023-10-05 10:12:43 -04:00
Akarshan Biswas
b4cf12e1bd Update to 2.4.19 2023-10-05 10:12:43 -04:00
Akarshan Biswas
21a5709b07 Remove unnecessary stuff from the manifest 2023-10-05 10:12:43 -04:00
Akarshan Biswas
4426640f44 Add flatpak manifest 2023-10-05 10:12:43 -04:00
Aaron Miller
6711bddc4c launch browser instead of maintenancetool from offline builds 2023-09-27 11:24:21 -07:00
Aaron Miller
7f979c8258 Build offline installers in CircleCI 2023-09-27 11:24:21 -07:00
Adam Treat
dc80d1e578 Fix up the offline installer. 2023-09-18 16:21:50 -04:00
Adam Treat
f47e698193 Release notes for v2.4.19 and bump the version. 2023-09-16 12:35:08 -04:00
Adam Treat
ecf014f03b Release notes for v2.4.18 and bump the version. 2023-09-16 10:21:50 -04:00
Adam Treat
e6e724d2dc Actually bump the version. 2023-09-16 10:07:20 -04:00
Adam Treat
06a833e652 Send actual and requested device info for those who have opted in. 2023-09-16 09:42:22 -04:00
Adam Treat
045f6e6cdc Link against ggml in bin so we can get the available devices without loading a model. 2023-09-15 14:45:25 -04:00
Adam Treat
655372dbfa Release notes for v2.4.17 and bump the version. 2023-09-14 17:11:04 -04:00
Adam Treat
aa33419c6e Fallback to CPU more robustly. 2023-09-14 16:53:11 -04:00
Adam Treat
79843c269e Release notes for v2.4.16 and bump the version. 2023-09-14 11:24:25 -04:00
Adam Treat
3076e0bf26 Only show GPU when we're actually using it. 2023-09-14 09:59:19 -04:00
Adam Treat
1fa67a585c Report the actual device we're using. 2023-09-14 08:25:37 -04:00
Adam Treat
21a3244645 Fix a bug where we're not properly falling back to CPU. 2023-09-13 19:30:27 -04:00
Adam Treat
0458c9b4e6 Add version 2.4.15 and bump the version number. 2023-09-13 17:55:50 -04:00
Aaron Miller
6f038c136b init at most one vulkan device, submodule update
fixes issues with multiple copies of the same GPU
2023-09-13 12:49:53 -07:00
Adam Treat
86e862df7e Fix up the name and formatting. 2023-09-13 15:48:55 -04:00
Adam Treat
358ff2a477 Show the device we're currently using. 2023-09-13 15:24:33 -04:00
Adam Treat
891ddafc33 When device is Auto (the default) we will only consider discrete GPUs; otherwise fall back to CPU. 2023-09-13 11:59:36 -04:00
Adam Treat
8f99dca70f Bring the vulkan backend to the GUI. 2023-09-13 11:26:10 -04:00
Adam Treat
987546c63b Nomic vulkan backend licensed under the Software for Open Models License (SOM), version 1.0. 2023-08-31 15:29:54 -04:00
Adam Treat
d55cbbee32 Update to newer llama.cpp and disable older forks. 2023-08-31 15:29:54 -04:00
Adam Treat
a63093554f Remove older models that rely on quantization formats that will soon no longer be supported. 2023-08-15 13:19:41 -04:00
Adam Treat
2c0ee50dce Add starcoder 7b. 2023-08-15 09:27:55 -04:00
Victor Tsaran
ca8baa294b
Updated README.md with a wishlist idea (#1315)
Signed-off-by: Victor Tsaran <vtsaran@yahoo.com>
2023-08-10 11:27:09 -04:00
Lakshay Kansal
0f2bb506a8
font size changer and updates (#1322) 2023-08-07 13:54:13 -04:00
Akarshan Biswas
c449b71b56
Add LLaMA2 7B model to model.json. (#1296)
Signed-off-by: Akarshan Biswas <akarshan.biswas@gmail.com>
2023-08-02 16:58:14 +02:00
Lakshay Kansal
cbdcde8b75
scrollbar fixed for main chat and chat drawer (#1301) 2023-07-31 12:18:38 -04:00
Lakshay Kansal
3d2db76070
fixed issue of text color changing for code blocks in light mode (#1299) 2023-07-31 12:18:19 -04:00
Aaron Miller
b9e2553995
remove trailing comma from models json (#1284) 2023-07-27 09:14:33 -07:00
Adam Treat
09a143228c New release notes and bump version. 2023-07-27 11:48:16 -04:00
Lakshay Kansal
fc1af4a234 light mode vs dark mode 2023-07-27 09:31:55 -04:00
Adam Treat
6d03b3e500 Add starcoder support. 2023-07-27 09:15:16 -04:00
Adam Treat
397f3ba2d7 Add a little size to the monospace font. 2023-07-27 09:15:16 -04:00
AMOGUS
4974ae917c Update default TopP to 0.4
TopP 0.1 was found to be somewhat too aggressive, so a more moderate default of 0.4 would be better suited for general use.

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>
2023-07-19 10:36:23 -04:00
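For context on the change above: top-p (nucleus) sampling keeps only the smallest set of highest-probability tokens whose cumulative probability reaches p, so raising the default from 0.1 to 0.4 admits more candidate tokens and yields more varied output. A minimal illustrative sketch of the filtering step, not the actual gpt4all/llama.cpp sampler (`topPFilter` is a hypothetical helper):

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Keep the smallest set of tokens whose cumulative probability reaches p,
// zero out the rest, and renormalize the survivors to sum to 1.
std::vector<double> topPFilter(std::vector<double> probs, double p) {
    // Indices sorted by descending probability.
    std::vector<size_t> idx(probs.size());
    std::iota(idx.begin(), idx.end(), size_t{0});
    std::sort(idx.begin(), idx.end(),
              [&](size_t a, size_t b) { return probs[a] > probs[b]; });

    // Accumulate mass until the threshold p is crossed.
    double cum = 0.0;
    size_t keep = 0;
    while (keep < idx.size() && cum < p)
        cum += probs[idx[keep++]];

    // Everything outside the nucleus gets probability zero.
    std::vector<double> out(probs.size(), 0.0);
    for (size_t i = 0; i < keep; ++i)
        out[idx[i]] = probs[idx[i]] / cum;
    return out;
}
```

With p = 0.1 the single most likely token usually forms the whole nucleus, which is why that default read as overly aggressive for general chat use.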
Lakshay Kansal
6c8669cad3 highlighting rules for HTML, PHP, and LaTeX 2023-07-14 11:36:01 -04:00
Adam Treat
0efdbfcffe Bert 2023-07-13 14:21:46 -04:00
Adam Treat
315a1f2aa2 Move it back as internal class. 2023-07-13 14:21:46 -04:00
Adam Treat
ae8eb297ac Add sbert backend. 2023-07-13 14:21:46 -04:00
Adam Treat
1f749d7633 Clean up backend code a bit and hide impl. details. 2023-07-13 14:21:46 -04:00
Adam Treat
a0dae86a95 Add bert to models.json 2023-07-13 13:37:12 -04:00
AT
18ca8901f0
Update README.md
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-07-12 16:30:56 -04:00
Adam Treat
e8b19b8e82 Bump version to 2.4.14 and provide release notes. 2023-07-12 14:58:45 -04:00
Adam Treat
8eb0844277 Check if the trimmed version is empty. 2023-07-12 14:31:43 -04:00
Adam Treat
be395c12cc Make all system prompts empty by default if the model does not include one in its training data. 2023-07-12 14:31:43 -04:00
Aaron Miller
6a8fa27c8d Correctly find models in subdirs of model dir
QDirIterator doesn't seem particularly subdir-aware; its path() returns
the iterated dir. This was the simplest way I found to get this right.
2023-07-12 14:18:40 -04:00
Adam Treat
8893db5896 Add wizard model and rename orca to be more specific. 2023-07-12 14:12:46 -04:00
Adam Treat
60627bd41f Prefer 7b models in order of default model load. 2023-07-12 12:50:18 -04:00