Commit Graph

309 Commits

Author SHA1 Message Date
Adam Treat
f47e698193 Release notes for v2.4.19 and bump the version. 2023-09-16 12:35:08 -04:00
Adam Treat
ecf014f03b Release notes for v2.4.18 and bump the version. 2023-09-16 10:21:50 -04:00
Adam Treat
e6e724d2dc Actually bump the version. 2023-09-16 10:07:20 -04:00
Adam Treat
06a833e652 Send actual and requested device info for those who have opted in. 2023-09-16 09:42:22 -04:00
Adam Treat
045f6e6cdc Link against ggml in bin so we can get the available devices without loading a model. 2023-09-15 14:45:25 -04:00
Adam Treat
655372dbfa Release notes for v2.4.17 and bump the version. 2023-09-14 17:11:04 -04:00
Adam Treat
aa33419c6e Fall back to CPU more robustly. 2023-09-14 16:53:11 -04:00
Adam Treat
79843c269e Release notes for v2.4.16 and bump the version. 2023-09-14 11:24:25 -04:00
Adam Treat
3076e0bf26 Only show GPU when we're actually using it. 2023-09-14 09:59:19 -04:00
Adam Treat
1fa67a585c Report the actual device we're using. 2023-09-14 08:25:37 -04:00
Adam Treat
21a3244645 Fix a bug where we're not properly falling back to CPU. 2023-09-13 19:30:27 -04:00
Adam Treat
0458c9b4e6 Add version 2.4.15 and bump the version number. 2023-09-13 17:55:50 -04:00
Aaron Miller
6f038c136b init at most one vulkan device, submodule update
fixes issues with multiple of the same gpu
2023-09-13 12:49:53 -07:00
Adam Treat
86e862df7e Fix up the name and formatting. 2023-09-13 15:48:55 -04:00
Adam Treat
358ff2a477 Show the device we're currently using. 2023-09-13 15:24:33 -04:00
Adam Treat
891ddafc33 When the device is Auto (the default), only consider discrete GPUs; otherwise fall back to CPU. 2023-09-13 11:59:36 -04:00
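For context, the Auto policy described here maps naturally onto Vulkan's physical-device enumeration: prefer a discrete GPU, initialize at most one device, and treat "nothing suitable" as the signal to fall back to CPU. The sketch below uses standard Vulkan calls, but the helper name and return convention are illustrative assumptions, not the project's actual backend code.

```cpp
#include <vulkan/vulkan.h>

#include <optional>
#include <vector>

// Hypothetical helper: with device == Auto, consider only discrete GPUs and
// initialize at most one of them; std::nullopt means "fall back to CPU".
std::optional<VkPhysicalDevice> pickDiscreteGpu(VkInstance instance) {
    uint32_t count = 0;
    if (vkEnumeratePhysicalDevices(instance, &count, nullptr) != VK_SUCCESS || count == 0)
        return std::nullopt;

    std::vector<VkPhysicalDevice> devices(count);
    vkEnumeratePhysicalDevices(instance, &count, devices.data());

    for (VkPhysicalDevice dev : devices) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(dev, &props);
        if (props.deviceType == VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU)
            return dev; // pick the first discrete GPU, and only one
    }
    return std::nullopt; // no discrete GPU: caller falls back to CPU
}
```

A caller would treat std::nullopt as the cue to load the model on CPU instead of creating a Vulkan logical device.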
Adam Treat
8f99dca70f Bring the vulkan backend to the GUI. 2023-09-13 11:26:10 -04:00
Adam Treat
987546c63b Nomic vulkan backend licensed under the Software for Open Models License (SOM), version 1.0. 2023-08-31 15:29:54 -04:00
Adam Treat
d55cbbee32 Update to newer llama.cpp and disable older forks. 2023-08-31 15:29:54 -04:00
Adam Treat
a63093554f Remove older models that rely on quantization formats that will soon no longer be supported. 2023-08-15 13:19:41 -04:00
Adam Treat
2c0ee50dce Add starcoder 7b. 2023-08-15 09:27:55 -04:00
Victor Tsaran
ca8baa294b Updated README.md with a wishlist idea (#1315)
Signed-off-by: Victor Tsaran <vtsaran@yahoo.com>
2023-08-10 11:27:09 -04:00
Lakshay Kansal
0f2bb506a8 font size changer and updates (#1322) 2023-08-07 13:54:13 -04:00
Akarshan Biswas
c449b71b56 Add LLaMA2 7B model to model.json. (#1296)
Signed-off-by: Akarshan Biswas <akarshan.biswas@gmail.com>
2023-08-02 16:58:14 +02:00
Lakshay Kansal
cbdcde8b75 scrollbar fixed for main chat and chat drawer (#1301) 2023-07-31 12:18:38 -04:00
Lakshay Kansal
3d2db76070 fixed issue of text color changing for code blocks in light mode (#1299) 2023-07-31 12:18:19 -04:00
Aaron Miller
b9e2553995 remove trailing comma from models json (#1284) 2023-07-27 09:14:33 -07:00
Adam Treat
09a143228c New release notes and bump version. 2023-07-27 11:48:16 -04:00
Lakshay Kansal
fc1af4a234 light mode vs dark mode 2023-07-27 09:31:55 -04:00
Adam Treat
6d03b3e500 Add starcoder support. 2023-07-27 09:15:16 -04:00
Adam Treat
397f3ba2d7 Slightly increase the size of the monospace font. 2023-07-27 09:15:16 -04:00
AMOGUS
4974ae917c Update default TopP to 0.4
TopP 0.1 was found to be somewhat too aggressive, so a more moderate default of 0.4 would be better suited for general use.

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>
2023-07-19 10:36:23 -04:00
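For context on the setting itself: top-p (nucleus) sampling keeps only the smallest set of tokens whose cumulative probability reaches p and samples from within it, so p = 0.1 confines the model to a very narrow nucleus while p = 0.4 admits a broader, less aggressive one. A minimal illustrative sketch of the technique, not the project's actual sampler:

```cpp
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// Illustrative nucleus (top-p) sampling over a normalized distribution.
// With topP = 0.1 only the most likely tokens survive; 0.4 keeps a wider set.
int sampleTopP(const std::vector<float> &probs, float topP, std::mt19937 &rng) {
    if (probs.empty())
        return -1;

    // Sort token indices by descending probability.
    std::vector<int> idx(probs.size());
    std::iota(idx.begin(), idx.end(), 0);
    std::sort(idx.begin(), idx.end(),
              [&](int a, int b) { return probs[a] > probs[b]; });

    // Keep the smallest prefix whose cumulative probability reaches topP.
    float cum = 0.0f;
    size_t keep = 0;
    while (keep < idx.size()) {
        cum += probs[idx[keep]];
        ++keep;
        if (cum >= topP)
            break;
    }

    // Sample among the kept tokens, weighted by their probabilities.
    std::vector<float> weights(keep);
    for (size_t i = 0; i < keep; ++i)
        weights[i] = probs[idx[i]];
    std::discrete_distribution<int> dist(weights.begin(), weights.end());
    return idx[dist(rng)];
}
```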
Lakshay Kansal
6c8669cad3 highlighting rules for HTML, PHP, and LaTeX 2023-07-14 11:36:01 -04:00
Adam Treat
0efdbfcffe Bert 2023-07-13 14:21:46 -04:00
Adam Treat
315a1f2aa2 Move it back as internal class. 2023-07-13 14:21:46 -04:00
Adam Treat
ae8eb297ac Add sbert backend. 2023-07-13 14:21:46 -04:00
Adam Treat
1f749d7633 Clean up backend code a bit and hide impl. details. 2023-07-13 14:21:46 -04:00
Adam Treat
a0dae86a95 Add bert to models.json 2023-07-13 13:37:12 -04:00
AT
18ca8901f0 Update README.md
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-07-12 16:30:56 -04:00
Adam Treat
e8b19b8e82 Bump version to 2.4.14 and provide release notes. 2023-07-12 14:58:45 -04:00
Adam Treat
8eb0844277 Check if the trimmed version is empty. 2023-07-12 14:31:43 -04:00
Adam Treat
be395c12cc Make all system prompts empty by default if the model does not include one in its training data. 2023-07-12 14:31:43 -04:00
Aaron Miller
6a8fa27c8d Correctly find models in subdirs of model dir
QDirIterator doesn't seem particularly subdir-aware; its path() returns
the iterated dir. This was the simplest way I found to get this right.
2023-07-12 14:18:40 -04:00
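For reference on the Qt behavior described in the commit above: QDirIterator::path() returns the base directory the iterator was constructed with, not the directory of the current entry, so per-entry locations have to come from fileInfo() or filePath(). A hedged sketch along those lines (the directory argument and the file filter are assumptions):

```cpp
#include <QDirIterator>
#include <QFileInfo>
#include <QStringList>

// Walk the model directory recursively. Note that it.path() always returns
// modelDir (the directory the iterator was constructed with), so the entry's
// own location must come from its QFileInfo instead.
QStringList findModelFiles(const QString &modelDir) {
    QStringList files;
    QDirIterator it(modelDir, QStringList() << "*.bin",
                    QDir::Files, QDirIterator::Subdirectories);
    while (it.hasNext()) {
        it.next();
        const QFileInfo info = it.fileInfo();
        files.append(info.absoluteFilePath()); // keeps the subdir component
    }
    return files;
}
```

Using absoluteFilePath() on each entry preserves the subdirectory component that the iterator's own path() would drop.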
Adam Treat
8893db5896 Add wizard model and rename orca to be more specific. 2023-07-12 14:12:46 -04:00
Adam Treat
60627bd41f Prefer 7B models in the default model load order. 2023-07-12 12:50:18 -04:00
Aaron Miller
5df4f1bf8c codespell 2023-07-12 12:49:06 -04:00
Aaron Miller
10ca2c4475 center the spinner 2023-07-12 12:49:06 -04:00
Adam Treat
e9897518d1 Show busy if the models.json download is taking longer than expected. 2023-07-12 12:49:06 -04:00
Aaron Miller
ad0e7fd01f chatgpt: ensure no extra newline in header 2023-07-12 10:53:25 -04:00
Aaron Miller
f0faa23ad5 cmakelists: always export build commands (#1179)
friendly for editors with clangd integration that don't also manage the build themselves
2023-07-12 10:49:24 -04:00