cebtenzzre
f5dd74bcf0
models2.json: add tokenizer merges to mpt-7b-chat model ( #1563 )
2023-10-24 12:43:49 -04:00
cebtenzzre
e90263c23f
make scripts executable ( #1555 )
2023-10-24 09:28:21 -04:00
cebtenzzre
c25dc51935
chat: fix syntax error in main.qml
2023-10-21 21:22:37 -07:00
Victor Tsaran
721d854095
chat: improve accessibility fields ( #1532 )
...
Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
2023-10-21 10:38:46 -04:00
Adam Treat
9e99cf937a
Add release notes for 2.5.0 and bump the version.
2023-10-19 16:25:55 -04:00
cebtenzzre
245c5ce5ea
update default model URLs ( #1538 )
2023-10-19 15:25:37 -04:00
cebtenzzre
4338e72a51
MPT: use upstream llama.cpp implementation ( #1515 )
2023-10-19 15:25:17 -04:00
cebtenzzre
fd3014016b
docs: clarify Vulkan dep in build instructions for bindings ( #1525 )
2023-10-18 12:09:52 -04:00
cebtenzzre
ac33bafb91
docs: improve build_and_run.md ( #1524 )
2023-10-18 11:37:28 -04:00
Aaron Miller
10f9b49313
update mini-orca 3b to gguf2, license
...
Signed-off-by: Aaron Miller <apage43@ninjawhale.com>
2023-10-12 14:57:07 -04:00
niansa/tuxifan
a35f1ab784
Updated chat wishlist ( #1351 )
2023-10-12 14:01:44 -04:00
Adam Treat
908aec27fe
Always save chats to disk, but save them as text by default. This also changes
...
the UI behavior to always open a 'New Chat' and set it as current instead
of setting a restored chat as current. This improves usability by not requiring
the user to wait if they want to immediately start chatting.
2023-10-12 07:52:11 -04:00
cebtenzzre
04499d1c7d
chatllm: do not write uninitialized data to stream ( #1486 )
2023-10-11 11:31:34 -04:00
Adam Treat
f0742c22f4
Restore state from text if necessary.
2023-10-11 09:16:02 -04:00
Adam Treat
35f9cdb70a
Do not delete saved chats if we fail to serialize properly.
2023-10-11 09:16:02 -04:00
cebtenzzre
9fb135e020
cmake: install the GPT-J plugin ( #1487 )
2023-10-10 15:50:03 -04:00
Aaron Miller
3c25d81759
make codespell happy
2023-10-10 12:00:06 -04:00
Jan Philipp Harries
4f0cee9330
added EM German Mistral Model
2023-10-10 11:44:43 -04:00
Adam Treat
56c0d2898d
Update the language here to avoid misunderstanding.
2023-10-06 14:38:42 -04:00
Adam Treat
b2cd3bdb3f
Fix crasher with an empty string for prompt template.
2023-10-06 12:44:53 -04:00
Cebtenzzre
5fe685427a
chat: clearer CPU fallback messages
2023-10-06 11:35:14 -04:00
Aaron Miller
9325075f80
fix stray comma in models2.json
...
Signed-off-by: Aaron Miller <apage43@ninjawhale.com>
2023-10-05 18:32:23 -04:00
Adam Treat
f028f67c68
Add starcoder, rift and sbert to our models2.json.
2023-10-05 18:16:19 -04:00
Adam Treat
4528f73479
Reorder and refresh our models2.json.
2023-10-05 18:16:19 -04:00
Cebtenzzre
1534df3e9f
backend: do not use Vulkan with non-LLaMA models
2023-10-05 18:16:19 -04:00
Cebtenzzre
672cb850f9
differentiate between init failure and unsupported models
2023-10-05 18:16:19 -04:00
Cebtenzzre
a5b93cf095
more accurate fallback descriptions
2023-10-05 18:16:19 -04:00
Cebtenzzre
75deee9adb
chat: make sure to clear fallback reason on success
2023-10-05 18:16:19 -04:00
Cebtenzzre
2eb83b9f2a
chat: report reason for fallback to CPU
2023-10-05 18:16:19 -04:00
Adam Treat
ea66669cef
Switch to new models2.json for new gguf release and bump our version to
...
2.5.0.
2023-10-05 18:16:19 -04:00
Adam Treat
12f943e966
Fix regenerate button to be deterministic and bump the llama version to the latest we have for gguf.
2023-10-05 18:16:19 -04:00
Cebtenzzre
a49a1dcdf4
chatllm: grammar fix
2023-10-05 18:16:19 -04:00
Cebtenzzre
31b20f093a
modellist: fix the system prompt
2023-10-05 18:16:19 -04:00
Cebtenzzre
8f3abb37ca
fix references to removed model types
2023-10-05 18:16:19 -04:00
Adam Treat
d90d003a1d
Latest rebase on llama.cpp with gguf support.
2023-10-05 18:16:19 -04:00
Akarshan Biswas
5f3d739205
appdata: update software description
2023-10-05 10:12:43 -04:00
Akarshan Biswas
b4cf12e1bd
Update to 2.4.19
2023-10-05 10:12:43 -04:00
Akarshan Biswas
21a5709b07
Remove unnecessary stuff from the manifest
2023-10-05 10:12:43 -04:00
Akarshan Biswas
4426640f44
Add flatpak manifest
2023-10-05 10:12:43 -04:00
Aaron Miller
6711bddc4c
launch browser instead of maintenancetool from offline builds
2023-09-27 11:24:21 -07:00
Aaron Miller
7f979c8258
Build offline installers in CircleCI
2023-09-27 11:24:21 -07:00
Adam Treat
dc80d1e578
Fix up the offline installer.
2023-09-18 16:21:50 -04:00
Adam Treat
f47e698193
Release notes for v2.4.19 and bump the version.
2023-09-16 12:35:08 -04:00
Adam Treat
ecf014f03b
Release notes for v2.4.18 and bump the version.
2023-09-16 10:21:50 -04:00
Adam Treat
e6e724d2dc
Actually bump the version.
2023-09-16 10:07:20 -04:00
Adam Treat
06a833e652
Send actual and requested device info for those who have opted in.
2023-09-16 09:42:22 -04:00
Adam Treat
045f6e6cdc
Link against ggml in bin so we can get the available devices without loading a model.
2023-09-15 14:45:25 -04:00
Adam Treat
655372dbfa
Release notes for v2.4.17 and bump the version.
2023-09-14 17:11:04 -04:00
Adam Treat
aa33419c6e
Fallback to CPU more robustly.
2023-09-14 16:53:11 -04:00
Adam Treat
79843c269e
Release notes for v2.4.16 and bump the version.
2023-09-14 11:24:25 -04:00
Adam Treat
3076e0bf26
Only show GPU when we're actually using it.
2023-09-14 09:59:19 -04:00
Adam Treat
1fa67a585c
Report the actual device we're using.
2023-09-14 08:25:37 -04:00
Adam Treat
21a3244645
Fix a bug where we're not properly falling back to CPU.
2023-09-13 19:30:27 -04:00
Adam Treat
0458c9b4e6
Add version 2.4.15 and bump the version number.
2023-09-13 17:55:50 -04:00
Aaron Miller
6f038c136b
init at most one vulkan device, submodule update
...
fixes issues w/ multiple of the same gpu
2023-09-13 12:49:53 -07:00
Adam Treat
86e862df7e
Fix up the name and formatting.
2023-09-13 15:48:55 -04:00
Adam Treat
358ff2a477
Show the device we're currently using.
2023-09-13 15:24:33 -04:00
Adam Treat
891ddafc33
When device is Auto (the default) we will only consider discrete GPUs; otherwise fall back to CPU.
2023-09-13 11:59:36 -04:00
Adam Treat
8f99dca70f
Bring the vulkan backend to the GUI.
2023-09-13 11:26:10 -04:00
Adam Treat
987546c63b
Nomic vulkan backend licensed under the Software for Open Models License (SOM), version 1.0.
2023-08-31 15:29:54 -04:00
Adam Treat
d55cbbee32
Update to newer llama.cpp and disable older forks.
2023-08-31 15:29:54 -04:00
Adam Treat
a63093554f
Remove older models that rely upon soon to be no longer supported quantization formats.
2023-08-15 13:19:41 -04:00
Adam Treat
2c0ee50dce
Add starcoder 7b.
2023-08-15 09:27:55 -04:00
Victor Tsaran
ca8baa294b
Updated README.md with a wishlist idea ( #1315 )
...
Signed-off-by: Victor Tsaran <vtsaran@yahoo.com>
2023-08-10 11:27:09 -04:00
Lakshay Kansal
0f2bb506a8
font size changer and updates ( #1322 )
2023-08-07 13:54:13 -04:00
Akarshan Biswas
c449b71b56
Add LLaMA2 7B model to model.json. ( #1296 )
...
* Add LLaMA2 7B model to model.json.
---------
Signed-off-by: Akarshan Biswas <akarshan.biswas@gmail.com>
2023-08-02 16:58:14 +02:00
Lakshay Kansal
cbdcde8b75
scrollbar fixed for main chat and chat drawer ( #1301 )
2023-07-31 12:18:38 -04:00
Lakshay Kansal
3d2db76070
fixed issue of text color changing for code blocks in light mode ( #1299 )
2023-07-31 12:18:19 -04:00
Aaron Miller
b9e2553995
remove trailing comma from models json ( #1284 )
2023-07-27 09:14:33 -07:00
Adam Treat
09a143228c
New release notes and bump version.
2023-07-27 11:48:16 -04:00
Lakshay Kansal
fc1af4a234
light mode vs dark mode
2023-07-27 09:31:55 -04:00
Adam Treat
6d03b3e500
Add starcoder support.
2023-07-27 09:15:16 -04:00
Adam Treat
397f3ba2d7
Add a little size to the monospace font.
2023-07-27 09:15:16 -04:00
AMOGUS
4974ae917c
Update default TopP to 0.4
...
TopP 0.1 was found to be somewhat too aggressive, so a more moderate default of 0.4 would be better suited for general use.
Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>
2023-07-19 10:36:23 -04:00
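The TopP note above concerns nucleus (top-p) sampling: keep the smallest set of most likely tokens whose cumulative probability reaches the threshold. A minimal sketch of the filtering step, with illustrative names rather than the actual sampler code:

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

// Nucleus (top-p) filtering sketch: sort token indices by probability,
// then keep tokens until their cumulative probability reaches `topP`.
// A low topP (e.g. 0.1) often keeps only the single most likely token,
// which reads as "aggressive"; 0.4 admits more candidates.
std::vector<size_t> topPFilter(const std::vector<double> &probs, double topP) {
    std::vector<size_t> order(probs.size());
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(),
              [&](size_t a, size_t b) { return probs[a] > probs[b]; });
    std::vector<size_t> kept;
    double cum = 0.0;
    for (size_t idx : order) {
        kept.push_back(idx);
        cum += probs[idx];
        if (cum >= topP)
            break; // threshold reached; mask out the rest
    }
    return kept;
}
```

With probabilities {0.5, 0.3, 0.1, 0.1}, a threshold of 0.4 keeps only the top token, while 0.9 keeps three, which illustrates why the lower default felt too restrictive for general use.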
Lakshay Kansal
6c8669cad3
highlighting rules for html and php and latex
2023-07-14 11:36:01 -04:00
Adam Treat
0efdbfcffe
Bert
2023-07-13 14:21:46 -04:00
Adam Treat
315a1f2aa2
Move it back as internal class.
2023-07-13 14:21:46 -04:00
Adam Treat
ae8eb297ac
Add sbert backend.
2023-07-13 14:21:46 -04:00
Adam Treat
1f749d7633
Clean up backend code a bit and hide impl. details.
2023-07-13 14:21:46 -04:00
Adam Treat
a0dae86a95
Add bert to models.json
2023-07-13 13:37:12 -04:00
AT
18ca8901f0
Update README.md
...
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-07-12 16:30:56 -04:00
Adam Treat
e8b19b8e82
Bump version to 2.4.14 and provide release notes.
2023-07-12 14:58:45 -04:00
Adam Treat
8eb0844277
Check if the trimmed version is empty.
2023-07-12 14:31:43 -04:00
Adam Treat
be395c12cc
Make all system prompts empty by default if the model does not include one in its training data.
2023-07-12 14:31:43 -04:00
Aaron Miller
6a8fa27c8d
Correctly find models in subdirs of model dir
...
QDirIterator doesn't seem particularly subdir-aware; its path() returns
the iterated dir. This was the simplest way I found to get this right.
2023-07-12 14:18:40 -04:00
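The fix above is about recursing into subdirectories when scanning the model dir. The real code uses Qt's QDirIterator (with the QDirIterator::Subdirectories flag and the per-entry filePath()); this std::filesystem version is an illustrative stand-in for the same idea:

```cpp
#include <filesystem>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// True if `name` ends with `suffix` (e.g. a model file extension).
bool hasSuffix(const std::string &name, const std::string &suffix) {
    return name.size() >= suffix.size() &&
           name.compare(name.size() - suffix.size(), suffix.size(), suffix) == 0;
}

// Recursively collect model files under `modelDir`. Note that each entry's
// own path() is used for the result -- not the directory being iterated.
std::vector<std::string> findModelFiles(const std::string &modelDir,
                                        const std::string &suffix) {
    std::vector<std::string> found;
    std::error_code ec; // missing/unreadable dir yields an empty result
    for (auto it = fs::recursive_directory_iterator(modelDir, ec);
         it != fs::recursive_directory_iterator(); ++it) {
        if (it->is_regular_file() &&
            hasSuffix(it->path().filename().string(), suffix))
            found.push_back(it->path().string());
    }
    return found;
}
```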
Adam Treat
8893db5896
Add wizard model and rename orca to be more specific.
2023-07-12 14:12:46 -04:00
Adam Treat
60627bd41f
Prefer 7b models in order of default model load.
2023-07-12 12:50:18 -04:00
Aaron Miller
5df4f1bf8c
codespell
2023-07-12 12:49:06 -04:00
Aaron Miller
10ca2c4475
center the spinner
2023-07-12 12:49:06 -04:00
Adam Treat
e9897518d1
Show busy if models.json download taking longer than expected.
2023-07-12 12:49:06 -04:00
Aaron Miller
ad0e7fd01f
chatgpt: ensure no extra newline in header
2023-07-12 10:53:25 -04:00
Aaron Miller
f0faa23ad5
cmakelists: always export build commands ( #1179 )
...
friendly for using editors with clangd integration that don't also
manage the build themselves
2023-07-12 10:49:24 -04:00
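The commit above amounts to one CMake setting; a sketch (actual placement in the project's CMakeLists.txt may differ):

```cmake
# Unconditionally export compile_commands.json so clangd-based editors
# can index the project without also managing the build themselves.
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
```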
Adam Treat
0d726b22b8
When we explicitly cancel an operation we shouldn't throw an error.
2023-07-12 10:34:10 -04:00
Adam Treat
13b2d47be5
Provide an error dialog if for any reason we can't access the settings file.
2023-07-12 08:50:21 -04:00
Adam Treat
e9d42fba35
Don't show first start more than once.
2023-07-11 18:54:53 -04:00
Adam Treat
2679dc1521
Give note about gpt-4 and openai key access.
2023-07-11 15:35:10 -04:00
Adam Treat
806905f747
Explicitly set the color in MyTextField.
2023-07-11 15:27:26 -04:00
Adam Treat
9dccc96e70
Immediately signal when the model is in a new loading state.
2023-07-11 15:10:59 -04:00
Adam Treat
833a56fadd
Fix the tap handler on these buttons.
2023-07-11 14:58:54 -04:00
Adam Treat
18dbfddcb3
Fix default thread setting.
2023-07-11 13:07:41 -04:00
Adam Treat
34a3b9c857
Don't block on exit when not connected.
2023-07-11 12:37:21 -04:00
Adam Treat
88bbe30952
Provide a guardrail for OOM errors.
2023-07-11 12:09:33 -04:00
Adam Treat
9ef53163dd
Explicitly send the opt out because we were artificially lowering them with settings changes.
2023-07-11 10:53:19 -04:00
Adam Treat
4f9e489093
Don't use a local event loop which can lead to recursion and crashes.
2023-07-11 10:08:03 -04:00
Adam Treat
8467e69f24
Check that we're not null. This is necessary because the loop can make us recursive. Need to fix that.
2023-07-10 17:30:08 -04:00
Adam Treat
99cd555743
Provide some guardrails for thread count.
2023-07-10 17:29:51 -04:00
Lakshay Kansal
a190041c6e
json and c# highlighting rules ( #1163 )
2023-07-10 16:23:32 -04:00
Adam Treat
3e3b05a2a4
Don't process the system prompt when restoring state.
2023-07-10 16:20:19 -04:00
Adam Treat
98dd2ab4bc
Provide backup options if models.json does not download synchronously.
2023-07-10 16:14:57 -04:00
Adam Treat
c8d761a004
Add a nicer message.
2023-07-09 15:51:59 -04:00
Adam Treat
e120eb5008
Allow closing the download dialog and display a message to the user if no models are installed.
2023-07-09 15:08:14 -04:00
Adam Treat
fb172a2524
Don't prevent closing the model download dialog.
2023-07-09 14:58:55 -04:00
Adam Treat
15d04a7916
Fix new version dialog ui.
2023-07-09 14:56:54 -04:00
Adam Treat
12083fcdeb
When deleting chats we sometimes have to update our modelinfo.
2023-07-09 14:52:08 -04:00
Adam Treat
59f3c093cb
Stop generating anything on shutdown.
2023-07-09 14:42:11 -04:00
Adam Treat
e2458454d3
Bump to v2.4.12 and new release notes.
2023-07-09 13:33:07 -04:00
Adam Treat
d9f0245c1b
Fix problems with browse of folder in settings dialog.
2023-07-09 13:05:06 -04:00
Adam Treat
58d6f40f50
Fix broken installs.
2023-07-09 11:50:44 -04:00
Adam Treat
85626b3dab
Fix model path.
2023-07-09 11:33:58 -04:00
Adam Treat
ee73f1ab1d
Shrink the templates.
2023-07-06 17:10:57 -04:00
Akarshan Biswas
c987e56db7
Update CMakeLists.txt - change WaylandClient to WaylandCompositor
...
https://doc.qt.io/qt-6/qwaylandcompositor.html
Signed-off-by: Akarshan Biswas <akarshan.biswas@gmail.com>
2023-07-06 12:50:05 -04:00
Akarshan Biswas
16bd4a14d3
Add Qt6:WaylandClient only to Linux Build
...
Signed-off-by: Akarshan Biswas <akarshan.biswas@gmail.com>
2023-07-06 12:50:05 -04:00
Adam Treat
18316cde39
Bump to 2.4.12 and release notes.
2023-07-06 12:25:25 -04:00
Adam Treat
db528ef1b0
Add a close button for dialogs.
2023-07-06 10:53:56 -04:00
Adam Treat
27981c0d21
Fix broken download/remove/install.
2023-07-05 20:12:37 -04:00
Adam Treat
eab92a9d73
Fix typo and add new show references setting to localdocs.
2023-07-05 19:41:23 -04:00
Adam Treat
0638b45b47
Per model prompts / templates.
2023-07-05 16:30:41 -04:00
Adam Treat
1491c9fe49
Fix build on windows.
2023-07-05 15:51:42 -04:00
Adam Treat
6d9cdf228c
Huge change that completely revamps the settings dialog and implements
...
per model settings as well as the ability to clone a model into a "character."
This also implements system prompts as well as quite a few bugfixes for
instance this fixes chatgpt.
2023-07-05 15:51:42 -04:00
Adam Treat
2a6c673c25
Begin redesign of settings dialog.
2023-07-05 15:51:42 -04:00
Adam Treat
dedb0025be
Refactor the settings dialog so that it uses a set of components/abstractions
...
for all of the tabs and stacks
2023-07-05 15:51:42 -04:00
Lakshay Kansal
b3c29e4179
implemented support for bash and go highlighting rules ( #1138 )
...
* implemented support for bash and go
* add more commands to bash
* gave precedence to variables over strings in bash
2023-07-05 11:04:13 -04:00
matthew-gill
fd4081aed8
Update codeblock font
2023-07-05 09:44:25 -04:00
Lakshay Kansal
70cbff70cc
created highlighting rules for java using regex for the gpt4all chat interface
2023-06-29 13:11:37 -03:00
Adam Treat
1cd734efdc
Provide an abstraction to break up the settings dialog into manageable pieces.
2023-06-29 09:59:54 -04:00
Adam Treat
7f252b4970
This completes the work of consolidating all settings that can be changed by the user on the new settings object.
2023-06-29 00:44:48 -03:00
Adam Treat
285aa50b60
Consolidate generation and application settings on the new settings object.
2023-06-28 20:36:43 -03:00
Adam Treat
7f66c28649
Use the new settings for response generation.
2023-06-28 20:11:24 -03:00
Adam Treat
a8baa4da52
The sync for save should be after.
2023-06-28 20:11:24 -03:00
Adam Treat
705b480d72
Start moving toward a single authoritative class for all settings. This
...
is necessary to get rid of technical debt before we drastically increase
the complexity of settings by adding per model settings and mirostat and
other fun things. Right now the settings are divided between QML and C++
and some convenience methods to deal with settings sync and so on that are
in other singletons. This change consolidates all the logic for settings
into a single class with a single API for both C++ and QML.
2023-06-28 20:11:24 -03:00
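The consolidation described above can be sketched as a single authoritative settings singleton with one API. The real class is a QObject exposing typed properties and change signals to both C++ and QML, so the name and the string-keyed API here are purely illustrative:

```cpp
#include <map>
#include <string>

// Sketch of a single authoritative settings class: one instance owns all
// settings instead of state being split across QML, C++, and helper
// singletons. Illustrative only -- the actual class uses typed Qt
// properties and signals, not a string map.
class SettingsStore {
public:
    static SettingsStore &instance() {
        static SettingsStore store; // the one authoritative instance
        return store;
    }
    void set(const std::string &key, const std::string &value) {
        m_values[key] = value; // a real impl would also emit a change signal
    }
    std::string get(const std::string &key,
                    const std::string &fallback = "") const {
        auto it = m_values.find(key);
        return it != m_values.end() ? it->second : fallback;
    }
private:
    SettingsStore() = default;
    std::map<std::string, std::string> m_values;
};
```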
Adam Treat
e70899a26c
Make the retrieval/parsing of models.json sync on startup. We were jumping through too many hoops to mitigate the async behavior.
2023-06-28 12:32:22 -03:00
Adam Treat
9560336490
Match on the filename too for server mode.
2023-06-28 09:20:05 -04:00
Adam Treat
58cd346686
Bump release again and new release notes.
2023-06-27 18:01:23 -04:00
Adam Treat
0f8f364d76
Fix mac again for falcon.
2023-06-27 17:20:40 -04:00
Adam Treat
8aae4e52b3
Fix for falcon on mac.
2023-06-27 17:13:13 -04:00
Adam Treat
9375c71aa7
New release notes for 2.4.9 and bump version.
2023-06-27 17:01:49 -04:00
Adam Treat
71449bbc4b
Fix this correctly?
2023-06-27 16:01:11 -04:00
Adam Treat
07a5405618
Make it clear this is our finetune.
2023-06-27 15:33:38 -04:00
Adam Treat
189ac82277
Fix server mode.
2023-06-27 15:01:16 -04:00
Adam Treat
b56cc61ca2
Don't allow setting an invalid prompt template.
2023-06-27 14:52:44 -04:00
Adam Treat
0780393d00
Don't use local.
2023-06-27 14:13:42 -04:00
Adam Treat
924efd9e25
Add falcon to our models.json
2023-06-27 13:56:16 -04:00
Adam Treat
d3b8234106
Fix spelling.
2023-06-27 14:23:56 -03:00
Adam Treat
42c0a6673a
Don't persist the force metal setting.
2023-06-27 14:23:56 -03:00
Adam Treat
267601d670
Enable the force metal setting.
2023-06-27 14:23:56 -03:00
Aaron Miller
e22dd164d8
add falcon to chatllm::serialize
2023-06-27 14:06:39 -03:00
Aaron Miller
198b5e4832
add Falcon 7B model
...
Tested with https://huggingface.co/TheBloke/falcon-7b-instruct-GGML/blob/main/falcon7b-instruct.ggmlv3.q4_0.bin
2023-06-27 14:06:39 -03:00
Adam Treat
985d3bbfa4
Add Orca models to list.
2023-06-27 09:38:43 -04:00
Adam Treat
8558fb4297
Fix models.json for spanning multiple lines with string.
2023-06-26 21:35:56 -04:00
Adam Treat
c24ad02a6a
Wait just a bit to set the model name so that we can display the proper name instead of filename.
2023-06-26 21:00:09 -04:00
Adam Treat
57fa8644d6
Make spelling check happy.
2023-06-26 17:56:56 -04:00
Adam Treat
d0a3e82ffc
Restore feature I accidentally erased in modellist update.
2023-06-26 17:50:45 -04:00
Aaron Miller
b19a3e5b2c
add requiredMem method to llmodel impls
...
Most of these can just shortcut out of the model loading logic. llama is a bit worse to deal with because we submodule it, so I have to at least parse the hparams; then I just use the size on disk as an estimate for the mem size (which seems reasonable since we mmap() the llama files anyway).
2023-06-26 18:27:58 -03:00
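The size-on-disk estimate described above can be sketched as follows; the function name and signature are illustrative, not the actual llmodel API:

```cpp
#include <cstdint>
#include <filesystem>
#include <string>

// Approximate the memory a model will need without loading it: for an
// mmap()'d model file, the size on disk is a reasonable estimate, so we
// can shortcut out of the full model loading logic.
int64_t estimateRequiredMem(const std::string &modelPath) {
    std::error_code ec;
    auto size = std::filesystem::file_size(modelPath, ec);
    if (ec)
        return -1; // file missing or unreadable
    return static_cast<int64_t>(size);
}
```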
Adam Treat
dead954134
Fix save chats setting.
2023-06-26 16:43:37 -04:00
Adam Treat
26c9193227
Sigh. Windows.
2023-06-26 16:34:35 -04:00
Adam Treat
5deec2afe1
Change this back now that it is ready.
2023-06-26 16:21:09 -04:00
Adam Treat
676248fe8f
Update the language.
2023-06-26 14:14:49 -04:00
Adam Treat
ef92492d8c
Add better warnings and links.
2023-06-26 14:14:49 -04:00
Adam Treat
71c972f8fa
Provide a more stark warning for localdocs and add more size to dialogs.
2023-06-26 14:14:49 -04:00
Adam Treat
1b5aa4617f
Enable the add button always, but show an error in placeholder text.
2023-06-26 14:14:49 -04:00
Adam Treat
a0f80453e5
Use sysinfo in backend.
2023-06-26 14:14:49 -04:00
Adam Treat
5e520bb775
Fix so that models are searched in subdirectories.
2023-06-26 14:14:49 -04:00
Adam Treat
64e98b8ea9
Fix bug with model loading on initial load.
2023-06-26 14:14:49 -04:00
Adam Treat
3ca9e8692c
Don't try and load incomplete files.
2023-06-26 14:14:49 -04:00
Adam Treat
27f25d5878
Get rid of recursive mutex.
2023-06-26 14:14:49 -04:00
Adam Treat
7f01b153b3
Modellist temp
2023-06-26 14:14:46 -04:00
Adam Treat
c1794597a7
Revert "Enable Wayland in build"
...
This reverts commit d686a583f9.
2023-06-26 14:10:27 -04:00
Akarshan Biswas
d686a583f9
Enable Wayland in build
...
The patch includes support for running natively on a Linux Wayland display server/compositor, the successor to the old Xorg. CMakeLists was missing WaylandClient, so it was added back.
Will fix #1047.
Signed-off-by: Akarshan Biswas <akarshan.biswas@gmail.com>
2023-06-26 14:58:23 -03:00
AMOGUS
3417a37c54
Change "web server" to "API server" for less confusion ( #1039 )
...
* Change "Web server" to "API server"
* Changed "API server" to "OpenAPI server"
* Reversed back to "API server" and updated tooltip
2023-06-23 16:28:52 -04:00
cosmic-snow
a423075403
Allow Cross-Origin Resource Sharing (CORS) ( #1008 )
2023-06-22 09:19:49 -07:00
Martin Mauch
af28173a25
Parse Org Mode files ( #1038 )
2023-06-22 09:09:39 -07:00
niansa/tuxifan
01acb8d250
Update download speed less often
...
So as not to show every little network spike to the user
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-22 09:29:15 +02:00
Adam Treat
09ae04cee9
This needs to work even when localdocs and codeblocks are detected.
2023-06-20 19:07:02 -04:00
Adam Treat
ce7333029f
Make the copy button a little more tolerant.
2023-06-20 18:59:08 -04:00
Adam Treat
508993de75
Exit early when no chats are saved.
2023-06-20 18:30:17 -04:00
Adam Treat
85bc861835
Fix the alignment.
2023-06-20 17:40:02 -04:00
Adam Treat
eebfe642c4
Add an error message to download dialog if models.json can't be retrieved.
2023-06-20 17:31:36 -04:00
Adam Treat
968868415e
Move saving chats to a thread and display what we're doing to the user.
2023-06-20 17:18:33 -04:00
Adam Treat
c8a590bc6f
Get rid of last blocking operations and make the chat/llm thread safe.
2023-06-20 18:18:10 -03:00
Adam Treat
84ec4311e9
Remove duplicated state tracking for chatgpt.
2023-06-20 18:18:10 -03:00
Adam Treat
7d2ce06029
Start working on more thread safety and model load error handling.
2023-06-20 14:39:22 -03:00
Adam Treat
d5f56d3308
Forgot to add a signal handler.
2023-06-20 14:39:22 -03:00
Adam Treat
aa2c824258
Initialize these.
2023-06-19 15:38:01 -07:00
Adam Treat
d018b4c821
Make this atomic.
2023-06-19 15:38:01 -07:00
Adam Treat
a3a6a20146
Don't store db results in ChatLLM.
2023-06-19 15:38:01 -07:00
Adam Treat
0cfe225506
Remove this as unnecessary.
2023-06-19 15:38:01 -07:00
Adam Treat
7c28e79644
Fix regenerate response with references.
2023-06-19 17:52:14 -04:00
AT
f76df0deac
Typescript ( #1022 )
...
* Show token generation speed in gui.
* Add typescript/javascript to list of highlighted languages.
2023-06-19 16:12:37 -04:00
AT
2b6cc99a31
Show token generation speed in gui. ( #1020 )
2023-06-19 14:34:53 -04:00
cosmic-snow
fd419caa55
Minor models.json description corrections. ( #1013 )
...
Signed-off-by: cosmic-snow <134004613+cosmic-snow@users.noreply.github.com>
2023-06-18 14:10:29 -04:00
Adam Treat
42e8049564
Bump version and new release notes for metal bugfix edition.
2023-06-16 17:43:10 -04:00
Adam Treat
e2c807d4df
Always install metal on apple.
2023-06-16 17:24:20 -04:00
Adam Treat
d5179ac0c0
Fix cmake build.
2023-06-16 17:18:17 -04:00
Adam Treat
d4283c0053
Fix metal and replit.
2023-06-16 17:13:49 -04:00
Adam Treat
0a0d4a714e
New release and bump the version.
2023-06-16 15:20:23 -04:00
Adam Treat
782e1e77a4
Fix up model names that don't begin with 'ggml-'
2023-06-16 14:43:14 -04:00
Adam Treat
b39a7d4fd9
Fix json.
2023-06-16 14:21:20 -04:00
Adam Treat
6690b49a9f
Converts the following to Q4_0
...
* Snoozy
* Nous Hermes
* Wizard 13b uncensored
Uses the filenames from the actual download for these three.
2023-06-16 14:12:56 -04:00
AT
a576220b18
Support loading files if 'ggml' is found anywhere in the name not just at ( #1001 )
...
the beginning and add deprecated flag to models.json so older versions will
show a model, but later versions don't. This will allow us to transition
away from models < ggmlv2 and still allow older installs of gpt4all to work.
2023-06-16 11:09:33 -04:00
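The loosened filename check described above amounts to a substring match rather than a prefix match; a sketch with an illustrative name:

```cpp
#include <string>

// Accept a model file if "ggml" appears anywhere in the name,
// instead of requiring the old "ggml-" prefix at the beginning.
bool looksLikeModelFile(const std::string &fileName) {
    return fileName.find("ggml") != std::string::npos;
}
```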
Adam Treat
8953b7f6a6
Fix regression in checked of db and network.
2023-06-13 20:08:46 -04:00
Aaron Miller
88616fde7f
llmodel: change tokenToString to not use string_view ( #968 )
...
fixes a definite use-after-free and likely avoids some other potential
ones. A std::string will convert to a std::string_view automatically,
but as soon as the std::string in question goes out of scope it is
already freed and the string_view is pointing at freed memory. This is
*mostly* fine if it is returning a reference to the tokenizer's internal
vocab table, but it's, imo, too easy to return a reference to a
dynamically constructed string this way, as replit is doing (and
unfortunately needs to do, to convert the internal whitespace
replacement symbol back to a space)
2023-06-13 07:14:02 -04:00
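The bug class described above can be illustrated by the safe pattern the fix adopts: return std::string by value rather than a std::string_view that may dangle once a temporary dies. The whitespace-marker byte sequence and the names here are illustrative, not the actual replit tokenizer code:

```cpp
#include <string>

// Returning std::string by value is safe even when the result is
// dynamically constructed, e.g. replacing an internal whitespace marker
// ("\xc4\xa0", the UTF-8 bytes some BPE vocabs use for a leading space)
// with a real space. Returning a string_view into a local here would
// leave the caller holding a pointer into freed memory.
std::string tokenToString(const std::string &piece) {
    std::string out;
    for (size_t i = 0; i < piece.size(); ) {
        if (piece.compare(i, 2, "\xc4\xa0") == 0) {
            out += ' '; // decode the internal space marker
            i += 2;
        } else {
            out += piece[i];
            ++i;
        }
    }
    return out; // by value: the caller owns the storage
}
```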
Adam Treat
68ff7001ad
Bugfixes for prompt syntax highlighting.
2023-06-12 05:55:14 -07:00
Adam Treat
60d95cdd9b
Fix some bugs with bash syntax and add some C23 keywords.
2023-06-12 05:08:18 -07:00
Adam Treat
e986f18904
Add C++/C highlighting support.
2023-06-12 05:08:18 -07:00
Adam Treat
ae46234261
Spelling error.
2023-06-11 14:20:05 -07:00
Adam Treat
318c51c141
Add code blocks and python syntax highlighting.
2023-06-11 14:20:05 -07:00
Adam Treat
b67cba19f0
Don't interfere with selection.
2023-06-11 14:20:05 -07:00
Adam Treat
50c5b82e57
Clean up the context links a bit.
2023-06-11 14:20:05 -07:00
AT
a9c2f47303
Add new solution for context links that does not force regular markdown ( #938 )
...
in responses which is disruptive to code completions in responses.
2023-06-10 10:15:38 -04:00
Aaron Miller
d3ba1295a7
Metal+LLama take two ( #929 )
...
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-09 16:48:46 -04:00
Adam Treat
b162b5c64e
Revert "llama on Metal ( #885 )"
...
This reverts commit c55f81b860.
2023-06-09 15:08:46 -04:00
Aaron Miller
c55f81b860
llama on Metal ( #885 )
...
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-09 14:58:12 -04:00
pingpongching
0d0fae0ca8
Change the default values for generation in GUI
2023-06-09 08:51:09 -04:00
Adam Treat
8fb73c2114
Forgot to bump.
2023-06-09 08:45:31 -04:00
Richard Guo
be2310322f
update models json with replit model
2023-06-09 08:44:46 -04:00
Andriy Mulyar
eb26293205
Update CollectionsDialog.qml ( #856 )
...
Phrasing for localdocs
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-08 13:44:17 -04:00
Richard Guo
c4706d0c14
Replit Model ( #713 )
...
* porting over replit code model to gpt4all
* replaced memory with kv_self struct
* continuing debug
* welp it built but lot of sus things
* working model loading and somewhat working generate.. need to format response?
* revert back to semi working version
* finally got rid of weird formatting
* figured out problem is with python bindings - this is good to go for testing
* addressing PR feedback
* output refactor
* fixed prompt response collection
* cleanup
* addressing PR comments
* building replit backend with new ggmlver code
* chatllm replit and clean python files
* cleanup
* updated replit to match new llmodel api
* match llmodel api and change size_t to Token
* resolve PR comments
* replit model commit comment
2023-06-06 17:09:00 -04:00
Adam Treat
fdffad9efe
New release notes
2023-06-05 14:55:59 -04:00
Adam Treat
f5bdf7c94c
Bump the version.
2023-06-05 14:32:00 -04:00
Andriy Mulyar
d8e821134e
Revert "Fix bug with resetting context with chatgpt model." ( #859 )
...
This reverts commit 031d7149a7.
2023-06-05 14:25:37 -04:00
Adam Treat
ecfeba2710
Revert "Speculative fix for windows llama models with installer."
...
This reverts commit c99e03e22e.
2023-06-05 14:25:01 -04:00
Adam Treat
c99e03e22e
Speculative fix for windows llama models with installer.
2023-06-05 13:21:08 -04:00
AT
da757734ea
Release notes for version 2.4.5 ( #853 )
2023-06-05 12:10:17 -04:00
Adam Treat
969ff0ee6b
Fix installers for windows and linux.
2023-06-05 10:50:16 -04:00
Adam Treat
1d4c8e7091
These need to be installed for them to be packaged and work for both mac and windows.
2023-06-05 09:57:00 -04:00
Adam Treat
3a9cc329b1
Fix compile on mac.
2023-06-05 09:31:57 -04:00
Adam Treat
25eec33bda
Try and fix mac.
2023-06-05 09:30:50 -04:00
Adam Treat
91f20becef
Need this so the linux installer packages it as a dependency.
2023-06-05 09:23:43 -04:00
Adam Treat
812b2f4b29
Make installers work with mac/windows for big backend change.
2023-06-05 09:23:17 -04:00
Andriy Mulyar
2e5b114364
Update models.json
...
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-04 20:48:45 -04:00
Andriy Mulyar
0db6fd6867
Update models.json ( #838 )
...
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-04 20:36:12 -04:00
AT
d5cf584f8d
Remove older models that are not as popular. ( #837 )
...
* Remove older models that are not as popular.
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-04 20:26:43 -04:00
Adam Treat
301d2fdbea
Fix up for newer models on reset context. This fixes the model from totally failing after a reset context.
2023-06-04 19:31:20 -04:00
Adam Treat
bdba2e8de6
Allow for download of models hosted on third party hosts.
2023-06-04 19:02:43 -04:00
Adam Treat
5073630759
Try again with the url.
2023-06-04 18:39:36 -04:00
Adam Treat
6ba37f47c1
Trying out a new feature to download directly from huggingface.
2023-06-04 18:34:04 -04:00
AT
be3c63ffcd
Update build_and_run.md ( #834 )
...
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-06-04 15:39:32 -04:00
AT
5f95aa9fc6
We no longer have an avx_only repository and better error handling for minimum hardware requirements. ( #833 )
2023-06-04 15:28:58 -04:00
Adam Treat
9f590db98d
Better error handling when the model fails to load.
2023-06-04 14:55:05 -04:00
AT
bbe195ee02
Backend prompt dedup ( #822 )
...
* Deduplicated prompt() function code
2023-06-04 08:59:24 -04:00