Adam Treat
189ac82277
Fix server mode.
2023-06-27 15:01:16 -04:00
Adam Treat
b56cc61ca2
Don't allow setting an invalid prompt template.
2023-06-27 14:52:44 -04:00
Adam Treat
0780393d00
Don't use local.
2023-06-27 14:13:42 -04:00
Adam Treat
924efd9e25
Add falcon to our models.json
2023-06-27 13:56:16 -04:00
Adam Treat
d3b8234106
Fix spelling.
2023-06-27 14:23:56 -03:00
Adam Treat
42c0a6673a
Don't persist the force metal setting.
2023-06-27 14:23:56 -03:00
Adam Treat
267601d670
Enable the force metal setting.
2023-06-27 14:23:56 -03:00
Aaron Miller
e22dd164d8
add falcon to chatllm::serialize
2023-06-27 14:06:39 -03:00
Aaron Miller
198b5e4832
add Falcon 7B model
...
Tested with https://huggingface.co/TheBloke/falcon-7b-instruct-GGML/blob/main/falcon7b-instruct.ggmlv3.q4_0.bin
2023-06-27 14:06:39 -03:00
Adam Treat
985d3bbfa4
Add Orca models to list.
2023-06-27 09:38:43 -04:00
Adam Treat
8558fb4297
Fix models.json for strings spanning multiple lines.
2023-06-26 21:35:56 -04:00
Adam Treat
c24ad02a6a
Wait just a bit to set the model name so that we can display the proper name instead of filename.
2023-06-26 21:00:09 -04:00
Adam Treat
57fa8644d6
Make spelling check happy.
2023-06-26 17:56:56 -04:00
Adam Treat
d0a3e82ffc
Restore feature I accidentally erased in modellist update.
2023-06-26 17:50:45 -04:00
Aaron Miller
b19a3e5b2c
add requiredMem method to llmodel impls
...
most of these can just shortcut out of the model loading logic. llama is a bit worse to deal with because we submodule it, so I have to at least parse the hparams, and then I just use the size on disk as an estimate for the mem size (which seems reasonable since we mmap() the llama files anyway)
2023-06-26 18:27:58 -03:00
Adam Treat
dead954134
Fix save chats setting.
2023-06-26 16:43:37 -04:00
Adam Treat
26c9193227
Sigh. Windows.
2023-06-26 16:34:35 -04:00
Adam Treat
5deec2afe1
Change this back now that it is ready.
2023-06-26 16:21:09 -04:00
Adam Treat
676248fe8f
Update the language.
2023-06-26 14:14:49 -04:00
Adam Treat
ef92492d8c
Add better warnings and links.
2023-06-26 14:14:49 -04:00
Adam Treat
71c972f8fa
Provide a more stark warning for localdocs and add more size to dialogs.
2023-06-26 14:14:49 -04:00
Adam Treat
1b5aa4617f
Enable the add button always, but show an error in placeholder text.
2023-06-26 14:14:49 -04:00
Adam Treat
a0f80453e5
Use sysinfo in backend.
2023-06-26 14:14:49 -04:00
Adam Treat
5e520bb775
Fix so that models are searched in subdirectories.
2023-06-26 14:14:49 -04:00
Adam Treat
64e98b8ea9
Fix bug with model loading on initial load.
2023-06-26 14:14:49 -04:00
Adam Treat
3ca9e8692c
Don't try to load incomplete files.
2023-06-26 14:14:49 -04:00
Adam Treat
27f25d5878
Get rid of recursive mutex.
2023-06-26 14:14:49 -04:00
Adam Treat
7f01b153b3
Modellist temp
2023-06-26 14:14:46 -04:00
Adam Treat
c1794597a7
Revert "Enable Wayland in build"
...
This reverts commit d686a583f9.
2023-06-26 14:10:27 -04:00
Akarshan Biswas
d686a583f9
Enable Wayland in build
...
The patch includes support for running natively on a Linux Wayland display server/compositor, which is the successor to the old Xorg.
The CMakeLists was missing WaylandClient, so it was added back.
Will fix #1047.
Signed-off-by: Akarshan Biswas <akarshan.biswas@gmail.com>
2023-06-26 14:58:23 -03:00
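The change above (later reverted) amounts to adding the WaylandClient component back to the Qt package lookup. A minimal sketch of what that looks like, assuming a Qt 6 CMakeLists.txt; the project's actual component list may differ:

```cmake
# Hypothetical fragment: request WaylandClient alongside the usual Qt
# modules so the application can run natively on a Wayland compositor
# instead of falling back to XWayland.
find_package(Qt6 COMPONENTS Core Quick WaylandClient REQUIRED)
```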
AMOGUS
3417a37c54
Change "web server" to "API server" for less confusion ( #1039 )
...
* Change "Web server" to "API server"
* Changed "API server" to "OpenAPI server"
* Reversed back to "API server" and updated tooltip
2023-06-23 16:28:52 -04:00
cosmic-snow
a423075403
Allow Cross-Origin Resource Sharing (CORS) ( #1008 )
2023-06-22 09:19:49 -07:00
Martin Mauch
af28173a25
Parse Org Mode files ( #1038 )
2023-06-22 09:09:39 -07:00
niansa/tuxifan
01acb8d250
Update download speed less often
...
So as not to show every tiny network spike to the user
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-22 09:29:15 +02:00
Adam Treat
09ae04cee9
This needs to work even when localdocs and codeblocks are detected.
2023-06-20 19:07:02 -04:00
Adam Treat
ce7333029f
Make the copy button a little more tolerant.
2023-06-20 18:59:08 -04:00
Adam Treat
508993de75
Exit early when no chats are saved.
2023-06-20 18:30:17 -04:00
Adam Treat
85bc861835
Fix the alignment.
2023-06-20 17:40:02 -04:00
Adam Treat
eebfe642c4
Add an error message to download dialog if models.json can't be retrieved.
2023-06-20 17:31:36 -04:00
Adam Treat
968868415e
Move saving chats to a thread and display what we're doing to the user.
2023-06-20 17:18:33 -04:00
Adam Treat
c8a590bc6f
Get rid of last blocking operations and make the chat/llm thread safe.
2023-06-20 18:18:10 -03:00
Adam Treat
84ec4311e9
Remove duplicated state tracking for chatgpt.
2023-06-20 18:18:10 -03:00
Adam Treat
7d2ce06029
Start working on more thread safety and model load error handling.
2023-06-20 14:39:22 -03:00
Adam Treat
d5f56d3308
Forgot to add a signal handler.
2023-06-20 14:39:22 -03:00
Adam Treat
aa2c824258
Initialize these.
2023-06-19 15:38:01 -07:00
Adam Treat
d018b4c821
Make this atomic.
2023-06-19 15:38:01 -07:00
Adam Treat
a3a6a20146
Don't store db results in ChatLLM.
2023-06-19 15:38:01 -07:00
Adam Treat
0cfe225506
Remove this as unnecessary.
2023-06-19 15:38:01 -07:00
Adam Treat
7c28e79644
Fix regenerate response with references.
2023-06-19 17:52:14 -04:00
AT
f76df0deac
Typescript ( #1022 )
...
* Show token generation speed in gui.
* Add typescript/javascript to list of highlighted languages.
2023-06-19 16:12:37 -04:00