Adam Treat
8558fb4297
Fix models.json for strings spanning multiple lines.
2023-06-26 21:35:56 -04:00
Adam Treat
c24ad02a6a
Wait just a bit to set the model name so that we can display the proper name instead of the filename.
2023-06-26 21:00:09 -04:00
Aaron Miller
db34a2f670
llmodel: skip attempting Metal if model+kvcache > 53% of system ram
2023-06-26 19:46:49 -03:00
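A minimal sketch of the admission check this commit describes, assuming hypothetical names (shouldTryMetal and its size parameters are illustrations, not the actual llmodel API):

```cpp
// Sketch of the heuristic: skip Metal when the model plus its KV cache
// would exceed ~53% of total system RAM. Names are hypothetical.
#include <cstddef>

bool shouldTryMetal(size_t modelBytes, size_t kvCacheBytes, size_t systemRamBytes) {
    const double kThreshold = 0.53; // fraction of system RAM allowed for Metal
    return static_cast<double>(modelBytes) + static_cast<double>(kvCacheBytes)
           <= kThreshold * static_cast<double>(systemRamBytes);
}
```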
Adam Treat
57fa8644d6
Make spelling check happy.
2023-06-26 17:56:56 -04:00
Adam Treat
d0a3e82ffc
Restore feature I accidentally erased in modellist update.
2023-06-26 17:50:45 -04:00
Aaron Miller
b19a3e5b2c
add requiredMem method to llmodel impls
...
Most of these can just shortcut out of the model loading logic. llama is a bit worse to deal with because we submodule it, so I have to at least parse the hparams; then I just use the size on disk as an estimate for the mem size (which seems reasonable since we mmap() the llama files anyway).
2023-06-26 18:27:58 -03:00
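A hedged sketch of the disk-size estimate described above (the name estimateRequiredMem and its exact behavior are assumptions; the real implementation also parses the llama hparams):

```cpp
// Estimate required memory from the file size on disk, a reasonable proxy
// for mmap()ed llama models, as the commit message explains.
#include <cstddef>
#include <filesystem>
#include <string>

size_t estimateRequiredMem(const std::string &modelPath) {
    std::error_code ec;
    const auto bytes = std::filesystem::file_size(modelPath, ec);
    if (ec)
        return 0; // size unknown; caller can fall back to attempting the load
    return static_cast<size_t>(bytes);
}
```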
Adam Treat
dead954134
Fix save chats setting.
2023-06-26 16:43:37 -04:00
Adam Treat
26c9193227
Sigh. Windows.
2023-06-26 16:34:35 -04:00
Adam Treat
5deec2afe1
Change this back now that it is ready.
2023-06-26 16:21:09 -04:00
Adam Treat
676248fe8f
Update the language.
2023-06-26 14:14:49 -04:00
Adam Treat
ef92492d8c
Add better warnings and links.
2023-06-26 14:14:49 -04:00
Adam Treat
71c972f8fa
Provide a starker warning for localdocs and make the dialogs larger.
2023-06-26 14:14:49 -04:00
Adam Treat
1b5aa4617f
Enable the add button always, but show an error in placeholder text.
2023-06-26 14:14:49 -04:00
Adam Treat
a0f80453e5
Use sysinfo in backend.
2023-06-26 14:14:49 -04:00
Adam Treat
5e520bb775
Fix so that models are searched for in subdirectories.
2023-06-26 14:14:49 -04:00
Adam Treat
64e98b8ea9
Fix bug with model loading on initial load.
2023-06-26 14:14:49 -04:00
Adam Treat
3ca9e8692c
Don't try to load incomplete files.
2023-06-26 14:14:49 -04:00
Adam Treat
27f25d5878
Get rid of recursive mutex.
2023-06-26 14:14:49 -04:00
Adam Treat
7f01b153b3
Modellist temp
2023-06-26 14:14:46 -04:00
Adam Treat
c1794597a7
Revert "Enable Wayland in build"
...
This reverts commit d686a583f9.
2023-06-26 14:10:27 -04:00
Akarshan Biswas
d686a583f9
Enable Wayland in build
...
# Describe your changes
The patch includes support for running natively on a Linux Wayland display server/compositor, which is the successor to the old Xorg.
CMakeLists was missing WaylandClient, so it was added back.
Will fix #1047.
Signed-off-by: Akarshan Biswas <akarshan.biswas@gmail.com>
2023-06-26 14:58:23 -03:00
niansa/tuxifan
47323f8591
Update replit.cpp
...
replit_tokenizer_detokenize returns std::string now
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-26 14:49:58 -03:00
niansa
0855c0df1d
Fixed Replit implementation compile warnings
2023-06-26 14:49:58 -03:00
Aaron Miller
1290b32451
update to latest mainline llama.cpp
...
add max_size param to ggml_metal_add_buffer - introduced in https://github.com/ggerganov/llama.cpp/pull/1826
2023-06-26 14:40:52 -03:00
AMOGUS
3417a37c54
Change "web server" to "API server" for less confusion ( #1039 )
...
* Change "Web server" to "API server"
* Changed "API server" to "OpenAPI server"
* Reverted to "API server" and updated the tooltip
2023-06-23 16:28:52 -04:00
cosmic-snow
ee26e8f271
CLI Improvements (#1021)
...
* Add gpt4all-bindings/cli/README.md
* Unify version information
- Was previously split; now one is based on the other
- Add VERSION_INFO as the "source of truth":
- Modelled after sys.version_info.
- Implemented as a tuple, because it's much easier for (partial)
programmatic comparison.
- Previous API is kept intact.
* Add gpt4all-bindings/cli/developer_notes.md
- A few notes on what's what, especially regarding docs
* Add gpt4all-bindings/python/docs/gpt4all_cli.md
- The CLI user documentation
* Bump CLI version to 0.3.5
* Finalise docs & add to index.md
- Amend where necessary
- Fix typo in gpt4all_cli.md
- Mention and add link to CLI doc in index.md
* Add docstrings to gpt4all-bindings/cli/app.py
* Better 'groovy' link & fix typo
- Documentation: point to the Hugging Face model card for 'groovy'
- Correct typo in app.py
2023-06-23 12:09:31 -07:00
EKal-aa
aed7b43143
set n_threads in GPT4All python bindings (#1042)
...
* set n_threads in GPT4All
* changed default n_threads to None
2023-06-23 01:16:35 -07:00
Michael Mior
ae3d91476c
Improve grammar in Java bindings README (#1045)
...
Signed-off-by: Michael Mior <michael.mior@gmail.com>
2023-06-22 18:49:58 -04:00
Aaron Miller
c252fd93fd
CI: apt update before apt install (#1046)
...
avoid spurious failures due to apt cache being out of date
2023-06-22 15:21:11 -07:00
cosmic-snow
a423075403
Allow Cross-Origin Resource Sharing (CORS) (#1008)
2023-06-22 09:19:49 -07:00
Martin Mauch
af28173a25
Parse Org Mode files (#1038)
2023-06-22 09:09:39 -07:00
Max Vincent Goldgamer
5a1d22804e
Fix Typos on macOS (#1018)
...
Signed-off-by: Max Vincent Goldgamer <11319871+iMonZ@users.noreply.github.com>
2023-06-22 11:48:12 -04:00
niansa/tuxifan
01acb8d250
Update download speed less often
...
To avoid showing every little network spike to the user
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-22 09:29:15 +02:00
niansa/tuxifan
5eee16c97c
Do not specify "success" as the error for unsupported models
...
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-22 09:28:40 +02:00
Adam Treat
09ae04cee9
This needs to work even when localdocs and codeblocks are detected.
2023-06-20 19:07:02 -04:00
Adam Treat
ce7333029f
Make the copy button a little more tolerant.
2023-06-20 18:59:08 -04:00
Adam Treat
508993de75
Exit early when no chats are saved.
2023-06-20 18:30:17 -04:00
Adam Treat
bd58c46da0
Initialize these to nullptr to prevent double deletion when a model fails to load.
2023-06-20 18:23:45 -04:00
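A generic illustration of why that helps (the class and member names are hypothetical, not the actual chat/llm code): deleting a null pointer is a no-op, so resetting an owning pointer to nullptr after a failed load keeps the destructor from deleting it a second time.

```cpp
// Hypothetical example: owning pointer initialized to nullptr and reset on
// failure, so the destructor's delete never runs on an already-freed object.
struct ModelHolder {
    int *weights = nullptr; // stand-in for the real model object

    void load(bool succeed) {
        weights = new int[4]{};
        if (!succeed) {          // load failed: clean up and reset
            delete[] weights;
            weights = nullptr;   // prevents double deletion in the destructor
        }
    }

    ~ModelHolder() { delete[] weights; } // delete[] on nullptr is a no-op
};
```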
Adam Treat
85bc861835
Fix the alignment.
2023-06-20 17:40:02 -04:00
Adam Treat
eebfe642c4
Add an error message to download dialog if models.json can't be retrieved.
2023-06-20 17:31:36 -04:00
Adam Treat
968868415e
Move saving chats to a thread and display what we're doing to the user.
2023-06-20 17:18:33 -04:00
Adam Treat
c8a590bc6f
Get rid of the last blocking operations and make the chat/llm thread-safe.
2023-06-20 18:18:10 -03:00
Adam Treat
84ec4311e9
Remove duplicated state tracking for chatgpt.
2023-06-20 18:18:10 -03:00
Adam Treat
7d2ce06029
Start working on more thread safety and model load error handling.
2023-06-20 14:39:22 -03:00
Adam Treat
d5f56d3308
Forgot to add a signal handler.
2023-06-20 14:39:22 -03:00
Richard Guo
a39a897e34
0.3.5 bump
2023-06-20 10:21:51 -04:00
Richard Guo
25ce8c6a1e
revert version
2023-06-20 10:21:51 -04:00
Richard Guo
282a3b5498
setup.py update
2023-06-20 10:21:51 -04:00
Adam Treat
aa2c824258
Initialize these.
2023-06-19 15:38:01 -07:00
Adam Treat
d018b4c821
Make this atomic.
2023-06-19 15:38:01 -07:00