Commit Graph

171 Commits

Author SHA1 Message Date
Martin Mauch
af28173a25
Parse Org Mode files (#1038) 2023-06-22 09:09:39 -07:00
niansa/tuxifan
01acb8d250 Update download speed less often
So as not to show every tiny network spike to the user

Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-22 09:29:15 +02:00
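The throttling this commit describes could look roughly like the sketch below; the class and method names are illustrative assumptions, not the actual gpt4all-chat code.

```cpp
// Hypothetical sketch: only surface a new download-speed figure after a
// minimum interval has elapsed, so brief network spikes are not shown.
#include <chrono>

class DownloadSpeedReporter {
    std::chrono::steady_clock::time_point m_lastUpdate{};
public:
    // Returns true if enough time has passed to emit a fresh speed value.
    bool shouldUpdate(std::chrono::milliseconds minInterval = std::chrono::milliseconds{500}) {
        auto now = std::chrono::steady_clock::now();
        if (now - m_lastUpdate < minInterval)
            return false;
        m_lastUpdate = now;
        return true;
    }
};
```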
Adam Treat
09ae04cee9 This needs to work even when localdocs and codeblocks are detected. 2023-06-20 19:07:02 -04:00
Adam Treat
ce7333029f Make the copy button a little more tolerant. 2023-06-20 18:59:08 -04:00
Adam Treat
508993de75 Exit early when no chats are saved. 2023-06-20 18:30:17 -04:00
Adam Treat
85bc861835 Fix the alignment. 2023-06-20 17:40:02 -04:00
Adam Treat
eebfe642c4 Add an error message to download dialog if models.json can't be retrieved. 2023-06-20 17:31:36 -04:00
Adam Treat
968868415e Move saving chats to a thread and display what we're doing to the user. 2023-06-20 17:18:33 -04:00
Adam Treat
c8a590bc6f Get rid of last blocking operations and make the chat/llm thread safe. 2023-06-20 18:18:10 -03:00
Adam Treat
84ec4311e9 Remove duplicated state tracking for chatgpt. 2023-06-20 18:18:10 -03:00
Adam Treat
7d2ce06029 Start working on more thread safety and model load error handling. 2023-06-20 14:39:22 -03:00
Adam Treat
d5f56d3308 Forgot to add a signal handler. 2023-06-20 14:39:22 -03:00
Adam Treat
aa2c824258 Initialize these. 2023-06-19 15:38:01 -07:00
Adam Treat
d018b4c821 Make this atomic. 2023-06-19 15:38:01 -07:00
Adam Treat
a3a6a20146 Don't store db results in ChatLLM. 2023-06-19 15:38:01 -07:00
Adam Treat
0cfe225506 Remove this as unnecessary. 2023-06-19 15:38:01 -07:00
Adam Treat
7c28e79644 Fix regenerate response with references. 2023-06-19 17:52:14 -04:00
AT
f76df0deac
Typescript (#1022)
* Show token generation speed in gui.

* Add typescript/javascript to list of highlighted languages.
2023-06-19 16:12:37 -04:00
AT
2b6cc99a31
Show token generation speed in gui. (#1020) 2023-06-19 14:34:53 -04:00
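For reference, a tokens-per-second figure like the one this commit surfaces in the GUI could be derived as below; this is an illustrative sketch, not the actual gpt4all-chat implementation.

```cpp
// Hypothetical helper: derive a display-ready tokens/sec value from a token
// count and the elapsed generation time.
#include <chrono>

double tokensPerSecond(int tokensGenerated, std::chrono::milliseconds elapsed) {
    const double seconds = elapsed.count() / 1000.0;
    return seconds > 0.0 ? tokensGenerated / seconds : 0.0; // guard against divide-by-zero
}
```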
cosmic-snow
fd419caa55
Minor models.json description corrections. (#1013)
Signed-off-by: cosmic-snow <134004613+cosmic-snow@users.noreply.github.com>
2023-06-18 14:10:29 -04:00
Adam Treat
42e8049564 Bump version and new release notes for metal bugfix edition. 2023-06-16 17:43:10 -04:00
Adam Treat
e2c807d4df Always install metal on apple. 2023-06-16 17:24:20 -04:00
Adam Treat
d5179ac0c0 Fix cmake build. 2023-06-16 17:18:17 -04:00
Adam Treat
d4283c0053 Fix metal and replit. 2023-06-16 17:13:49 -04:00
Adam Treat
0a0d4a714e New release and bump the version. 2023-06-16 15:20:23 -04:00
Adam Treat
782e1e77a4 Fix up model names that don't begin with 'ggml-' 2023-06-16 14:43:14 -04:00
Adam Treat
b39a7d4fd9 Fix json. 2023-06-16 14:21:20 -04:00
Adam Treat
6690b49a9f Converts the following to Q4_0
* Snoozy
* Nous Hermes
* Wizard 13b uncensored

Uses the filenames from actual download for these three.
2023-06-16 14:12:56 -04:00
AT
a576220b18
Support loading files if 'ggml' is found anywhere in the name not just at (#1001)
the beginning and add deprecated flag to models.json so older versions will
show a model, but later versions don't. This will allow us to transition
away from models < ggmlv2 and still allow older installs of gpt4all to work.
2023-06-16 11:09:33 -04:00
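The relaxed filename check this commit describes could be sketched as follows; the function name is a placeholder, and the deprecated-flag handling in models.json is separate from this snippet.

```cpp
// Illustrative sketch: accept a model file if "ggml" appears anywhere in the
// name, rather than requiring the old "ggml-" prefix at the start.
#include <string>

bool looksLikeGgmlModel(const std::string &fileName) {
    // A prefix-only check would have been something like:
    //   fileName.rfind("ggml-", 0) == 0
    return fileName.find("ggml") != std::string::npos;
}
```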
Adam Treat
8953b7f6a6 Fix regression in checked state of db and network. 2023-06-13 20:08:46 -04:00
Aaron Miller
88616fde7f
llmodel: change tokenToString to not use string_view (#968)
Fixes a definite use-after-free and likely avoids some other potential ones. A std::string converts to a std::string_view automatically, but as soon as the std::string in question goes out of scope it is freed and the string_view points at freed memory. This is *mostly* fine when returning a reference to the tokenizer's internal vocab table, but it's, imo, too easy to return a reference to a dynamically constructed string, as replit does (and unfortunately needs to do in order to convert the internal whitespace replacement symbol back to a space).
2023-06-13 07:14:02 -04:00
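A minimal sketch of the dangling-view hazard described in that commit message (illustrative only, not the llmodel code):

```cpp
// Returning a std::string_view into a locally constructed std::string leaves
// the view dangling as soon as the function returns.
#include <iostream>
#include <string>
#include <string_view>

std::string_view tokenToStringBad(int token) {
    std::string s = "token-" + std::to_string(token); // local storage
    return s;                                         // dangling view: s is destroyed here
}

std::string tokenToStringGood(int token) {
    return "token-" + std::to_string(token);          // caller owns the bytes
}

int main() {
    std::cout << tokenToStringGood(42) << '\n';
    // tokenToStringBad(42) would hand back a view into freed memory (undefined behaviour).
}
```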
Adam Treat
68ff7001ad Bugfixes for prompt syntax highlighting. 2023-06-12 05:55:14 -07:00
Adam Treat
60d95cdd9b Fix some bugs with bash syntax and add some C23 keywords. 2023-06-12 05:08:18 -07:00
Adam Treat
e986f18904 Add c++/c highlighting support. 2023-06-12 05:08:18 -07:00
Adam Treat
ae46234261 Spelling error. 2023-06-11 14:20:05 -07:00
Adam Treat
318c51c141 Add code blocks and python syntax highlighting. 2023-06-11 14:20:05 -07:00
Adam Treat
b67cba19f0 Don't interfere with selection. 2023-06-11 14:20:05 -07:00
Adam Treat
50c5b82e57 Clean up the context links a bit. 2023-06-11 14:20:05 -07:00
AT
a9c2f47303
Add new solution for context links that does not force regular markdown (#938)
in responses which is disruptive to code completions in responses.
2023-06-10 10:15:38 -04:00
Aaron Miller
d3ba1295a7
Metal+LLama take two (#929)
Support latest llama with Metal
---------

Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-09 16:48:46 -04:00
Adam Treat
b162b5c64e Revert "llama on Metal (#885)"
This reverts commit c55f81b860.
2023-06-09 15:08:46 -04:00
Aaron Miller
c55f81b860
llama on Metal (#885)
Support latest llama with Metal

---------

Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-09 14:58:12 -04:00
pingpongching
0d0fae0ca8 Change the default values for generation in GUI 2023-06-09 08:51:09 -04:00
Adam Treat
8fb73c2114 Forgot to bump. 2023-06-09 08:45:31 -04:00
Richard Guo
be2310322f update models json with replit model 2023-06-09 08:44:46 -04:00
Andriy Mulyar
eb26293205
Update CollectionsDialog.qml (#856)
Phrasing for localdocs

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-08 13:44:17 -04:00
Richard Guo
c4706d0c14
Replit Model (#713)
* porting over replit code model to gpt4all

* replaced memory with kv_self struct

* continuing debug

* welp it built but a lot of sus things

* working model loading and somewhat working generate.. need to format response?

* revert back to semi working version

* finally got rid of weird formatting

* figured out problem is with python bindings - this is good to go for testing

* addressing PR feedback

* output refactor

* fixed prompt response collection

* cleanup

* addressing PR comments

* building replit backend with new ggmlver code

* chatllm replit and clean python files

* cleanup

* updated replit to match new llmodel api

* match llmodel api and change size_t to Token

* resolve PR comments

* replit model commit comment
2023-06-06 17:09:00 -04:00
Adam Treat
fdffad9efe New release notes 2023-06-05 14:55:59 -04:00
Adam Treat
f5bdf7c94c Bump the version. 2023-06-05 14:32:00 -04:00
Andriy Mulyar
d8e821134e
Revert "Fix bug with resetting context with chatgpt model." (#859)
This reverts commit 031d7149a7.
2023-06-05 14:25:37 -04:00