Adam Treat
908aec27fe
Always save chats to disk, but save them as text by default. This also changes
the UI behavior to always open a 'New Chat' and set it as current instead
of setting a restored chat as current. This improves usability by not requiring
the user to wait if they want to immediately start chatting.
2023-10-12 07:52:11 -04:00
cebtenzzre
aed2068342
python: always check status code of HTTP responses (#1502)
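The pattern being enforced is roughly the one below; this is a sketch assuming the bindings download models with the `requests` library, and the URL and filename are illustrative:
```python
import requests

# Illustrative URL; the real endpoint lives in the bindings' download code.
url = 'https://example.com/models/some-model.gguf'
response = requests.get(url, stream=True)
# Raise an HTTPError on 4xx/5xx instead of silently writing an error page
# to disk as if it were a model file.
response.raise_for_status()
with open('some-model.gguf', 'wb') as f:
    for chunk in response.iter_content(chunk_size=1 << 20):
        f.write(chunk)
```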
2023-10-11 18:11:28 -04:00
Aaron Miller
afaa291eab
python bindings should be quiet by default
* disable llama.cpp logging unless GPT4ALL_VERBOSE_LLAMACPP envvar is
nonempty
* make the verbose flag for retrieve_model default to false (but keep it
overridable via the GPT4All constructor)
You should be able to run a basic test:
```python
import gpt4all
model = gpt4all.GPT4All('/Users/aaron/Downloads/rift-coder-v0-7b-q4_0.gguf')
print(model.generate('def fib(n):'))
```
and see no non-model output when successful
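To get the low-level output back, the commit describes two switches; a minimal sketch, assuming the constructor keyword is spelled `verbose` as the commit text implies:
```python
import os

# Re-enable llama.cpp logging via the environment variable from this commit
# (set it before the model is loaded).
os.environ['GPT4ALL_VERBOSE_LLAMACPP'] = '1'

import gpt4all

# Per the commit message, verbosity is also overridable in the constructor;
# the kwarg name `verbose` is an assumption here.
model = gpt4all.GPT4All('rift-coder-v0-7b-q4_0.gguf', verbose=True)
```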
2023-10-11 14:14:36 -07:00
cebtenzzre
7b611b49f2
llmodel: print an error if the CPU does not support AVX (#1499)
2023-10-11 15:09:40 -04:00
cebtenzzre
f81b4b45bf
python: support Path in GPT4All.__init__ (#1462)
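A quick illustration of the change; this assumes the `model_path` keyword, and the model name and directory are placeholders:
```python
from pathlib import Path

from gpt4all import GPT4All

# A pathlib.Path now works where a string path was previously required.
# Model name and directory are illustrative.
model = GPT4All('rift-coder-v0-7b-q4_0.gguf',
                model_path=Path.home() / 'Downloads')
```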
2023-10-11 14:12:40 -04:00
Aaron Miller
043617168e
do not process prompts on gpu yet
2023-10-11 13:15:50 -04:00
Aaron Miller
64001a480a
mat*mat for q4_0, q8_0
2023-10-11 13:15:50 -04:00
cebtenzzre
04499d1c7d
chatllm: do not write uninitialized data to stream (#1486)
2023-10-11 11:31:34 -04:00
cebtenzzre
7a19047329
llmodel: do not call magic_match unless build variant is correct (#1488)
2023-10-11 11:30:48 -04:00
Adam Treat
df8528df73
Another attempted codespell fix.
2023-10-11 09:17:38 -04:00
Adam Treat
f0742c22f4
Restore state from text if necessary.
2023-10-11 09:16:02 -04:00
Adam Treat
35f9cdb70a
Do not delete saved chats if we fail to serialize properly.
2023-10-11 09:16:02 -04:00
cebtenzzre
9fb135e020
cmake: install the GPT-J plugin (#1487)
2023-10-10 15:50:03 -04:00
Cebtenzzre
df66226f7d
issue template: remove "Related Components" section
2023-10-10 10:39:28 -07:00
Aaron Miller
3c25d81759
make codespell happy
2023-10-10 12:00:06 -04:00
Jan Philipp Harries
4f0cee9330
added EM German Mistral Model
2023-10-10 11:44:43 -04:00
Adam Treat
56c0d2898d
Update the language here to avoid misunderstanding.
2023-10-06 14:38:42 -04:00
Adam Treat
b2cd3bdb3f
Fix crasher with an empty string for prompt template.
2023-10-06 12:44:53 -04:00
Cebtenzzre
5fe685427a
chat: clearer CPU fallback messages
2023-10-06 11:35:14 -04:00
Adam Treat
eec906aa05
Speculative fix for build on mac.
2023-10-05 18:37:33 -04:00
Aaron Miller
9325075f80
fix stray comma in models2.json
Signed-off-by: Aaron Miller <apage43@ninjawhale.com>
2023-10-05 18:32:23 -04:00
Adam Treat
a9acdd25de
Push a new version number for llmodel backend now that it is based on gguf.
2023-10-05 18:18:07 -04:00
Adam Treat
f028f67c68
Add starcoder, rift and sbert to our models2.json.
2023-10-05 18:16:19 -04:00
Aaron Miller
a10f3aea5e
python/embed4all: use gguf model, allow passing kwargs/overriding model
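Roughly what the updated interface allows, sketched under the assumption that `Embed4All` keeps its usual constructor-plus-`embed` shape and forwards extra kwargs to the underlying model:
```python
from gpt4all import Embed4All

# With no arguments this picks up the default (now gguf) embedding model;
# pass a model name to override it, and extra kwargs are forwarded.
embedder = Embed4All()
vector = embedder.embed('The quick brown fox jumps over the lazy dog')
print(len(vector))  # dimensionality of the returned embedding
```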
2023-10-05 18:16:19 -04:00
Cebtenzzre
8bb6a6c201
rebase on newer llama.cpp
2023-10-05 18:16:19 -04:00
Adam Treat
4528f73479
Reorder and refresh our models2.json.
2023-10-05 18:16:19 -04:00
Cebtenzzre
d87573ea75
remove old llama.cpp submodules
2023-10-05 18:16:19 -04:00
Cebtenzzre
cc6db61c93
backend: fix build with Visual Studio generator
Use the $<CONFIG> generator expression instead of CMAKE_BUILD_TYPE. This
is needed because Visual Studio is a multi-configuration generator, so
we do not know what the build type will be until `cmake --build` is
called.
Fixes #1470
2023-10-05 18:16:19 -04:00
Adam Treat
f605a5b686
Add q8_0 kernels to kompute shaders and bump to latest llama/gguf.
2023-10-05 18:16:19 -04:00
Cebtenzzre
1534df3e9f
backend: do not use Vulkan with non-LLaMA models
2023-10-05 18:16:19 -04:00
Cebtenzzre
672cb850f9
differentiate between init failure and unsupported models
2023-10-05 18:16:19 -04:00
Cebtenzzre
a5b93cf095
more accurate fallback descriptions
2023-10-05 18:16:19 -04:00
Cebtenzzre
75deee9adb
chat: make sure to clear fallback reason on success
2023-10-05 18:16:19 -04:00
Cebtenzzre
2eb83b9f2a
chat: report reason for fallback to CPU
2023-10-05 18:16:19 -04:00
Adam Treat
906699e8e9
Bump to latest llama/gguf branch.
2023-10-05 18:16:19 -04:00
Adam Treat
ea66669cef
Switch to new models2.json for new gguf release and bump our version to 2.5.0.
2023-10-05 18:16:19 -04:00
Cebtenzzre
088afada49
llamamodel: fix static vector in LLamaModel::endTokens
2023-10-05 18:16:19 -04:00
Adam Treat
b4d82ea289
Bump to the latest fixes for vulkan in llama.
2023-10-05 18:16:19 -04:00
Adam Treat
12f943e966
Fix regenerate button to be deterministic and bump the llama version to the latest we have for gguf.
2023-10-05 18:16:19 -04:00
Cebtenzzre
40c78d2f78
python binding: print debug message to stderr
2023-10-05 18:16:19 -04:00
Adam Treat
5d346e13d7
Add q6_k kernels for vulkan.
2023-10-05 18:16:19 -04:00
Adam Treat
4eefd386d0
Refactor for subgroups on mat * vec kernel.
2023-10-05 18:16:19 -04:00
Cebtenzzre
3c2aa299d8
gptj: remove unused variables
2023-10-05 18:16:19 -04:00
Cebtenzzre
f9deb87d20
convert scripts: add feed-forward length for better compatibility
This GGUF key is used by all llama.cpp models with upstream support.
2023-10-05 18:16:19 -04:00
Cebtenzzre
cc7675d432
convert scripts: make gptj script executable
2023-10-05 18:16:19 -04:00
Cebtenzzre
0493e6eb07
convert scripts: use bytes_to_unicode from transformers
2023-10-05 18:16:19 -04:00
Cebtenzzre
a49a1dcdf4
chatllm: grammar fix
2023-10-05 18:16:19 -04:00
Cebtenzzre
d5d72f0361
gpt-j: update inference to match latest llama.cpp insights
- Use F16 KV cache
- Store transposed V in the cache
- Avoid unnecessary Q copy
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
ggml upstream commit 0265f0813492602fec0e1159fe61de1bf0ccaf78
2023-10-05 18:16:19 -04:00
Cebtenzzre
050e7f076e
backend: port GPT-J to GGUF
2023-10-05 18:16:19 -04:00
Cebtenzzre
31b20f093a
modellist: fix the system prompt
2023-10-05 18:16:19 -04:00