Aaron Miller
b14953e136
sampling: remove incorrect offset for n_vocab (#900)
No effect, but avoids a *potential* bug later if we use
actualVocabSize - which is for when a model has a larger
embedding tensor/# of output logits than actually trained tokens,
to allow room for adding extras in finetuning - presently all of our
models have had "placeholder" tokens in the vocab so this hasn't broken
anything, but if the sizes did differ we want the equivalent of
`logits[:actualVocabSize]` (the start point is unchanged), not
`logits[-actualVocabSize:]` (this.)
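The slice-direction point in this commit message can be illustrated with a small sketch (the list values and `actual_vocab_size` here are made-up illustration data, not taken from the actual commit):

```python
# Illustration: a logits buffer with room for 2 extra finetuning slots
# beyond the 3 actually-trained tokens (made-up numbers).
logits = [0.1, 0.9, 0.3, 0.0, 0.0]
actual_vocab_size = 3  # number of genuinely trained tokens

trained = logits[:actual_vocab_size]   # keeps the start point unchanged
shifted = logits[-actual_vocab_size:]  # would skip the first two logits

print(trained)  # [0.1, 0.9, 0.3] -- the trained tokens
print(shifted)  # [0.3, 0.0, 0.0] -- includes the placeholder slots
```

Only the first form reads the logits that correspond to trained token ids; the second silently shifts the start whenever the buffer is larger than the trained vocabulary.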
2023-06-08 11:08:10 -07:00
Andriy Mulyar
eb26293205
Update CollectionsDialog.qml (#856)
Phrasing for localdocs
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-08 13:44:17 -04:00
Claudius Ellsel
39a7c35d03
Update README.md (#906)
Add PyPI link and add clickable, more specific link to documentation
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
2023-06-08 13:43:31 -04:00
Adam Treat
010a04d96f
Revert "Synced llama.cpp.cmake with upstream (#887)"
This reverts commit 89910c7ca8.
2023-06-08 07:23:41 -04:00
Adam Treat
7e304106cc
Fix for windows.
2023-06-07 12:58:51 -04:00
niansa/tuxifan
89910c7ca8
Synced llama.cpp.cmake with upstream ( #887 )
2023-06-07 09:18:22 -07:00
Richard Guo
c4706d0c14
Replit Model (#713)
* porting over replit code model to gpt4all
* replaced memory with kv_self struct
* continuing debug
* welp it built but a lot of sus things
* working model loading and somewhat working generate.. need to format response?
* revert back to semi working version
* finally got rid of weird formatting
* figured out problem is with python bindings - this is good to go for testing
* addressing PR feedback
* output refactor
* fixed prompt response collection
* cleanup
* addressing PR comments
* building replit backend with new ggmlver code
* chatllm replit and clean python files
* cleanup
* updated replit to match new llmodel api
* match llmodel api and change size_t to Token
* resolve PR comments
* replit model commit comment
2023-06-06 17:09:00 -04:00
Andriy Mulyar
ef35eb496f
Supports downloading officially supported models not hosted on gpt4all R2
2023-06-06 16:21:02 -04:00
Andriy Mulyar
266f13aee9
Update gpt4all_faq.md (#861)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-05 15:41:30 -04:00
Ettore Di Giacinto
44dc1ade62
Set thread counts after loading model (#836)
2023-06-05 21:35:40 +02:00
Adam Treat
fdffad9efe
New release notes
2023-06-05 14:55:59 -04:00
Adam Treat
f5bdf7c94c
Bump the version.
2023-06-05 14:32:00 -04:00
Adam Treat
c5de9634c9
Fix llama models on linux and windows.
2023-06-05 14:31:15 -04:00
Andriy Mulyar
d8e821134e
Revert "Fix bug with resetting context with chatgpt model." (#859)
This reverts commit 031d7149a7.
2023-06-05 14:25:37 -04:00
Adam Treat
ecfeba2710
Revert "Speculative fix for windows llama models with installer."
This reverts commit c99e03e22e.
2023-06-05 14:25:01 -04:00
Adam Treat
c99e03e22e
Speculative fix for windows llama models with installer.
2023-06-05 13:21:08 -04:00
Andriy Mulyar
01071efc9c
Documentation for model sideloading (#851)
* Documentation for model sideloading
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-05 12:35:02 -04:00
AT
ec8618628c
Update README.md (#854)
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-06-05 12:25:51 -04:00
AT
da757734ea
Release notes for version 2.4.5 (#853)
2023-06-05 12:10:17 -04:00
Richard Guo
f5f9f28f74
updated pypi version
2023-06-05 12:02:25 -04:00
Adam Treat
8a9ad258f4
Fix symbol resolution on windows.
2023-06-05 11:19:02 -04:00
Adam Treat
969ff0ee6b
Fix installers for windows and linux.
2023-06-05 10:50:16 -04:00
Adam Treat
1d4c8e7091
These need to be installed for them to be packaged and work for both mac and windows.
2023-06-05 09:57:00 -04:00
Adam Treat
3a9cc329b1
Fix compile on mac.
2023-06-05 09:31:57 -04:00
Adam Treat
25eec33bda
Try and fix mac.
2023-06-05 09:30:50 -04:00
Adam Treat
91f20becef
Need this so the linux installer packages it as a dependency.
2023-06-05 09:23:43 -04:00
Adam Treat
812b2f4b29
Make installers work with mac/windows for big backend change.
2023-06-05 09:23:17 -04:00
Andriy Mulyar
2e5b114364
Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-04 20:48:45 -04:00
Andriy Mulyar
0db6fd6867
Update models.json (#838)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-04 20:36:12 -04:00
AT
d5cf584f8d
Remove older models that are not as popular. (#837)
* Remove older models that are not as popular.
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-04 20:26:43 -04:00
Adam Treat
f73333c6a1
Update to latest llama.cpp
2023-06-04 19:57:34 -04:00
Adam Treat
301d2fdbea
Fix up for newer models on reset context. This fixes the model from totally failing after a reset context.
2023-06-04 19:31:20 -04:00
Adam Treat
bdba2e8de6
Allow for download of models hosted on third party hosts.
2023-06-04 19:02:43 -04:00
Adam Treat
5073630759
Try again with the url.
2023-06-04 18:39:36 -04:00
Adam Treat
6ba37f47c1
Trying out a new feature to download directly from huggingface.
2023-06-04 18:34:04 -04:00
AT
be3c63ffcd
Update build_and_run.md (#834)
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-06-04 15:39:32 -04:00
AT
5f95aa9fc6
We no longer have an avx_only repository, and there is better error handling for minimum hardware requirements. (#833)
2023-06-04 15:28:58 -04:00
Adam Treat
9f590db98d
Better error handling when the model fails to load.
2023-06-04 14:55:05 -04:00
AT
bbe195ee02
Backend prompt dedup (#822)
* Deduplicated prompt() function code
2023-06-04 08:59:24 -04:00
Ikko Eltociear Ashimine
945297d837
Update README.md
huggingface -> Hugging Face
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
2023-06-04 08:46:37 -04:00
Adam Treat
bc624f5389
Add a collection immediately and show a placeholder + busy indicator in localdocs settings.
2023-06-03 10:09:17 -04:00
Peter Gagarinov
23391d44e0
Only default mlock on macOS where swap seems to be a problem
Repeating the change that was once made in https://github.com/nomic-ai/gpt4all/pull/663 but was then overridden by 48275d0dcc
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
2023-06-03 07:51:18 -04:00
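The platform-conditional default described in the commit above could be sketched roughly as follows (a hypothetical Python sketch of the idea only, not the project's actual code; `default_use_mlock` is an invented name):

```python
import platform
from typing import Optional

def default_use_mlock(system: Optional[str] = None) -> bool:
    """Default mlock on only where swapping of model weights has
    been reported as a problem (macOS); leave it off elsewhere."""
    if system is None:
        system = platform.system()  # e.g. "Darwin", "Linux", "Windows"
    return system == "Darwin"
```

The point of the change is that the default differs per platform; users on any platform can still opt in or out explicitly.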
Adam Treat
55055ca983
Add the ability to change the directory via text field not just 'browse' button.
2023-06-02 22:52:55 -04:00
Adam Treat
25ee51e2ca
Actually use the theme dark color for window background.
2023-06-02 20:19:50 -04:00
Adam Treat
d9ddd373d6
Prevent flashing of white on resize.
2023-06-02 20:16:11 -04:00
Adam Treat
8aba76ad05
Min constraints on about dialog.
2023-06-02 20:05:47 -04:00
Adam Treat
a7f74e9d01
Some tweaks to UI to make window resizing smooth and flow nicely.
2023-06-02 20:00:28 -04:00
niansa/tuxifan
f3564ac6b9
Fixed tons of warnings and clazy findings (#811)
2023-06-02 15:46:41 -04:00
niansa/tuxifan
d6a70ddb5f
Fixed model type for GPT-J (#815)
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-02 15:46:33 -04:00
Richard Guo
9d2b20f6cd
small typo fix
2023-06-02 12:32:26 -04:00