Jared Van Bortel
d1c56b8b28
Implement configurable context length (#1749)
2023-12-16 17:58:15 -05:00
Aaron Miller
ad0e7fd01f
chatgpt: ensure no extra newline in header
2023-07-12 10:53:25 -04:00
Adam Treat
0d726b22b8
When we explicitly cancel an operation we shouldn't throw an error.
2023-07-12 10:34:10 -04:00
Adam Treat
34a3b9c857
Don't block on exit when not connected.
2023-07-11 12:37:21 -04:00
Adam Treat
4f9e489093
Don't use a local event loop which can lead to recursion and crashes.
2023-07-11 10:08:03 -04:00
Adam Treat
8467e69f24
Check that we're not null. This is necessary because the loop can make us recursive. Need to fix that.
2023-07-10 17:30:08 -04:00
Aaron Miller
b19a3e5b2c
add requiredMem method to llmodel impls
Most of these can just shortcut out of the model loading logic. llama is a bit worse to deal with because we submodule it, so I have to at least parse the hparams; then I just use the size on disk as an estimate for the mem size (which seems reasonable since we mmap() the llama files anyway).
2023-06-26 18:27:58 -03:00
Juuso Alasuutari
81fdc28e58
llmodel: constify LLModel::threadCount()
2023-05-22 08:54:46 -04:00
Adam Treat
79d6243fe1
Use the default for max_tokens to avoid errors.
2023-05-16 10:31:55 -04:00
Adam Treat
f931de21c5
Add save/restore to chatgpt chats and allow serialize/deserialize from disk.
2023-05-16 10:31:55 -04:00
Adam Treat
dd27c10f54
Preliminary support for chatgpt models.
2023-05-16 10:31:55 -04:00