Adam Treat
5f372bd881
Gracefully handle the case where a previous chat references a model that is no longer available.
2023-05-08 20:51:03 -04:00
Adam Treat
8c4b8f215f
Fix gptj to lower the memory requirements of the kv cache, and add versioning to the internal state so that such fixes can be handled smoothly in the future.
2023-05-08 17:23:02 -04:00
Adam Treat
3c30310539
Convert the old format properly.
2023-05-08 05:53:16 -04:00
Adam Treat
f291853e51
First attempt at providing a persistent chat list experience.
...
Limitations:
1) Context is not restored for gpt-j models
2) When you switch between different model types in an existing chat,
the context and the entire conversation are lost
3) The settings are not chat or conversation specific
4) The persisted chat files are very large because of how much data
the llama.cpp backend tries to persist. Need to investigate how
we can shrink this.
2023-05-04 15:31:41 -04:00
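A persistent chat list of the kind described above typically means one file per chat in a known directory, so the list can be rebuilt on startup by scanning that directory. The sketch below shows that layout under stated assumptions; `saveChat`, `listChats`, `loadChat`, and the tab-separated one-line-per-message format are hypothetical illustrations, not the app's actual format (which also persists model context, hence the file sizes noted in limitation 4).

```cpp
#include <filesystem>
#include <fstream>
#include <string>
#include <vector>

namespace fs = std::filesystem;

struct Message {
    std::string sender;
    std::string text;
};

// Write one chat as a <id>.chat file: one tab-separated message per line.
// (A real format would need to escape tabs/newlines inside message text.)
void saveChat(const fs::path &dir, const std::string &id,
              const std::vector<Message> &msgs) {
    fs::create_directories(dir);
    std::ofstream out(dir / (id + ".chat"));
    for (const auto &m : msgs)
        out << m.sender << '\t' << m.text << '\n';
}

// Rebuild the chat list on startup by scanning for *.chat files.
std::vector<std::string> listChats(const fs::path &dir) {
    std::vector<std::string> ids;
    if (!fs::exists(dir))
        return ids;
    for (const auto &entry : fs::directory_iterator(dir))
        if (entry.path().extension() == ".chat")
            ids.push_back(entry.path().stem().string());
    return ids;
}

// Load one chat back into memory.
std::vector<Message> loadChat(const fs::path &dir, const std::string &id) {
    std::vector<Message> msgs;
    std::ifstream in(dir / (id + ".chat"));
    std::string line;
    while (std::getline(in, line)) {
        auto tab = line.find('\t');
        if (tab == std::string::npos)
            continue; // skip malformed lines
        msgs.push_back({line.substr(0, tab), line.substr(tab + 1)});
    }
    return msgs;
}
```

The file-per-chat layout keeps saves incremental (only the active chat is rewritten) and makes deleting a chat a single file removal.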
Adam Treat
081d32bd97
Restore the model when switching chats.
2023-05-03 12:45:14 -04:00
Adam Treat
4a09f0f0ec
More extensive usage stats to help diagnose errors and problems in the UI.
2023-05-02 20:31:17 -04:00
Adam Treat
21dc522200
Don't block the GUI when reloading via the combobox.
2023-05-02 15:02:25 -04:00
Adam Treat
f13f4f4700
Generate names via the LLM.
2023-05-02 11:19:17 -04:00
Adam Treat
412cad99f2
Hot swapping of conversations. Context is destroyed for now.
2023-05-01 20:27:07 -04:00
Adam Treat
c0d4a9d426
Continue to shrink the API surface between QML and the backend.
2023-05-01 12:30:54 -04:00
Adam Treat
ed59190e48
Consolidate these into a single API from QML to the backend.
2023-05-01 12:24:51 -04:00
Adam Treat
4d87c46948
Major refactor in prep for multiple conversations.
2023-05-01 09:10:05 -04:00
Adam Treat
d1e3198b65
Add new C++ version of the chat model. Getting ready for chat history.
2023-04-30 20:28:43 -04:00