gpt4all/llmodel
Latest commit: 2023-05-08 17:23:02 -04:00
| File | Last commit | Date |
|------|-------------|------|
| llama.cpp@03ceb39c1e (submodule) | Update to the alibi version that Zach made. | 2023-05-08 12:27:01 -04:00 |
| CMakeLists.txt | Scaffolding for the mpt <-> ggml project. | 2023-05-08 12:21:30 -04:00 |
| gptj.cpp | Fix gptj to have lower memory requirements for the kv cache, and add versioning to the internal state to smoothly handle such a fix in the future. | 2023-05-08 17:23:02 -04:00 |
| gptj.h | Persistent state for gpt-j models too. | 2023-05-05 10:00:17 -04:00 |
| llamamodel.cpp | First attempt at providing a persistent chat list experience. | 2023-05-04 15:31:41 -04:00 |
| llamamodel.h | First attempt at providing a persistent chat list experience. | 2023-05-04 15:31:41 -04:00 |
| llmodel_c.cpp | feat: load model | 2023-05-08 12:21:30 -04:00 |
| llmodel_c.h | feat: load model | 2023-05-08 12:21:30 -04:00 |
| llmodel.h | Include `<cstdint>` in llmodel.h. | 2023-05-04 20:36:19 -04:00 |
| mpt.cpp | Match Helly's implementation of the kv cache. | 2023-05-08 12:21:30 -04:00 |
| mpt.h | Scaffolding for the mpt <-> ggml project. | 2023-05-08 12:21:30 -04:00 |
| utils.cpp | Move the backend code into its own subdirectory and make it a shared library. Begin fleshing out the C API wrapper that bindings can use. | 2023-04-26 08:22:38 -04:00 |
| utils.h | Move the backend code into its own subdirectory and make it a shared library. Begin fleshing out the C API wrapper that bindings can use. | 2023-04-26 08:22:38 -04:00 |
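The gptj.cpp commit above pairs a kv-cache memory fix with versioning of the serialized internal state, so saved states written before the fix can be detected rather than misread. A minimal sketch of that general pattern, assuming a simple length-prefixed layout; the version constant, field names, and format here are illustrative, not gpt4all's actual state format:

```cpp
#include <cstdint>
#include <istream>
#include <ostream>
#include <vector>

// Illustrative only: a version tag written ahead of the payload lets
// restoreState() recognize layouts produced by older or newer builds.
constexpr uint32_t kStateVersion = 2; // bump whenever the layout changes

void saveState(std::ostream &out, const std::vector<float> &kvCache) {
    const uint32_t version = kStateVersion;
    const uint64_t count = kvCache.size();
    out.write(reinterpret_cast<const char *>(&version), sizeof version);
    out.write(reinterpret_cast<const char *>(&count), sizeof count);
    out.write(reinterpret_cast<const char *>(kvCache.data()),
              static_cast<std::streamsize>(count * sizeof(float)));
}

bool restoreState(std::istream &in, std::vector<float> &kvCache) {
    uint32_t version = 0;
    in.read(reinterpret_cast<char *>(&version), sizeof version);
    if (!in || version > kStateVersion)
        return false; // unreadable, or written by a newer build
    // A state older than kStateVersion could be migrated here instead.
    uint64_t count = 0;
    in.read(reinterpret_cast<char *>(&count), sizeof count);
    kvCache.resize(count);
    in.read(reinterpret_cast<char *>(kvCache.data()),
            static_cast<std::streamsize>(count * sizeof(float)));
    return static_cast<bool>(in);
}
```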
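The llmodel_c.{h,cpp} and utils.{h,cpp} entries describe moving the backend into a shared library with a C API wrapper that bindings can use. A hypothetical sketch of that pattern follows: an opaque handle plus extern "C" free functions, so bindings avoid C++ name mangling. The type and function names are illustrative stand-ins, not the declarations actually exported by llmodel_c.h:

```cpp
// Self-contained illustration; in practice this would be compiled into the
// shared library alongside the C++ backend.
#include <string>

// Stand-in for the C++ backend class (the real ones live in llmodel.h,
// gptj.h, etc.); hypothetical, not an actual gpt4all class.
class Model {
public:
    bool load(const std::string &path) { return !path.empty(); } // placeholder
};

// The C ABI: an opaque handle plus plain functions. Bindings see only these
// symbols, never the C++ class, so name mangling stays an internal detail.
extern "C" {

typedef void *model_handle;

model_handle model_create(void)    { return new Model; }

void model_destroy(model_handle h) { delete static_cast<Model *>(h); }

int model_load(model_handle h, const char *path) {
    return static_cast<Model *>(h)->load(path) ? 1 : 0;
}

} // extern "C"
```

A binding then loads the shared library through its FFI and calls only these C symbols, which keeps it independent of the compiler and C++ ABI used to build the backend.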