* new skeleton
Signed-off-by: Max Cembalest <max@nomic.ai>
* v3 docs
Signed-off-by: Max Cembalest <max@nomic.ai>
---------
Signed-off-by: Max Cembalest <max@nomic.ai>
* rebase onto llama.cpp commit ggerganov/llama.cpp@d46dbc76f
* support for CUDA backend (enabled by default)
* partial support for Occam's Vulkan backend (disabled by default)
* partial support for HIP/ROCm backend (disabled by default)
* sync llama.cpp.cmake with upstream llama.cpp CMakeLists.txt
* changes to GPT4All backend, bindings, and chat UI to handle the choice of llama.cpp backend (Kompute or CUDA); see the device-selection sketch after this list
* ship CUDA runtime with installed version
* make device selection in the UI on macOS actually take effect
* model whitelist: remove dbrx, mamba, persimmon, plamo; add internlm and starcoder2
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
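A minimal sketch of how a bindings user might pick between the llama.cpp backends listed above; the exact `device` string values and the model filename are assumptions modeled on the backend names, not a confirmed API:

```python
from gpt4all import GPT4All

# Request the CUDA backend explicitly; the "cuda" value is an assumption
# based on the backend names above. The model filename is illustrative.
model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf", device="cuda")

# Omitting device falls back to the default backend selection
# (Kompute where available, otherwise CPU).
default_model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")
```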
* llamamodel: only print device used in verbose mode
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
* python: expose backend and device via GPT4All properties
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
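The property names below come from the commit subject; the example values are illustrative assumptions:

```python
from gpt4all import GPT4All

model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")  # illustrative filename

# Introspect which llama.cpp backend and device the model actually loaded on.
print(model.backend)  # e.g. "cuda" or "kompute" (values assumed)
print(model.device)   # e.g. a GPU name, or None when running on CPU
```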
* backend: const correctness fixes
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
* python: bump version
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
* python: typing fixups
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
* python: fix segfault with closed GPT4All
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
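A minimal sketch of the failure mode being fixed, assuming the bindings expose a `close()` method (implied by the commit subject):

```python
from gpt4all import GPT4All

model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")  # illustrative filename
model.close()

# Before the fix, using a closed model could touch a freed native handle
# and segfault; after it, the call should fail as an ordinary Python error.
try:
    model.generate("hello")
except Exception as exc:
    print(f"closed model rejected cleanly: {exc!r}")
```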
---------
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
* actually submit larger batches when n_ctx is increased (see the sketch below)
* fix crash when llama_tokenize returns no tokens
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
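A sketch of the larger-context case the first fix addresses; the `n_ctx` constructor parameter is an assumption modeled on the llama.cpp option it maps to, and the model filename is illustrative:

```python
from gpt4all import GPT4All

# With the batching fix, prompt ingestion actually uses batches sized for
# the enlarged context window instead of the old default.
model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf", n_ctx=4096)
out = model.generate("Summarize this long document: ...", max_tokens=256)
```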
Other changes:
* fix memory leak in llmodel_available_gpu_devices
* drop model argument from llmodel_available_gpu_devices
* breaking: make GPT4All/Embed4All arguments past model_name keyword-only
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
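The keyword-only change means positional arguments after `model_name` now raise `TypeError`; a minimal before/after sketch (model filename illustrative):

```python
from gpt4all import GPT4All

# OK: everything past model_name is passed by keyword.
model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf", device="gpu")

# Breaks after this change: a second positional argument raises TypeError.
# model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf", "gpu")
```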
* make sure encoding is identity for Range requests
* use a .part file for partial downloads
* verify using file size and MD5 from models3.json
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
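A sketch of the download scheme described above; the helper name and the expected-size/MD5 parameters are assumptions modeled on the fields in models3.json:

```python
import hashlib
import os

import requests

def download_model(url: str, dest: str, expected_size: int, expected_md5: str) -> None:
    # Resume into a .part file using an identity-encoded Range request, so
    # byte offsets match the bytes already on disk.
    part = dest + ".part"
    offset = os.path.getsize(part) if os.path.exists(part) else 0
    headers = {"Range": f"bytes={offset}-", "Accept-Encoding": "identity"}
    with requests.get(url, headers=headers, stream=True, timeout=30) as resp:
        resp.raise_for_status()
        with open(part, "ab") as f:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                f.write(chunk)

    # Verify size and MD5 against the values published in models3.json
    # before promoting the .part file to its final name.
    if os.path.getsize(part) != expected_size:
        raise RuntimeError("size mismatch; keeping .part file for retry")
    md5 = hashlib.md5()
    with open(part, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)
    if md5.hexdigest() != expected_md5:
        raise RuntimeError("MD5 mismatch")
    os.replace(part, dest)
```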
* Makefiles, black, isort
* Black and isort
* unit tests and generation method
* chat context provider (see the chat_session sketch after this list)
* context does not reset
* Current state
* Fixup
* Python bindings with unit tests
* GPT4All Python Bindings: chat contexts, tests
* New python bindings and backend fixes
* Black and Isort
* Fix documentation error
* preserved n_predict for backward compatibility with LangChain
---------
Co-authored-by: Adam Treat <treat.adam@gmail.com>
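A sketch of the chat-context behavior these commits describe, assuming the bindings' `chat_session` context manager corresponds to the "chat context provider" item above (model filename illustrative):

```python
from gpt4all import GPT4All

model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")

# Inside a session the conversation context is preserved across calls
# rather than being reset between prompts.
with model.chat_session():
    model.generate("My name is Ada. Please remember that.")
    print(model.generate("What is my name?"))  # can refer back to "Ada"
```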
* Update gpt4all_chat.md
Cleaned up the sideloading section and made it more readable, moved the Replit architecture to the supported list, and renamed all instances of "ggML" to "GGML" for consistency.
Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>
* Removed the prefixing part
Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>
* Bump version
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* generator method
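A sketch of the generator method, assuming a `streaming` flag on `generate` that yields tokens as they are produced (model filename illustrative):

```python
from gpt4all import GPT4All

model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")

# streaming=True returns an iterator of tokens instead of one final string.
for token in model.generate("Write a haiku about GPUs", streaming=True):
    print(token, end="", flush=True)
print()
```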
* cleanup
* bump version number for clarity
* added errors="replace" in decode to avoid UnicodeDecodeError
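The pattern this commit adds is standard Python: `errors="replace"` substitutes U+FFFD for invalid byte sequences instead of raising:

```python
# Model output can end partway through a multi-byte UTF-8 sequence;
# errors="replace" turns the bad bytes into U+FFFD instead of raising
# UnicodeDecodeError.
raw = b"valid text \xe2\x82"  # truncated 3-byte sequence
print(raw.decode("utf-8", errors="replace"))  # -> 'valid text \ufffd'
```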
* revert to _build_prompt