Most of these can just shortcut out of the model-loading logic. llama is a bit worse to deal with because we submodule it, so I have to at least parse the hparams; then I just use the size on disk as an estimate for the memory size (which seems reasonable since we mmap() the llama files anyway).
* Add gpt4all-bindings/cli/README.md
* Unify version information
  - Was previously split; now one is based on the other
- Add VERSION_INFO as the "source of truth":
- Modelled after sys.version_info.
- Implemented as a tuple, because it's much easier for (partial)
programmatic comparison.
- Previous API is kept intact.
* Add gpt4all-bindings/cli/developer_notes.md
- A few notes on what's what, especially regarding docs
* Add gpt4all-bindings/python/docs/gpt4all_cli.md
- The CLI user documentation
* Bump CLI version to 0.3.5
* Finalise docs & add to index.md
- Amend where necessary
- Fix typo in gpt4all_cli.md
- Mention and add link to CLI doc in index.md
* Add docstrings to gpt4all-bindings/cli/app.py
* Better 'groovy' link & fix typo
- Documentation: point to the Hugging Face model card for 'groovy'
- Correct typo in app.py
- Add some notes about common Windows problems when trying to make a local build (MinGW and MSVC).
Signed-off-by: cosmic-snow <134004613+cosmic-snow@users.noreply.github.com>
* generator method
* cleanup
* bump version number for clarity
* added errors='replace' in decode to avoid UnicodeDecodeError
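The decode change above amounts to the following (the byte string is illustrative): invalid UTF-8 can show up when a multi-byte character is split across generated tokens, and a strict decode would raise.

```python
# A truncated multi-byte sequence: 0xC3 starts a 2-byte UTF-8 character
# but the continuation byte is missing.
raw = b"caf\xc3"

# errors='replace' substitutes U+FFFD instead of raising UnicodeDecodeError:
text = raw.decode("utf-8", errors="replace")
assert text == "caf\ufffd"
```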
* revert back to _build_prompt
* porting over replit code model to gpt4all
* replaced memory with kv_self struct
* continuing debug
* welp, it built, but a lot of sus things
* working model loading and somewhat working generate... need to format response?
* revert back to semi working version
* finally got rid of weird formatting
* figured out problem is with python bindings - this is good to go for testing
* addressing PR feedback
* output refactor
* fixed prompt response collection
* cleanup
* addressing PR comments
* building replit backend with new ggml version code
* chatllm replit and clean python files
* cleanup
* updated replit to match new llmodel api
* match llmodel api and change size_t to Token
* resolve PR comments
* replit model commit comment