update readme with stablecode refs

James Ravenscroft 2023-08-10 08:54:16 +01:00
parent 84869ff0f3
commit 18faa3e5f6
2 changed files with 17 additions and 6 deletions


@@ -1,10 +1,20 @@
# Models Directory
## StableCode Instruct (State-of-the-Art for Low-Spec Machines - Released 8th August 2023)
[StableCode](https://stability.ai/blog/stablecode-llm-generative-ai-coding) Instruct is a new model from [Stability.ai](https://stability.ai/) that provides reasonable autocomplete suggestions in approximately 3GiB of RAM. A minimal invocation is sketched below the table.
| Model Name | RAM Requirement | Direct Download | HF Project Link |
|---------------------|-----------------|-----------------|-----------------|
| StableCode | ~3GiB | [:arrow_down:](https://huggingface.co/TheBloke/stablecode-instruct-alpha-3b-GGML/blob/main/stablecode-instruct-alpha-3b.ggmlv1.q4_0.bin) | [:hugs:](https://huggingface.co/TheBloke/stablecode-instruct-alpha-3b-GGML/) |
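For example, a minimal local setup might look like the sketch below. The `-f` model-file flag is an assumption based on typical Turbopilot invocations; check `./turbopilot --help` for the exact option names in your build.

```bash
# fetch TheBloke's 4-bit quantized StableCode Instruct model
wget -P ./models \
  https://huggingface.co/TheBloke/stablecode-instruct-alpha-3b-GGML/resolve/main/stablecode-instruct-alpha-3b.ggmlv1.q4_0.bin

# start the server with the StableCode model family selected
# (-f as the model-file flag is an assumption - verify with --help)
./turbopilot -m stablecode -f ./models/stablecode-instruct-alpha-3b.ggmlv1.q4_0.bin
```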
## "Coder" family models
WizardCoder, StarCoder and SantaCoder are the current "state-of-the-art" autocomplete models.
### SantaCoder (Small Model, Reasonable on Lower-Spec Machines - Released 13/4/2023)
[SantaCoder](https://huggingface.co/bigcode/santacoder) is a smaller version of the StarCoder and WizardCoder family with only 1.1 billion parameters. The model is trained with a fill-in-the-middle objective, allowing it to be used to auto-complete function parameters.
@@ -18,7 +28,7 @@ This model is primarily trained on Python, Java and JavaScript.
To run in Turbopilot, set the model type with `-m starcoder`.
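Once a model is loaded, Turbopilot serves completions over the same HTTP API that the fauxpilot plugin talks to, so you can smoke-test it with plain curl. The port and endpoint path below are assumptions based on common FauxPilot-compatible setups; adjust them to match your configuration.

```bash
# request a completion from the running server
# (port 18080 and the engine path are assumptions - check your server's startup log)
curl -s -X POST "http://localhost:18080/v1/engines/codegen/completions" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "def fibonacci(n):", "max_tokens": 64}'
```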
### WizardCoder 15B (Best Autocomplete Performance, Compute-Hungry - Released 15/6/2023)
[WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder) is the current SOTA autocomplete model. It is an updated version of StarCoder that achieves 57.1 pass@1 on the HumanEval benchmark (essentially, in 57% of cases it correctly solves a given challenge; read more about how this metric works in the scientific paper [here](https://arxiv.org/pdf/2107.03374.pdf)).
@@ -32,7 +42,7 @@ Even when quantized, WizardCoder is a large model that takes up a significant amount of RAM.
To run in Turbopilot, set the model type with `-m starcoder`.
### StarCoder (Released 4/5/2023)
[StarCoder](https://huggingface.co/blog/starcoder) held the previous title of state-of-the-art coding model back in May 2023. It is still a reasonably good model by comparison, but it is a similar size and has similar RAM and compute requirements to WizardCoder, so you may be better off just running that. Links below are provided for posterity.


@@ -9,10 +9,11 @@ TurboPilot is a self-hosted [copilot](https://github.com/features/copilot) clone
![a screen recording of turbopilot running through fauxpilot plugin](assets/vscode-status.gif)
**✨ Now Supports [StableCode 3B Instruct](https://huggingface.co/stabilityai/stablecode-instruct-alpha-3b)** - simply use [TheBloke's Quantized GGML models](https://huggingface.co/TheBloke/stablecode-instruct-alpha-3b-GGML) and set `-m stablecode`.
**New: Refactored + Simplified**: The source code has been improved to make it easier to extend and add new models to Turbopilot. The system now supports multiple flavours of model.
**New: WizardCoder, StarCoder, SantaCoder support** - Turbopilot now supports state-of-the-art local code completion models which provide more programming languages and "fill in the middle" support.
## 🤝 Contributing
@@ -34,7 +35,7 @@ You have 2 options for getting the model
You can download the pre-converted, pre-quantized models from Huggingface.
For low RAM users (4-8 GiB), I recommend [StableCode](https://huggingface.co/TheBloke/stablecode-instruct-alpha-3b-GGML) and for high power users (16+ GiB RAM, discrete GPU or Apple Silicon) I recommend [WizardCoder](https://huggingface.co/TheBloke/WizardCoder-15B-1.0-GGML/resolve/main/WizardCoder-15B-1.0.ggmlv3.q4_0.bin).
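As a rough rule of thumb, a quantized GGML file needs at least its own size in free RAM once loaded, so it is worth comparing the two before launching. A quick check might look like this (file name taken from the WizardCoder link above):

```bash
# the model file size is a lower bound on the RAM the server will need
ls -lh ./models/WizardCoder-15B-1.0.ggmlv3.q4_0.bin

# compare against currently available memory (Linux)
free -h
```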
Turbopilot still supports the first-generation CodeGen models from `v0.0.5` and earlier builds, although old models do need to be requantized.