3e7c624f8e | oobabooga | 2023-08-17 15:03:08 -07:00 | Add a template for OpenOrca-Platypus2
991bb57e43 | cal066 | 2023-08-14 15:17:24 -03:00 | ctransformers: Fix up model_type name consistency (#3567)
66c04c304d | Eve | 2023-08-13 23:09:03 -03:00 | Various ctransformers fixes (#3556)
    Co-authored-by: cal066 <cal066@users.noreply.github.com>
e12a1852d9 | Gennadij | 2023-08-10 13:42:24 -03:00 | Add Vicuna-v1.5 detection (#3524)
a3295dd666 | oobabooga | 2023-08-09 10:51:16 -07:00 | Detect n_gqa and prompt template for wizardlm-70b
5bfcfcfc5a | GiganticPrime | 2023-08-09 09:26:12 -03:00 | Added the logic for starchat model series (#3185)
4ba30f6765 | oobabooga | 2023-08-08 14:10:04 -07:00 | Add OpenChat template
32e7cbb635 | matatonic | 2023-08-03 16:02:54 -03:00 | More models: +StableBeluga2 (#3415)
c8a59d79be | oobabooga | 2023-08-01 13:27:29 -07:00 | Add a template for NewHope
de5de045e0 | oobabooga | 2023-07-26 08:26:56 -07:00 | Set rms_norm_eps to 5e-6 for every llama-2 ggml model, not just 70b
7bc408b472 | oobabooga | 2023-07-25 14:54:57 -07:00 | Change rms_norm_eps to 5e-6 for llama-2-70b ggml
    Based on https://github.com/ggerganov/llama.cpp/pull/2384
08c622df2e | oobabooga | 2023-07-24 15:27:34 -07:00 | Autodetect rms_norm_eps and n_gqa for llama-2-70b
e0631e309f | oobabooga | 2023-07-18 17:19:18 -03:00 | Create instruction template for Llama-v2 (#3194)
656b457795 | oobabooga | 2023-07-17 07:27:42 -07:00 | Add Airoboros-v1.2 template
3778816b8d | matatonic | 2023-07-11 18:53:48 -03:00 | models/config.yaml: +platypus/gplatty, +longchat, +vicuna-33b, +Redmond-Hermes-Coder, +wizardcoder, +more (#2928)
    * +platypus/gplatty
    * +longchat, +vicuna-33b, +Redmond-Hermes-Coder
    * +wizardcoder
    * +superplatty
    * +Godzilla, +WizardLM-V1.1, +rwkv 8k, +wizard-mega fix </s>
    Co-authored-by: Matthew Ashton <mashton-gitlab@zhero.org>
31c297d7e0 | oobabooga | 2023-07-04 18:50:01 -07:00 | Various changes
3147f0b8f8 | Honkware | 2023-06-29 01:32:53 -05:00 | xgen config
da0ea9e0f3 | matatonic | 2023-06-27 22:05:52 -03:00 | set +landmark, +superhot-8k to 8k length (#2903)
c52290de50 | oobabooga | 2023-06-25 22:49:26 -03:00 | ExLlama with long context (#2875)
68ae5d8262 | matatonic | 2023-06-25 01:54:53 -03:00 | more models: +orca_mini (#2859)
8c36c19218 | matatonic | 2023-06-24 10:14:19 -03:00 | 8k size only for minotaur-15B (#2815)
    Co-authored-by: Matthew Ashton <mashton-gitlab@zhero.org>
d94ea31d54 | matatonic | 2023-06-21 21:05:08 -03:00 | more models. +minotaur 8k (#2806)
90be1d9fe1 | matatonic | 2023-06-21 12:30:44 -03:00 | More models (match more) & templates (starchat-beta, tulu) (#2790)
2220b78e7a | matatonic | 2023-06-17 19:14:25 -03:00 | models/config.yaml: +alpacino, +alpasta, +hippogriff, +gpt4all-snoozy, +lazarus, +based, -airoboros 4k (#2580)
8a7a8343be | oobabooga | 2023-06-09 00:26:34 -03:00 | Detect TheBloke_WizardLM-30B-GPTQ
db2cbe7b5a | oobabooga | 2023-06-08 11:43:40 -03:00 | Detect WizardLM-30B-V1.0 instruction format
6a75bda419 | oobabooga | 2023-06-05 12:07:52 -03:00 | Assign some 4096 seq lengths
e61316ce0b | oobabooga | 2023-06-05 11:52:13 -03:00 | Detect airoboros and Nous-Hermes
f344ccdddb | oobabooga | 2023-06-01 14:42:12 -03:00 | Add a template for bluemoon
c86231377b | Carl Kenner | 2023-05-19 11:42:41 -03:00 | Wizard Mega, Ziya, KoAlpaca, OpenBuddy, Chinese-Vicuna, Vigogne, Bactrian, H2O support, fix Baize (#2159)
499c2e009e | oobabooga | 2023-05-19 11:20:35 -03:00 | Remove problematic regex from models/config.yaml
fcb46282c5 | oobabooga | 2023-05-12 06:11:58 -03:00 | Add a rule to config.yaml
5eaa914e1b | oobabooga | 2023-05-12 06:09:45 -03:00 | Fix settings.json being ignored because of config.yaml
309b72e549 | matatonic | 2023-05-11 11:06:39 -03:00 | [extension/openai] add edits & image endpoints & fix prompt return in non --chat modes (#1935)
dfd9ba3e90 | oobabooga | 2023-05-10 02:07:22 -03:00 | Remove duplicate code
334486f527 | minipasila | 2023-05-09 22:29:22 -03:00 | Added instruct-following template for Metharme (#1679)
814f754451 | Carl Kenner | 2023-05-09 20:37:31 -03:00 | Support for MPT, INCITE, WizardLM, StableLM, Galactica, Vicuna, Guanaco, and Baize instruction following (#1596)
06c7db017d | Matthew McAllister | 2023-05-09 20:31:27 -03:00 | Add config for pygmalion-7b and metharme-7b (#1887)
b5260b24f1 | oobabooga | 2023-05-08 12:35:03 -03:00 | Add support for custom chat styles (#1917)
00e333d790 | oobabooga | 2023-05-04 23:20:34 -03:00 | Add MOSS support
97a6a50d98 | oobabooga | 2023-05-04 15:55:39 -03:00 | Use oasst tokenizer instead of universal tokenizer
dbddedca3f | oobabooga | 2023-05-04 15:13:37 -03:00 | Detect oasst-sft-6-llama-30b
91745f63c3 | oobabooga | 2023-04-26 17:45:38 -03:00 | Use Vicuna-v0 by default for Vicuna models
a941c19337 | TiagoGF | 2023-04-26 16:20:27 -03:00 | Fixing Vicuna text generation (#1579)
d87ca8f2af | oobabooga | 2023-04-26 03:47:34 -03:00 | LLaVA fixes
a777c058af | oobabooga | 2023-04-26 03:21:53 -03:00 | Precise prompts for instruct mode
12212cf6be | Wojtab | 2023-04-23 20:32:22 -03:00 | LLaVA support (#1487)
c6fe1ced01 | Forkoz | 2023-04-16 19:15:03 -03:00 | Add ChatGLM support (#1256)
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
cb95a2432c | oobabooga | 2023-04-16 14:41:06 -03:00 | Add Koala support
b937c9d8c2 | oobabooga | 2023-04-16 14:24:49 -03:00 | Add skip_special_tokens checkbox for Dolly model (#1218)