Commit Graph

377 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| oobabooga | 383c50f05b | Replace old presets with the results of Preset Arena (#2830) | 2023-06-23 01:48:29 -03:00 |
| LarryVRH | 580c1ee748 | Implement a demo HF wrapper for exllama to utilize existing HF transformers decoding (#2777) | 2023-06-21 15:31:42 -03:00 |
| oobabooga | a1cac88c19 | Update README.md | 2023-06-19 01:28:23 -03:00 |
| oobabooga | 5f392122fd | Add gpu_split param to ExLlama (adapted from code created by Ph0rk0z) | 2023-06-16 20:49:36 -03:00 |
| oobabooga | 9f40032d32 | Add ExLlama support (#2444) | 2023-06-16 20:35:38 -03:00 |
| oobabooga | 7ef6a50e84 | Reorganize model loading UI completely (#2720) | 2023-06-16 19:00:37 -03:00 |
| oobabooga | 57be2eecdf | Update README.md | 2023-06-16 15:04:16 -03:00 |
| Tom Jobbins | 646b0c889f | AutoGPTQ: Add UI and command line support for disabling fused attention and fused MLP (#2648) | 2023-06-15 23:59:54 -03:00 |
| oobabooga | 8936160e54 | Add WSL installer to README (thanks jllllll) | 2023-06-13 00:07:34 -03:00 |
| oobabooga | eda224c92d | Update README | 2023-06-05 17:04:09 -03:00 |
| oobabooga | bef94b9ebb | Update README | 2023-06-05 17:01:13 -03:00 |
| oobabooga | f276d88546 | Use AutoGPTQ by default for GPTQ models | 2023-06-05 15:41:48 -03:00 |
| oobabooga | 632571a009 | Update README | 2023-06-05 15:16:06 -03:00 |
| oobabooga | 2f6631195a | Add desc_act checkbox to the UI | 2023-06-02 01:45:46 -03:00 |
| oobabooga | ee99a87330 | Update README.md | 2023-06-01 12:08:44 -03:00 |
| oobabooga | 146505a16b | Update README.md | 2023-06-01 12:04:58 -03:00 |
| oobabooga | 3347395944 | Update README.md | 2023-06-01 12:01:20 -03:00 |
| oobabooga | aba56de41b | Update README.md | 2023-06-01 11:46:28 -03:00 |
| oobabooga | df18ae7d6c | Update README.md | 2023-06-01 11:27:33 -03:00 |
| Morgan Schweers | 1aed2b9e52 | Make it possible to download protected HF models from the command line (#2408) | 2023-06-01 00:11:21 -03:00 |
| jllllll | 412e7a6a96 | Update README.md to include missing flags (#2449) | 2023-05-31 11:07:56 -03:00 |
| Atinoda | bfbd13ae89 | Update docker repo link (#2340) | 2023-05-30 22:14:49 -03:00 |
| oobabooga | 962d05ca7e | Update README.md | 2023-05-29 14:56:55 -03:00 |
| Honkware | 204731952a | Falcon support (trust-remote-code and autogptq checkboxes) (#2367) (Co-authored-by: oobabooga) | 2023-05-29 10:20:18 -03:00 |
| oobabooga | f27135bdd3 | Add Eta Sampling preset; also remove some presets that I do not consider relevant | 2023-05-28 22:44:35 -03:00 |
| oobabooga | 00ebea0b2a | Use YAML for presets and settings | 2023-05-28 22:34:12 -03:00 |
| jllllll | 07a4f0569f | Update README.md to account for BnB Windows wheel (#2341) | 2023-05-25 18:44:26 -03:00 |
| oobabooga | 231305d0f5 | Update README.md | 2023-05-25 12:05:08 -03:00 |
| oobabooga | 37d4ad012b | Add a button for rendering markdown for any model | 2023-05-25 11:59:27 -03:00 |
| oobabooga | 9a43656a50 | Add bitsandbytes note | 2023-05-25 11:21:52 -03:00 |
| DGdev91 | cf088566f8 | Make llama.cpp read prompt size and seed from settings (#2299) | 2023-05-25 10:29:31 -03:00 |
| oobabooga | a04266161d | Update README.md | 2023-05-25 01:23:46 -03:00 |
| oobabooga | 361451ba60 | Add --load-in-4bit parameter (#2320) | 2023-05-25 01:14:13 -03:00 |
| Gabriel Terrien | 7aed53559a | Support of the --gradio-auth flag (#2283) | 2023-05-23 20:39:26 -03:00 |
| Atinoda | 4155aaa96a | Add mention to alternative docker repository (#2145) | 2023-05-23 20:35:53 -03:00 |
| Carl Kenner | c86231377b | Wizard Mega, Ziya, KoAlpaca, OpenBuddy, Chinese-Vicuna, Vigogne, Bactrian, H2O support, fix Baize (#2159) | 2023-05-19 11:42:41 -03:00 |
| Alex "mcmonkey" Goodwin | 1f50dbe352 | Experimental jank multiGPU inference that's 2x faster than native somehow (#2100) | 2023-05-17 10:41:09 -03:00 |
| Andrei | e657dd342d | Add in-memory cache support for llama.cpp (#1936) | 2023-05-15 20:19:55 -03:00 |
| AlphaAtlas | 071f0776ad | Add llama.cpp GPU offload option (#2060) | 2023-05-14 22:58:11 -03:00 |
| oobabooga | 23d3f6909a | Update README.md | 2023-05-11 10:21:20 -03:00 |
| oobabooga | 2930e5a895 | Update README.md | 2023-05-11 10:04:38 -03:00 |
| oobabooga | 0ff38c994e | Update README.md | 2023-05-11 09:58:58 -03:00 |
| oobabooga | e6959a5d9a | Update README.md | 2023-05-11 09:54:22 -03:00 |
| oobabooga | dcfd09b61e | Update README.md | 2023-05-11 09:49:57 -03:00 |
| oobabooga | 7a49ceab29 | Update README.md | 2023-05-11 09:42:39 -03:00 |
| oobabooga | 57dc44a995 | Update README.md | 2023-05-10 12:48:25 -03:00 |
| oobabooga | 181b102521 | Update README.md | 2023-05-10 12:09:47 -03:00 |
| Carl Kenner | 814f754451 | Support for MPT, INCITE, WizardLM, StableLM, Galactica, Vicuna, Guanaco, and Baize instruction following (#1596) | 2023-05-09 20:37:31 -03:00 |
| Wojtab | e9e75a9ec7 | Generalize multimodality (llava/minigpt4 7b and 13b now supported) (#1741) | 2023-05-09 20:18:02 -03:00 |
| oobabooga | 00e333d790 | Add MOSS support | 2023-05-04 23:20:34 -03:00 |