Commit Graph

103 Commits

Author  SHA1  Message  Date
oobabooga  77d2e9f060  Remove flexgen 2  2023-07-25 15:18:25 -07:00
oobabooga  75c2dd38cf  Remove flexgen support  2023-07-25 15:15:29 -07:00
Eve  f653546484  README updates and improvements (#3198)  2023-07-25 18:58:13 -03:00
oobabooga  ef8637e32d  Add extension example, replace input_hijack with chat_input_modifier (#3307)  2023-07-25 18:49:56 -03:00
oobabooga  a2918176ea  Update LLaMA-v2-model.md (thanks Panchovix)  2023-07-18 13:21:18 -07:00
oobabooga  603c596616  Add LLaMA-v2 conversion instructions  2023-07-18 10:29:56 -07:00
oobabooga  60a3e70242  Update LLaMA links and info  2023-07-17 12:51:01 -07:00
oobabooga  8705eba830  Remove universal llama tokenizer support  2023-07-04 19:43:19 -07:00
    Instead replace it with a warning if the tokenizer files look off
oobabooga  55457549cd  Add information about presets to the UI  2023-07-03 22:39:01 -07:00
oobabooga  4b1804a438  Implement sessions + add basic multi-user support (#2991)  2023-07-04 00:03:30 -03:00
oobabooga  63770c0643  Update docs/Extensions.md  2023-06-27 22:25:05 -03:00
oobabooga  ebfcfa41f2  Update ExLlama.md  2023-06-24 20:25:34 -03:00
oobabooga  a70a2ac3be  Update ExLlama.md  2023-06-24 20:23:01 -03:00
oobabooga  0d9d70ec7e  Update docs  2023-06-19 12:52:23 -03:00
oobabooga  f6a602861e  Update docs  2023-06-19 12:51:30 -03:00
oobabooga  5d4b4d15a5  Update Using-LoRAs.md  2023-06-19 12:43:57 -03:00
oobabooga  05a743d6ad  Make llama.cpp use tfs parameter  2023-06-17 19:08:25 -03:00
Jonathan Yankovich  a1ca1c04a1  Update ExLlama.md (#2729)  2023-06-16 23:46:25 -03:00
    Add details for configuring exllama
oobabooga  cb9be5db1c  Update ExLlama.md  2023-06-16 20:40:12 -03:00
oobabooga  9f40032d32  Add ExLlama support (#2444)  2023-06-16 20:35:38 -03:00
oobabooga  7ef6a50e84  Reorganize model loading UI completely (#2720)  2023-06-16 19:00:37 -03:00
Meng-Yuan Huang  772d4080b2  Update llama.cpp-models.md for macOS (#2711)  2023-06-16 00:00:24 -03:00
Amine Djeghri  8275dbc68c  Update WSL-installation-guide.md (#2626)  2023-06-11 12:30:34 -03:00
oobabooga  c6552785af  Minor cleanup  2023-06-09 00:30:22 -03:00
zaypen  084b006cfe  Update LLaMA-model.md (#2460)  2023-06-07 15:34:50 -03:00
    Better approach of converting LLaMA model
oobabooga  00b94847da  Remove softprompt support  2023-06-06 07:42:23 -03:00
oobabooga  99d701994a  Update GPTQ-models-(4-bit-mode).md  2023-06-05 15:55:00 -03:00
oobabooga  d0aca83b53  Add AutoGPTQ wheels to requirements.txt  2023-06-02 00:47:11 -03:00
oobabooga  aa83fc21d4  Update Low-VRAM-guide.md  2023-06-01 12:14:27 -03:00
oobabooga  756e3afbcc  Update llama.cpp-models.md  2023-06-01 12:04:31 -03:00
oobabooga  74bf2f05b1  Update llama.cpp-models.md  2023-06-01 11:58:33 -03:00
oobabooga  90dc8a91ae  Update llama.cpp-models.md  2023-06-01 11:57:57 -03:00
oobabooga  c9ac45d4cf  Update Using-LoRAs.md  2023-06-01 11:34:04 -03:00
oobabooga  9aad6d07de  Update Using-LoRAs.md  2023-06-01 11:32:41 -03:00
oobabooga  e52b43c934  Update GPTQ-models-(4-bit-mode).md  2023-06-01 01:17:13 -03:00
oobabooga  419c34eca4  Update GPTQ-models-(4-bit-mode).md  2023-05-31 23:49:00 -03:00
oobabooga  a160230893  Update GPTQ-models-(4-bit-mode).md  2023-05-31 23:38:15 -03:00
AlpinDale  6627f7feb9  Add notice about downgrading gcc and g++ (#2446)  2023-05-30 22:28:53 -03:00
oobabooga  e763ace593  Update GPTQ-models-(4-bit-mode).md  2023-05-29 22:35:49 -03:00
oobabooga  86ef695d37  Update GPTQ-models-(4-bit-mode).md  2023-05-29 22:20:55 -03:00
oobabooga  540a161a08  Update GPTQ-models-(4-bit-mode).md  2023-05-29 15:45:40 -03:00
oobabooga  166a0d9893  Update GPTQ-models-(4-bit-mode).md  2023-05-29 15:07:59 -03:00
oobabooga  4a190a98fd  Update GPTQ-models-(4-bit-mode).md  2023-05-29 14:56:05 -03:00
oobabooga  1490c0af68  Remove RWKV from requirements.txt  2023-05-23 20:49:20 -03:00
Atinoda  4155aaa96a  Add mention to alternative docker repository (#2145)  2023-05-23 20:35:53 -03:00
oobabooga  c2d2ef7c13  Update Generation-parameters.md  2023-05-23 02:11:28 -03:00
oobabooga  b0845ae4e8  Update RWKV-model.md  2023-05-23 02:10:08 -03:00
oobabooga  cd3618d7fb  Add support for RWKV in Hugging Face format  2023-05-23 02:07:28 -03:00
oobabooga  c0fd7f3257  Add mirostat parameters for llama.cpp (#2287)  2023-05-22 19:37:24 -03:00
oobabooga  1e5821bd9e  Fix silero tts autoplay (attempt #2)  2023-05-21 13:25:11 -03:00