| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| oobabooga | ddca6948b2 | Update 12 - OpenAI API.md | 2023-11-07 12:39:59 -03:00 |
| oobabooga | 40e73aafce | Update 12 - OpenAI API.md | 2023-11-07 12:38:39 -03:00 |
| oobabooga | 6ec997f195 | Update 12 - OpenAI API.md | 2023-11-07 12:36:52 -03:00 |
| oobabooga | 15d4ea180d | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2023-11-07 07:35:36 -08:00 |
| oobabooga | b2afdda4e8 | Add more API examples | 2023-11-07 07:35:04 -08:00 |
| Morgan Cheng | 349604458b | Update 12 - OpenAI API.md (#4501). Fix the typo in argument: it should be `--api-port` instead of `--port`. Co-authored-by: oobabooga | 2023-11-07 11:22:17 -03:00 |
| oobabooga | ceb8c92dfc | Update 12 - OpenAI API.md | 2023-11-06 10:38:22 -03:00 |
| oobabooga | ec17a5d2b7 | Make OpenAI API the default API (#4430) | 2023-11-06 02:38:29 -03:00 |
| oobabooga | fb3bd0203d | Update docs | 2023-11-04 11:02:24 -07:00 |
| oobabooga | 1d8c7c1fc4 | Update docs | 2023-11-04 11:01:15 -07:00 |
| tdrussell | 72f6fc6923 | Rename additive_repetition_penalty to presence_penalty, add frequency_penalty (#4376) | 2023-10-25 12:10:28 -03:00 |
| oobabooga | 82c11be067 | Update 04 - Model Tab.md | 2023-10-23 12:49:07 -07:00 |
| tdrussell | 4440f87722 | Add additive_repetition_penalty sampler setting (#3627) | 2023-10-23 02:28:07 -03:00 |
| oobabooga | b8183148cf | Update 04 ‐ Model Tab.md | 2023-10-22 17:15:55 -03:00 |
| oobabooga | df90d03e0b | Replace --mul_mat_q with --no_mul_mat_q | 2023-10-22 12:23:03 -07:00 |
| Googulator | d0c3b407b3 | transformers loader: multi-LoRAs support (#3120) | 2023-10-22 16:06:22 -03:00 |
| oobabooga | 7a3f885ea8 | Update 03 ‐ Parameters Tab.md | 2023-10-22 14:52:23 -03:00 |
| oobabooga | b1f33b55fd | Update 01 ‐ Chat Tab.md | 2023-10-21 20:17:56 -03:00 |
| oobabooga | 6efb990b60 | Add a proper documentation (#3885) | 2023-10-21 19:15:54 -03:00 |
| oobabooga | 31f2815a04 | Update Generation-Parameters.md | 2023-09-25 16:30:52 -03:00 |
| oobabooga | c8952cce55 | Move documentation from UI to docs/ | 2023-09-25 12:28:28 -07:00 |
| oobabooga | 6903af33dd | Update One-Click-Installers.md | 2023-09-23 15:32:24 -03:00 |
| oobabooga | c33a94e381 | Rename doc file | 2023-09-22 12:17:47 -07:00 |
| oobabooga | b4b5f45558 | Join the installation instructions | 2023-09-22 10:28:22 -07:00 |
| oobabooga | ed86878f02 | Remove GGML support | 2023-09-11 07:44:00 -07:00 |
| John Smith | cc7b7ba153 | Fix LoRA training with alpaca_lora_4bit (#3853) | 2023-09-11 01:22:20 -03:00 |
| q5sys (JT) | cdb854db9e | Update llama.cpp.md instructions (#3702) | 2023-08-29 17:56:50 -03:00 |
| oobabooga | a1a9ec895d | Unify the 3 interface modes (#3554) | 2023-08-13 01:12:15 -03:00 |
| oobabooga | c7f52bbdc1 | Revert "Remove GPTQ-for-LLaMa monkey patch support". This reverts commit e3d3565b2a. | 2023-08-10 08:39:41 -07:00 |
| oobabooga | 16e2b117b4 | Minor doc change | 2023-08-10 08:38:10 -07:00 |
| jllllll | d6765bebc4 | Update installation documentation | 2023-08-10 00:53:48 -05:00 |
| jllllll | e3d3565b2a | Remove GPTQ-for-LLaMa monkey patch support. AutoGPTQ will be the preferred GPTQ LoRA loader in the future. | 2023-08-09 23:59:04 -05:00 |
| oobabooga | 9773534181 | Update Chat-mode.md | 2023-08-01 08:03:22 -07:00 |
| oobabooga | b2207f123b | Update docs | 2023-07-31 19:20:48 -07:00 |
| oobabooga | ca4188aabc | Update the example extension | 2023-07-29 18:57:22 -07:00 |
| oobabooga | f24f87cfb0 | Change a comment | 2023-07-26 09:38:13 -07:00 |
| oobabooga | b553c33dd0 | Add a link to the gradio docs | 2023-07-26 07:49:22 -07:00 |
| oobabooga | 517d40cffe | Update Extensions.md | 2023-07-26 07:01:35 -07:00 |
| oobabooga | b11f63cb18 | Update extensions docs | 2023-07-26 07:00:33 -07:00 |
| oobabooga | 4a24849715 | Revert changes | 2023-07-25 21:09:32 -07:00 |
| oobabooga | d3abe7caa8 | Update llama.cpp.md | 2023-07-25 15:33:16 -07:00 |
| oobabooga | 863d2f118f | Update llama.cpp.md | 2023-07-25 15:31:05 -07:00 |
| oobabooga | 77d2e9f060 | Remove flexgen 2 | 2023-07-25 15:18:25 -07:00 |
| oobabooga | 75c2dd38cf | Remove flexgen support | 2023-07-25 15:15:29 -07:00 |
| Eve | f653546484 | README updates and improvements (#3198) | 2023-07-25 18:58:13 -03:00 |
| oobabooga | ef8637e32d | Add extension example, replace input_hijack with chat_input_modifier (#3307) | 2023-07-25 18:49:56 -03:00 |
| oobabooga | a2918176ea | Update LLaMA-v2-model.md (thanks Panchovix) | 2023-07-18 13:21:18 -07:00 |
| oobabooga | 603c596616 | Add LLaMA-v2 conversion instructions | 2023-07-18 10:29:56 -07:00 |
| oobabooga | 60a3e70242 | Update LLaMA links and info | 2023-07-17 12:51:01 -07:00 |
| oobabooga | 8705eba830 | Remove universal llama tokenizer support. Instead, replace it with a warning if the tokenizer files look off. | 2023-07-04 19:43:19 -07:00 |
| oobabooga | 55457549cd | Add information about presets to the UI | 2023-07-03 22:39:01 -07:00 |
| oobabooga | 4b1804a438 | Implement sessions + add basic multi-user support (#2991) | 2023-07-04 00:03:30 -03:00 |
| oobabooga | 63770c0643 | Update docs/Extensions.md | 2023-06-27 22:25:05 -03:00 |
| oobabooga | ebfcfa41f2 | Update ExLlama.md | 2023-06-24 20:25:34 -03:00 |
| oobabooga | a70a2ac3be | Update ExLlama.md | 2023-06-24 20:23:01 -03:00 |
| oobabooga | 0d9d70ec7e | Update docs | 2023-06-19 12:52:23 -03:00 |
| oobabooga | f6a602861e | Update docs | 2023-06-19 12:51:30 -03:00 |
| oobabooga | 5d4b4d15a5 | Update Using-LoRAs.md | 2023-06-19 12:43:57 -03:00 |
| oobabooga | 05a743d6ad | Make llama.cpp use tfs parameter | 2023-06-17 19:08:25 -03:00 |
| Jonathan Yankovich | a1ca1c04a1 | Update ExLlama.md (#2729). Add details for configuring exllama. | 2023-06-16 23:46:25 -03:00 |
| oobabooga | cb9be5db1c | Update ExLlama.md | 2023-06-16 20:40:12 -03:00 |
| oobabooga | 9f40032d32 | Add ExLlama support (#2444) | 2023-06-16 20:35:38 -03:00 |
| oobabooga | 7ef6a50e84 | Reorganize model loading UI completely (#2720) | 2023-06-16 19:00:37 -03:00 |
| Meng-Yuan Huang | 772d4080b2 | Update llama.cpp-models.md for macOS (#2711) | 2023-06-16 00:00:24 -03:00 |
| Amine Djeghri | 8275dbc68c | Update WSL-installation-guide.md (#2626) | 2023-06-11 12:30:34 -03:00 |
| oobabooga | c6552785af | Minor cleanup | 2023-06-09 00:30:22 -03:00 |
| zaypen | 084b006cfe | Update LLaMA-model.md (#2460). Better approach of converting LLaMA model. | 2023-06-07 15:34:50 -03:00 |
| oobabooga | 00b94847da | Remove softprompt support | 2023-06-06 07:42:23 -03:00 |
| oobabooga | 99d701994a | Update GPTQ-models-(4-bit-mode).md | 2023-06-05 15:55:00 -03:00 |
| oobabooga | d0aca83b53 | Add AutoGPTQ wheels to requirements.txt | 2023-06-02 00:47:11 -03:00 |
| oobabooga | aa83fc21d4 | Update Low-VRAM-guide.md | 2023-06-01 12:14:27 -03:00 |
| oobabooga | 756e3afbcc | Update llama.cpp-models.md | 2023-06-01 12:04:31 -03:00 |
| oobabooga | 74bf2f05b1 | Update llama.cpp-models.md | 2023-06-01 11:58:33 -03:00 |
| oobabooga | 90dc8a91ae | Update llama.cpp-models.md | 2023-06-01 11:57:57 -03:00 |
| oobabooga | c9ac45d4cf | Update Using-LoRAs.md | 2023-06-01 11:34:04 -03:00 |
| oobabooga | 9aad6d07de | Update Using-LoRAs.md | 2023-06-01 11:32:41 -03:00 |
| oobabooga | e52b43c934 | Update GPTQ-models-(4-bit-mode).md | 2023-06-01 01:17:13 -03:00 |
| oobabooga | 419c34eca4 | Update GPTQ-models-(4-bit-mode).md | 2023-05-31 23:49:00 -03:00 |
| oobabooga | a160230893 | Update GPTQ-models-(4-bit-mode).md | 2023-05-31 23:38:15 -03:00 |
| AlpinDale | 6627f7feb9 | Add notice about downgrading gcc and g++ (#2446) | 2023-05-30 22:28:53 -03:00 |
| oobabooga | e763ace593 | Update GPTQ-models-(4-bit-mode).md | 2023-05-29 22:35:49 -03:00 |
| oobabooga | 86ef695d37 | Update GPTQ-models-(4-bit-mode).md | 2023-05-29 22:20:55 -03:00 |
| oobabooga | 540a161a08 | Update GPTQ-models-(4-bit-mode).md | 2023-05-29 15:45:40 -03:00 |
| oobabooga | 166a0d9893 | Update GPTQ-models-(4-bit-mode).md | 2023-05-29 15:07:59 -03:00 |
| oobabooga | 4a190a98fd | Update GPTQ-models-(4-bit-mode).md | 2023-05-29 14:56:05 -03:00 |
| oobabooga | 1490c0af68 | Remove RWKV from requirements.txt | 2023-05-23 20:49:20 -03:00 |
| Atinoda | 4155aaa96a | Add mention to alternative docker repository (#2145) | 2023-05-23 20:35:53 -03:00 |
| oobabooga | c2d2ef7c13 | Update Generation-parameters.md | 2023-05-23 02:11:28 -03:00 |
| oobabooga | b0845ae4e8 | Update RWKV-model.md | 2023-05-23 02:10:08 -03:00 |
| oobabooga | cd3618d7fb | Add support for RWKV in Hugging Face format | 2023-05-23 02:07:28 -03:00 |
| oobabooga | c0fd7f3257 | Add mirostat parameters for llama.cpp (#2287) | 2023-05-22 19:37:24 -03:00 |
| oobabooga | 1e5821bd9e | Fix silero tts autoplay (attempt #2) | 2023-05-21 13:25:11 -03:00 |
| oobabooga | 159eccac7e | Update Audio-Notification.md | 2023-05-19 23:20:42 -03:00 |
| HappyWorldGames | a3e9769e31 | Added an audible notification after text generation in web (#1277). Co-authored-by: oobabooga | 2023-05-19 23:16:06 -03:00 |
| Alex "mcmonkey" Goodwin | 50c70e28f0 | Lora Trainer improvements, part 6 - slightly better raw text inputs (#2108) | 2023-05-19 12:58:54 -03:00 |
| oobabooga | 10cf7831f7 | Update Extensions.md | 2023-05-17 10:45:29 -03:00 |
| Alex "mcmonkey" Goodwin | 1f50dbe352 | Experimental jank multiGPU inference that's 2x faster than native somehow (#2100) | 2023-05-17 10:41:09 -03:00 |
| oobabooga | c9c6aa2b6e | Update docs/Extensions.md | 2023-05-17 02:04:37 -03:00 |
| oobabooga | 9e558cba9b | Update docs/Extensions.md | 2023-05-17 01:43:32 -03:00 |
| oobabooga | 687f21f965 | Update docs/Extensions.md | 2023-05-17 01:41:01 -03:00 |