Commit Graph

374 Commits

Author SHA1 Message Date
oobabooga
dea90c7b67 Bump exllamav2 to 0.0.8 2023-11-13 10:34:10 -08:00
oobabooga
2af7e382b1 Revert "Bump llama-cpp-python to 0.2.14"
This reverts commit 5c3eb22ce6.

The new version has issues:

https://github.com/oobabooga/text-generation-webui/issues/4540
https://github.com/abetlen/llama-cpp-python/issues/893
2023-11-09 10:02:13 -08:00
oobabooga
5c3eb22ce6 Bump llama-cpp-python to 0.2.14 2023-11-07 14:20:43 -08:00
dependabot[bot]
fd893baba1
Bump optimum from 1.13.1 to 1.14.0 (#4492) 2023-11-07 00:13:41 -03:00
dependabot[bot]
18739c8b3a
Update peft requirement from ==0.5.* to ==0.6.* (#4494) 2023-11-07 00:12:59 -03:00
Orang
2081f43ac2
Bump transformers to 4.35.* (#4474) 2023-11-04 14:00:24 -03:00
Casper
cfbd108826
Bump AWQ to 0.1.6 (#4470) 2023-11-04 13:09:41 -03:00
Orang
6b7fa45cc3
Update exllamav2 version (#4417) 2023-10-31 19:12:14 -03:00
Casper
41e159e88f
Bump AutoAWQ to v0.1.5 (#4410) 2023-10-31 19:11:22 -03:00
James Braza
f481ce3dd8
Adding platform_system to autoawq (#4390) 2023-10-27 01:02:28 -03:00
oobabooga
839a87bac8 Fix is_ccl_available & is_xpu_available imports 2023-10-26 20:27:04 -07:00
oobabooga
6086768309 Bump gradio to 3.50.* 2023-10-22 21:21:26 -07:00
Brian Dashore
3345da2ea4
Add flash-attention 2 for windows (#4235) 2023-10-21 03:46:23 -03:00
mjbogusz
8f6405d2fa
Python 3.11, 3.9, 3.8 support (#4233)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-10-20 21:13:33 -03:00
Johan
2706394bfe
Relax numpy version requirements (#4291) 2023-10-15 12:05:06 -03:00
jllllll
1f5a2c5597
Use Pytorch 2.1 exllama wheels (#4285) 2023-10-14 15:27:59 -03:00
oobabooga
cd1cad1b47 Bump exllamav2 2023-10-14 11:23:07 -07:00
oobabooga
fae8062d39
Bump to latest gradio (3.47) (#4258) 2023-10-10 22:20:49 -03:00
dependabot[bot]
520cbb2ab1
Bump safetensors from 0.3.2 to 0.4.0 (#4249) 2023-10-10 17:41:09 -03:00
jllllll
0eda9a0549
Use GPTQ wheels compatible with Pytorch 2.1 (#4210) 2023-10-07 00:35:41 -03:00
oobabooga
d33facc9fe
Bump to pytorch 11.8 (#4209) 2023-10-07 00:23:49 -03:00
Casper
0aa853f575
Bump AutoAWQ to v0.1.4 (#4203) 2023-10-06 15:30:01 -03:00
oobabooga
7d3201923b Bump AutoAWQ 2023-10-05 15:14:15 -07:00
turboderp
8a98646a21
Bump ExLlamaV2 to 0.0.5 (#4186) 2023-10-05 19:12:22 -03:00
cal066
cc632c3f33
AutoAWQ: initial support (#3999) 2023-10-05 13:19:18 -03:00
oobabooga
3f56151f03 Bump to transformers 4.34 2023-10-05 08:55:14 -07:00
oobabooga
ae4ba3007f
Add grammar to transformers and _HF loaders (#4091) 2023-10-05 10:01:36 -03:00
jllllll
41a2de96e5
Bump llama-cpp-python to 0.2.11 2023-10-01 18:08:10 -05:00
oobabooga
92a39c619b Add Mistral support 2023-09-28 15:41:03 -07:00
oobabooga
f46ba12b42 Add flash-attn wheels for Linux 2023-09-28 14:45:52 -07:00
jllllll
2bd23c29cb
Bump llama-cpp-python to 0.2.7 (#4110) 2023-09-27 23:45:36 -03:00
jllllll
13a54729b1
Bump exllamav2 to 0.0.4 and use pre-built wheels (#4095) 2023-09-26 21:36:14 -03:00
oobabooga
2e7b6b0014
Create alternative requirements.txt with AMD and Metal wheels (#4052) 2023-09-24 09:58:29 -03:00
oobabooga
05c4a4f83c Bump exllamav2 2023-09-21 14:56:01 -07:00
jllllll
b7c55665c1
Bump llama-cpp-python to 0.2.6 (#3982) 2023-09-18 14:08:37 -03:00
dependabot[bot]
661bfaac8e
Update accelerate from ==0.22.* to ==0.23.* (#3981) 2023-09-17 22:42:12 -03:00
Thireus ☠
45335fa8f4
Bump ExLlamav2 to v0.0.2 (#3970) 2023-09-17 19:24:40 -03:00
dependabot[bot]
eb9ebabec7
Bump exllamav2 from 0.0.0 to 0.0.1 (#3896) 2023-09-13 02:13:51 -03:00
cal066
a4e4e887d7
Bump ctransformers to 0.2.27 (#3893) 2023-09-13 00:37:31 -03:00
jllllll
1a5d68015a
Bump llama-cpp-python to 0.1.85 (#3887) 2023-09-12 19:41:41 -03:00
oobabooga
833bc59f1b Remove ninja from requirements.txt
It's installed with exllamav2 automatically
2023-09-12 15:12:56 -07:00
dependabot[bot]
0efbe5ef76
Bump optimum from 1.12.0 to 1.13.1 (#3872) 2023-09-12 15:53:21 -03:00
oobabooga
c2a309f56e
Add ExLlamaV2 and ExLlamav2_HF loaders (#3881) 2023-09-12 14:33:07 -03:00
oobabooga
ed86878f02 Remove GGML support 2023-09-11 07:44:00 -07:00
jllllll
859b4fd737
Bump exllama to 0.1.17 (#3847) 2023-09-11 01:13:14 -03:00
dependabot[bot]
1d6b384828
Update transformers requirement from ==4.32.* to ==4.33.* (#3865) 2023-09-11 01:12:22 -03:00
jllllll
e8f234ca8f
Bump llama-cpp-python to 0.1.84 (#3854) 2023-09-11 01:11:33 -03:00
oobabooga
66d5caba1b Pin pydantic version (closes #3850) 2023-09-10 21:09:04 -07:00
oobabooga
0576691538 Add optimum to requirements (for GPTQ LoRA training)
See https://github.com/oobabooga/text-generation-webui/issues/3655
2023-08-31 08:45:38 -07:00
jllllll
9626f57721
Bump exllama to 0.0.14 (#3758) 2023-08-30 13:43:38 -03:00
jllllll
dac5f4b912
Bump llama-cpp-python to 0.1.83 (#3745) 2023-08-29 22:35:59 -03:00
VishwasKukreti
a9a1784420
Update accelerate to 0.22 in requirements.txt (#3725) 2023-08-29 17:47:37 -03:00
jllllll
fe1f7c6513
Bump ctransformers to 0.2.25 (#3740) 2023-08-29 17:24:36 -03:00
jllllll
22b2a30ec7
Bump llama-cpp-python to 0.1.82 (#3730) 2023-08-28 18:02:24 -03:00
jllllll
7d3a0b5387
Bump llama-cpp-python to 0.1.81 (#3716) 2023-08-27 22:38:41 -03:00
oobabooga
7f5370a272 Minor fixes/cosmetics 2023-08-26 22:11:07 -07:00
jllllll
4a999e3bcd
Use separate llama-cpp-python packages for GGML support 2023-08-26 10:40:08 -05:00
oobabooga
6e6431e73f Update requirements.txt 2023-08-26 01:07:28 -07:00
cal066
960980247f
ctransformers: gguf support (#3685) 2023-08-25 11:33:04 -03:00
oobabooga
26c5e5e878 Bump autogptq 2023-08-24 19:23:08 -07:00
oobabooga
2b675533f7 Un-bump safetensors
The newest one doesn't work on Windows yet
2023-08-23 14:36:03 -07:00
oobabooga
335c49cc7e Bump peft and transformers 2023-08-22 13:14:59 -07:00
tkbit
df165fe6c4
Use numpy==1.24 in requirements.txt (#3651)
The whisper extension needs numpy 1.24 to work properly
2023-08-22 16:55:17 -03:00
cal066
e042bf8624
ctransformers: add mlock and no-mmap options (#3649) 2023-08-22 16:51:34 -03:00
oobabooga
b96fd22a81
Refactor the training tab (#3619) 2023-08-18 16:58:38 -03:00
jllllll
1a71ab58a9
Bump llama_cpp_python_cuda to 0.1.78 (#3614) 2023-08-18 12:04:01 -03:00
oobabooga
6170b5ba31 Bump llama-cpp-python 2023-08-17 21:41:02 -07:00
oobabooga
ccfc02a28d
Add the --disable_exllama option for AutoGPTQ (#3545 from clefever/disable-exllama) 2023-08-14 15:15:55 -03:00
oobabooga
8294eadd38 Bump AutoGPTQ wheel 2023-08-14 11:13:46 -07:00
jllllll
73421b1fed
Bump ctransformers wheel version (#3558) 2023-08-12 23:02:47 -03:00
cal066
7a4fcee069
Add ctransformers support (#3313)
---------

Co-authored-by: cal066 <cal066@users.noreply.github.com>
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
Co-authored-by: randoentity <137087500+randoentity@users.noreply.github.com>
2023-08-11 14:41:33 -03:00
jllllll
bee73cedbd
Streamline GPTQ-for-LLaMa support 2023-08-09 23:42:34 -05:00
oobabooga
a4e48cbdb6 Bump AutoGPTQ 2023-08-09 08:31:17 -07:00
oobabooga
7c1300fab5 Pin aiofiles version to fix statvfs issue 2023-08-09 08:07:55 -07:00
oobabooga
2d0634cd07 Bump transformers commit for positive prompts 2023-08-07 08:57:19 -07:00
oobabooga
0af10ab49b
Add Classifier Free Guidance (CFG) for Transformers/ExLlama (#3325) 2023-08-06 17:22:48 -03:00
jllllll
5ee95d126c
Bump exllama wheels to 0.0.10 (#3467) 2023-08-05 13:46:14 -03:00
jllllll
6e30f76ba5
Bump bitsandbytes to 0.41.1 (#3457) 2023-08-04 19:28:59 -03:00
jllllll
c4e14a757c
Bump exllama module to 0.0.9 (#3338) 2023-07-29 22:16:23 -03:00
oobabooga
77d2e9f060 Remove flexgen 2 2023-07-25 15:18:25 -07:00
oobabooga
a07d070b6c
Add llama-2-70b GGML support (#3285) 2023-07-24 16:37:03 -03:00
oobabooga
6f4830b4d3 Bump peft commit 2023-07-24 09:49:57 -07:00
jllllll
eb105b0495
Bump llama-cpp-python to 0.1.74 (#3257) 2023-07-24 11:15:42 -03:00
jllllll
152cf1e8ef
Bump bitsandbytes to 0.41.0 (#3258)
e229fbce66...a06a0f6a08
2023-07-24 11:06:18 -03:00
jllllll
8d31d20c9a
Bump exllama module to 0.0.8 (#3256)
39b3541cdd...3f83ebb378
2023-07-24 11:05:54 -03:00
oobabooga
63ece46213 Merge branch 'main' into dev 2023-07-20 07:06:41 -07:00
oobabooga
4b19b74e6c Add CUDA wheels for llama-cpp-python by jllllll 2023-07-19 19:33:43 -07:00
jllllll
87926d033d
Bump exllama module to 0.0.7 (#3211) 2023-07-19 22:24:47 -03:00
oobabooga
08c23b62c7 Bump llama-cpp-python and transformers 2023-07-19 07:19:12 -07:00
jllllll
c535f14e5f
Bump bitsandbytes Windows wheel to 0.40.2 (#3186) 2023-07-18 11:39:43 -03:00
dependabot[bot]
234c58ccd1
Bump bitsandbytes from 0.40.1.post1 to 0.40.2 (#3178) 2023-07-17 21:24:51 -03:00
oobabooga
49a5389bd3
Bump accelerate from 0.20.3 to 0.21.0 2023-07-17 21:23:59 -03:00
dependabot[bot]
02a5fe6aa2
Bump accelerate from 0.20.3 to 0.21.0
Bumps [accelerate](https://github.com/huggingface/accelerate) from 0.20.3 to 0.21.0.
- [Release notes](https://github.com/huggingface/accelerate/releases)
- [Commits](https://github.com/huggingface/accelerate/compare/v0.20.3...v0.21.0)

---
updated-dependencies:
- dependency-name: accelerate
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-07-17 20:18:31 +00:00
oobabooga
4ce766414b Bump AutoGPTQ version 2023-07-17 10:02:12 -07:00
ofirkris
780a2f2e16
Bump llama cpp version (#3160)
Bump llama cpp version to support better 8K RoPE scaling

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-07-16 01:54:56 -03:00
jllllll
ed3ffd212d
Bump bitsandbytes to 0.40.1.post1 (#3156)
817bdf6325...6ec4f0c374
2023-07-16 01:53:32 -03:00
jllllll
32f12b8bbf
Bump bitsandbytes Windows wheel to 0.40.0.post4 (#3135) 2023-07-13 17:32:37 -03:00
kabachuha
3f19e94c93
Add Tensorboard/Weights and biases integration for training (#2624) 2023-07-12 11:53:31 -03:00
oobabooga
a12dae51b9 Bump bitsandbytes 2023-07-11 18:29:08 -07:00
jllllll
fdd596f98f
Bump bitsandbytes Windows wheel (#3097) 2023-07-11 18:41:24 -03:00
ofirkris
a81cdd1367
Bump cpp llama version (#3081)
Bump cpp llama version to 0.1.70
2023-07-10 19:36:15 -03:00
jllllll
f8dbd7519b
Bump exllama module version (#3087)
d769533b6f...e61d4d31d4
2023-07-10 19:35:59 -03:00
ofirkris
161d984e80
Bump llama-cpp-python version (#3072)
Bump llama-cpp-python version to 0.1.69
2023-07-09 17:22:24 -03:00
oobabooga
79679b3cfd Pin fastapi version (for #3042) 2023-07-07 16:40:57 -07:00
ofirkris
b67c362735
Bump llama-cpp-python (#3011)
Bump llama-cpp-python to V0.1.68
2023-07-05 11:33:28 -03:00
jllllll
1610d5ffb2
Bump exllama module to 0.0.5 (#2993) 2023-07-04 00:15:55 -03:00
oobabooga
c6cae106e7 Bump llama-cpp-python 2023-06-28 18:14:45 -03:00
jllllll
7b048dcf67
Bump exllama module version to 0.0.4 (#2915) 2023-06-28 18:09:58 -03:00
jllllll
bef67af23c
Use pre-compiled python module for ExLlama (#2770) 2023-06-24 20:24:17 -03:00
jllllll
a06acd6d09
Update bitsandbytes to 0.39.1 (#2799) 2023-06-21 15:04:45 -03:00
oobabooga
c623e142ac Bump llama-cpp-python 2023-06-20 00:49:38 -03:00
oobabooga
490a1795f0 Bump peft commit 2023-06-18 16:42:11 -03:00
dependabot[bot]
909d8c6ae3
Bump transformers from 4.30.0 to 4.30.2 (#2695) 2023-06-14 19:56:28 -03:00
oobabooga
ea0eabd266 Bump llama-cpp-python version 2023-06-10 21:59:29 -03:00
oobabooga
0f8140e99d Bump transformers/accelerate/peft/autogptq 2023-06-09 00:25:13 -03:00
oobabooga
5d515eeb8c Bump llama-cpp-python wheel 2023-06-06 13:01:15 -03:00
dependabot[bot]
97f3fa843f
Bump llama-cpp-python from 0.1.56 to 0.1.57 (#2537) 2023-06-05 23:45:58 -03:00
oobabooga
4e9937aa99 Bump gradio 2023-06-05 17:29:21 -03:00
jllllll
5216117a63
Fix MacOS incompatibility in requirements.txt (#2485) 2023-06-02 01:46:16 -03:00
oobabooga
b4ad060c1f Use cuda 11.7 instead of 11.8 2023-06-02 01:04:44 -03:00
oobabooga
d0aca83b53 Add AutoGPTQ wheels to requirements.txt 2023-06-02 00:47:11 -03:00
oobabooga
2cdf525d3b Bump llama-cpp-python version 2023-05-31 23:29:02 -03:00
Honkware
204731952a
Falcon support (trust-remote-code and autogptq checkboxes) (#2367)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-29 10:20:18 -03:00
jllllll
78dbec4c4e
Add 'scipy' to requirements.txt #2335 (#2343)
Unlisted dependency of bitsandbytes
2023-05-25 23:26:25 -03:00
oobabooga
548f05e106 Add windows bitsandbytes wheel by jllllll 2023-05-25 10:48:22 -03:00
oobabooga
361451ba60
Add --load-in-4bit parameter (#2320) 2023-05-25 01:14:13 -03:00
eiery
9967e08b1f
update llama-cpp-python to v0.1.53 for ggml v3, fixes #2245 (#2264) 2023-05-24 10:25:28 -03:00
oobabooga
1490c0af68 Remove RWKV from requirements.txt 2023-05-23 20:49:20 -03:00
dependabot[bot]
baf75356d4
Bump transformers from 4.29.1 to 4.29.2 (#2268) 2023-05-22 02:50:18 -03:00
jllllll
2aa01e2303
Fix broken version of peft (#2229) 2023-05-20 17:54:51 -03:00
oobabooga
511470a89b Bump llama-cpp-python version 2023-05-19 12:13:25 -03:00
oobabooga
259020a0be Bump gradio to 3.31.0
This fixes Google Colab lagging.
2023-05-16 22:21:15 -03:00
dependabot[bot]
ae54d83455
Bump transformers from 4.28.1 to 4.29.1 (#2089) 2023-05-15 19:25:24 -03:00
feeelX
eee986348c
Update llama-cpp-python from 0.1.45 to 0.1.50 (#2058) 2023-05-14 22:41:14 -03:00
dependabot[bot]
a5bb278631
Bump accelerate from 0.18.0 to 0.19.0 (#1925) 2023-05-09 02:17:27 -03:00
oobabooga
b040b4110d Bump llama-cpp-python version 2023-05-08 00:21:17 -03:00
oobabooga
81be7c2dd4 Specify gradio_client version 2023-05-06 21:50:04 -03:00
oobabooga
60be76f0fc Revert gradio bump (gallery is broken) 2023-05-03 11:53:30 -03:00
oobabooga
d016c38640 Bump gradio version 2023-05-02 19:19:33 -03:00
dependabot[bot]
280c2f285f
Bump safetensors from 0.3.0 to 0.3.1 (#1720) 2023-05-02 00:42:39 -03:00
oobabooga
56b13d5d48 Bump llama-cpp-python version 2023-05-02 00:41:54 -03:00
oobabooga
2f6e2ddeac Bump llama-cpp-python version 2023-04-24 03:42:03 -03:00
oobabooga
c4f4f41389
Add an "Evaluate" tab to calculate the perplexities of models (#1322) 2023-04-21 00:20:33 -03:00
oobabooga
39099663a0
Add 4-bit LoRA support (#1200) 2023-04-16 23:26:52 -03:00
dependabot[bot]
4cd2a9d824
Bump transformers from 4.28.0 to 4.28.1 (#1288) 2023-04-16 21:12:57 -03:00
oobabooga
d2ea925fa5 Bump llama-cpp-python to use LlamaCache 2023-04-16 00:53:40 -03:00
catalpaaa
94700cc7a5
Bump gradio to 3.25 (#1089) 2023-04-14 23:45:25 -03:00
Alex "mcmonkey" Goodwin
64e3b44e0f
initial multi-lora support (#1103)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-14 14:52:06 -03:00
dependabot[bot]
852a5aa13d
Bump bitsandbytes from 0.37.2 to 0.38.1 (#1158) 2023-04-13 21:23:14 -03:00
dependabot[bot]
84576a80d2
Bump llama-cpp-python from 0.1.30 to 0.1.33 (#1157) 2023-04-13 21:17:59 -03:00
oobabooga
2908a51587 Settle for transformers 4.28.0 2023-04-13 21:07:00 -03:00
oobabooga
32d078487e Add llama-cpp-python to requirements.txt 2023-04-10 10:45:51 -03:00
oobabooga
d272ac46dd Add Pillow as a requirement 2023-04-08 18:48:46 -03:00
oobabooga
58ed87e5d9
Update requirements.txt 2023-04-06 18:42:54 -03:00
dependabot[bot]
21be80242e
Bump rwkv from 0.7.2 to 0.7.3 (#842) 2023-04-06 17:52:27 -03:00
oobabooga
113f94b61e Bump transformers (16-bit llama must be reconverted/redownloaded) 2023-04-06 16:04:03 -03:00
oobabooga
59058576b5 Remove unused requirement 2023-04-06 13:28:21 -03:00
oobabooga
03cb44fc8c Add new llama.cpp library (2048 context, temperature, etc now work) 2023-04-06 13:12:14 -03:00
oobabooga
b2ce7282a1
Use past transformers version #773 2023-04-04 16:11:42 -03:00
dependabot[bot]
ad37f396fc
Bump rwkv from 0.7.1 to 0.7.2 (#747) 2023-04-03 14:29:57 -03:00
dependabot[bot]
18f756ada6
Bump gradio from 3.24.0 to 3.24.1 (#746) 2023-04-03 14:29:37 -03:00
TheTerrasque
2157bb4319
New yaml character format (#337 from TheTerrasque/feature/yaml-characters)
This doesn't break backward compatibility with JSON characters.
2023-04-02 20:34:25 -03:00
oobabooga
a5c9b7d977 Bump llamacpp version 2023-03-31 15:08:01 -03:00
oobabooga
4d98623041
Merge branch 'main' into feature/llamacpp 2023-03-31 14:37:04 -03:00
oobabooga
9d1dcf880a General improvements 2023-03-31 14:27:01 -03:00
oobabooga
f27a66b014 Bump gradio version (make sure to update)
This fixes the textbox shrinking vertically once it reaches
a certain number of lines.
2023-03-31 00:42:26 -03:00
Thomas Antony
8953a262cb Add llamacpp to requirements.txt 2023-03-30 11:22:38 +01:00
Alex "mcmonkey" Goodwin
b0f05046b3 remove duplicate import 2023-03-27 22:50:37 -07:00
Alex "mcmonkey" Goodwin
31f04dc615 Merge branch 'main' into add-train-lora-tab 2023-03-27 20:03:30 -07:00
dependabot[bot]
1e02f75f2b
Bump accelerate from 0.17.1 to 0.18.0
Bumps [accelerate](https://github.com/huggingface/accelerate) from 0.17.1 to 0.18.0.
- [Release notes](https://github.com/huggingface/accelerate/releases)
- [Commits](https://github.com/huggingface/accelerate/compare/v0.17.1...v0.18.0)

---
updated-dependencies:
- dependency-name: accelerate
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-28 01:19:34 +00:00
oobabooga
37f11803e3
Merge pull request #603 from oobabooga/dependabot/pip/rwkv-0.7.1
Bump rwkv from 0.7.0 to 0.7.1
2023-03-27 22:19:08 -03:00
dependabot[bot]
e9c0226b09
Bump rwkv from 0.7.0 to 0.7.1
Bumps [rwkv](https://github.com/BlinkDL/ChatRWKV) from 0.7.0 to 0.7.1.
- [Release notes](https://github.com/BlinkDL/ChatRWKV/releases)
- [Commits](https://github.com/BlinkDL/ChatRWKV/commits)

---
updated-dependencies:
- dependency-name: rwkv
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-27 21:05:35 +00:00
dependabot[bot]
9c96919121
Bump bitsandbytes from 0.37.1 to 0.37.2
Bumps [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) from 0.37.1 to 0.37.2.
- [Release notes](https://github.com/TimDettmers/bitsandbytes/releases)
- [Changelog](https://github.com/TimDettmers/bitsandbytes/blob/main/CHANGELOG.md)
- [Commits](https://github.com/TimDettmers/bitsandbytes/commits)

---
updated-dependencies:
- dependency-name: bitsandbytes
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-27 21:05:19 +00:00
Alex "mcmonkey" Goodwin
e439228ed8 Merge branch 'main' into add-train-lora-tab 2023-03-27 08:21:19 -07:00
oobabooga
9ff6a538b6 Bump gradio version
Make sure to upgrade with

`pip install -r requirements.txt --upgrade`
2023-03-26 22:11:19 -03:00
Alex "mcmonkey" Goodwin
566898a79a initial lora training tab 2023-03-25 12:08:26 -07:00
oobabooga
7073e96093 Add back RWKV dependency #98 2023-03-19 12:05:28 -03:00
oobabooga
86b99006d9
Remove rwkv dependency 2023-03-18 10:27:52 -03:00
oobabooga
104293f411 Add LoRA support 2023-03-16 21:31:39 -03:00
oobabooga
23a5e886e1 The LLaMA PR has been merged into transformers
https://github.com/huggingface/transformers/pull/21955

The tokenizer class has been changed from "LLaMATokenizer" to "LlamaTokenizer".

You will need to apply this rename in every tokenizer_config.json
that you have for LLaMA so far.
2023-03-16 11:18:32 -03:00
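The rename described in the commit above can be applied mechanically. A minimal sketch, assuming a standard tokenizer_config.json with a "tokenizer_class" field (the helper name is illustrative, not part of the project):

```python
import json
from pathlib import Path

def fix_tokenizer_class(config_path):
    """Rewrite the pre-merge class name "LLaMATokenizer" to
    "LlamaTokenizer", the name used by transformers after the
    LLaMA PR was merged. Returns the resulting class name."""
    path = Path(config_path)
    cfg = json.loads(path.read_text())
    if cfg.get("tokenizer_class") == "LLaMATokenizer":
        cfg["tokenizer_class"] = "LlamaTokenizer"
        path.write_text(json.dumps(cfg, indent=2))
    return cfg["tokenizer_class"]
```

Run it once against each LLaMA model directory's tokenizer_config.json; configs that already use the new name are left untouched.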
oobabooga
29b7c5ac0c Sort the requirements 2023-03-15 12:40:03 -03:00
oobabooga
693b53d957 Merge branch 'main' into HideLord-main 2023-03-15 12:08:56 -03:00
dependabot[bot]
02d407542c
Bump accelerate from 0.17.0 to 0.17.1
Bumps [accelerate](https://github.com/huggingface/accelerate) from 0.17.0 to 0.17.1.
- [Release notes](https://github.com/huggingface/accelerate/releases)
- [Commits](https://github.com/huggingface/accelerate/compare/v0.17.0...v0.17.1)

---
updated-dependencies:
- dependency-name: accelerate
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-14 01:40:42 +00:00
oobabooga
d685332c10
Merge pull request #307 from oobabooga/dependabot/pip/bitsandbytes-0.37.1
Bump bitsandbytes from 0.37.0 to 0.37.1
2023-03-13 22:39:59 -03:00
dependabot[bot]
df83088593
Bump bitsandbytes from 0.37.0 to 0.37.1
Bumps [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) from 0.37.0 to 0.37.1.
- [Release notes](https://github.com/TimDettmers/bitsandbytes/releases)
- [Changelog](https://github.com/TimDettmers/bitsandbytes/blob/main/CHANGELOG.md)
- [Commits](https://github.com/TimDettmers/bitsandbytes/commits)

---
updated-dependencies:
- dependency-name: bitsandbytes
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-14 01:36:18 +00:00
dependabot[bot]
715c3ecba6
Bump rwkv from 0.3.1 to 0.4.2
Bumps [rwkv](https://github.com/BlinkDL/ChatRWKV) from 0.3.1 to 0.4.2.
- [Release notes](https://github.com/BlinkDL/ChatRWKV/releases)
- [Commits](https://github.com/BlinkDL/ChatRWKV/commits)

---
updated-dependencies:
- dependency-name: rwkv
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-14 01:36:02 +00:00
Alexander Hristov Hristov
63c5a139a2
Merge branch 'main' into main 2023-03-13 19:50:08 +02:00
Luis Cosio
435a69e357
Fix for issue #282
RuntimeError: Tensors must have same number of dimensions: got 3 and 4
2023-03-13 11:41:35 -06:00
HideLord
683556f411 Adding markdown support and slight refactoring. 2023-03-12 21:34:09 +02:00
oobabooga
441e993c51 Bump accelerate, RWKV and safetensors 2023-03-12 14:25:14 -03:00
oobabooga
3c25557ef0 Add tqdm to requirements.txt 2023-03-12 08:48:16 -03:00
oobabooga
501afbc234 Add requests to requirements.txt 2023-03-11 14:47:30 -03:00
oobabooga
fd540b8930 Use new LLaMA implementation (this will break stuff. I am sorry)
https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model
2023-03-09 17:59:15 -03:00
oobabooga
8660227e1b Add top_k to RWKV 2023-03-07 17:24:28 -03:00
oobabooga
153dfeb4dd Add --rwkv-cuda-on parameter, bump rwkv version 2023-03-06 20:12:54 -03:00
oobabooga
145c725c39 Bump RWKV version 2023-03-05 16:28:21 -03:00
oobabooga
5492e2e9f8 Add sentencepiece 2023-03-05 10:02:24 -03:00
oobabooga
c33715ad5b Move towards HF LLaMA implementation 2023-03-05 01:20:31 -03:00
oobabooga
bcea196c9d Bump flexgen version 2023-03-02 12:03:57 -03:00
oobabooga
7a9b4407b0 Settle for 0.0.6 for now 2023-03-01 17:37:14 -03:00