Author | Commit | Message | Date
------ | ------ | ------- | ----
dependabot[bot] | 520cbb2ab1 | Bump safetensors from 0.3.2 to 0.4.0 (#4249) | 2023-10-10 17:41:09 -03:00
jllllll | 0eda9a0549 | Use GPTQ wheels compatible with Pytorch 2.1 (#4210) | 2023-10-07 00:35:41 -03:00
oobabooga | d33facc9fe | Bump to pytorch 11.8 (#4209) | 2023-10-07 00:23:49 -03:00
Casper | 0aa853f575 | Bump AutoAWQ to v0.1.4 (#4203) | 2023-10-06 15:30:01 -03:00
oobabooga | 7d3201923b | Bump AutoAWQ | 2023-10-05 15:14:15 -07:00
turboderp | 8a98646a21 | Bump ExLlamaV2 to 0.0.5 (#4186) | 2023-10-05 19:12:22 -03:00
cal066 | cc632c3f33 | AutoAWQ: initial support (#3999) | 2023-10-05 13:19:18 -03:00
oobabooga | 3f56151f03 | Bump to transformers 4.34 | 2023-10-05 08:55:14 -07:00
oobabooga | ae4ba3007f | Add grammar to transformers and _HF loaders (#4091) | 2023-10-05 10:01:36 -03:00
jllllll | 41a2de96e5 | Bump llama-cpp-python to 0.2.11 | 2023-10-01 18:08:10 -05:00
oobabooga | 92a39c619b | Add Mistral support | 2023-09-28 15:41:03 -07:00
oobabooga | f46ba12b42 | Add flash-attn wheels for Linux | 2023-09-28 14:45:52 -07:00
jllllll | 2bd23c29cb | Bump llama-cpp-python to 0.2.7 (#4110) | 2023-09-27 23:45:36 -03:00
jllllll | 13a54729b1 | Bump exllamav2 to 0.0.4 and use pre-built wheels (#4095) | 2023-09-26 21:36:14 -03:00
oobabooga | 2e7b6b0014 | Create alternative requirements.txt with AMD and Metal wheels (#4052) | 2023-09-24 09:58:29 -03:00
oobabooga | 05c4a4f83c | Bump exllamav2 | 2023-09-21 14:56:01 -07:00
jllllll | b7c55665c1 | Bump llama-cpp-python to 0.2.6 (#3982) | 2023-09-18 14:08:37 -03:00
dependabot[bot] | 661bfaac8e | Update accelerate from ==0.22.* to ==0.23.* (#3981) | 2023-09-17 22:42:12 -03:00
Thireus ☠ | 45335fa8f4 | Bump ExLlamav2 to v0.0.2 (#3970) | 2023-09-17 19:24:40 -03:00
dependabot[bot] | eb9ebabec7 | Bump exllamav2 from 0.0.0 to 0.0.1 (#3896) | 2023-09-13 02:13:51 -03:00
cal066 | a4e4e887d7 | Bump ctransformers to 0.2.27 (#3893) | 2023-09-13 00:37:31 -03:00
jllllll | 1a5d68015a | Bump llama-cpp-python to 0.1.85 (#3887) | 2023-09-12 19:41:41 -03:00
oobabooga | 833bc59f1b | Remove ninja from requirements.txt (it's installed with exllamav2 automatically) | 2023-09-12 15:12:56 -07:00
dependabot[bot] | 0efbe5ef76 | Bump optimum from 1.12.0 to 1.13.1 (#3872) | 2023-09-12 15:53:21 -03:00
oobabooga | c2a309f56e | Add ExLlamaV2 and ExLlamav2_HF loaders (#3881) | 2023-09-12 14:33:07 -03:00
oobabooga | ed86878f02 | Remove GGML support | 2023-09-11 07:44:00 -07:00
jllllll | 859b4fd737 | Bump exllama to 0.1.17 (#3847) | 2023-09-11 01:13:14 -03:00
dependabot[bot] | 1d6b384828 | Update transformers requirement from ==4.32.* to ==4.33.* (#3865) | 2023-09-11 01:12:22 -03:00
jllllll | e8f234ca8f | Bump llama-cpp-python to 0.1.84 (#3854) | 2023-09-11 01:11:33 -03:00
oobabooga | 66d5caba1b | Pin pydantic version (closes #3850) | 2023-09-10 21:09:04 -07:00
oobabooga | 0576691538 | Add optimum to requirements (for GPTQ LoRA training; see https://github.com/oobabooga/text-generation-webui/issues/3655) | 2023-08-31 08:45:38 -07:00
jllllll | 9626f57721 | Bump exllama to 0.0.14 (#3758) | 2023-08-30 13:43:38 -03:00
jllllll | dac5f4b912 | Bump llama-cpp-python to 0.1.83 (#3745) | 2023-08-29 22:35:59 -03:00
VishwasKukreti | a9a1784420 | Update accelerate to 0.22 in requirements.txt (#3725) | 2023-08-29 17:47:37 -03:00
jllllll | fe1f7c6513 | Bump ctransformers to 0.2.25 (#3740) | 2023-08-29 17:24:36 -03:00
jllllll | 22b2a30ec7 | Bump llama-cpp-python to 0.1.82 (#3730) | 2023-08-28 18:02:24 -03:00
jllllll | 7d3a0b5387 | Bump llama-cpp-python to 0.1.81 (#3716) | 2023-08-27 22:38:41 -03:00
oobabooga | 7f5370a272 | Minor fixes/cosmetics | 2023-08-26 22:11:07 -07:00
jllllll | 4a999e3bcd | Use separate llama-cpp-python packages for GGML support | 2023-08-26 10:40:08 -05:00
oobabooga | 6e6431e73f | Update requirements.txt | 2023-08-26 01:07:28 -07:00
cal066 | 960980247f | ctransformers: gguf support (#3685) | 2023-08-25 11:33:04 -03:00
oobabooga | 26c5e5e878 | Bump autogptq | 2023-08-24 19:23:08 -07:00
oobabooga | 2b675533f7 | Un-bump safetensors (the newest one doesn't work on Windows yet) | 2023-08-23 14:36:03 -07:00
oobabooga | 335c49cc7e | Bump peft and transformers | 2023-08-22 13:14:59 -07:00
tkbit | df165fe6c4 | Use numpy==1.24 in requirements.txt (#3651) (the whisper extension needs numpy 1.24 to work properly) | 2023-08-22 16:55:17 -03:00
cal066 | e042bf8624 | ctransformers: add mlock and no-mmap options (#3649) | 2023-08-22 16:51:34 -03:00
oobabooga | b96fd22a81 | Refactor the training tab (#3619) | 2023-08-18 16:58:38 -03:00
jllllll | 1a71ab58a9 | Bump llama_cpp_python_cuda to 0.1.78 (#3614) | 2023-08-18 12:04:01 -03:00
oobabooga | 6170b5ba31 | Bump llama-cpp-python | 2023-08-17 21:41:02 -07:00
oobabooga | ccfc02a28d | Add the --disable_exllama option for AutoGPTQ (#3545 from clefever/disable-exllama) | 2023-08-14 15:15:55 -03:00
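Most of the commits above adjust version pins in requirements.txt, using either exact pins (`numpy==1.24`) or wildcard pins that allow patch releases (`accelerate==0.23.*`). A minimal sketch of what such a pinned requirements file looks like — the lines below are illustrative, combining versions mentioned in the log, not the repository's actual requirements.txt at any one commit:

```
accelerate==0.23.*
numpy==1.24
optimum==1.13.1
safetensors==0.4.0
transformers==4.34.*
```

With the pip requirement-specifier syntax, `==4.34.*` matches any 4.34.x release, while `==1.24` matches only that exact version; a "bump" commit is typically a one-line change to one of these specifiers.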