oobabooga
188d20e9e5
Reduce the evaluation table height
2023-10-16 10:53:42 -07:00
oobabooga
2d44adbb76
Clear the torch cache while evaluating
2023-10-16 10:52:50 -07:00
oobabooga
388d1864a6
Merge remote-tracking branch 'refs/remotes/origin/main'
2023-10-15 21:58:16 -07:00
oobabooga
71cac7a1b2
Increase the height of the evaluation table
2023-10-15 21:56:40 -07:00
oobabooga
e14bde4946
Minor improvements to evaluation logs
2023-10-15 20:51:43 -07:00
oobabooga
b88b2b74a6
Experimental Intel Arc transformers support (untested)
2023-10-15 20:51:11 -07:00
Sam
d331501ebc
Fix for using Torch with CUDA 11.8 (#4298)
2023-10-15 19:27:19 -03:00
oobabooga
3bb4046fad
Update auto-release.yml
2023-10-15 17:27:16 -03:00
oobabooga
45fa803943
Create auto-release.yml
2023-10-15 17:25:29 -03:00
Johan
2706394bfe
Relax numpy version requirements (#4291)
2023-10-15 12:05:06 -03:00
Forkoz
8cce1f1126
ExLlamaV2 LoRA support (#4229)
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-10-14 16:12:41 -03:00
jllllll
1f5a2c5597
Use PyTorch 2.1 ExLlama wheels (#4285)
2023-10-14 15:27:59 -03:00
oobabooga
cd1cad1b47
Bump exllamav2
2023-10-14 11:23:07 -07:00
Eve
6e2dec82f1
Add ChatML support + Mistral-OpenOrca (#4275)
2023-10-13 11:49:17 -03:00
Jesus Alvarez
ed66ca3cdf
Add HTTPS support to APIs (openai and default) (#4270)
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-10-13 01:31:13 -03:00
oobabooga
43be1be598
Manually install CUDA runtime libraries
2023-10-12 21:02:44 -07:00
oobabooga
faf5c4dd58
Fix code blocks in instruct mode
2023-10-11 12:18:46 -07:00
oobabooga
773c17faec
Fix a warning
2023-10-10 20:53:38 -07:00
oobabooga
f63361568c
Fix safetensors kwarg usage in AutoAWQ
2023-10-10 19:03:09 -07:00
oobabooga
39f16ff83d
Fix default/notebook tabs CSS
2023-10-10 18:45:12 -07:00
oobabooga
fae8062d39
Bump to latest gradio (3.47) (#4258)
2023-10-10 22:20:49 -03:00
Haotian Liu
2b75d725e6
Initial support for LLaVA-LLaMA-2 (#3377)
2023-10-10 18:40:52 -03:00
oobabooga
9fab9a1ca6
Minor fix
2023-10-10 14:08:11 -07:00
oobabooga
a49cc69a4a
Ignore rope_freq_base if value is 10000
2023-10-10 13:57:40 -07:00
oobabooga
3a9d90c3a1
Download models with 4 threads by default
2023-10-10 13:52:10 -07:00
dependabot[bot]
520cbb2ab1
Bump safetensors from 0.3.2 to 0.4.0 (#4249)
2023-10-10 17:41:09 -03:00
Forkoz
35695e18c7
Remove import (#4247)
For real this time.
2023-10-09 18:06:11 -03:00
Forkoz
2e471071af
Update llama_attn_hijack.py (#4231)
2023-10-08 15:16:48 -03:00
oobabooga
2e8b5f7c80
Update ROCm command
2023-10-08 10:12:13 -03:00
oobabooga
00187d641a
Note about PyTorch 2.1 breaking change
2023-10-08 10:10:38 -03:00
oobabooga
1c6e57dd68
Note about PyTorch 2.1 breaking change
2023-10-08 10:09:22 -03:00
oobabooga
cf4d89ee65
Lint the javascript code
2023-10-07 19:07:57 -07:00
James Braza
8614c9d085
README for superboogav2 (#4212)
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-10-07 19:50:39 -03:00
Brian Dashore
98fa73a974
Text Generation: stop if EOS token is reached (#4213)
2023-10-07 19:46:42 -03:00
Brian Dashore
7743b5e9de
Llamacpp_HF: Fix CFG cache init (#4219)
Documentation says that model.context_params should be passed when
a new context is created. The current code uses model.params, which
doesn't exist.
Signed-off-by: kingbri <bdashore3@proton.me>
2023-10-07 19:38:29 -03:00
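As context for the fix above, a minimal sketch of what the PR describes, assuming llama-cpp-python's low-level bindings from that era (the llama_new_context_with_model call and the Llama wrapper's model and context_params attributes); exact names may differ across library versions.

    # Hedged sketch of the CFG-cache fix described in #4219 (assumed
    # llama-cpp-python low-level API; attribute names vary by version).
    import llama_cpp
    from llama_cpp import Llama

    model = Llama(model_path="model.gguf")  # hypothetical model file

    # Broken: the Llama wrapper has no `params` attribute.
    # cfg_ctx = llama_cpp.llama_new_context_with_model(model.model, model.params)

    # Per the PR description: create the second (CFG) context from the same
    # context_params the model's own context was built with, so both
    # contexts share identical settings.
    cfg_ctx = llama_cpp.llama_new_context_with_model(model.model, model.context_params)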
oobabooga
2a7cb346dd
Update the whisper_stt requirements
2023-10-06 21:01:26 -07:00
jllllll
0eda9a0549
Use GPTQ wheels compatible with PyTorch 2.1 (#4210)
2023-10-07 00:35:41 -03:00
oobabooga
d33facc9fe
Bump to PyTorch with CUDA 11.8 (#4209)
2023-10-07 00:23:49 -03:00
AG-w
06fff3b2e9
Fix Python wheels for AVX requirements (#4189)
2023-10-06 15:42:44 -03:00
Casper
0aa853f575
Bump AutoAWQ to v0.1.4 (#4203)
2023-10-06 15:30:01 -03:00
oobabooga
7d3201923b
Bump AutoAWQ
2023-10-05 15:14:15 -07:00
turboderp
8a98646a21
Bump ExLlamaV2 to 0.0.5 (#4186)
2023-10-05 19:12:22 -03:00
oobabooga
7ffb424c7b
Add AutoAWQ to README
2023-10-05 09:22:37 -07:00
cal066
cc632c3f33
AutoAWQ: initial support (#3999)
2023-10-05 13:19:18 -03:00
oobabooga
3f56151f03
Bump to transformers 4.34
2023-10-05 08:55:14 -07:00
tdrussell
cb26163a20
Fix off-by-one error in exllama_hf caching logic (#4145)
2023-10-05 12:20:56 -03:00
Gennadij
b04c08378d
Add CMD_FLAGS.txt to .gitignore (#4181)
2023-10-05 10:02:38 -03:00
oobabooga
ae4ba3007f
Add grammar to transformers and _HF loaders (#4091)
2023-10-05 10:01:36 -03:00
oobabooga
0197fdddf1
Merge pull request #4142 from jllllll/llamacpp-0.2.11
Bump llama-cpp-python to 0.2.11
2023-10-02 01:31:14 -03:00
oobabooga
b6fe6acf88
Add threads_batch parameter
2023-10-01 21:28:00 -07:00