Commit Graph

3066 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| FartyPants (FP HAM) | 6a61158adf | Training PRO a month worth of updates (#4345) | 2023-10-22 12:38:09 -03:00 |
| mongolu | c18504f369 | USE_CUDA118 from ENV remains null one_click.py + cuda-toolkit (#4352) | 2023-10-22 12:37:24 -03:00 |
| oobabooga | cd45635f53 | tqdm improvement for colab | 2023-10-21 22:00:29 -07:00 |
| oobabooga | ae79c510cc | Merge remote-tracking branch 'refs/remotes/origin/main' | 2023-10-21 21:46:15 -07:00 |
| oobabooga | 2d1b3332e4 | Ignore warnings on Colab | 2023-10-21 21:45:25 -07:00 |
| oobabooga | caf6db07ad | Update README.md | 2023-10-22 01:22:17 -03:00 |
| oobabooga | 1a34927314 | Make API URLs more visible | 2023-10-21 21:11:07 -07:00 |
| oobabooga | 09f807af83 | Use ExLlama_HF for GPTQ models by default | 2023-10-21 20:45:38 -07:00 |
| oobabooga | 619093483e | Add Colab notebook | 2023-10-21 20:27:52 -07:00 |
| oobabooga | 506d05aede | Organize command-line arguments | 2023-10-21 18:52:59 -07:00 |
| oobabooga | b1f33b55fd | Update 01 ‐ Chat Tab.md | 2023-10-21 20:17:56 -03:00 |
| oobabooga | ac6d5d50b7 | Update README.md | 2023-10-21 20:03:43 -03:00 |
| oobabooga | 6efb990b60 | Add a proper documentation (#3885) | 2023-10-21 19:15:54 -03:00 |
| Adam White | 5a5bc135e9 | Docker: Remove explicit CUDA 11.8 Reference (#4343) | 2023-10-21 15:09:34 -03:00 |
| oobabooga | b98fbe0afc | Add download link | 2023-10-20 23:58:05 -07:00 |
| oobabooga | fbac6d21ca | Add missing exception | 2023-10-20 23:53:24 -07:00 |
| Brian Dashore | 3345da2ea4 | Add flash-attention 2 for windows (#4235) | 2023-10-21 03:46:23 -03:00 |
| oobabooga | 258d046218 | More robust way of initializing empty .git folder | 2023-10-20 23:13:09 -07:00 |
| Johan | 1d5a015ce7 | Enable special token support for exllamav2 (#4314) | 2023-10-21 01:54:06 -03:00 |
| mjbogusz | 8f6405d2fa | Python 3.11, 3.9, 3.8 support (#4233) (Co-authored-by: oobabooga) | 2023-10-20 21:13:33 -03:00 |
| oobabooga | 9be74fb57c | Change 2 margins | 2023-10-20 14:04:14 -07:00 |
| oobabooga | e208128d68 | Lint the CSS files | 2023-10-20 13:02:18 -07:00 |
| oobabooga | dedbdb46c2 | Chat CSS improvements | 2023-10-20 12:49:36 -07:00 |
| Haotian Liu | 32984ea2f0 | Support LLaVA v1.5 (#4305) | 2023-10-20 02:28:14 -03:00 |
| oobabooga | bb71272903 | Detect WizardCoder-Python-34B & Phind-CodeLlama-34B | 2023-10-19 14:35:56 -07:00 |
| oobabooga | eda7126b25 | Organize the .gitignore | 2023-10-19 14:33:44 -07:00 |
| turboderp | ae8cd449ae | ExLlamav2_HF: Convert logits to FP32 (#4310) | 2023-10-18 23:16:05 -03:00 |
| missionfloyd | c0ffb77fd8 | More silero languages (#3950) | 2023-10-16 17:12:32 -03:00 |
| hronoas | db7ecdd274 | openai: fix empty models list on query present in url (#4139) | 2023-10-16 17:02:47 -03:00 |
| oobabooga | f17f7a6913 | Increase the evaluation table height | 2023-10-16 12:55:35 -07:00 |
| oobabooga | 8ea554bc19 | Check for torch.xpu.is_available() | 2023-10-16 12:53:40 -07:00 |
| oobabooga | 188d20e9e5 | Reduce the evaluation table height | 2023-10-16 10:53:42 -07:00 |
| oobabooga | 2d44adbb76 | Clear the torch cache while evaluating | 2023-10-16 10:52:50 -07:00 |
| oobabooga | 388d1864a6 | Merge remote-tracking branch 'refs/remotes/origin/main' | 2023-10-15 21:58:16 -07:00 |
| oobabooga | 71cac7a1b2 | Increase the height of the evaluation table | 2023-10-15 21:56:40 -07:00 |
| oobabooga | e14bde4946 | Minor improvements to evaluation logs | 2023-10-15 20:51:43 -07:00 |
| oobabooga | b88b2b74a6 | Experimental Intel Arc transformers support (untested) | 2023-10-15 20:51:11 -07:00 |
| Sam | d331501ebc | Fix for using Torch with CUDA 11.8 (#4298) | 2023-10-15 19:27:19 -03:00 |
| oobabooga | 3bb4046fad | Update auto-release.yml | 2023-10-15 17:27:16 -03:00 |
| oobabooga | 45fa803943 | Create auto-release.yml | 2023-10-15 17:25:29 -03:00 |
| Johan | 2706394bfe | Relax numpy version requirements (#4291) | 2023-10-15 12:05:06 -03:00 |
| Forkoz | 8cce1f1126 | Exllamav2 lora support (#4229) (Co-authored-by: oobabooga) | 2023-10-14 16:12:41 -03:00 |
| jllllll | 1f5a2c5597 | Use Pytorch 2.1 exllama wheels (#4285) | 2023-10-14 15:27:59 -03:00 |
| oobabooga | cd1cad1b47 | Bump exllamav2 | 2023-10-14 11:23:07 -07:00 |
| Eve | 6e2dec82f1 | add chatml support + mistral-openorca (#4275) | 2023-10-13 11:49:17 -03:00 |
| Jesus Alvarez | ed66ca3cdf | Add HTTPS support to APIs (openai and default) (#4270) (Co-authored-by: oobabooga) | 2023-10-13 01:31:13 -03:00 |
| oobabooga | 43be1be598 | Manually install CUDA runtime libraries | 2023-10-12 21:02:44 -07:00 |
| oobabooga | faf5c4dd58 | Fix code blocks in instruct mode | 2023-10-11 12:18:46 -07:00 |
| oobabooga | 773c17faec | Fix a warning | 2023-10-10 20:53:38 -07:00 |
| oobabooga | f63361568c | Fix safetensors kwarg usage in AutoAWQ | 2023-10-10 19:03:09 -07:00 |