Commit Graph

57 Commits

Author SHA1 Message Date
oobabooga
89f6036e98 Bump llama-cpp-python, remove python 3.8/3.9, cuda 11.7 (#5397) 2024-01-30 13:19:20 -03:00
oobabooga
d921f80322 one-click: minor fix after 5e87678fea 2024-01-28 06:14:15 -08:00
Evgenii
26c3ab367e one-click: use f-strings to improve readability and unify with the rest of the code (#5068) 2024-01-27 17:31:22 -03:00
Andrew C. Dvorak
5e87678fea Support running as a git submodule. (#5227) 2024-01-27 17:18:50 -03:00
oobabooga
c4c7fc4ab3 Lint 2024-01-07 09:36:56 -08:00
Yilong Guo
d93db3b486 Refine ipex setup (#5191) 2024-01-07 10:40:30 -03:00
oobabooga
9e86bea8e9 Use requirements_cpu.txt for intel 2024-01-04 18:52:14 -08:00
oobabooga
3d854ee516 Pin PyTorch version to 2.1 (#5056) 2024-01-04 23:50:23 -03:00
Matthew Raaff
c9c31f71b8 Various one-click installer improvements (#4994)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2024-01-04 23:41:54 -03:00
oobabooga
0e54a09bcb Remove exllamav1 loaders (#5128) 2023-12-31 01:57:06 -03:00
Song Fuchang
127c71a22a Update IPEX to 2.1.10+xpu (#4931)
* This will require Intel oneAPI Toolkit 2024.0
2023-12-15 03:19:01 -03:00
oobabooga
dde7921057 One-click installer: minor message change 2023-12-14 17:27:32 -08:00
oobabooga
fd1449de20 One-click installer: fix minor bug introduced in previous commit 2023-12-14 16:52:44 -08:00
oobabooga
4ae2dcebf5 One-click installer: more friendly progress messages 2023-12-14 16:48:00 -08:00
Song Fuchang
e16e5997ef Update IPEX install URL. (#4825)
* Old pip url no longer works. Use the latest url from
  * https://intel.github.io/intel-extension-for-pytorch/index.html#installation
2023-12-06 21:07:01 -03:00
erew123
f786aa3caa Clean-up Ctrl+C Shutdown (#4802) 2023-12-05 02:16:16 -03:00
oobabooga
8d811a4d58 one-click: move on instead of crashing if extension fails to install 2023-11-21 16:09:44 -08:00
oobabooga
0047d9f5e0 Do not install coqui_tts requirements by default
It breaks the one-click installer on Windows.
2023-11-21 15:13:42 -08:00
oobabooga
fb124ab6e2 Bump to flash-attention 2.3.4 + switch to Github Actions wheels on Windows (#4700) 2023-11-21 15:07:17 -08:00
oobabooga
9d6f79db74 Revert "Bump llama-cpp-python to 0.2.18 (#4611)"
This reverts commit 923c8e25fb.
2023-11-17 05:14:25 -08:00
oobabooga
b2ce8dc7ee Update a message 2023-11-16 18:46:26 -08:00
oobabooga
780b00e1cf Minor bug fix 2023-11-16 18:39:39 -08:00
oobabooga
923c8e25fb Bump llama-cpp-python to 0.2.18 (#4611) 2023-11-16 22:55:14 -03:00
oobabooga
e7d460d932 Make sure that API requirements are installed 2023-11-16 10:08:41 -08:00
oobabooga
cbf2b47476 Strip trailing "\" characters in CMD_FLAGS.txt 2023-11-16 09:33:36 -08:00
oobabooga
4f9bc63edf Installer: update a message for clarity 2023-11-10 09:43:02 -08:00
Abhilash Majumder
778a010df8 Intel GPU support initialization (#4340) 2023-10-26 23:39:51 -03:00
oobabooga
2d97897a25 Don't install flash-attention on windows + cuda 11 2023-10-25 11:21:18 -07:00
mongolu
c18504f369 USE_CUDA118 from ENV remains null one_click.py + cuda-toolkit (#4352) 2023-10-22 12:37:24 -03:00
oobabooga
6efb990b60 Add proper documentation (#3885) 2023-10-21 19:15:54 -03:00
Brian Dashore
3345da2ea4 Add flash-attention 2 for windows (#4235) 2023-10-21 03:46:23 -03:00
oobabooga
258d046218 More robust way of initializing empty .git folder 2023-10-20 23:13:09 -07:00
oobabooga
43be1be598 Manually install CUDA runtime libraries 2023-10-12 21:02:44 -07:00
jllllll
0eda9a0549 Use GPTQ wheels compatible with Pytorch 2.1 (#4210) 2023-10-07 00:35:41 -03:00
oobabooga
d33facc9fe Bump to pytorch 11.8 (#4209) 2023-10-07 00:23:49 -03:00
oobabooga
771e936769 Fix extensions install (2nd attempt) 2023-09-28 14:33:49 -07:00
oobabooga
822ba7fcbb Better error handling during install/update 2023-09-28 13:57:59 -07:00
oobabooga
85f45cafa1 Fix extensions install 2023-09-28 13:54:36 -07:00
Nathan Thomas
e145d9a0da Update one_click.py to initialize site_packages_path variable (#4118) 2023-09-28 08:31:29 -03:00
HideLord
0845724a89 Supercharging superbooga (#3272) 2023-09-26 21:30:19 -03:00
jllllll
ad00b8eb26 Check '--model-dir' for no models warning (#4067) 2023-09-26 10:56:57 -03:00
oobabooga
44438c60e5 Add INSTALL_EXTENSIONS environment variable 2023-09-25 13:12:35 -07:00
jllllll
c0fca23cb9 Avoid importing torch in one-click-installer (#4064) 2023-09-24 22:16:59 -03:00
oobabooga
d5952cb540 Don't assume that py-cpuinfo is installed 2023-09-24 08:10:45 -07:00
oobabooga
2e7b6b0014 Create alternative requirements.txt with AMD and Metal wheels (#4052) 2023-09-24 09:58:29 -03:00
oobabooga
30d7c4eaa1 Forward --help to server.py 2023-09-23 07:27:27 -07:00
oobabooga
c2ae01fb04 Improved readability 2023-09-23 07:10:01 -07:00
oobabooga
fc351ff3e5 Improved readability 2023-09-23 06:48:09 -07:00
oobabooga
e6f445f3eb Improved readability of one_click.py 2023-09-23 06:28:58 -07:00
oobabooga
639723845a Make N the "None" install option 2023-09-23 05:25:06 -07:00