Commit Graph

107 Commits

Author SHA1 Message Date
oobabooga
bcd5786a47
Add files via upload 2023-04-24 16:53:04 -03:00
oobabooga
d66059d95a
Update INSTRUCTIONS.TXT 2023-04-24 16:50:03 -03:00
oobabooga
a4f6724b88
Add a comment 2023-04-24 16:47:22 -03:00
oobabooga
9a8487097b
Remove --auto-devices 2023-04-24 16:43:52 -03:00
oobabooga
dfbb18610f
Update INSTRUCTIONS.TXT 2023-04-23 12:58:14 -03:00
oobabooga
1ba0082410
Add files via upload 2023-04-18 02:30:47 -03:00
oobabooga
a5f7d98cf3
Rename environment_windows.bat to cmd_windows.bat 2023-04-18 02:30:23 -03:00
oobabooga
316aaff348
Rename environment_macos.sh to cmd_macos.sh 2023-04-18 02:30:08 -03:00
oobabooga
647f7bca36
Rename environment_linux.sh to cmd_linux.sh 2023-04-18 02:29:55 -03:00
Blake Wyatt
6d2c72b593
Add support for macOS, Linux, and WSL (#21)
* Initial commit

* Initial commit with new code

* Add comments

* Move GPTQ out of if

* Fix install on Arch Linux

* Fix case where install was aborted

If the install was aborted before a model was downloaded, webui wouldn't run.

* Update start_windows.bat

Add necessary flags to Miniconda installer
Disable Start Menu shortcut creation
Disable ssl on Conda
Change Python version to latest 3.10;
I've noticed that explicitly specifying 3.10.9 can break the included Python installation

* Update bitsandbytes wheel link to 0.38.1

Disable ssl on Conda

* Add check for spaces in path

Installation of Miniconda will fail in this case
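The check above can be sketched in POSIX shell (a hypothetical illustration; the actual installer scripts are Windows batch, and the function name here is invented):

```shell
#!/bin/sh
# Hypothetical sketch of the spaces-in-path check: Miniconda's installer
# fails when the target directory contains spaces, so abort early with a
# clear message instead of letting the install break later.
check_path_for_spaces() {
  case "$1" in
    *" "*)
      echo "ERROR: the installation path '$1' contains spaces." >&2
      return 1
      ;;
  esac
  return 0
}
```

A caller would typically run `check_path_for_spaces "$PWD" || exit 1` before downloading anything.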

* Mirror changes to mac and linux scripts

* Start with model-menu

* Add updaters

* Fix line endings

* Add check for path with spaces

* Fix one-click updating

* Fix one-click updating

* Clean up update scripts

* Add environment scripts

---------

Co-authored-by: jllllll <3887729+jllllll@users.noreply.github.com>
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-18 02:23:09 -03:00
jllllll
254609daca
Update llama-cpp-python link to official wheel (#19) 2023-04-10 10:48:56 -03:00
jllllll
c3e1a58cb3
Correct llama-cpp-python wheel link (#17) 2023-04-09 23:46:54 -03:00
oobabooga
97840c92f9
Add working llama-cpp-python install from wheel. (#13 from Loufe/oobabooga-windows) 2023-04-09 23:23:27 -03:00
Lou Bernardi
0818bc93ad
Add working llama-cpp-python install from wheel. 2023-04-08 22:44:55 -04:00
oobabooga
ef0f748618
Prevent CPU version of Torch from being installed (#10 from jllllll/oobabooga-windows) 2023-04-06 13:54:14 -03:00
jllllll
1e656bef25
Specifically target cuda 11.7 ver. of torch 2.0.0
Move conda-forge channel to global list of channels
Hopefully prevents missing or incorrect packages
2023-04-05 16:52:05 -05:00
jllllll
5aaf771c7d
Add additional sanity check
Add environment creation error
Improve error visibility
2023-04-04 12:31:26 -05:00
oobabooga
9b4e9a98f0
Merge pull request #9 from jllllll/oobabooga-windows
Add -k flag to curl command
2023-04-03 00:31:14 -03:00
jllllll
c86d3e9c74
Add -k flag to curl command
Disables SSL certificate verification, which was causing curl to fail on some systems.
https://github.com/oobabooga/text-generation-webui/issues/644#issuecomment-1493518391
2023-04-02 21:28:04 -05:00
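Since -k disables certificate verification entirely, a gentler pattern (an illustration only, not what the installer did) is to attempt a verified download first and fall back to -k only when verification fails:

```shell
#!/bin/sh
# Illustrative download helper (hypothetical, not from the installer):
# try a TLS-verified download first; fall back to curl -k, which skips
# certificate verification, only if the verified attempt fails, as it
# did on systems with a broken certificate store.
fetch() {
  url="$1"
  out="$2"
  curl -fsSL -o "$out" "$url" || curl -fsSL -k -o "$out" "$url"
}
```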
oobabooga
e3c348e42b
Add .git 2023-04-02 01:11:05 -03:00
oobabooga
b704fe7878
Use my fork of GPTQ-for-LLaMa for stability 2023-04-02 01:10:22 -03:00
oobabooga
75465fa041
Merge pull request #6 from jllllll/oobabooga-windows
Attempt to Improve Reliability
2023-03-31 11:27:23 -03:00
jllllll
e4e3c9095d
Add warning for long paths 2023-03-30 20:48:40 -05:00
jllllll
172035d2e1
Minor Correction 2023-03-30 20:44:56 -05:00
jllllll
0b4ee14edc
Attempt to Improve Reliability
Have pip directly download and install the backup GPTQ wheel instead of first downloading it through curl.
Install bitsandbytes from wheel compiled for Windows from modified source.
Add clarification of minor, intermittent issue to instructions.
Add system32 folder to end of PATH rather than beginning.
Add warning when installed under a path containing spaces.
2023-03-30 20:04:16 -05:00
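The "end of PATH rather than beginning" point is about precedence: the first matching entry wins, so appending System32 makes it a fallback without shadowing the installer's own tools. A POSIX shell sketch of that ordering (the real scripts are Windows batch; the directory names below are stand-ins):

```shell
#!/bin/sh
# Sketch of append-vs-prepend precedence: lookup scans PATH left to
# right, so an appended directory is only consulted after everything
# already on PATH.
PATH_DEMO="/installer/bin:/usr/bin"
# Append the system directory: installer tools still resolve first.
PATH_DEMO="$PATH_DEMO:/windows/system32"
first_entry="${PATH_DEMO%%:*}"
echo "$first_entry"
```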
oobabooga
85e4ec6e6b
Download the cuda branch directly 2023-03-30 18:22:48 -03:00
oobabooga
78c0da4a18
Use the cuda branch of gptq-for-llama
Did I do this right @jllllll? This is because the current default branch (triton) is not compatible with Windows.
2023-03-30 18:04:05 -03:00
oobabooga
0de4f24b12
Merge pull request #4 from jllllll/oobabooga-windows
Change Micromamba download link
2023-03-29 09:49:32 -03:00
jllllll
ed0e593161
Change Micromamba download
Changed link to previous version.
This will provide a stable source for Micromamba so that new versions don't cause issues.
2023-03-29 02:47:19 -05:00
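Pinning the download to a fixed Micromamba version removes one source of breakage; a common further hardening step (an assumption for illustration, not something this installer did) is verifying the pinned file against a known checksum:

```shell
#!/bin/sh
# Hypothetical sketch: verify a pinned download against a known SHA-256
# so a silently changed upstream file is caught instead of executed.
verify_sha256() {
  file="$1"
  expected="$2"
  actual=$(sha256sum "$file" | cut -d' ' -f1)
  [ "$actual" = "$expected" ]
}
```

The expected hash would be recorded once, next to the pinned version number, and checked after every download.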
oobabooga
da3aa8fbda
Merge pull request #2 from jllllll/oobabooga-windows
Update one-click-installer for Windows
2023-03-29 01:55:47 -03:00
jllllll
cb5dff0087
Update installer to use official micromamba url 2023-03-26 23:40:46 -05:00
jllllll
bdf85ffcf9
Remove explicit pytorch installation
Fixes an issue some people were having: https://github.com/oobabooga/text-generation-webui/issues/15
I did not experience this issue on my system. Not everyone does for some reason.
2023-03-26 21:56:16 -05:00
jllllll
6f89242094
Remove temporary fix for GPTQ-for-LLaMa
No longer necessary.
2023-03-26 03:29:14 -05:00
jllllll
6dcfcf4fed
Amended fix for GPTQ-for-LLaMa
Prevents breaking 3-bit support
2023-03-26 01:00:52 -05:00
jllllll
12baa0e84b
Update for latest GPTQ-for-LLaMa 2023-03-26 00:46:07 -05:00
jllllll
247e8e5b79
Fix for issue in current GPTQ-for-LLaMa. 2023-03-26 00:24:00 -05:00
jllllll
ce9a5e3b53
Update install.bat
Minor fixes
2023-03-25 02:22:02 -05:00
jllllll
2e02d42682
Changed things around to allow Micromamba to work with paths containing spaces. 2023-03-25 01:26:25 -05:00
jllllll
1e260544cd
Update install.bat
Added C:\Windows\System32 to PATH to avoid issues with potentially broken Windows installs.
2023-03-24 21:25:14 -05:00
jllllll
fa916aa1de
Update INSTRUCTIONS.txt
Added clarification on new variable added to download-model.bat.
2023-03-24 18:28:46 -05:00
jllllll
586775ad47
Update download-model.bat
Removed redundant %ModelName% variable.
2023-03-24 18:25:49 -05:00
jllllll
bddbc2f898
Update start-webui.bat
Updated virtual environment handling to use Micromamba.
2023-03-24 18:19:23 -05:00
jllllll
2604e3f7ac
Update download-model.bat
Added variables for model selection and text only mode.
Updated virtual environment handling to use Micromamba.
2023-03-24 18:15:24 -05:00
jllllll
24870e51ed
Update micromamba-cmd.bat
Add cd command for admin.
2023-03-24 18:12:02 -05:00
jllllll
f0c82f06c3
Add files via upload
Add script to open cmd within installation environment for easier modification.
2023-03-24 18:09:44 -05:00
jllllll
eec773b1f4
Update install.bat
Corrected libbitsandbytes_cudaall.dll install.
2023-03-24 17:54:47 -05:00
jllllll
817e6c681e
Update install.bat
Added `cd /D "%~dp0"` in case the script is run as admin.
2023-03-24 17:51:13 -05:00
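In batch, `cd /D "%~dp0"` switches to the script's own directory (changing drive if needed), which matters because an elevated prompt starts in a system directory rather than next to the script. A POSIX shell sketch of the same fix (an analog for illustration, not necessarily the exact line any of this repo's shell scripts use):

```shell
#!/bin/sh
# Sketch of the batch fix in POSIX shell: when a script is launched from
# elsewhere (e.g. an elevated prompt starting in a system directory),
# change to the script's own directory before touching relative paths.
cd "$(dirname "$0")" || exit 1
```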
jllllll
a80a5465f2
Update install.bat
Updated Conda packages and channels to install cuda-toolkit and override 12.0 cuda packages requested by pytorch with their 11.7 equivalent.
Removed Conda installation since we can use the downloaded Micromamba.exe for the same purpose with a smaller footprint.
Removed redundant PATH changes.
Changed %gpuchoice% comparisons to be case-insensitive.
Added additional error handling and removed the use of .tmp files.
Added missing extension requirements.
Added GPTQ installation. Will attempt to compile locally and, if failed, will download and install a precompiled wheel.
Incorporated fixes from one-click-bandaid.
Fixed and expanded first sed command from one-click-bandaid.
libbitsandbytes_cudaall.dll is used here as the cuda116.dll used by one-click-bandaid does not work on my 1080ti. This can be changed if needed.
2023-03-24 17:27:29 -05:00
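The case-insensitive comparison mentioned above is batch's `if /I "%gpuchoice%"=="A"` idiom. A shell sketch of the same idea (the function and variable names are hypothetical, mirroring %gpuchoice%):

```shell
#!/bin/sh
# Sketch of a case-insensitive menu comparison, mirroring the batch
# `if /I` pattern; tr normalizes both sides to lowercase portably.
matches_choice() {
  input=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  wanted=$(printf '%s' "$2" | tr '[:upper:]' '[:lower:]')
  [ "$input" = "$wanted" ]
}
```

With this, `matches_choice "$gpuchoice" "a"` accepts both "A" and "a" from the user.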
oobabooga
9ed3a03d4b
Don't use the official instructions 2023-03-18 11:25:08 -03:00
oobabooga
91371640f9
Use the official instructions
https://pytorch.org/get-started/locally/
2023-03-17 20:37:25 -03:00