Commit Graph

87 Commits

SHA1  Message  (Author, Date)
4e56ad55e1  Let model downloader download *.tiktoken as well (#4121)  (快乐的我531, 2023-09-28 18:03:18 -03:00)
7c9664ed35  Allow full model URL to be used for download (#3919)  (kalomaze, 2023-09-16 10:06:13 -03:00)
            Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
df52dab67b  Lint  (oobabooga, 2023-09-11 07:57:38 -07:00)
ed86878f02  Remove GGML support  (oobabooga, 2023-09-11 07:44:00 -07:00)
787219267c  Allow downloading single file from UI (#3737)  (missionfloyd, 2023-08-29 23:32:36 -03:00)
f63dd83631  Update download-model.py (Allow single file download) (#3732)  (Alberto Ferrer, 2023-08-29 22:57:58 -03:00)
7f5370a272  Minor fixes/cosmetics  (oobabooga, 2023-08-26 22:11:07 -07:00)
4a999e3bcd  Use separate llama-cpp-python packages for GGML support  (jllllll, 2023-08-26 10:40:08 -05:00)
83640d6f43  Replace ggml occurences with gguf  (oobabooga, 2023-08-26 01:06:59 -07:00)
0dfd1a8b7d  Improve readability of download-model.py (#3497)  (Thomas De Bonnet, 2023-08-20 20:13:13 -03:00)
4b3384e353  Handle unfinished lists during markdown streaming  (oobabooga, 2023-08-03 17:15:18 -07:00)
13449aa44d  Decrease download timeout  (oobabooga, 2023-07-15 22:30:08 -07:00)
e202190c4f  lint  (oobabooga, 2023-07-12 11:33:25 -07:00)
8db7e857b1  Add token authorization for downloading model (#3067)  (Ahmad Fahadh Ilyas, 2023-07-11 18:48:08 -03:00)
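Commit 8db7e857b1 above adds token authorization so gated or private Hugging Face repos can be downloaded. A minimal sketch of the general technique (illustrative only; the function name and the use of urllib here are assumptions, not the script's actual code):

```python
import urllib.request

def build_download_request(url, token=None):
    """Build a request for a model file, attaching a Hugging Face
    access token as a Bearer credential when one is provided.
    Gated/private repos reject anonymous requests, so the token is
    what lets the downloader fetch protected files."""
    headers = {}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return urllib.request.Request(url, headers=headers)
```

The same header works for any HTTP client; only the way headers are passed changes.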
61102899cd  google flan T5 download fix (#3080)  (FartyPants, 2023-07-11 18:46:59 -03:00)
c7058afb40  Add new possible bin file name regex (#3070)  (tianchen zhong, 2023-07-09 17:22:56 -03:00)
88a747b5b9  fix: Error when downloading model from UI (#3014)  (jeckyhl, 2023-07-05 11:27:29 -03:00)
be4582be40  Support specify retry times in download-model.py (#2908)  (AN Long, 2023-07-04 22:26:30 -03:00)
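Commit be4582be40 makes the number of download retry attempts configurable. The idea can be sketched as a generic retry wrapper (a sketch only; the names, the exponential backoff, and the default of 5 attempts are assumptions rather than the script's actual behavior):

```python
import time

def with_retries(fetch, max_retries=5, base_delay=1.0):
    """Call fetch() until it succeeds, retrying on any exception.
    Sleeps base_delay * 2**attempt seconds between attempts and
    re-raises the last error once max_retries is exhausted."""
    for attempt in range(max_retries):
        try:
            return fetch()
        except Exception:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```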
38897fbd8a  fix: added model parameter check (#2829)  (Roman, 2023-06-24 10:09:34 -03:00)
89fb6f9236  Fixed the ZeroDivisionError when downloading a model (#2797)  (Gaurav Bhagchandani, 2023-06-21 12:31:50 -03:00)
5dfe0bec06  Remove old/useless code  (oobabooga, 2023-06-20 23:36:56 -03:00)
faa92eee8d  Add spaces  (oobabooga, 2023-06-20 23:25:58 -03:00)
b22c7199c9  Download optimizations (#2786)  (Peter Sofronas, 2023-06-20 23:14:18 -03:00)
            * download_model_files metadata writing improvement
            * line swap
            * reduce line length
            * safer download and greater block size
            * Minor changes by pycodestyle
            Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
447569e31a  Add a download progress bar to the web UI. (#2472)  (Morgan Schweers, 2023-06-20 22:59:14 -03:00)
            * Show download progress on the model screen.
            * In case of error, mark as done to clear progress bar.
            * Increase the iteration block size to reduce overhead.
240752617d  Increase download timeout to 20s  (oobabooga, 2023-06-08 11:16:38 -03:00)
9f215523e2  Remove some unused imports  (oobabooga, 2023-06-06 07:05:46 -03:00)
1aed2b9e52  Make it possible to download protected HF models from the command line. (#2408)  (Morgan Schweers, 2023-06-01 00:11:21 -03:00)
b984a44f47  fix error when downloading a model for the first time (#2404)  (Juan M Uys, 2023-05-30 22:07:12 -03:00)
b4662bf4af  Download gptq_model*.py using download-model.py  (oobabooga, 2023-05-29 16:12:54 -03:00)
39dab18307  Add a timeout to download-model.py requests  (oobabooga, 2023-05-19 11:19:34 -03:00)
c7ba2d4f3f  Change a message in download-model.py  (oobabooga, 2023-05-10 19:00:14 -03:00)
1436c5845a  fix ggml detection regex in model downloader (#1779)  (Wojtek Kowaluk, 2023-05-04 11:48:36 -03:00)
a6ef2429fa  Add "do not download" and "download from HF" to download-model.py (#1439)  (Lou Bernardi, 2023-04-21 12:54:50 -03:00)
69d50e2e86  Fix download script (#1373)  (Rudd-O, 2023-04-19 13:02:32 -03:00)
c6fe1ced01  Add ChatGLM support (#1256)  (Forkoz, 2023-04-16 19:15:03 -03:00)
            Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2c14df81a8  Use download-model.py to download the model  (oobabooga, 2023-04-10 11:36:39 -03:00)
170e0c05c4  Typo  (oobabooga, 2023-04-09 17:00:59 -03:00)
34ec02d41d  Make download-model.py importable  (oobabooga, 2023-04-09 16:59:59 -03:00)
df561fd896  Fix ggml downloading in download-model.py (#915)  (Blake Wyatt, 2023-04-08 18:52:30 -03:00)
ea6e77df72  Make the code more like PEP8 for readability (#862)  (oobabooga, 2023-04-07 00:15:45 -03:00)
b38ba230f4  Update download-model.py  (oobabooga, 2023-04-01 15:03:24 -03:00)
526d5725db  Update download-model.py  (oobabooga, 2023-04-01 14:47:47 -03:00)
23116b88ef  Add support for resuming downloads (#654 from nikita-skakun/support-partial-downloads)  (oobabooga, 2023-03-31 22:55:55 -03:00)
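Commit 23116b88ef adds resumable downloads. The standard mechanism is an HTTP Range request: ask the server for only the bytes not yet on disk (it answers 206 Partial Content), then append to the partial file. A sketch under those assumptions (the helper name is hypothetical):

```python
import os

def resume_range_header(path):
    """Return (headers, offset) for resuming a partial download.
    If a partial file exists, a Range header requests only the
    remaining bytes; the caller then opens the file in append
    mode ("ab") and writes from that offset onward."""
    if os.path.exists(path):
        offset = os.path.getsize(path)
        if offset > 0:
            return {"Range": f"bytes={offset}-"}, offset
    return {}, 0
```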
74462ac713  Don't override the metadata when checking the sha256sum  (oobabooga, 2023-03-31 22:52:52 -03:00)
875de5d983  Update ggml template  (oobabooga, 2023-03-31 17:57:31 -03:00)
6a44f4aec6  Add support for downloading ggml files  (oobabooga, 2023-03-31 17:33:42 -03:00)
b99bea3c69  Fixed reported header affecting resuming download  (Nikita Skakun, 2023-03-30 23:11:59 -07:00)
92c7068daf  Don't download if --check is specified  (oobabooga, 2023-03-31 01:31:47 -03:00)
0cc89e7755  Checksum code now activated by --check flag.  (Nikita Skakun, 2023-03-30 20:06:12 -07:00)
d550c12a3e  Fixed the bug with additional bytes.  (Nikita Skakun, 2023-03-30 12:52:16 -07:00)
            The issue seems to be with huggingface not reporting the entire size of the model.
            Added an error message with instructions if the checksums don't match.
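The --check flag introduced in 0cc89e7755 verifies downloaded files against their published SHA-256 sums. The core of such a check can be sketched as chunked hashing, which keeps memory use constant even for multi-gigabyte model files (illustrative; the function name and 1 MiB block size are assumptions, not the script's actual code):

```python
import hashlib

def sha256_of(path, block_size=1024 * 1024):
    """Compute a file's SHA-256 digest in fixed-size chunks so the
    whole file never has to be loaded into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(block_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Comparing the result against the expected digest (and reporting a mismatch, as d550c12a3e's error message does) is then a single string equality check.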