oobabooga | 70047a5c57 | Bump bitsandbytes to 0.42.0 on Windows | 2024-03-03 13:19:27 -08:00
oobabooga | 24e86bb21b | Bump llama-cpp-python to 0.2.55 | 2024-03-03 12:14:48 -08:00
oobabooga | 314e42fd98 | Fix transformers requirement | 2024-03-03 10:49:28 -08:00
oobabooga | 71b1617c1b | Remove bitsandbytes from incompatible requirements.txt files | 2024-03-03 08:24:54 -08:00
kalomaze | cfb25c9b3f | Cubic sampling w/ curve param (#5551) (Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>) | 2024-03-03 13:22:21 -03:00
jeffbiocode | 3168644152 | Training: Update llama2-chat-format.json (#5593) | 2024-03-03 12:42:14 -03:00
oobabooga | 71dc5b4dee | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2024-02-28 19:59:20 -08:00
oobabooga | 09b13acfb2 | Perplexity evaluation: print to terminal after calculation is finished | 2024-02-28 19:58:21 -08:00
dependabot[bot] | dfdf6eb5b4 | Bump hqq from 0.1.3 to 0.1.3.post1 (#5582) | 2024-02-26 20:51:39 -03:00
oobabooga | 332957ffec | Bump llama-cpp-python to 0.2.52 | 2024-02-26 15:05:53 -08:00
oobabooga | b64770805b | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2024-02-26 08:51:31 -08:00
oobabooga | 830168d3d4 | Revert "Replace hashlib.sha256 with hashlib.file_digest so we don't need to load entire files into ram before hashing them. (#4383)" (This reverts commit 0ced78fdfa.) | 2024-02-26 05:54:33 -08:00
Bartowski | 21acf504ce | Bump transformers to 4.38 for gemma compatibility (#5575) | 2024-02-25 20:15:13 -03:00
oobabooga | 4164e29416 | Block the "To create a public link, set share=True" gradio message | 2024-02-25 15:06:08 -08:00
oobabooga | d34126255d | Fix loading extensions with "-" in the name (closes #5557) | 2024-02-25 09:24:52 -08:00
Lounger | 0f68c6fb5b | Big picture fixes (#5565) | 2024-02-25 14:10:16 -03:00
jeffbiocode | 45c4cd01c5 | Add llama 2 chat format for lora training (#5553) | 2024-02-25 02:36:36 -03:00
Devin Roark | e0fc808980 | fix: ngrok logging does not use the shared logger module (#5570) | 2024-02-25 02:35:59 -03:00
oobabooga | 32ee5504ed | Remove -k from curl command to download miniconda (#5535) | 2024-02-25 02:35:23 -03:00
oobabooga | c07dc56736 | Bump llama-cpp-python to 0.2.50 | 2024-02-24 21:34:11 -08:00
oobabooga | 98580cad8e | Bump exllamav2 to 0.0.14 | 2024-02-24 18:35:42 -08:00
oobabooga | 527f2652af | Bump llama-cpp-python to 0.2.47 | 2024-02-22 19:48:49 -08:00
oobabooga | 3f42e3292a | Revert "Bump autoawq from 0.1.8 to 0.2.2 (#5547)" (This reverts commit d04fef6a07.) | 2024-02-22 19:48:04 -08:00
oobabooga | 10aedc329f | Logging: more readable messages when renaming chat histories | 2024-02-22 07:57:06 -08:00
oobabooga | faf3bf2503 | Perplexity evaluation: make UI events more robust (attempt) | 2024-02-22 07:13:22 -08:00
oobabooga | ac5a7a26ea | Perplexity evaluation: add some informative error messages | 2024-02-21 20:20:52 -08:00
oobabooga | 59032140b5 | Fix CFG with llamacpp_HF (2nd attempt) | 2024-02-19 18:35:42 -08:00
oobabooga | c203c57c18 | Fix CFG with llamacpp_HF | 2024-02-19 18:09:49 -08:00
dependabot[bot] | 5f7dbf454a | Update optimum requirement from ==1.16.* to ==1.17.* (#5548) | 2024-02-19 19:15:21 -03:00
dependabot[bot] | d04fef6a07 | Bump autoawq from 0.1.8 to 0.2.2 (#5547) | 2024-02-19 19:14:55 -03:00
dependabot[bot] | ed6ff49431 | Update accelerate requirement from ==0.25.* to ==0.27.* (#5546) | 2024-02-19 19:14:04 -03:00
Kevin Pham | 10df23efb7 | Remove message.content from openai streaming API (#5503) | 2024-02-19 18:50:27 -03:00
oobabooga | 0b2279d031 | Bump llama-cpp-python to 0.2.44 | 2024-02-19 13:42:31 -08:00
oobabooga | ae05d9830f | Replace {{char}}, {{user}} in the chat template itself | 2024-02-18 19:57:54 -08:00
oobabooga | 717c3494e8 | Minor width change after daa140447e | 2024-02-18 15:23:45 -08:00
oobabooga | 1f27bef71b | Move chat UI elements to the right on desktop (#5538) | 2024-02-18 14:32:05 -03:00
oobabooga | d8064c00e8 | UI: hide chat scrollbar on desktop when not hovered | 2024-02-17 20:47:14 -08:00
oobabooga | 36c29084bb | UI: fix instruct style background for multiline inputs | 2024-02-17 20:09:47 -08:00
oobabooga | 904867a139 | UI: fix scroll down after sending a multiline message | 2024-02-17 19:27:13 -08:00
oobabooga | d6bd71db7f | ExLlamaV2: fix loading when autosplit is not set | 2024-02-17 12:54:37 -08:00
oobabooga | af0bbf5b13 | Lint | 2024-02-17 09:01:04 -08:00
fschuh | fa1019e8fe | Removed extra spaces from Mistral instruction template that were causing Mistral to misbehave (#5517) | 2024-02-16 21:40:51 -03:00
oobabooga | c375c753d6 | Bump bitsandbytes to 0.42 (Linux only) | 2024-02-16 10:47:57 -08:00
oobabooga | a6730f88f7 | Add --autosplit flag for ExLlamaV2 (#5524) | 2024-02-16 15:26:10 -03:00
oobabooga | 4039999be5 | Autodetect llamacpp_HF loader when tokenizer exists | 2024-02-16 09:29:26 -08:00
oobabooga | 76d28eaa9e | Add a menu for customizing the instruction template for the model (#5521) | 2024-02-16 14:21:17 -03:00
oobabooga | 0e1d8d5601 | Instruction template: make "Send to default/notebook" work without a tokenizer | 2024-02-16 08:01:07 -08:00
oobabooga | f465b7b486 | Downloader: start one session per file (#5520) | 2024-02-16 12:55:27 -03:00
oobabooga | 44018c2f69 | Add a "llamacpp_HF creator" menu (#5519) | 2024-02-16 12:43:24 -03:00
oobabooga | b2b74c83a6 | Fix Qwen1.5 in llamacpp_HF | 2024-02-15 19:04:19 -08:00