Commit Graph

1260 Commits

Author SHA1 Message Date
oobabooga  549bb88975  Increase height of "Custom stopping strings" UI field  2024-03-08 12:54:30 -08:00
oobabooga  238f69accc  Move "Command for chat-instruct mode" to the main chat tab (closes #5634)  2024-03-08 12:52:52 -08:00
oobabooga  bae14c8f13  Right-truncate long chat completion prompts instead of left-truncating  2024-03-07 08:50:24 -08:00
    Instructions are usually at the beginning of the prompt.
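The rationale recorded in bae14c8f13 (instructions usually sit at the start of a chat completion prompt, so truncation should drop the end rather than the beginning) can be illustrated with a minimal sketch. The helpers below operate on a token-id list under an assumed max_len budget; they are hypothetical, not the repository's actual truncation code:

```python
# Minimal sketch: right- vs. left-truncation of a tokenized prompt.
# Hypothetical helpers; not the project's actual implementation.

def right_truncate(tokens: list[int], max_len: int) -> list[int]:
    """Keep the beginning of the prompt, where instructions usually are."""
    return tokens[:max_len]

def left_truncate(tokens: list[int], max_len: int) -> list[int]:
    """Keep the end of the prompt; the instructions are dropped first."""
    return tokens[-max_len:]

tokens = list(range(10))
print(right_truncate(tokens, 4))  # [0, 1, 2, 3] -> instructions survive
print(left_truncate(tokens, 4))   # [6, 7, 8, 9] -> instructions are lost
```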
Bartowski  104573f7d4  Update cache_4bit documentation (#5649)  2024-03-07 13:08:21 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga  2ec1d96c91  Add cache_4bit option for ExLlamaV2 (#5645)  2024-03-06 23:02:25 -03:00
oobabooga  2174958362  Revert gradio to 3.50.2 (#5640)  2024-03-06 11:52:46 -03:00
oobabooga  d61e31e182  Save the extensions after Gradio 4 (#5632)  2024-03-05 07:54:34 -03:00
oobabooga  63a1d4afc8  Bump gradio to 4.19 (#5522)  2024-03-05 07:32:28 -03:00
oobabooga  f697cb4609  Move update_wizard_windows.sh to update_wizard_windows.bat (oops)  2024-03-04 19:26:24 -08:00
kalomaze  cfb25c9b3f  Cubic sampling w/ curve param (#5551)  2024-03-03 13:22:21 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga  09b13acfb2  Perplexity evaluation: print to terminal after calculation is finished  2024-02-28 19:58:21 -08:00
oobabooga  4164e29416  Block the "To create a public link, set share=True" gradio message  2024-02-25 15:06:08 -08:00
oobabooga  d34126255d  Fix loading extensions with "-" in the name (closes #5557)  2024-02-25 09:24:52 -08:00
oobabooga  10aedc329f  Logging: more readable messages when renaming chat histories  2024-02-22 07:57:06 -08:00
oobabooga  faf3bf2503  Perplexity evaluation: make UI events more robust (attempt)  2024-02-22 07:13:22 -08:00
oobabooga  ac5a7a26ea  Perplexity evaluation: add some informative error messages  2024-02-21 20:20:52 -08:00
oobabooga  59032140b5  Fix CFG with llamacpp_HF (2nd attempt)  2024-02-19 18:35:42 -08:00
oobabooga  c203c57c18  Fix CFG with llamacpp_HF  2024-02-19 18:09:49 -08:00
oobabooga  ae05d9830f  Replace {{char}}, {{user}} in the chat template itself  2024-02-18 19:57:54 -08:00
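ae05d9830f moves {{char}}/{{user}} substitution into the chat template string itself; that kind of substitution looks roughly like the sketch below (the template text and names are made up for illustration, not the project's actual data):

```python
# Minimal sketch of {{char}}/{{user}} placeholder substitution in a
# chat template string. Template and names are illustrative only.

def render_placeholders(template: str, char: str, user: str) -> str:
    return template.replace("{{char}}", char).replace("{{user}}", user)

template = "This is a conversation between {{user}} and {{char}}."
print(render_placeholders(template, char="Assistant", user="You"))
# This is a conversation between You and Assistant.
```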
oobabooga  1f27bef71b  Move chat UI elements to the right on desktop (#5538)  2024-02-18 14:32:05 -03:00
oobabooga  d6bd71db7f  ExLlamaV2: fix loading when autosplit is not set  2024-02-17 12:54:37 -08:00
oobabooga  af0bbf5b13  Lint  2024-02-17 09:01:04 -08:00
oobabooga  a6730f88f7  Add --autosplit flag for ExLlamaV2 (#5524)  2024-02-16 15:26:10 -03:00
oobabooga  4039999be5  Autodetect llamacpp_HF loader when tokenizer exists  2024-02-16 09:29:26 -08:00
oobabooga  76d28eaa9e  Add a menu for customizing the instruction template for the model (#5521)  2024-02-16 14:21:17 -03:00
oobabooga  0e1d8d5601  Instruction template: make "Send to default/notebook" work without a tokenizer  2024-02-16 08:01:07 -08:00
oobabooga  44018c2f69  Add a "llamacpp_HF creator" menu (#5519)  2024-02-16 12:43:24 -03:00
oobabooga  b2b74c83a6  Fix Qwen1.5 in llamacpp_HF  2024-02-15 19:04:19 -08:00
oobabooga  080f7132c0  Revert gradio to 3.50.2 (#5513)  2024-02-15 20:40:23 -03:00
oobabooga  7123ac3f77  Remove "Maximum UI updates/second" parameter (#5507)  2024-02-14 23:34:30 -03:00
DominikKowalczyk  33c4ce0720  Bump gradio to 4.19 (#5419)  2024-02-14 23:28:26 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga  b16958575f  Minor bug fix  2024-02-13 19:48:32 -08:00
oobabooga  d47182d9d1  llamacpp_HF: do not use oobabooga/llama-tokenizer (#5499)  2024-02-14 00:28:51 -03:00
oobabooga  069ed7c6ef  Lint  2024-02-13 16:05:41 -08:00
oobabooga  86c320ab5a  llama.cpp: add a progress bar for prompt evaluation  2024-02-07 21:56:10 -08:00
oobabooga  c55b8ce932  Improved random preset generation  2024-02-06 08:51:52 -08:00
oobabooga  4e34ae0587  Minor logging improvements  2024-02-06 08:22:08 -08:00
oobabooga  3add2376cd  Better warpers logging  2024-02-06 07:09:21 -08:00
oobabooga  494cc3c5b0  Handle empty sampler priority field, use default values  2024-02-06 07:05:32 -08:00
oobabooga  775902c1f2  Sampler priority: better logging, always save to presets  2024-02-06 06:49:22 -08:00
oobabooga  acfbe6b3b3  Minor doc changes  2024-02-06 06:35:01 -08:00
oobabooga  8ee3cea7cb  Improve some log messages  2024-02-06 06:31:27 -08:00
oobabooga  8a6d9abb41  Small fixes  2024-02-06 06:26:27 -08:00
oobabooga  2a1063eff5  Revert "Remove non-HF ExLlamaV2 loader (#5431)"  2024-02-06 06:21:36 -08:00
    This reverts commit cde000d478.
oobabooga  8c35fefb3b  Add custom sampler order support (#5443)  2024-02-06 11:20:10 -03:00
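8c35fefb3b introduces a user-configurable sampler order (see also the "sampler priority" commits above). The core idea, applying logit warpers in whatever order a priority list specifies, can be sketched as follows; the warper functions and names here are hypothetical stand-ins, not the project's real sampler classes:

```python
# Minimal sketch of configurable sampler (warper) ordering.
# Warper names/signatures are hypothetical, not the project's real ones.
import math

def temperature(logits: list[float], t: float = 0.7) -> list[float]:
    return [x / t for x in logits]

def top_k(logits: list[float], k: int = 2) -> list[float]:
    cutoff = sorted(logits, reverse=True)[k - 1]
    return [x if x >= cutoff else -math.inf for x in logits]

WARPERS = {"temperature": temperature, "top_k": top_k}

def apply_samplers(logits: list[float], priority: list[str]) -> list[float]:
    # The user-supplied priority list decides the application order.
    for name in priority:
        logits = WARPERS[name](logits)
    return logits

print(apply_samplers([1.0, 2.0, 3.0], ["top_k", "temperature"]))
print(apply_samplers([1.0, 2.0, 3.0], ["temperature", "top_k"]))
```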
oobabooga  7301c7618f  Minor change to Models tab  2024-02-04 21:49:58 -08:00
oobabooga  f234fbe83f  Improve a log message after previous commit  2024-02-04 21:44:53 -08:00
oobabooga  7073665a10  Truncate long chat completions inputs (#5439)  2024-02-05 02:31:24 -03:00
oobabooga  9033fa5eee  Organize the Model tab  2024-02-04 19:30:22 -08:00
Forkoz  2a45620c85  Split by rows instead of layers for llama.cpp multi-gpu (#5435)  2024-02-04 23:36:40 -03:00