oobabooga | 8c35fefb3b | Add custom sampler order support (#5443) | 2024-02-06 11:20:10 -03:00
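
This commit lets the user choose the order in which sampling transformations are applied. A minimal Python sketch of configurable ordering; the warper names, signatures, and parameters are illustrative, not the repository's actual code:

```python
import torch

def top_k_warp(logits: torch.Tensor, k: int = 40) -> torch.Tensor:
    # Keep only the k highest logits; everything else becomes -inf.
    kth_value = torch.topk(logits, k).values[..., -1, None]
    return logits.masked_fill(logits < kth_value, float("-inf"))

def temperature_warp(logits: torch.Tensor, temperature: float = 0.7) -> torch.Tensor:
    # Divide logits by the temperature to sharpen or flatten the distribution.
    return logits / temperature

# The user-chosen priority list decides the order in which warpers run.
SAMPLERS = {"top_k": top_k_warp, "temperature": temperature_warp}

def apply_samplers(logits: torch.Tensor, priority: list[str]) -> torch.Tensor:
    for name in priority:
        logits = SAMPLERS[name](logits)
    return logits

logits = torch.randn(32000)
# Truncate with top_k first, then rescale; reordering the list changes the result.
warped = apply_samplers(logits, priority=["top_k", "temperature"])
```
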
Forkoz | 2a45620c85 | Split by rows instead of layers for llama.cpp multi-gpu (#5435) | 2024-02-04 23:36:40 -03:00

kalomaze | b6077b02e4 | Quadratic sampling (#5403) | 2024-02-04 00:20:02 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
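
Quadratic sampling reshapes the logit distribution with a squared-distance transform controlled by a smoothing factor. A minimal sketch of the transformation as described around this PR; the exact implementation in the repository may differ:

```python
import torch

def quadratic_warp(logits: torch.Tensor, smoothing_factor: float = 0.25) -> torch.Tensor:
    # Push each logit down by the squared distance from the maximum logit:
    # near-top tokens are barely affected, tail tokens are suppressed hard.
    max_logit = logits.max()
    return -smoothing_factor * (logits - max_logit) ** 2 + max_logit

logits = torch.tensor([5.0, 4.0, 2.0, -1.0])
print(torch.softmax(quadratic_warp(logits), dim=-1))
```
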
oobabooga | e055967974 | Add prompt_lookup_num_tokens parameter (#5296) | 2024-01-17 17:09:36 -03:00
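
prompt_lookup_num_tokens enables prompt-lookup decoding, a speculative-decoding variant that drafts candidate tokens by matching n-grams already present in the context. A toy sketch of the candidate step, illustrative rather than the library's implementation:

```python
# Find the most recent earlier occurrence of the trailing n-gram in the
# context and propose the tokens that followed it as draft continuations.
def prompt_lookup_candidates(tokens: list[int], ngram_size: int = 3,
                             num_tokens: int = 10) -> list[int]:
    tail = tokens[-ngram_size:]
    # Scan backwards, skipping the trailing occurrence itself.
    for start in range(len(tokens) - ngram_size - 1, -1, -1):
        if tokens[start:start + ngram_size] == tail:
            return tokens[start + ngram_size:start + ngram_size + num_tokens]
    return []  # no match: fall back to ordinary decoding

ctx = [1, 2, 3, 4, 5, 9, 9, 1, 2, 3]
print(prompt_lookup_candidates(ctx, ngram_size=3))  # -> [4, 5, 9, 9, 1, 2, 3]
```
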
oobabooga | b3fc2cd887 | UI: Do not save unchanged extension settings to settings.yaml | 2024-01-10 03:48:30 -08:00

oobabooga | 53dc1d8197 | UI: Do not save unchanged settings to settings.yaml | 2024-01-09 18:59:04 -08:00

mamei16 | bec4e0a1ce | Fix update event in refresh buttons (#5197) | 2024-01-09 14:49:37 -03:00

oobabooga | 4ca82a4df9 | Save light/dark theme on "Save UI defaults to settings.yaml" | 2024-01-09 04:20:10 -08:00

oobabooga | 29c2693ea0 | dynatemp_low, dynatemp_high, dynatemp_exponent parameters (#5209) | 2024-01-08 23:28:35 -03:00
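
These three parameters drive entropy-based dynamic temperature: confident (low-entropy) distributions are sampled near dynatemp_low, uncertain ones near dynatemp_high. A sketch of one common formulation; the repository's exact formula may differ:

```python
import torch

def dynamic_temperature(logits: torch.Tensor, dynatemp_low: float = 0.5,
                        dynatemp_high: float = 1.5,
                        dynatemp_exponent: float = 1.0) -> torch.Tensor:
    probs = torch.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-10))).sum()
    max_entropy = torch.log(torch.tensor(float(logits.numel())))
    # Normalized entropy in [0, 1], shaped by the exponent.
    norm = (entropy / max_entropy) ** dynatemp_exponent
    temperature = dynatemp_low + (dynatemp_high - dynatemp_low) * norm
    return logits / temperature

# A peaked distribution gets a temperature near dynatemp_low.
print(dynamic_temperature(torch.tensor([10.0, 0.0, 0.0, 0.0])))
```
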
oobabooga | c4e005efec | Fix dropdown menus sometimes failing to refresh | 2024-01-08 17:49:54 -08:00

oobabooga | 0d07b3a6a1 | Add dynamic_temperature_low parameter (#5198) | 2024-01-07 17:03:47 -03:00

kalomaze | 48327cc5c4 | Dynamic Temperature HF loader support (#5174) | 2024-01-07 10:36:26 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>

oobabooga | 248742df1c | Save extension fields to settings.yaml on "Save UI defaults" | 2024-01-04 20:33:42 -08:00

oobabooga | 8c60495878 | UI: add "Maximum UI updates/second" parameter | 2023-12-24 09:17:40 -08:00

oobabooga | de138b8ba6 | Add llama-cpp-python wheels with tensor cores support (#5003) | 2023-12-19 17:30:53 -03:00

oobabooga | 0a299d5959 | Bump llama-cpp-python to 0.2.24 (#5001) | 2023-12-19 15:22:21 -03:00

Water | 674be9a09a | Add HQQ quant loader (#4888) | 2023-12-18 21:23:16 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>

oobabooga | f1f2c4c3f4 | Add --num_experts_per_token parameter (ExLlamav2) (#4955) | 2023-12-17 12:08:33 -03:00

oobabooga | 3bbf6c601d | AutoGPTQ: Add --disable_exllamav2 flag (Mixtral CPU offloading needs this) | 2023-12-15 06:46:13 -08:00

oobabooga | 39d2fe1ed9 | Jinja templates for Instruct and Chat (#4874) | 2023-12-12 17:23:14 -03:00

oobabooga | 5fcee696ea | New feature: enlarge character pictures on click (#4654) | 2023-11-19 02:05:17 -03:00

oobabooga | e0ca49ed9c | Bump llama-cpp-python to 0.2.18 (2nd attempt) (#4637) | 2023-11-18 00:31:27 -03:00
    * Update requirements*.txt
    * Add back seed

oobabooga | 9d6f79db74 | Revert "Bump llama-cpp-python to 0.2.18 (#4611)" | 2023-11-17 05:14:25 -08:00
    This reverts commit 923c8e25fb.

oobabooga | 8b66d83aa9 | Set use_fast=True by default, create --no_use_fast flag | 2023-11-16 19:55:28 -08:00
    This increases tokens/second for HF loaders.

oobabooga | 923c8e25fb | Bump llama-cpp-python to 0.2.18 (#4611) | 2023-11-16 22:55:14 -03:00

oobabooga | 6e2e0317af | Separate context and system message in instruction formats (#4499) | 2023-11-07 20:02:58 -03:00

oobabooga | af3d25a503 | Disable logits_all in llamacpp_HF (makes processing 3x faster) | 2023-11-07 14:35:48 -08:00

feng lui | 4766a57352 | transformers: add use_flash_attention_2 option (#4373) | 2023-11-04 13:59:33 -03:00
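
use_flash_attention_2 maps to the transformers flag of the same name, which requests the FlashAttention-2 kernels. A usage sketch with a placeholder model id; at the time this was passed directly to from_pretrained (newer transformers versions prefer attn_implementation="flash_attention_2"):

```python
import torch
from transformers import AutoModelForCausalLM

# Requires a CUDA GPU, the flash-attn package, and fp16/bf16 weights.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",   # placeholder model id
    torch_dtype=torch.float16,
    use_flash_attention_2=True,
    device_map="auto",
)
```
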
oobabooga | aa5d671579 | Add temperature_last parameter (#4472) | 2023-11-04 13:09:07 -03:00

kalomaze | 367e5e6e43 | Implement Min P as a sampler option in HF loaders (#4449) | 2023-11-02 16:32:51 -03:00
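
Min P keeps only tokens whose probability is at least min_p times that of the most likely token, so the cutoff scales with the model's confidence. A minimal sketch of the filter:

```python
import torch

def min_p_filter(logits: torch.Tensor, min_p: float = 0.05) -> torch.Tensor:
    probs = torch.softmax(logits, dim=-1)
    # Threshold scales with the top token's probability.
    threshold = min_p * probs.max(dim=-1, keepdim=True).values
    return logits.masked_fill(probs < threshold, float("-inf"))

logits = torch.tensor([4.0, 3.0, 0.0, -2.0])
print(torch.softmax(min_p_filter(logits, min_p=0.1), dim=-1))
```
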
oobabooga | c0655475ae | Add cache_8bit option | 2023-11-02 11:23:04 -07:00

Abhilash Majumder | 778a010df8 | Intel Gpu support initialization (#4340) | 2023-10-26 23:39:51 -03:00

tdrussell | 72f6fc6923 | Rename additive_repetition_penalty to presence_penalty, add frequency_penalty (#4376) | 2023-10-25 12:10:28 -03:00
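
The renamed parameters follow the OpenAI convention: presence_penalty is a flat subtraction for any token that has appeared at all, while frequency_penalty grows with the token's count. A minimal sketch:

```python
from collections import Counter
import torch

def apply_penalties(logits: torch.Tensor, generated: list[int],
                    presence_penalty: float = 0.3,
                    frequency_penalty: float = 0.2) -> torch.Tensor:
    logits = logits.clone()
    for token_id, count in Counter(generated).items():
        # Flat presence term plus a count-scaled frequency term.
        logits[token_id] -= presence_penalty + frequency_penalty * count
    return logits

logits = torch.zeros(10)
print(apply_penalties(logits, generated=[3, 3, 7]))  # token 3 penalized more
```
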
tdrussell | 4440f87722 | Add additive_repetition_penalty sampler setting. (#3627) | 2023-10-23 02:28:07 -03:00

oobabooga | df90d03e0b | Replace --mul_mat_q with --no_mul_mat_q | 2023-10-22 12:23:03 -07:00

oobabooga | fae8062d39 | Bump to latest gradio (3.47) (#4258) | 2023-10-10 22:20:49 -03:00

oobabooga | b6fe6acf88 | Add threads_batch parameter | 2023-10-01 21:28:00 -07:00

jllllll | 41a2de96e5 | Bump llama-cpp-python to 0.2.11 | 2023-10-01 18:08:10 -05:00

StoyanStAtanasov | 7e6ff8d1f0 | Enable NUMA feature for llama_cpp_python (#4040) | 2023-09-26 22:05:00 -03:00

oobabooga | 1ca54faaf0 | Improve --multi-user mode | 2023-09-26 06:42:33 -07:00

oobabooga | d0d221df49 | Add --use_fast option (closes #3741) | 2023-09-25 12:19:43 -07:00

oobabooga | b973b91d73 | Automatically filter by loader (closes #4072) | 2023-09-25 10:28:35 -07:00

oobabooga | 08cf150c0c | Add a grammar editor to the UI (#4061) | 2023-09-24 18:05:24 -03:00

oobabooga | b227e65d86 | Add grammar to llama.cpp loader (closes #4019) | 2023-09-24 07:10:45 -07:00
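
The loader builds on llama-cpp-python's GBNF grammar support, which constrains sampling to strings the grammar accepts. A usage sketch, assuming the LlamaGrammar API exposed by llama-cpp-python at the time and a placeholder model path:

```python
from llama_cpp import Llama, LlamaGrammar

# GBNF grammar that restricts the output to "yes" or "no".
grammar = LlamaGrammar.from_string('root ::= "yes" | "no"')

llm = Llama(model_path="models/llama-2-7b.Q4_K_M.gguf")  # placeholder path
out = llm("Is water wet? Answer: ", grammar=grammar, max_tokens=3)
print(out["choices"][0]["text"])
```
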
saltacc | f01b9aa71f | Add customizable ban tokens (#3899) | 2023-09-15 18:27:27 -03:00
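
Token banning amounts to forcing the logits of the banned ids to -inf before sampling, so they can never be chosen. A minimal sketch with illustrative token ids:

```python
import torch

def ban_tokens(logits: torch.Tensor, banned_ids: list[int]) -> torch.Tensor:
    logits = logits.clone()
    logits[banned_ids] = float("-inf")  # banned ids get zero probability
    return logits

logits = torch.randn(32000)
filtered = ban_tokens(logits, banned_ids=[2, 13])  # illustrative ids
assert torch.isinf(filtered[13])
```
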
oobabooga | 1ce3c93600 | Allow "Your name" field to be saved | 2023-09-14 03:44:35 -07:00

oobabooga | 9f199c7a4c | Use Noto Sans font | 2023-09-13 13:48:05 -07:00
    Copied from 6c8bd06308/public/webfonts/NotoSans

oobabooga | ed86878f02 | Remove GGML support | 2023-09-11 07:44:00 -07:00

oobabooga | cec8db52e5 | Add max_tokens_second param (#3533) | 2023-08-29 17:44:31 -03:00
|
oobabooga
|
52ab2a6b9e
|
Add rope_freq_base parameter for CodeLlama
|
2023-08-25 06:55:15 -07:00 |
|
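
rope_freq_base matters because RoPE's rotation frequencies are derived from it, and CodeLlama was trained with base 1e6 rather than LLaMA's default 10000, which slows the rotations and stretches the usable context. A short sketch of the inverse-frequency computation:

```python
import torch

def rope_inv_freq(head_dim: int, rope_freq_base: float) -> torch.Tensor:
    # Standard RoPE: one inverse frequency per pair of dimensions.
    return 1.0 / (rope_freq_base ** (torch.arange(0, head_dim, 2).float() / head_dim))

print(rope_inv_freq(8, 10000.0))    # default LLaMA base
print(rope_inv_freq(8, 1000000.0))  # CodeLlama's larger base -> slower rotation
```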