Tisjwlf | 907702c204 | Fix gguf multipart file loading (#5857) | 2024-05-19 20:22:09 -03:00
A0nameless0man | 5cb59707f3 | fix: grammar not supporting UTF-8 (#5900) | 2024-05-19 20:10:39 -03:00
Samuel Wein | b63dc4e325 | UI: Warn user if they are trying to load a model from no path (#6006) | 2024-05-19 20:05:17 -03:00
chr | 6b546a2c8b | llama.cpp: increase the max threads from 32 to 256 (#5889) | 2024-05-19 20:02:19 -03:00
oobabooga | a38a37b3b3 | llama.cpp: default n_gpu_layers to the maximum value for the model automatically | 2024-05-19 10:57:42 -07:00
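As context for the change above, a minimal llama-cpp-python sketch of what the new default corresponds to (the model path is a placeholder; this is not the webui's own loader code):

```python
from llama_cpp import Llama

# In llama-cpp-python, n_gpu_layers=-1 offloads every layer the model has
# to the GPU, which is the maximum value this commit now picks automatically.
llm = Llama(model_path="model.gguf", n_gpu_layers=-1)  # placeholder path
```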
oobabooga | a4611232b7 | Make --verbose output less spammy | 2024-05-18 09:57:00 -07:00
oobabooga | e9c9483171 | Improve the logging messages while loading models | 2024-05-03 08:10:44 -07:00
oobabooga | e61055253c | Bump llama-cpp-python to 0.2.69, add --flash-attn option | 2024-05-03 04:31:22 -07:00
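A hedged sketch of the llama-cpp-python flag behind the new --flash-attn option (placeholder path; not the webui's loader code):

```python
from llama_cpp import Llama

# As of this bump, llama-cpp-python accepts a flash_attn flag; the webui's
# new --flash-attn option toggles it when loading GGUF models.
llm = Llama(model_path="model.gguf", flash_attn=True)  # placeholder path
```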
oobabooga | 51fb766bea | Add back my llama-cpp-python wheels, bump to 0.2.65 (#5964) | 2024-04-30 09:11:31 -03:00
oobabooga | dfdb6fee22 | Set llm_int8_enable_fp32_cpu_offload=True for --load-in-4bit, to allow for 32-bit CPU offloading (it's very slow) | 2024-04-26 09:39:27 -07:00
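For reference, the transformers setting named in this commit, shown as a minimal sketch rather than the webui's actual loader code:

```python
from transformers import BitsAndBytesConfig

# With fp32 CPU offload enabled, modules that do not fit on the GPU stay on
# the CPU in 32-bit instead of failing to load. Functional, but very slow.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_enable_fp32_cpu_offload=True,
)
```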
oobabooga | 70845c76fb | Add back the max_updates_second parameter (#5937) | 2024-04-26 10:14:51 -03:00
oobabooga | 6761b5e7c6 | Improved instruct style (with syntax highlighting & LaTeX rendering) (#5936) | 2024-04-26 10:13:11 -03:00
oobabooga | 4094813f8d | Lint | 2024-04-24 09:53:41 -07:00
oobabooga | 64e2a9a0a7 | Fix the Phi-3 template when used in the UI | 2024-04-24 01:34:11 -07:00
oobabooga | f0538efb99 | Remove obsolete --tensorcores references | 2024-04-24 00:31:28 -07:00
Colin | f3c9103e04 | Revert walrus operator for params['max_memory'] (#5878) | 2024-04-24 01:09:14 -03:00
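For readers unfamiliar with the term: the walrus operator (:=) assigns a variable inside an expression. An illustrative Python sketch of the kind of change being reverted; the actual webui code differs:

```python
params = {"max_memory": None}  # illustrative stand-in for the real params dict

# Walrus form: the assignment happens inside the if-expression.
if (max_memory := params["max_memory"]) is not None:
    print(max_memory)

# Reverted, plain form: assign first, then test.
max_memory = params["max_memory"]
if max_memory is not None:
    print(max_memory)
```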
oobabooga | 9b623b8a78 | Bump llama-cpp-python to 0.2.64, use official wheels (#5921) | 2024-04-23 23:17:05 -03:00
oobabooga | f27e1ba302 | Add a /v1/internal/chat-prompt endpoint (#5879) | 2024-04-19 00:24:46 -03:00
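A hedged usage sketch for the new endpoint. The port is the webui API's usual default, and the request body is assumed to follow the OpenAI-style chat schema used elsewhere in the API; the exact fields this internal route accepts may differ:

```python
import requests

# Assumption: the endpoint takes chat-completions-style messages and returns
# the fully templated prompt string; field names are not confirmed here.
resp = requests.post(
    "http://127.0.0.1:5000/v1/internal/chat-prompt",
    json={"messages": [{"role": "user", "content": "Hello!"}]},
)
print(resp.json())
```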
oobabooga | e158299fb4 | Fix loading sharded GGUF models through llamacpp_HF | 2024-04-11 14:50:05 -07:00
wangshuai09 | fd4e46bce2 | Add Ascend NPU support (basic) (#5541) | 2024-04-11 18:42:20 -03:00
Ashley Kleynhans | 70c637bf90 | Fix saving of UI defaults to settings.yaml - Fixes #5592 (#5794) | 2024-04-11 18:19:16 -03:00
oobabooga | 3e3a7c4250 | Bump llama-cpp-python to 0.2.61 & fix the crash | 2024-04-11 14:15:34 -07:00
Victorivus | c423d51a83 | Fix issue #5783 for character images with transparency (#5827) | 2024-04-11 02:23:43 -03:00
Alex O'Connell | b94cd6754e | UI: Respect model and lora directory settings when downloading files (#5842) | 2024-04-11 01:55:02 -03:00
oobabooga | 17c4319e2d | Fix loading command-r context length metadata | 2024-04-10 21:39:59 -07:00
oobabooga | cbd65ba767 | Add a simple min_p preset, make it the default (#5836) | 2024-04-09 12:50:16 -03:00
oobabooga | d02744282b | Minor logging change | 2024-04-06 18:56:58 -07:00
oobabooga | dd6e4ac55f | Prevent double <BOS_TOKEN> with Command R+ | 2024-04-06 13:14:32 -07:00
oobabooga | 1bdceea2d4 | UI: Focus on the chat input after starting a new chat | 2024-04-06 12:57:57 -07:00
oobabooga | 168a0f4f67 | UI: do not load the "gallery" extension by default | 2024-04-06 12:43:21 -07:00
oobabooga | 64a76856bd | Metadata: Fix loading Command R+ template with multiple options | 2024-04-06 07:32:17 -07:00
oobabooga | 1b87844928 | Minor fix | 2024-04-05 18:43:43 -07:00
oobabooga | 6b7f7555fc | Logging message to make transformers loader a bit more transparent | 2024-04-05 18:40:02 -07:00
oobabooga | 0f536dd97d | UI: Fix the "Show controls" action | 2024-04-05 12:18:33 -07:00
oobabooga | 308452b783 | Bitsandbytes: load preconverted 4bit models without additional flags | 2024-04-04 18:10:24 -07:00
oobabooga | d423021a48 | Remove CTransformers support (#5807) | 2024-04-04 20:23:58 -03:00
oobabooga | 13fe38eb27 | Remove specialized code for gpt-4chan | 2024-04-04 16:11:47 -07:00
oobabooga | 9ab7365b56 | Read rope_theta for DBRX model (thanks turboderp) | 2024-04-01 20:25:31 -07:00
oobabooga | db5f6cd1d8 | Fix ExLlamaV2 loaders using unnecessary "bits" metadata | 2024-03-30 21:51:39 -07:00
oobabooga | 624faa1438 | Fix ExLlamaV2 context length setting (closes #5750) | 2024-03-30 21:33:16 -07:00
oobabooga | 9653a9176c | Minor improvements to Parameters tab | 2024-03-29 10:41:24 -07:00
oobabooga | 35da6b989d | Organize the parameters tab (#5767) | 2024-03-28 16:45:03 -03:00
Yiximail | 8c9aca239a | Fix prompt incorrectly set to empty when suffix is empty string (#5757) | 2024-03-26 16:33:09 -03:00
oobabooga | 2a92a842ce | Bump gradio to 4.23 (#5758) | 2024-03-26 16:32:20 -03:00
oobabooga | 49b111e2dd | Lint | 2024-03-17 08:33:23 -07:00
oobabooga | d890c99b53 | Fix StreamingLLM when content is removed from the beginning of the prompt | 2024-03-14 09:18:54 -07:00
oobabooga | d828844a6f | Small fix: don't save truncation_length to settings.yaml (it should derive from model metadata or from a command-line flag) | 2024-03-14 08:56:28 -07:00
oobabooga | 2ef5490a36 | UI: make light theme less blinding | 2024-03-13 08:23:16 -07:00
oobabooga | 40a60e0297 | Convert attention_sink_size to int (closes #5696) | 2024-03-13 08:15:49 -07:00
oobabooga | edec3bf3b0 | UI: avoid caching convert_to_markdown calls during streaming | 2024-03-13 08:14:34 -07:00