File                     | Last commit message                                                                                            | Last commit date
callbacks.py             | Remove mutable defaults from function signature. (#1663)                                                       | 2023-05-08 22:55:41 -03:00
chat.py                  | Generalize multimodality (llava/minigpt4 7b and 13b now supported) (#1741)                                     | 2023-05-09 20:18:02 -03:00
deepspeed_parameters.py  | Fix deepspeed (oops)                                                                                           | 2023-02-02 10:39:37 -03:00
evaluate.py              | Fix evaluate comment saving                                                                                    | 2023-04-21 12:34:08 -03:00
extensions.py            | Generalize multimodality (llava/minigpt4 7b and 13b now supported) (#1741)                                     | 2023-05-09 20:18:02 -03:00
GPTQ_loader.py           | Support for MPT, INCITE, WizardLM, StableLM, Galactica, Vicuna, Guanaco, and Baize instruction following (#1596) | 2023-05-09 20:37:31 -03:00
html_generator.py        | Add support for custom chat styles (#1917)                                                                     | 2023-05-08 12:35:03 -03:00
llama_attn_hijack.py     | Better warning messages                                                                                        | 2023-05-03 21:43:17 -03:00
llamacpp_model.py        | added no_mmap & mlock parameters to llama.cpp and removed llamacpp_model_alternative (#1649)                   | 2023-05-02 18:25:28 -03:00
logging_colors.py        | Add credits                                                                                                    | 2023-05-03 21:49:55 -03:00
LoRA.py                  | fixed LoRA loading issue (#1865)                                                                               | 2023-05-08 16:21:55 -03:00
models.py                | Fix trust_remote_code in wrong location (#1953)                                                                | 2023-05-09 19:22:10 -03:00
monkey_patch_gptq_lora.py | Better warning messages                                                                                       | 2023-05-03 21:43:17 -03:00
RWKV.py                  | Make the RWKV model cache the RNN state between messages (#1354)                                               | 2023-05-09 11:12:53 -03:00
shared.py                | Support for MPT, INCITE, WizardLM, StableLM, Galactica, Vicuna, Guanaco, and Baize instruction following (#1596) | 2023-05-09 20:37:31 -03:00
text_generation.py       | Generalize multimodality (llava/minigpt4 7b and 13b now supported) (#1741)                                     | 2023-05-09 20:18:02 -03:00
training.py              | Sort dropdowns numerically                                                                                     | 2023-05-05 23:14:56 -03:00
ui.py                    | Add support for custom chat styles (#1917)                                                                     | 2023-05-08 12:35:03 -03:00
utils.py                 | Add support for custom chat styles (#1917)                                                                     | 2023-05-08 12:35:03 -03:00