| File | Last commit message | Last commit date |
| --- | --- | --- |
| AutoGPTQ_loader.py | Extend AutoGPTQ support for any GPTQ model (#1668) | 2023-06-02 01:33:55 -03:00 |
| callbacks.py | Remove mutable defaults from function signature. (#1663) (sketch below) | 2023-05-08 22:55:41 -03:00 |
| chat.py | Fix "regenerate" when "Start reply with" is set | 2023-06-05 11:56:03 -03:00 |
| deepspeed_parameters.py | Style improvements (#1957) | 2023-05-09 22:49:39 -03:00 |
| evaluate.py | Minor fix | 2023-05-29 13:31:17 -03:00 |
| extensions.py | Prevent unwanted log messages from modules | 2023-05-21 22:42:34 -03:00 |
| GPTQ_loader.py | Prevent unwanted log messages from modules | 2023-05-21 22:42:34 -03:00 |
| html_generator.py | Add markdown table rendering | 2023-05-10 13:41:23 -03:00 |
| llama_attn_hijack.py | Prevent unwanted log messages from modules | 2023-05-21 22:42:34 -03:00 |
| llamacpp_model.py | Make llama.cpp read prompt size and seed from settings (#2299) | 2023-05-25 10:29:31 -03:00 |
| logging_colors.py | Prevent unwanted log messages from modules (sketch below) | 2023-05-21 22:42:34 -03:00 |
| LoRA.py | Add AutoGPTQ LoRA support | 2023-06-05 23:32:57 -03:00 |
| models.py | Use AutoGPTQ by default for GPTQ models | 2023-06-05 15:41:48 -03:00 |
| monkey_patch_gptq_lora.py | Better warning messages | 2023-05-03 21:43:17 -03:00 |
| RWKV.py | Fix the missing Chinese character bug (#2497) | 2023-06-02 13:45:41 -03:00 |
| sampler_hijack.py | Add tail-free and top-a sampling (#2357) | 2023-05-29 21:40:01 -03:00 |
| shared.py | Increase chat_prompt_size_max | 2023-06-05 17:37:37 -03:00 |
| text_generation.py | Don't stream at more than 24 fps (sketch below) | 2023-05-31 23:41:42 -03:00 |
| training.py | Fix warning for qlora (#2438) | 2023-05-30 11:09:18 -03:00 |
| ui.py | Use AutoGPTQ by default for GPTQ models | 2023-06-05 15:41:48 -03:00 |
| utils.py | Use YAML for presets and settings | 2023-05-28 22:34:12 -03:00 |
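The callbacks.py entry refers to a well-known Python pitfall: a mutable default argument is evaluated once, at function definition time, and then shared across every call. A minimal sketch of the pitfall and its standard fix; the function names are illustrative, not taken from the repository:

```python
def add_event_bad(event, log=[]):      # one shared list for every call
    log.append(event)
    return log

def add_event_good(event, log=None):   # a fresh list per call
    if log is None:
        log = []
    log.append(event)
    return log

print(add_event_bad("a"))   # ['a']
print(add_event_bad("b"))   # ['a', 'b']  <- state leaked between calls
print(add_event_good("a"))  # ['a']
print(add_event_good("b"))  # ['b']
```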
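Several entries share the message "Prevent unwanted log messages from modules", which describes silencing chatty third-party loggers. A minimal sketch of the general technique, assuming standard `logging` usage; the logger names are examples, not taken from the repository:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("text-generation-webui")

# Raise the level of noisy dependencies so only this project's
# logger stays verbose (example logger names, hypothetical).
for noisy in ("urllib3", "accelerate", "transformers"):
    logging.getLogger(noisy).setLevel(logging.WARNING)

logger.info("this still prints")
logging.getLogger("urllib3").info("this one is suppressed")
```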
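The text_generation.py entry ("Don't stream at more than 24 fps") names a rate-limiting pattern: push an intermediate update to the UI only if enough time has passed since the previous one. A hedged sketch of that pattern; the generator and token source are hypothetical, not the repository's code:

```python
import time

MAX_FPS = 24
MIN_INTERVAL = 1 / MAX_FPS  # minimum seconds between UI updates

def stream(tokens):
    text = ""
    last_update = time.monotonic()
    for token in tokens:
        text += token
        now = time.monotonic()
        if now - last_update >= MIN_INTERVAL:
            last_update = now
            yield text   # intermediate UI update, at most 24 per second
    yield text           # always flush the final text

for partial in stream(["Hello", ",", " world", "!"]):
    print(partial)
```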