| File | Last commit message | Last commit date |
| --- | --- | --- |
| api.py | Fix stopping strings in the gradio API | 2023-04-19 13:52:21 -03:00 |
| callbacks.py | Make the code more like PEP8 for readability (#862) | 2023-04-07 00:15:45 -03:00 |
| chat.py | Reset your name when choosing a character | 2023-04-17 13:56:40 -03:00 |
| deepspeed_parameters.py | Fix deepspeed (oops) | 2023-02-02 10:39:37 -03:00 |
| evaluate.py | Fix evaluate comment saving | 2023-04-21 12:34:08 -03:00 |
| extensions.py | Merge pull request from GHSA-hv5m-3rp9-xcpf | 2023-04-16 01:36:50 -03:00 |
| GPTQ_loader.py | Change GPTQ triton default settings | 2023-04-22 12:27:30 -03:00 |
| html_generator.py | Readability | 2023-04-16 21:26:19 -03:00 |
| llama_attn_hijack.py | Added xformers support to Llama (#950) | 2023-04-09 23:08:40 -03:00 |
| llamacpp_model_alternative.py | Bump llama-cpp-python to use LlamaCache | 2023-04-16 00:53:40 -03:00 |
| llamacpp_model.py | Make the code more like PEP8 for readability (#862) | 2023-04-07 00:15:45 -03:00 |
| LoRA.py | Add 4-bit LoRA support (#1200) | 2023-04-16 23:26:52 -03:00 |
| models.py | Minor change | 2023-04-22 15:15:31 -03:00 |
| monkey_patch_gptq_lora.py | Add 4-bit LoRA support (#1200) | 2023-04-16 23:26:52 -03:00 |
| RWKV.py | Make the code more like PEP8 for readability (#862) | 2023-04-07 00:15:45 -03:00 |
| shared.py | Don't require llama.cpp models to be placed in subfolders | 2023-04-22 14:56:48 -03:00 |
| text_generation.py | Don't require llama.cpp models to be placed in subfolders | 2023-04-22 14:56:48 -03:00 |
| training.py | Lora trainer docs (#1493) | 2023-04-23 12:54:41 -03:00 |
| ui.py | Change dropdown menu highlight color | 2023-04-21 02:47:18 -03:00 |