text-generation-webui/modules
Alex "mcmonkey" Goodwin 64e3b44e0f
initial multi-lora support (#1103)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-14 14:52:06 -03:00
api.py: Minor change to API code (2023-04-14 12:11:47 -03:00)
callbacks.py: Make the code more like PEP8 for readability (#862) (2023-04-07 00:15:45 -03:00)
chat.py: Automatically set wbits/groupsize/instruct based on model name (#1167) (2023-04-14 11:07:28 -03:00)
deepspeed_parameters.py: Fix deepspeed (oops) (2023-02-02 10:39:37 -03:00)
extensions.py: Change the timing for setup() calls (2023-04-07 12:20:57 -03:00)
GPTQ_loader.py: Simplify GPTQ_loader.py (2023-04-13 12:13:07 -03:00)
html_generator.py: Don't treat Intruct mode histories as regular histories (2023-04-10 15:48:07 -03:00)
llama_attn_hijack.py: Added xformers support to Llama (#950) (2023-04-09 23:08:40 -03:00)
llamacpp_model_alternative.py: Make the code more like PEP8 for readability (#862) (2023-04-07 00:15:45 -03:00)
llamacpp_model.py: Make the code more like PEP8 for readability (#862) (2023-04-07 00:15:45 -03:00)
LoRA.py: initial multi-lora support (#1103) (2023-04-14 14:52:06 -03:00)
models.py: Two new options: truncation length and ban eos token (2023-04-11 18:46:06 -03:00)
RWKV.py: Make the code more like PEP8 for readability (#862) (2023-04-07 00:15:45 -03:00)
shared.py: initial multi-lora support (#1103) (2023-04-14 14:52:06 -03:00)
text_generation.py: Automatically set wbits/groupsize/instruct based on model name (#1167) (2023-04-14 11:07:28 -03:00)
training.py: lora training fixes (#970) (2023-04-12 11:38:01 -03:00)
ui.py: Automatically set wbits/groupsize/instruct based on model name (#1167) (2023-04-14 11:07:28 -03:00)