text-generation-webui/modules
Last commit: 2023-04-25 22:58:48 -03:00
File                          | Last commit message                                   | Last commit date
callbacks.py                  | Make the code more like PEP8 for readability (#862)  | 2023-04-07 00:15:45 -03:00
chat.py                       | Remove obsolete function                             | 2023-04-24 13:27:24 -03:00
deepspeed_parameters.py       | Fix deepspeed (oops)                                 | 2023-02-02 10:39:37 -03:00
evaluate.py                   | Fix evaluate comment saving                          | 2023-04-21 12:34:08 -03:00
extensions.py                 | Apply settings regardless of setup() function        | 2023-04-25 01:16:23 -03:00
GPTQ_loader.py                | LLaVA support (#1487)                                | 2023-04-23 20:32:22 -03:00
html_generator.py             | Readability                                          | 2023-04-16 21:26:19 -03:00
llama_attn_hijack.py          | Added xformers support to Llama (#950)               | 2023-04-09 23:08:40 -03:00
llamacpp_model_alternative.py | add n_batch support for llama.cpp (#1115)            | 2023-04-24 03:46:18 -03:00
llamacpp_model.py             | Make the code more like PEP8 for readability (#862)  | 2023-04-07 00:15:45 -03:00
LoRA.py                       | Load more than one LoRA with --lora, fix a bug       | 2023-04-25 22:58:48 -03:00
models.py                     | Seq2Seq support (including FLAN-T5) (#1535)          | 2023-04-25 22:39:04 -03:00
monkey_patch_gptq_lora.py     | Monkey patch fixes                                   | 2023-04-25 21:20:26 -03:00
RWKV.py                       | Make the code more like PEP8 for readability (#862)  | 2023-04-07 00:15:45 -03:00
shared.py                     | Load more than one LoRA with --lora, fix a bug       | 2023-04-25 22:58:48 -03:00
text_generation.py            | Fix missing initial space for LlamaTokenizer         | 2023-04-25 22:47:23 -03:00
training.py                   | LoRA trainer improvements part 5 (#1546)             | 2023-04-25 21:27:30 -03:00
ui.py                         | Add --character flag, add character to settings.json | 2023-04-24 13:19:42 -03:00
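The "Load more than one LoRA with --lora, fix a bug" entries on LoRA.py and shared.py indicate that the --lora flag accepts a list of LoRA names rather than a single value. Below is a minimal, hypothetical argparse sketch of such a multi-value flag; the parser object, default value, and LoRA names are illustrative assumptions, not code taken from shared.py:

```python
import argparse

# Hypothetical stand-in for the command-line parser defined in shared.py.
# The only behavior illustrated is a --lora flag that accepts one or more values.
parser = argparse.ArgumentParser(description="Sketch of a multi-value --lora flag")
parser.add_argument(
    "--lora",
    type=str,
    nargs="+",    # accept one or more LoRA names after the flag
    default=[],
    help="The list of LoRAs to load.",
)

# Example: parsing a command line that loads two LoRAs at once (names are made up).
args = parser.parse_args(["--lora", "alpaca-lora-7b", "another-lora"])
print(args.lora)  # ['alpaca-lora-7b', 'another-lora']
```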