text-generation-webui/modules

Latest commit by practicaldreamer (e3968f7dd0): Fix Training Pad Token (#1678), 2023-05-02 23:16:08 -03:00
The training code was padding with the character "0" rather than with token id 0 (which is <unk> in the case of llama).
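The pad-token fix above can be illustrated with a small sketch. This uses a hypothetical toy vocabulary rather than the real llama tokenizer, but shows the same failure mode: padding with the string "0" inserts the token for the character "0", while padding with token id 0 inserts the <unk> token.

```python
# Toy vocabulary (assumption for illustration, not the actual llama vocab):
# id 0 is <unk>, and the character "0" maps to a different id entirely.
vocab = {"<unk>": 0, "hello": 1, "world": 2, "0": 3}

def pad_with_string(ids, length, pad="0"):
    # Buggy behaviour: treats the pad value as text, so the vocab lookup
    # returns the id of the character "0" (3 here), not id 0.
    return ids + [vocab[pad]] * (length - len(ids))

def pad_with_id(ids, length, pad_token_id=0):
    # Fixed behaviour: pads with token id 0 directly (<unk> for llama).
    return ids + [pad_token_id] * (length - len(ids))

seq = [vocab["hello"], vocab["world"]]
print(pad_with_string(seq, 4))  # [1, 2, 3, 3] -> padded with the "0" token
print(pad_with_id(seq, 4))      # [1, 2, 0, 0] -> padded with <unk>
```

The two results differ only in the padding positions, which is exactly the subtle kind of bug #1678 corrects: training still runs, but the model sees "0" characters instead of pad tokens.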
File                       Last commit message                                                                                Date
callbacks.py               optimize stopping strings processing (#1625)                                                       2023-05-02 01:21:54 -03:00
chat.py                    Move the rstrips                                                                                   2023-04-26 17:17:22 -03:00
deepspeed_parameters.py    Fix deepspeed (oops)                                                                               2023-02-02 10:39:37 -03:00
evaluate.py                Fix evaluate comment saving                                                                        2023-04-21 12:34:08 -03:00
extensions.py              Only show extension in UI if it has an ui() function                                               2023-05-02 19:20:02 -03:00
GPTQ_loader.py             LLaVA support (#1487)                                                                              2023-04-23 20:32:22 -03:00
html_generator.py          Readability                                                                                       2023-04-16 21:26:19 -03:00
llama_attn_hijack.py       Added xformers support to Llama (#950)                                                             2023-04-09 23:08:40 -03:00
llamacpp_model.py          added no_mmap & mlock parameters to llama.cpp and removed llamacpp_model_alternative (#1649)       2023-05-02 18:25:28 -03:00
LoRA.py                    Load more than one LoRA with --lora, fix a bug                                                     2023-04-25 22:58:48 -03:00
models.py                  added no_mmap & mlock parameters to llama.cpp and removed llamacpp_model_alternative (#1649)       2023-05-02 18:25:28 -03:00
monkey_patch_gptq_lora.py  Make universal tokenizer, xformers, sdp-attention apply to monkey patch                            2023-04-25 23:18:11 -03:00
RWKV.py                    Make the code more like PEP8 for readability (#862)                                                2023-04-07 00:15:45 -03:00
shared.py                  added no_mmap & mlock parameters to llama.cpp and removed llamacpp_model_alternative (#1649)       2023-05-02 18:25:28 -03:00
text_generation.py         LLaVA: small fixes (#1664)                                                                         2023-05-02 23:12:22 -03:00
training.py                Fix Training Pad Token (#1678)                                                                     2023-05-02 23:16:08 -03:00
ui.py                      Precise prompts for instruct mode                                                                  2023-04-26 03:21:53 -03:00