text-generation-webui/modules
Name | Last commit message | Last commit date
api.py | Refactor several function calls and the API | 2023-04-06 01:22:15 -03:00
callbacks.py | Fix type object is not subscriptable | 2023-03-31 14:20:31 +03:00
chat.py | Instruction Character Vicuna, Instruction Mode Bugfix (#838) | 2023-04-06 17:40:44 -03:00
deepspeed_parameters.py | Fix deepspeed (oops) | 2023-02-02 10:39:37 -03:00
extensions.py | Fix loading extensions from within the interface | 2023-03-28 23:27:02 -03:00
GPTQ_loader.py | Broaden GPTQ-for-LLaMA branch support (#820) | 2023-04-06 12:16:48 -03:00
html_generator.py | Stop character pic from being cached when changing chars or clearing. (#798) | 2023-04-05 14:25:01 -03:00
llamacpp_model_alternative.py | Add new llama.cpp library (2048 context, temperature, etc now work) | 2023-04-06 13:12:14 -03:00
llamacpp_model.py | Add --threads flag for llama.cpp | 2023-03-31 21:18:05 -03:00
LoRA.py | Move an import | 2023-03-29 22:50:58 -03:00
models.py | Bump transformers (16-bit llama must be reconverted/redownloaded) | 2023-04-06 16:04:03 -03:00
RWKV.py | Add repetition_penalty | 2023-03-31 14:45:17 -03:00
shared.py | Add LLaMA-Precise preset (#767) | 2023-04-05 18:52:36 -03:00
text_generation.py | Bump transformers (16-bit llama must be reconverted/redownloaded) | 2023-04-06 16:04:03 -03:00
training.py | Lora trainer improvements (#763) | 2023-04-06 02:04:11 -03:00
ui.py | Further reorganize the UI | 2023-03-15 13:24:54 -03:00