text-generation-webui/modules (latest commit: 2023-07-17 21:27:18 -03:00)
| File | Last commit | Date |
| --- | --- | --- |
| AutoGPTQ_loader.py | Add --no_use_cuda_fp16 param for AutoGPTQ | 2023-06-23 12:22:56 -03:00 |
| block_requests.py | Block a cloudfare request | 2023-07-06 22:24:52 -07:00 |
| callbacks.py | Make stop_everything work with non-streamed generation (#2848) | 2023-06-24 11:19:16 -03:00 |
| chat.py | More robust and error prone training (#3058) | 2023-07-12 15:29:43 -03:00 |
| deepspeed_parameters.py | Style improvements (#1957) | 2023-05-09 22:49:39 -03:00 |
| evaluate.py | Sort some imports | 2023-06-25 01:44:36 -03:00 |
| exllama_hf.py | Make it possible to evaluate exllama perplexity (#3138) | 2023-07-16 01:52:55 -03:00 |
| exllama.py | Add decode functions to llama.cpp/exllama | 2023-07-07 09:11:30 -07:00 |
| extensions.py | Add support for logits processors in extensions (#3029) | 2023-07-13 17:22:41 -03:00 |
| github.py | Implement sessions + add basic multi-user support (#2991) | 2023-07-04 00:03:30 -03:00 |
| GPTQ_loader.py | Prevent unwanted log messages from modules | 2023-05-21 22:42:34 -03:00 |
| html_generator.py | Implement sessions + add basic multi-user support (#2991) | 2023-07-04 00:03:30 -03:00 |
| llama_attn_hijack.py | Prevent unwanted log messages from modules | 2023-05-21 22:42:34 -03:00 |
| llamacpp_hf.py | Optimize llamacpp_hf a bit | 2023-07-16 20:49:48 -07:00 |
| llamacpp_model.py | Add low vram mode on llama cpp (#3076) | 2023-07-12 11:05:13 -03:00 |
| loaders.py | Create llamacpp_HF loader (#3062) | 2023-07-16 02:21:13 -03:00 |
| logging_colors.py | Add menus for saving presets/characters/instruction templates/prompts (#2621) | 2023-06-11 12:19:18 -03:00 |
| LoRA.py | Use 'torch.backends.mps.is_available' to check if mps is supported (#3164) | 2023-07-17 21:27:18 -03:00 |
| models_settings.py | Make AutoGPTQ the default again | 2023-07-15 22:29:23 -07:00 |
| models.py | Use 'torch.backends.mps.is_available' to check if mps is supported (#3164) | 2023-07-17 21:27:18 -03:00 |
| monkey_patch_gptq_lora.py | Sort some imports | 2023-06-25 01:44:36 -03:00 |
| presets.py | Implement sessions + add basic multi-user support (#2991) | 2023-07-04 00:03:30 -03:00 |
| relative_imports.py | Add ExLlama+LoRA support (#2756) | 2023-06-19 12:31:24 -03:00 |
| RWKV.py | Add ExLlama support (#2444) | 2023-06-16 20:35:38 -03:00 |
| sampler_hijack.py | lint | 2023-07-12 11:33:25 -07:00 |
| shared.py | Increase max_new_tokens upper limit | 2023-07-17 17:08:22 -07:00 |
| text_generation.py | Use 'torch.backends.mps.is_available' to check if mps is supported (#3164) | 2023-07-17 21:27:18 -03:00 |
| training.py | More robust and error prone training (#3058) | 2023-07-12 15:29:43 -03:00 |
| ui.py | Add low vram mode on llama cpp (#3076) | 2023-07-12 11:05:13 -03:00 |
| utils.py | lint | 2023-07-12 11:33:25 -07:00 |
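The files above live in the project's top-level `modules` directory and are imported as the `modules` package from the repository root. A minimal sketch of checking that the package resolves from a checkout (assuming the project's dependencies are installed; the attributes each module exposes are not documented here, only the file names from the listing are used):

```python
# Minimal sketch, not the project's documented API: it only confirms that the
# files listed above are importable as the `modules` package when Python is
# started from the repository root.
from modules import logging_colors  # logging_colors.py from the listing above
from modules import presets         # presets.py from the listing above

print(logging_colors.__file__)
print(presets.__file__)
```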