| File | Last commit message | Date |
| --- | --- | --- |
| AutoGPTQ_loader.py | Add --no_use_cuda_fp16 param for AutoGPTQ | 2023-06-23 12:22:56 -03:00 |
| block_requests.py | Block a cloudfare request | 2023-07-06 22:24:52 -07:00 |
| callbacks.py | Make stop_everything work with non-streamed generation (#2848) | 2023-06-24 11:19:16 -03:00 |
| chat.py | Create logs dir if missing when saving history (#3462) | 2023-08-05 13:47:16 -03:00 |
| deepspeed_parameters.py | Fix typo in deepspeed_parameters.py (#3222) | 2023-07-24 11:17:28 -03:00 |
| evaluate.py | Sort some imports | 2023-06-25 01:44:36 -03:00 |
| exllama_hf.py | Make it possible to evaluate exllama perplexity (#3138) | 2023-07-16 01:52:55 -03:00 |
| exllama.py | Implement auto_max_new_tokens for ExLlama | 2023-08-02 11:03:56 -07:00 |
| extensions.py | Add extension example, replace input_hijack with chat_input_modifier (#3307) | 2023-07-25 18:49:56 -03:00 |
| github.py | Implement sessions + add basic multi-user support (#2991) | 2023-07-04 00:03:30 -03:00 |
| GPTQ_loader.py | Prevent unwanted log messages from modules | 2023-05-21 22:42:34 -03:00 |
| html_generator.py | Fix chat message order (#3461) | 2023-08-05 13:53:54 -03:00 |
| llama_attn_hijack.py | Prevent unwanted log messages from modules | 2023-05-21 22:42:34 -03:00 |
| llamacpp_hf.py | Add the --cpu option for llama.cpp to prevent CUDA from being used (#3432) | 2023-08-03 11:00:36 -03:00 |
| llamacpp_model.py | Fix llama.cpp truncation (#3400) | 2023-08-03 20:01:15 -03:00 |
| loaders.py | Add the --cpu option for llama.cpp to prevent CUDA from being used (#3432) | 2023-08-03 11:00:36 -03:00 |
| logging_colors.py | Add menus for saving presets/characters/instruction templates/prompts (#2621) | 2023-06-11 12:19:18 -03:00 |
| LoRA.py | Use 'torch.backends.mps.is_available' to check if mps is supported (#3164) | 2023-07-17 21:27:18 -03:00 |
| models_settings.py | When saving model settings, only save the settings for the current loader | 2023-08-01 06:10:09 -07:00 |
| models.py | Remove flexgen support | 2023-07-25 15:15:29 -07:00 |
| monkey_patch_gptq_lora.py | Sort some imports | 2023-06-25 01:44:36 -03:00 |
| presets.py | When saving a preset, only save params that differ from the defaults | 2023-07-31 19:13:29 -07:00 |
| relative_imports.py | Add ExLlama+LoRA support (#2756) | 2023-06-19 12:31:24 -03:00 |
| RWKV.py | Add ExLlama support (#2444) | 2023-06-16 20:35:38 -03:00 |
| sampler_hijack.py | Fix: Mirostat fails on models split across multiple GPUs | 2023-08-05 13:45:47 -03:00 |
| shared.py | Add SSL certificate support (#3453) | 2023-08-04 13:57:31 -03:00 |
| text_generation.py | Fix llama.cpp truncation (#3400) | 2023-08-03 20:01:15 -03:00 |
| training.py | Properly format exceptions in the UI | 2023-08-03 06:57:21 -07:00 |
| ui.py | Remove unnecessary chat.js (#3445) | 2023-08-04 01:58:37 -03:00 |
| utils.py | Remove flexgen support | 2023-07-25 15:15:29 -07:00 |