21c189112c | 2023-06-25 15:34:46 -03:00 | FartyPants | Several Training Enhancements (#2868)
95212edf1f | 2023-06-25 12:13:15 -03:00 | oobabooga | Update training.py
f31281a8de | 2023-06-25 02:13:26 -03:00 | oobabooga | Fix loading instruction templates containing literal '\n'
f0fcd1f697 | 2023-06-25 01:44:36 -03:00 | oobabooga | Sort some imports
365b672531 | 2023-06-25 01:38:54 -03:00 | oobabooga | Minor change to prevent future bugs
bef67af23c | 2023-06-24 20:24:17 -03:00 | jllllll | Use pre-compiled python module for ExLlama (#2770)
cec5fb0ef6 | 2023-06-24 12:02:25 -03:00 | oobabooga | Failed attempt at evaluating exllama_hf perplexity
e356f69b36 | 2023-06-24 11:19:16 -03:00 | 快乐的我531 | Make stop_everything work with non-streamed generation (#2848)
ec482f3dae | 2023-06-24 11:07:11 -03:00 | oobabooga | Apply input extensions after yielding *Is typing...*
3e80f2aceb | 2023-06-24 10:59:07 -03:00 | oobabooga | Apply the output extensions only once
    Relevant for google translate, silero
51a388fa34 | 2023-06-24 09:55:02 -03:00 | missionfloyd | Organize chat history/character import menu (#2845)
    * Organize character import menu
    * Move Chat history upload/download labels
8bb3bb39b3 | 2023-06-24 09:43:00 -03:00 | oobabooga | Implement stopping string search in string space (#2847)
3ae9af01aa | 2023-06-23 12:22:56 -03:00 | oobabooga | Add --no_use_cuda_fp16 param for AutoGPTQ
5646690769 | 2023-06-23 11:31:02 -03:00 | Panchovix | Fix some models not loading on exllama_hf (#2835)
383c50f05b | 2023-06-23 01:48:29 -03:00 | oobabooga | Replace old presets with the results of Preset Arena (#2830)
b4a38c24b7 | 2023-06-22 16:05:25 -03:00 | Panchovix | Fix Multi-GPU not working on exllama_hf (#2803)
580c1ee748 | 2023-06-21 15:31:42 -03:00 | LarryVRH | Implement a demo HF wrapper for exllama to utilize existing HF transformers decoding (#2777)
7625c6de89 | 2023-06-20 16:18:42 -03:00 | EugeoSynthesisThirtyTwo | Fix usage of self in classmethod (#2781)
c40932eb39 | 2023-06-20 01:03:44 -03:00 | MikoAL | Added Falcon LoRA training support (#2684)
    I am 50% sure this will work
ce86f726e9 | 2023-06-20 00:47:36 -03:00 | FartyPants | Added saving of training logs to training_log.json (#2769)
59e7ecb198 | 2023-06-19 21:31:19 -03:00 | Cebtenzzre | llama.cpp: implement ban_eos_token via logits_processor (#2765)
eb30f4441f | 2023-06-19 12:31:24 -03:00 | oobabooga | Add ExLlama+LoRA support (#2756)
5f418f6171 | 2023-06-19 01:19:28 -03:00 | oobabooga | Fix a memory leak (credits for the fix: Ph0rk0z)
def3b69002 | 2023-06-18 18:14:06 -03:00 | ThisIsPIRI | Fix loading condition for universal llama tokenizer (#2753)
09c781b16f | 2023-06-18 16:31:14 -03:00 | oobabooga | Add modules/block_requests.py
    This has become unnecessary, but it could be useful in the future
    for other libraries.
3cae1221d4 | 2023-06-18 13:26:30 -03:00 | Forkoz | Update exllama.py - Respect model dir parameter (#2744)
c5641b65d3 | 2023-06-17 19:35:12 -03:00 | oobabooga | Handle leading spaces properly in ExLlama
05a743d6ad | 2023-06-17 19:08:25 -03:00 | oobabooga | Make llama.cpp use tfs parameter
e19cbea719 | 2023-06-17 19:02:29 -03:00 | oobabooga | Add a variable to modules/shared.py
cbd63eeeff | 2023-06-17 19:02:08 -03:00 | oobabooga | Fix repeated tokens with exllama
766c760cd7 | 2023-06-17 18:00:10 -03:00 | oobabooga | Use gen_begin_reuse in exllama
b27f83c0e9 | 2023-06-16 22:03:23 -03:00 | oobabooga | Make exllama stoppable
7f06d551a3 | 2023-06-16 21:44:56 -03:00 | oobabooga | Fix streaming callback
5f392122fd | 2023-06-16 20:49:36 -03:00 | oobabooga | Add gpu_split param to ExLlama
    Adapted from code created by Ph0rk0z. Thank you Ph0rk0z.
9f40032d32 | 2023-06-16 20:35:38 -03:00 | oobabooga | Add ExLlama support (#2444)
dea43685b0 | 2023-06-16 19:10:53 -03:00 | oobabooga | Add some clarifications
7ef6a50e84 | 2023-06-16 19:00:37 -03:00 | oobabooga | Reorganize model loading UI completely (#2720)
646b0c889f | 2023-06-15 23:59:54 -03:00 | Tom Jobbins | AutoGPTQ: Add UI and command line support for disabling fused attention and fused MLP (#2648)
2b9a6b9259 | 2023-06-14 18:45:24 -03:00 | oobabooga | Merge remote-tracking branch 'refs/remotes/origin/main'
4d508cbe58 | 2023-06-14 18:44:43 -03:00 | oobabooga | Add some checks to AutoGPTQ loader
56c19e623c | 2023-06-14 18:29:42 -03:00 | FartyPants | Add LORA name instead of "default" in PeftModel (#2689)
474dc7355a | 2023-06-14 11:32:20 -03:00 | oobabooga | Allow API requests to use parameter presets
e471919e6d | 2023-06-11 17:56:01 -03:00 | oobabooga | Make llava/minigpt-4 work with AutoGPTQ
f4defde752 | 2023-06-11 17:11:06 -03:00 | oobabooga | Add a menu for installing extensions
ac122832f7 | 2023-06-11 14:20:16 -03:00 | oobabooga | Make dropdown menus more similar to automatic1111
6133675e0f | 2023-06-11 12:19:18 -03:00 | oobabooga | Add menus for saving presets/characters/instruction templates/prompts (#2621)
b04e18d10c | 2023-06-09 21:26:31 -03:00 | brandonj60 | Add Mirostat v2 sampling to transformer models (#2571)
6015616338 | 2023-06-06 13:06:05 -03:00 | oobabooga | Style changes
f040073ef1 | 2023-06-06 13:05:05 -03:00 | oobabooga | Handle the case of older autogptq install
bc58dc40bd | 2023-06-06 12:57:13 -03:00 | oobabooga | Fix a minor bug