333075e726 | oobabooga | 2023-07-04 11:38:35 -03:00 | Fix #3003
463ddfffd0 | oobabooga | 2023-07-03 23:32:02 -07:00 | Fix start_with
373555c4fb | oobabooga | 2023-07-03 22:19:28 -07:00 | Fix loading some histories (thanks kaiokendev)
10c8c197bf | Panchovix | 2023-07-04 01:13:16 -03:00 | Add Support for Static NTK RoPE scaling for exllama/exllama_hf (#2955)
7e8340b14d | oobabooga | 2023-07-03 20:08:14 -07:00 | Make greetings appear in --multi-user mode
4b1804a438 | oobabooga | 2023-07-04 00:03:30 -03:00 | Implement sessions + add basic multi-user support (#2991)
1f8cae14f9 | FartyPants | 2023-07-03 17:41:18 -03:00 | Update training.py - correct use of lora_names (#2988)
c23c88ee4c | FartyPants | 2023-07-03 17:40:22 -03:00 | Update LoRA.py - avoid potential error (#2953)
33f56fd41d | FartyPants | 2023-07-03 17:39:06 -03:00 | Update models.py to clear LORA names after unload (#2951)
48b11f9c5b | FartyPants | 2023-07-03 17:38:36 -03:00 | Training: added trainable parameters info (#2944)
847f70b694 | Turamarth14 | 2023-07-02 01:43:58 -03:00 | Update html_generator.py (#2954)
    With version 10.0.0 of Pillow the constant Image.ANTIALIAS has been removed. Instead Image.LANCZOS should be used.
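The Pillow change described in 847f70b694 amounts to a one-identifier substitution; a minimal sketch of the migration, with an arbitrarily chosen image size for illustration:

```python
from PIL import Image

# Pillow 10.0.0 removed Image.ANTIALIAS; Image.LANCZOS is the
# equivalent high-quality resampling filter for downscaling.
img = Image.new("RGB", (200, 100))
thumb = img.resize((100, 50), Image.LANCZOS)  # was: Image.ANTIALIAS
print(thumb.size)
```

Code written against older Pillow versions can use the same spelling, since `Image.LANCZOS` has been available for many releases before 10.0.0.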
3c076c3c80 | ardfork | 2023-06-29 15:03:16 -03:00 | Disable half2 for ExLlama when using HIP (#2912)
ac0f96e785 | missionfloyd | 2023-06-29 14:56:25 -03:00 | Some more character import tweaks. (#2921)
79db629665 | oobabooga | 2023-06-29 13:53:06 -03:00 | Minor bug fix
3443219cbc | oobabooga | 2023-06-29 13:40:13 -03:00 | Add repetition penalty range parameter to transformers (#2916)
20740ab16e | oobabooga | 2023-06-28 18:10:34 -03:00 | Revert "Fix exllama_hf gibberish above 2048 context, and works >5000 context. (#2913)"
    This reverts commit 37a16d23a7.
37a16d23a7 | Panchovix | 2023-06-28 12:36:07 -03:00 | Fix exllama_hf gibberish above 2048 context, and works >5000 context. (#2913)
ab1998146b | FartyPants | 2023-06-27 18:24:04 -03:00 | Training update - backup the existing adapter before training on top of it (#2902)
22d455b072 | oobabooga | 2023-06-26 00:10:33 -03:00 | Add LoRA support to ExLlama_HF
c52290de50 | oobabooga | 2023-06-25 22:49:26 -03:00 | ExLlama with long context (#2875)
9290c6236f | oobabooga | 2023-06-25 19:06:28 -03:00 | Keep ExLlama_HF if already selected
75fd763f99 | oobabooga | 2023-06-25 18:14:57 -03:00 | Fix chat saving issue (closes #2863)
21c189112c | FartyPants | 2023-06-25 15:34:46 -03:00 | Several Training Enhancements (#2868)
95212edf1f | oobabooga | 2023-06-25 12:13:15 -03:00 | Update training.py
f31281a8de | oobabooga | 2023-06-25 02:13:26 -03:00 | Fix loading instruction templates containing literal '\n'
f0fcd1f697 | oobabooga | 2023-06-25 01:44:36 -03:00 | Sort some imports
365b672531 | oobabooga | 2023-06-25 01:38:54 -03:00 | Minor change to prevent future bugs
bef67af23c | jllllll | 2023-06-24 20:24:17 -03:00 | Use pre-compiled python module for ExLlama (#2770)
cec5fb0ef6 | oobabooga | 2023-06-24 12:02:25 -03:00 | Failed attempt at evaluating exllama_hf perplexity
e356f69b36 | 快乐的我531 | 2023-06-24 11:19:16 -03:00 | Make stop_everything work with non-streamed generation (#2848)
ec482f3dae | oobabooga | 2023-06-24 11:07:11 -03:00 | Apply input extensions after yielding *Is typing...*
3e80f2aceb | oobabooga | 2023-06-24 10:59:07 -03:00 | Apply the output extensions only once
    Relevant for google translate, silero
51a388fa34 | missionfloyd | 2023-06-24 09:55:02 -03:00 | Organize chat history/character import menu (#2845)
    * Organize character import menu
    * Move Chat history upload/download labels
8bb3bb39b3 | oobabooga | 2023-06-24 09:43:00 -03:00 | Implement stopping string search in string space (#2847)
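Commit 8bb3bb39b3 searches for stopping strings in the decoded text rather than in token space, which avoids tokenizer-boundary mismatches. A hypothetical sketch of the core idea (the function name and signature are illustrative, not the project's actual API):

```python
def truncate_at_stopping_string(text, stopping_strings):
    """Cut the generated text at the earliest occurrence of any
    stopping string, operating purely on decoded strings."""
    earliest = len(text)
    for stop in stopping_strings:
        idx = text.find(stop)
        if idx != -1 and idx < earliest:
            earliest = idx
    return text[:earliest]

# A reply is cut where the model starts impersonating the user:
print(truncate_at_stopping_string("Hello!\nUser: hi", ["\nUser:", "\nBot:"]))  # prints "Hello!"
```

A streaming implementation additionally has to hold back text that is a *partial* match of a stopping string until more tokens arrive; that bookkeeping is omitted here.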
3ae9af01aa | oobabooga | 2023-06-23 12:22:56 -03:00 | Add --no_use_cuda_fp16 param for AutoGPTQ
5646690769 | Panchovix | 2023-06-23 11:31:02 -03:00 | Fix some models not loading on exllama_hf (#2835)
383c50f05b | oobabooga | 2023-06-23 01:48:29 -03:00 | Replace old presets with the results of Preset Arena (#2830)
b4a38c24b7 | Panchovix | 2023-06-22 16:05:25 -03:00 | Fix Multi-GPU not working on exllama_hf (#2803)
580c1ee748 | LarryVRH | 2023-06-21 15:31:42 -03:00 | Implement a demo HF wrapper for exllama to utilize existing HF transformers decoding. (#2777)
7625c6de89 | EugeoSynthesisThirtyTwo | 2023-06-20 16:18:42 -03:00 | fix usage of self in classmethod (#2781)
c40932eb39 | MikoAL | 2023-06-20 01:03:44 -03:00 | Added Falcon LoRA training support (#2684)
    I am 50% sure this will work
ce86f726e9 | FartyPants | 2023-06-20 00:47:36 -03:00 | Added saving of training logs to training_log.json (#2769)
59e7ecb198 | Cebtenzzre | 2023-06-19 21:31:19 -03:00 | llama.cpp: implement ban_eos_token via logits_processor (#2765)
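Commit 59e7ecb198 implements ban_eos_token by routing it through a logits processor. A hedged sketch of the general technique: a processor receives the token ids generated so far plus the raw logits and returns modified logits, so forcing the EOS logit to negative infinity makes the sampler unable to end generation. The names below are illustrative, and the callback shape only loosely mirrors llama-cpp-python's logits-processor interface:

```python
def make_ban_eos_processor(eos_token_id):
    """Build a logits processor that prevents EOS from being sampled."""
    def processor(input_ids, logits):
        # Setting the EOS logit to -inf gives it zero probability
        # after softmax, regardless of temperature or sampler.
        logits[eos_token_id] = float("-inf")
        return logits
    return processor

proc = make_ban_eos_processor(2)
print(proc([0, 1], [0.5, 1.5, 9.9]))  # the EOS logit (index 2) becomes -inf
```

Real backends pass tensors rather than Python lists, but the indexing logic is identical.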
eb30f4441f | oobabooga | 2023-06-19 12:31:24 -03:00 | Add ExLlama+LoRA support (#2756)
5f418f6171 | oobabooga | 2023-06-19 01:19:28 -03:00 | Fix a memory leak (credits for the fix: Ph0rk0z)
def3b69002 | ThisIsPIRI | 2023-06-18 18:14:06 -03:00 | Fix loading condition for universal llama tokenizer (#2753)
09c781b16f | oobabooga | 2023-06-18 16:31:14 -03:00 | Add modules/block_requests.py
    This has become unnecessary, but it could be useful in the future for other libraries.
3cae1221d4 | Forkoz | 2023-06-18 13:26:30 -03:00 | Update exllama.py - Respect model dir parameter (#2744)
c5641b65d3 | oobabooga | 2023-06-17 19:35:12 -03:00 | Handle leading spaces properly in ExLlama
05a743d6ad | oobabooga | 2023-06-17 19:08:25 -03:00 | Make llama.cpp use tfs parameter
e19cbea719 | oobabooga | 2023-06-17 19:02:29 -03:00 | Add a variable to modules/shared.py
cbd63eeeff | oobabooga | 2023-06-17 19:02:08 -03:00 | Fix repeated tokens with exllama
766c760cd7 | oobabooga | 2023-06-17 18:00:10 -03:00 | Use gen_begin_reuse in exllama
b27f83c0e9 | oobabooga | 2023-06-16 22:03:23 -03:00 | Make exllama stoppable
7f06d551a3 | oobabooga | 2023-06-16 21:44:56 -03:00 | Fix streaming callback
5f392122fd | oobabooga | 2023-06-16 20:49:36 -03:00 | Add gpu_split param to ExLlama
    Adapted from code created by Ph0rk0z. Thank you Ph0rk0z.
9f40032d32 | oobabooga | 2023-06-16 20:35:38 -03:00 | Add ExLlama support (#2444)
dea43685b0 | oobabooga | 2023-06-16 19:10:53 -03:00 | Add some clarifications
7ef6a50e84 | oobabooga | 2023-06-16 19:00:37 -03:00 | Reorganize model loading UI completely (#2720)
646b0c889f | Tom Jobbins | 2023-06-15 23:59:54 -03:00 | AutoGPTQ: Add UI and command line support for disabling fused attention and fused MLP (#2648)
2b9a6b9259 | oobabooga | 2023-06-14 18:45:24 -03:00 | Merge remote-tracking branch 'refs/remotes/origin/main'
4d508cbe58 | oobabooga | 2023-06-14 18:44:43 -03:00 | Add some checks to AutoGPTQ loader
56c19e623c | FartyPants | 2023-06-14 18:29:42 -03:00 | Add LORA name instead of "default" in PeftModel (#2689)
474dc7355a | oobabooga | 2023-06-14 11:32:20 -03:00 | Allow API requests to use parameter presets
e471919e6d | oobabooga | 2023-06-11 17:56:01 -03:00 | Make llava/minigpt-4 work with AutoGPTQ
f4defde752 | oobabooga | 2023-06-11 17:11:06 -03:00 | Add a menu for installing extensions
ac122832f7 | oobabooga | 2023-06-11 14:20:16 -03:00 | Make dropdown menus more similar to automatic1111
6133675e0f | oobabooga | 2023-06-11 12:19:18 -03:00 | Add menus for saving presets/characters/instruction templates/prompts (#2621)
b04e18d10c | brandonj60 | 2023-06-09 21:26:31 -03:00 | Add Mirostat v2 sampling to transformer models (#2571)
6015616338 | oobabooga | 2023-06-06 13:06:05 -03:00 | Style changes
f040073ef1 | oobabooga | 2023-06-06 13:05:05 -03:00 | Handle the case of older autogptq install
bc58dc40bd | oobabooga | 2023-06-06 12:57:13 -03:00 | Fix a minor bug
00b94847da | oobabooga | 2023-06-06 07:42:23 -03:00 | Remove softprompt support
0aebc838a0 | oobabooga | 2023-06-06 07:21:07 -03:00 | Don't save the history for 'None' character
9f215523e2 | oobabooga | 2023-06-06 07:05:46 -03:00 | Remove some unused imports
0f0108ce34 | oobabooga | 2023-06-06 07:00:11 -03:00 | Never load the history for default character
11f38b5c2b | oobabooga | 2023-06-05 23:32:57 -03:00 | Add AutoGPTQ LoRA support
3a5cfe96f0 | oobabooga | 2023-06-05 17:37:37 -03:00 | Increase chat_prompt_size_max
f276d88546 | oobabooga | 2023-06-05 15:41:48 -03:00 | Use AutoGPTQ by default for GPTQ models
9b0e95abeb | oobabooga | 2023-06-05 11:56:03 -03:00 | Fix "regenerate" when "Start reply with" is set
19f78684e6 | oobabooga | 2023-06-02 13:58:08 -03:00 | Add "Start reply with" feature to chat mode
f7b07c4705 | GralchemOz | 2023-06-02 13:45:41 -03:00 | Fix the missing Chinese character bug (#2497)
2f6631195a | oobabooga | 2023-06-02 01:45:46 -03:00 | Add desc_act checkbox to the UI
9c066601f5 | LaaZa | 2023-06-02 01:33:55 -03:00 | Extend AutoGPTQ support for any GPTQ model (#1668)
a83f9aa65b | oobabooga | 2023-06-01 12:08:39 -03:00 | Update shared.py
b6c407f51d | oobabooga | 2023-05-31 23:41:42 -03:00 | Don't stream at more than 24 fps
    This is a performance optimization
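The 24 fps cap in b6c407f51d is a yield-throttle: intermediate streaming updates that arrive faster than the frame budget are dropped, while the final state is always delivered. A minimal sketch under assumed names (`throttled`, `max_fps`, and the generator-wrapping shape are illustrative, not the project's actual code):

```python
import time

def throttled(generator, max_fps=24):
    """Yield items from `generator`, skipping updates that arrive
    faster than max_fps; the last item is always yielded so the
    final state is never lost."""
    min_interval = 1.0 / max_fps
    last_yield = 0.0
    last_item = None
    for last_item in generator:
        now = time.monotonic()
        if now - last_yield >= min_interval:
            last_yield = now
            yield last_item
    # The final item may be yielded twice; redrawing the same UI
    # state is harmless, while dropping it would truncate the reply.
    yield last_item
```

This works for UI streaming because each yielded item is the *full* text so far, so skipped intermediates carry no information that the next update does not.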
9ab90d8b60 | Forkoz | 2023-05-30 11:09:18 -03:00 | Fix warning for qlora (#2438)
3578dd3611 | oobabooga | 2023-05-29 22:40:54 -03:00 | Change a warning message
3a6e194bc7 | oobabooga | 2023-05-29 22:39:23 -03:00 | Change a warning message
9e7204bef4 | Luis Lopez | 2023-05-29 21:40:01 -03:00 | Add tail-free and top-a sampling (#2357)
1394f44e14 | oobabooga | 2023-05-29 15:32:45 -03:00 | Add triton checkbox for AutoGPTQ
f34d20922c | oobabooga | 2023-05-29 13:31:17 -03:00 | Minor fix
983eef1e29 | oobabooga | 2023-05-29 13:28:25 -03:00 | Attempt at evaluating falcon perplexity (failed)
204731952a | Honkware | 2023-05-29 10:20:18 -03:00 | Falcon support (trust-remote-code and autogptq checkboxes) (#2367)
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
60ae80cf28 | Forkoz | 2023-05-28 23:10:10 -03:00 | Fix hang in tokenizer for AutoGPTQ llama models. (#2399)
2f811b1bdf | oobabooga | 2023-05-28 22:48:20 -03:00 | Change a warning message
9ee1e37121 | oobabooga | 2023-05-28 22:46:32 -03:00 | Fix return message when no model is loaded
00ebea0b2a | oobabooga | 2023-05-28 22:34:12 -03:00 | Use YAML for presets and settings
acfd876f29 | oobabooga | 2023-05-25 15:06:22 -03:00 | Some qol changes to "Perplexity evaluation"
8efdc01ffb | oobabooga | 2023-05-25 15:05:53 -03:00 | Better default for compute_dtype