| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| oobabooga | 8513028968 | Fix lag in the chat tab during streaming | 2023-12-12 13:01:25 -08:00 |
| oobabooga | 39d2fe1ed9 | Jinja templates for Instruct and Chat (#4874) | 2023-12-12 17:23:14 -03:00 |
| oobabooga | 181743fd97 | Fix missing spaces tokenizer issue (closes #4834) | 2023-12-08 05:16:46 -08:00 |
| Yiximail | 1c74b3ab45 | Fix partial unicode characters issue (#4837) | 2023-12-08 09:50:53 -03:00 |
| oobabooga | 6430acadde | Minor bug fix after https://github.com/oobabooga/text-generation-webui/pull/4814 | 2023-12-05 10:08:11 -08:00 |
| oobabooga | 0f828ea441 | Do not limit API updates/second | 2023-12-04 20:45:43 -08:00 |
| oobabooga | 9edb193def | Optimize HF text generation (#4814) | 2023-12-05 00:00:40 -03:00 |
| tsukanov-as | 9f7ae6bb2e | fix detection of stopping strings when HTML escaping is used (#4728) | 2023-11-27 15:42:08 -03:00 |
| oobabooga | 1b69694fe9 | Add types to the encode/decode/token-count endpoints | 2023-11-07 19:32:14 -08:00 |
| oobabooga | ec17a5d2b7 | Make OpenAI API the default API (#4430) | 2023-11-06 02:38:29 -03:00 |
| oobabooga | aa5d671579 | Add temperature_last parameter (#4472) | 2023-11-04 13:09:07 -03:00 |
| kalomaze | 367e5e6e43 | Implement Min P as a sampler option in HF loaders (#4449) | 2023-11-02 16:32:51 -03:00 |
| Abhilash Majumder | 778a010df8 | Intel Gpu support initialization (#4340) | 2023-10-26 23:39:51 -03:00 |
| tdrussell | 72f6fc6923 | Rename additive_repetition_penalty to presence_penalty, add frequency_penalty (#4376) | 2023-10-25 12:10:28 -03:00 |
| tdrussell | 4440f87722 | Add additive_repetition_penalty sampler setting. (#3627) | 2023-10-23 02:28:07 -03:00 |
| oobabooga | b88b2b74a6 | Experimental Intel Arc transformers support (untested) | 2023-10-15 20:51:11 -07:00 |
| Brian Dashore | 98fa73a974 | Text Generation: stop if EOS token is reached (#4213) | 2023-10-07 19:46:42 -03:00 |
| oobabooga | ae4ba3007f | Add grammar to transformers and _HF loaders (#4091) | 2023-10-05 10:01:36 -03:00 |
| oobabooga | 869f47fff9 | Lint | 2023-09-19 13:51:57 -07:00 |
| BadisG | 893a72a1c5 | Stop generation immediately when using "Maximum tokens/second" (#3952) (co-authored with oobabooga) | 2023-09-18 14:27:06 -03:00 |
| oobabooga | 0ede2965d5 | Remove an error message | 2023-09-17 18:46:08 -07:00 |
| oobabooga | a069f3904c | Undo part of ad8ac545a5 | 2023-09-17 08:12:23 -07:00 |
| oobabooga | ad8ac545a5 | Tokenization improvements | 2023-09-17 07:02:00 -07:00 |
| saltacc | cd08eb0753 | token probs for non HF loaders (#3957) | 2023-09-17 10:42:32 -03:00 |
| oobabooga | ef04138bc0 | Improve the UI tokenizer | 2023-09-15 19:30:44 -07:00 |
| saltacc | f01b9aa71f | Add customizable ban tokens (#3899) | 2023-09-15 18:27:27 -03:00 |
| oobabooga | c2a309f56e | Add ExLlamaV2 and ExLlamav2_HF loaders (#3881) | 2023-09-12 14:33:07 -03:00 |
| oobabooga | 47e490c7b4 | Set use_cache=True by default for all models | 2023-08-30 13:26:27 -07:00 |
| oobabooga | cec8db52e5 | Add max_tokens_second param (#3533) | 2023-08-29 17:44:31 -03:00 |
| oobabooga | 2cb07065ec | Fix an escaping bug | 2023-08-20 21:50:42 -07:00 |
| oobabooga | a74dd9003f | Fix HTML escaping for perplexity_colors extension | 2023-08-20 21:40:22 -07:00 |
| cal066 | 7a4fcee069 | Add ctransformers support (#3313) (co-authored with cal066, oobabooga, randoentity) | 2023-08-11 14:41:33 -03:00 |
| oobabooga | 65aa11890f | Refactor everything (#3481) | 2023-08-06 21:49:27 -03:00 |
| oobabooga | 0af10ab49b | Add Classifier Free Guidance (CFG) for Transformers/ExLlama (#3325) | 2023-08-06 17:22:48 -03:00 |
| Pete | f4005164f4 | Fix llama.cpp truncation (#3400) (co-authored with oobabooga) | 2023-08-03 20:01:15 -03:00 |
| oobabooga | e931844fe2 | Add auto_max_new_tokens parameter (#3419) | 2023-08-02 14:52:20 -03:00 |
| oobabooga | 75c2dd38cf | Remove flexgen support | 2023-07-25 15:15:29 -07:00 |
| appe233 | 89e0d15cf5 | Use 'torch.backends.mps.is_available' to check if mps is supported (#3164) | 2023-07-17 21:27:18 -03:00 |
| Morgan Schweers | 6d1e911577 | Add support for logits processors in extensions (#3029) | 2023-07-13 17:22:41 -03:00 |
| oobabooga | 4b1804a438 | Implement sessions + add basic multi-user support (#2991) | 2023-07-04 00:03:30 -03:00 |
| oobabooga | 3443219cbc | Add repetition penalty range parameter to transformers (#2916) | 2023-06-29 13:40:13 -03:00 |
| oobabooga | 365b672531 | Minor change to prevent future bugs | 2023-06-25 01:38:54 -03:00 |
| 快乐的我531 | e356f69b36 | Make stop_everything work with non-streamed generation (#2848) | 2023-06-24 11:19:16 -03:00 |
| oobabooga | 3e80f2aceb | Apply the output extensions only once (relevant for google translate, silero) | 2023-06-24 10:59:07 -03:00 |
| oobabooga | 8bb3bb39b3 | Implement stopping string search in string space (#2847) | 2023-06-24 09:43:00 -03:00 |
| LarryVRH | 580c1ee748 | Implement a demo HF wrapper for exllama to utilize existing HF transformers decoding. (#2777) | 2023-06-21 15:31:42 -03:00 |
| oobabooga | 7f06d551a3 | Fix streaming callback | 2023-06-16 21:44:56 -03:00 |
| oobabooga | 9f40032d32 | Add ExLlama support (#2444) | 2023-06-16 20:35:38 -03:00 |
| oobabooga | 7ef6a50e84 | Reorganize model loading UI completely (#2720) | 2023-06-16 19:00:37 -03:00 |
| brandonj60 | b04e18d10c | Add Mirostat v2 sampling to transformer models (#2571) | 2023-06-09 21:26:31 -03:00 |