Commit Graph

607 Commits

Author SHA1 Message Date
catalpaaa
78bbc66fc4
allow custom stopping strings in all modes (#903) 2023-04-11 12:30:06 -03:00
oobabooga
0f212093a3
Refactor the UI
A single dictionary called 'interface_state' is now passed as input to all functions. The values are updated only when necessary.

The goal is to make it easier to add new elements to the UI.
2023-04-11 11:46:30 -03:00
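A minimal sketch of the "interface_state" pattern described in the commit above, assuming Gradio Blocks; the element names and parameters are illustrative, not the project's actual ones:

```python
import gradio as gr

def gather_interface_values(temperature, top_p, max_new_tokens):
    # Collect the current values of all UI elements into one dictionary
    # so downstream functions only need a single input.
    return {
        'temperature': temperature,
        'top_p': top_p,
        'max_new_tokens': max_new_tokens,
    }

def generate(prompt, state):
    # Every callback receives the same state dict instead of a long
    # positional list of sliders and checkboxes.
    return f"(would generate with temperature={state['temperature']})"

with gr.Blocks() as demo:
    state = gr.State({})
    temperature = gr.Slider(0.01, 2.0, value=0.7, label='temperature')
    top_p = gr.Slider(0.0, 1.0, value=0.9, label='top_p')
    max_new_tokens = gr.Slider(1, 2000, value=200, step=1, label='max_new_tokens')
    prompt = gr.Textbox(label='Prompt')
    output = gr.Textbox(label='Output')
    button = gr.Button('Generate')

    # Refresh the state dict right before generating, then pass it along.
    button.click(gather_interface_values, [temperature, top_p, max_new_tokens], state).then(
        generate, [prompt, state], output)

demo.launch()
```

Adding a new UI element then only requires extending the gathering step, not every callback signature.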
oobabooga
58b34c0841 Fix chat_prompt_size 2023-04-10 20:06:42 -03:00
Alex "mcmonkey" Goodwin
0caf718a21
add on-page documentation to parameters (#1008) 2023-04-10 17:19:12 -03:00
oobabooga
bd04ff27ad Make the bos token optional 2023-04-10 16:44:22 -03:00
oobabooga
0f1627eff1 Don't treat Instruct mode histories as regular histories
* They must now be saved/loaded manually
* Also improved browser caching of pfps
* Also changed the global default preset
2023-04-10 15:48:07 -03:00
oobabooga
d679c4be13 Change a label 2023-04-10 11:44:37 -03:00
oobabooga
45244ed125 More descriptive download info 2023-04-10 11:42:12 -03:00
oobabooga
11b23db8d4 Remove unused imports 2023-04-10 11:37:42 -03:00
oobabooga
2c14df81a8 Use download-model.py to download the model 2023-04-10 11:36:39 -03:00
oobabooga
c6e9ba20a4 Merge branch 'main' into UsamaKenway-main 2023-04-10 11:14:03 -03:00
oobabooga
d29f4624e9 Add a Continue button to chat mode 2023-04-09 20:04:16 -03:00
oobabooga
f91d3a3ff4 server.py readability 2023-04-09 14:46:32 -03:00
Usama Kenway
ebdf4c8c12 path fixed 2023-04-09 16:53:21 +05:00
Usama Kenway
7436dd5b4a Download custom model menu (from Hugging Face) added in the Model tab 2023-04-09 16:11:43 +05:00
oobabooga
cb169d0834 Minor formatting changes 2023-04-08 17:34:07 -03:00
oobabooga
2f16d0afca Remove redundant events 2023-04-08 17:32:36 -03:00
oobabooga
a6a00cb82f
Properly concatenate chat events 2023-04-08 17:25:21 -03:00
Φφ
ffd102e5c0
SD Api Pics extension, v.1.1 (#596) 2023-04-07 21:36:04 -03:00
oobabooga
5543a5089d Auto-submit the whisper extension transcription 2023-04-07 15:57:51 -03:00
oobabooga
1dc464dcb0 Sort imports 2023-04-07 14:42:03 -03:00
oobabooga
962e33dc10 Change button style 2023-04-07 12:22:14 -03:00
Maya
744bf7cbf2
Get rid of type parameter warning (#883)
Fix annoying `The 'type' parameter has been deprecated. Use the Number component instead` warning
2023-04-07 11:17:16 -03:00
oobabooga
ea6e77df72
Make the code more like PEP8 for readability (#862) 2023-04-07 00:15:45 -03:00
oobabooga
5b301d9a02 Create a Model tab 2023-04-06 01:54:05 -03:00
oobabooga
4a400320dd Clean up 2023-04-06 01:47:00 -03:00
Randell Miller
641646a801
Fix crash if missing instructions directory (#812) 2023-04-06 01:24:22 -03:00
oobabooga
3f3e42e26c
Refactor several function calls and the API 2023-04-06 01:22:15 -03:00
oobabooga
7f66421369 Fix loading characters 2023-04-05 14:22:32 -03:00
oobabooga
90141bc1a8 Fix saving prompts on Windows 2023-04-05 14:08:54 -03:00
oobabooga
cf2c4e740b Disable gradio analytics globally 2023-04-05 14:05:50 -03:00
oobabooga
e722c240af Add Instruct mode 2023-04-05 13:54:50 -03:00
oobabooga
ae1fe45bc0 One more cache reset 2023-04-04 23:15:57 -03:00
oobabooga
80dfba05f3 Better crop/resize cached images 2023-04-04 22:52:15 -03:00
oobabooga
65d8a24a6d Show profile pictures in the Character tab 2023-04-04 22:28:49 -03:00
oobabooga
8de22ac82a Merge character upload tabs 2023-04-03 18:01:45 -03:00
oobabooga
3012bdb5e0 Fix a label 2023-04-03 12:20:53 -03:00
OWKenobi
dcf61a8897
"character greeting" displayed and editable on the fly (#743)
* Add greetings field

* add greeting field and make it interactive

* Minor changes

* Fix a bug

* Simplify clear_chat_log

* Change a label

* Minor change

* Simplifications

* Simplification

* Simplify loading the default character history

* Fix regression

---------

Co-authored-by: oobabooga
2023-04-03 12:16:15 -03:00
oobabooga
2a267011dc Use Path.stem for simplicity 2023-04-03 00:56:14 -03:00
TheTerrasque
2157bb4319
New yaml character format (#337 from TheTerrasque/feature/yaml-characters)
This doesn't break backward compatibility with JSON characters.
2023-04-02 20:34:25 -03:00
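For reference, a character file in either format is just a small mapping. The field names below are an assumption for illustration, not the project's definitive schema, and the loader sketch shows how JSON compatibility is preserved:

```python
import json
from pathlib import Path

import yaml  # PyYAML


def load_character(path):
    # .json files keep working; .yaml files parse into the same dict shape.
    text = Path(path).read_text(encoding='utf-8')
    if Path(path).suffix.lower() == '.json':
        return json.loads(text)
    return yaml.safe_load(text)


# Illustrative character contents; field names are assumed.
example_yaml = """
name: Example
greeting: Hello! How can I help you today?
context: Example is a friendly assistant who answers concisely.
"""
print(yaml.safe_load(example_yaml)["greeting"])
```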
oobabooga
0dc6fa038b Use gr.State() to store the user input 2023-04-02 18:05:21 -03:00
Brian O'Connor
d0f9625f0b Clear text input for chat
Add logic to clear the textbox for chat input when the user submits or hits the generate button.
2023-04-01 21:48:24 -04:00
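In Gradio, clearing the input box on submit is usually done by including the textbox among the outputs and returning an empty string for it. A hedged sketch of that pattern (the callback and labels are illustrative):

```python
import gradio as gr

def send_message(message, history):
    # Append the new exchange and return "" for the textbox so the input
    # field is cleared as soon as the user submits or clicks Generate.
    history = history + [(message, f"Echo: {message}")]
    return history, ""

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    textbox = gr.Textbox(label="Message")
    button = gr.Button("Generate")
    textbox.submit(send_message, [textbox, chatbot], [chatbot, textbox])
    button.click(send_message, [textbox, chatbot], [chatbot, textbox])

demo.launch()
```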
oobabooga
b0890a7925 Add shared.is_chat() function 2023-04-01 20:15:00 -03:00
oobabooga
8c51b405e4 Progress towards generalizing Interface mode tab 2023-03-31 23:41:10 -03:00
oobabooga
1d1d9e40cd Add seed to settings 2023-03-31 12:22:07 -03:00
oobabooga
fd72afd8e7 Increase the textbox sizes 2023-03-31 00:43:00 -03:00
oobabooga
bd65940a48 Increase --chat box height 2023-03-30 00:43:49 -03:00
oobabooga
55755e27b9 Don't hardcode prompts in the settings dict/json 2023-03-29 22:47:01 -03:00
oobabooga
1cb9246160 Adapt to the new model names 2023-03-29 21:47:36 -03:00
oobabooga
cac577d99f Fix interface reloading 2023-03-28 13:25:58 -03:00
Alex "mcmonkey" Goodwin
9cc811a0e6 fix LoRA path typo in #549 2023-03-27 22:16:40 -07:00
Alex "mcmonkey" Goodwin
31f04dc615 Merge branch 'main' into add-train-lora-tab 2023-03-27 20:03:30 -07:00
oobabooga
005f552ea3 Some simplifications 2023-03-27 23:29:52 -03:00
oobabooga
fde92048af Merge branch 'main' into catalpaaa-lora-and-model-dir 2023-03-27 23:16:44 -03:00
oobabooga
2f0571bfa4 Small style changes 2023-03-27 21:24:39 -03:00
oobabooga
c2cad30772 Merge branch 'main' into mcmonkey4eva-add-train-lora-tab 2023-03-27 21:05:44 -03:00
oobabooga
641e1a09a7 Don't flash when selecting a new prompt 2023-03-27 14:48:43 -03:00
oobabooga
268abd1cba Add some space in notebook mode 2023-03-27 13:52:12 -03:00
Alex "mcmonkey" Goodwin
c07bcd0850 add some outputs to indicate progress updates (sorta)
An actual progress bar is still needed. Also minor formatting fixes.
2023-03-27 09:41:06 -07:00
oobabooga
af65c12900 Change Stop button behavior 2023-03-27 13:23:59 -03:00
oobabooga
572bafcd24 Less verbose message 2023-03-27 12:43:37 -03:00
Alex "mcmonkey" Goodwin
2afe1c13c1 move Training to before Interface mode
as Interface Mode seems to be a core 'settings' page that naturally belongs at the very end
2023-03-27 08:32:32 -07:00
oobabooga
202e981d00 Make Generate/Stop buttons smaller in notebook mode 2023-03-27 12:30:57 -03:00
Alex "mcmonkey" Goodwin
e439228ed8 Merge branch 'main' into add-train-lora-tab 2023-03-27 08:21:19 -07:00
oobabooga
57345b8f30 Add prompt loading/saving menus + reorganize interface 2023-03-27 12:16:37 -03:00
oobabooga
95c97e1747 Unload the model using the "Remove all" button 2023-03-26 23:47:29 -03:00
oobabooga
e07c9e3093 Merge branch 'main' into Brawlence-main 2023-03-26 23:40:51 -03:00
oobabooga
1c77fdca4c Change notebook mode appearance 2023-03-26 22:20:30 -03:00
oobabooga
49c10c5570
Add support for the latest GPTQ models with group-size (#530)
**Warning: old 4-bit weights will not work anymore!**

See here how to get up to date weights: https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#step-2-get-the-pre-converted-weights
2023-03-26 00:11:33 -03:00
Alex "mcmonkey" Goodwin
566898a79a initial lora training tab 2023-03-25 12:08:26 -07:00
catalpaaa
d51cb8292b Update server.py
yea i should go to bed
2023-03-24 17:36:31 -07:00
catalpaaa
9e2963e0c8 Update server.py 2023-03-24 17:35:45 -07:00
catalpaaa
ec2a1facee Update server.py 2023-03-24 17:34:33 -07:00
catalpaaa
b37c54edcf lora-dir, model-dir and login auth
Added lora-dir, model-dir, and a login-auth argument that points to a file containing usernames and passwords in the format "u:pw,u:pw,..."
2023-03-24 17:30:18 -07:00
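A hedged sketch of parsing that credentials file into the (username, password) pairs that Gradio's `auth=` parameter accepts; the file name and helper are illustrative:

```python
from pathlib import Path

def read_auth_file(path):
    # Parse "u1:pw1,u2:pw2,..." into a list of (username, password) tuples.
    text = Path(path).read_text(encoding='utf-8').strip()
    creds = []
    for pair in text.split(','):
        if ':' in pair:
            user, _, password = pair.partition(':')
            creds.append((user.strip(), password.strip()))
    return creds

# Example usage: demo.launch(auth=read_auth_file('auth.txt') or None)
```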
oobabooga
d8e950d6bd
Don't load the model twice when using --lora 2023-03-24 16:30:32 -03:00
oobabooga
fd99995b01
Make the Stop button more consistent in chat mode 2023-03-24 15:59:27 -03:00
oobabooga
9bdb3c784d
Minor fix 2023-03-23 22:02:40 -03:00
oobabooga
bf22d16ebc
Clear cache while switching LoRAs 2023-03-23 21:56:26 -03:00
Φφ
483d173d23 Code reuse + indication
Now shows the message in the console when unloading weights. Also reload_model() calls unload_model() first to free the memory so that multiple reloads won't overfill it.
2023-03-23 07:06:26 +03:00
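A minimal sketch of the unload/reload pattern described above, assuming a module-level model reference and PyTorch for the cache flush; load_model is a stand-in for the real loader:

```python
import gc

import torch

model, tokenizer = None, None


def load_model():
    # Stand-in for the real loader; returns (model, tokenizer).
    raise NotImplementedError


def unload_model():
    global model, tokenizer
    model, tokenizer = None, None
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # release cached GPU memory
    print("Model weights unloaded.")  # console indication noted in the commit


def reload_model():
    # Free the old weights first so repeated reloads don't stack up in
    # memory and eventually overfill it.
    global model, tokenizer
    unload_model()
    model, tokenizer = load_model()
```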
Φφ
1917b15275 Unload and reload models on request 2023-03-23 07:06:26 +03:00
wywywywy
61346b88ea
Add "seed" menu in the Parameters tab 2023-03-22 15:40:20 -03:00
oobabooga
4d701a6eb9 Create a mirror for the preset menu 2023-03-19 12:51:47 -03:00
oobabooga
20f5b455bf Add parameters reference #386 #331 2023-03-17 20:19:04 -03:00
oobabooga
a717fd709d Sort the imports 2023-03-17 11:42:25 -03:00
oobabooga
29fe7b1c74 Remove LoRA tab, move it into the Parameters menu 2023-03-17 11:39:48 -03:00
oobabooga
214dc6868e Several QoL changes related to LoRA 2023-03-17 11:24:52 -03:00
oobabooga
104293f411 Add LoRA support 2023-03-16 21:31:39 -03:00
oobabooga
38d7017657 Add all command-line flags to "Interface mode" 2023-03-16 12:44:03 -03:00
oobabooga
d54f3f4a34 Add no-stream checkbox to the interface 2023-03-16 10:19:00 -03:00
oobabooga
25a00eaf98 Add "Experimental" warning 2023-03-15 23:43:35 -03:00
oobabooga
599d3139fd Increase the reload timeout a bit 2023-03-15 23:34:08 -03:00
oobabooga
4d64a57092 Add Interface mode tab 2023-03-15 23:29:56 -03:00
oobabooga
ffb898608b Mini refactor 2023-03-15 20:44:34 -03:00
oobabooga
67d62475dc Further reorganize chat UI 2023-03-15 18:56:26 -03:00
oobabooga
c1959c26ee Show/hide the extensions block using javascript 2023-03-15 16:35:28 -03:00
oobabooga
348596f634 Fix broken extensions 2023-03-15 15:11:16 -03:00
oobabooga
658849d6c3 Move a checkbutton 2023-03-15 13:29:00 -03:00
oobabooga
d30a14087f Further reorganize the UI 2023-03-15 13:24:54 -03:00
oobabooga
ffc6cb3116
Merge pull request #325 from Ph0rk0z/fix-RWKV-Names
Fix rwkv names
2023-03-15 12:56:21 -03:00
oobabooga
1413931705 Add a header bar and redesign the interface (#293) 2023-03-15 12:01:32 -03:00
oobabooga
9d6a625bd6 Add 'hallucinations' filter #326
This breaks the API since a new parameter has been added.
It should be a one-line fix. See api-example.py.
2023-03-15 11:10:35 -03:00
Forkoz
3b62bd180d
Remove PTH extension from RWKV
When loading, the current model field was blank unless you typed it out.
2023-03-14 21:23:39 +00:00
Forkoz
f0f325eac1
Remove Json from loading
no more 20b tokenizer
2023-03-14 21:21:47 +00:00
oobabooga
72d207c098
Remove the chat API
It is not implemented, has not been tested, and this is causing confusion.
2023-03-14 16:31:27 -03:00
oobabooga
a95592fc56 Add back a progress indicator to --no-stream 2023-03-12 20:38:40 -03:00
oobabooga
bcf0075278
Merge pull request #235 from xanthousm/Quality_of_life-main
--auto-launch and "Is typing..."
2023-03-12 03:12:56 -03:00
oobabooga
92fe947721 Merge branch 'main' into new-streaming 2023-03-11 19:59:45 -03:00
oobabooga
2743dd736a Add *Is typing...* to impersonate as well 2023-03-11 10:50:18 -03:00
Xan
96c51973f9 --auto-launch and "Is typing..."
- Added `--auto-launch` arg to open web UI in the default browser when ready.
- Changed chat.py to display user input immediately and "*Is typing...*" as a temporary reply while generating text. Most noticeable when using `--no-stream`.
2023-03-11 22:50:59 +11:00
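Both features map onto Gradio fairly directly: `launch(inbrowser=True)` opens the default browser, and yielding an early partial result lets the UI show a placeholder while generation runs. A hedged sketch, with illustrative function names and a sleep standing in for generation:

```python
import time

import gradio as gr

def chat_reply(message, history):
    # Show the user's message and a temporary "*Is typing...*" bubble
    # immediately, then replace it once the real reply is ready.
    history = history + [(message, "*Is typing...*")]
    yield history
    time.sleep(1)  # stand-in for the actual text generation call
    history[-1] = (message, f"Echo: {message}")
    yield history

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    textbox = gr.Textbox()
    textbox.submit(chat_reply, [textbox, chatbot], chatbot)

# --auto-launch roughly corresponds to:
demo.launch(inbrowser=True)
```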
oobabooga
9849aac0f1 Don't show .pt models in the list 2023-03-09 21:54:50 -03:00
oobabooga
038e90765b Rename to "Text generation web UI" 2023-03-09 09:44:08 -03:00
jtang613
807a41cf87 Let's propose a name besides "Gradio" 2023-03-08 21:02:25 -05:00
oobabooga
ab50f80542 New text streaming method (much faster) 2023-03-08 02:46:35 -03:00
oobabooga
bf56b6c1fb Load settings.json without the need for --settings settings.json
This is for setting UI defaults
2023-03-06 10:57:45 -03:00
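The behavior described amounts to falling back to a settings.json sitting in the working directory when no --settings flag is given. A hedged sketch; the default keys are illustrative:

```python
import json
from pathlib import Path

DEFAULTS = {'max_new_tokens': 200, 'preset': 'Default'}

def load_settings(cli_path=None):
    # Prefer an explicitly passed --settings file, otherwise pick up a
    # local settings.json automatically so UI defaults can be overridden
    # without any command-line flag.
    path = Path(cli_path) if cli_path else Path('settings.json')
    settings = dict(DEFAULTS)
    if path.exists():
        settings.update(json.loads(path.read_text(encoding='utf-8')))
    return settings
```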
oobabooga
bcea196c9d Bump flexgen version 2023-03-02 12:03:57 -03:00
oobabooga
169209805d Model-aware prompts and presets 2023-03-02 11:25:04 -03:00
oobabooga
99dc95e14e Minor aesthetic change 2023-03-01 19:32:04 -03:00
oobabooga
a1429d1607 Add default extensions to the settings 2023-02-28 02:20:11 -03:00
oobabooga
365e1089b3 Move some buttons 2023-02-28 01:34:07 -03:00
oobabooga
43b6ab8673 Store thumbnails as files instead of base64 strings
This improves the UI responsiveness for large histories.
2023-02-27 13:41:00 -03:00
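Storing thumbnails on disk and referencing them by path keeps the rendered chat HTML small, since it no longer embeds a base64 copy of every profile picture. A hedged sketch of the idea with Pillow; the cache folder and size are assumptions:

```python
from pathlib import Path

from PIL import Image

def make_thumbnail(src, cache_dir='cache', size=(200, 200)):
    # Resize once, write the result to a cache folder, and return the file
    # path so the UI can reference it with <img src="..."> instead of
    # inlining a base64 string into the chat HTML.
    Path(cache_dir).mkdir(exist_ok=True)
    out_path = Path(cache_dir) / f'{Path(src).stem}_thumb.png'
    if not out_path.exists():
        img = Image.open(src)
        img.thumbnail(size)
        img.save(out_path, format='PNG')
    return str(out_path)
```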
oobabooga
611010e8af Add a confirmation to clear history 2023-02-27 11:41:21 -03:00
oobabooga
7a776ccf87 Make the gallery interactive to load characters 2023-02-26 17:19:36 -03:00
oobabooga
e91eb24649 Decrease the repetition penalty upper limit to 3 2023-02-26 01:51:59 -03:00
oobabooga
3d94ebfdd0 Change --chat colors 2023-02-26 00:51:15 -03:00
oobabooga
b3d2365d92 Rename a button 2023-02-25 16:33:46 -03:00
oobabooga
03d25c1c61 Reorder the chat buttons 2023-02-25 15:35:43 -03:00
oobabooga
e2cf4e4968 Reorder the custom parameters 2023-02-25 15:21:40 -03:00
oobabooga
381f747181 Reorganize the custom parameters for mobile usage 2023-02-25 15:17:44 -03:00
oobabooga
01acb250c5 Add a comment 2023-02-25 02:07:29 -03:00
oobabooga
7c2babfe39 Rename greed to "generation attempts" 2023-02-25 01:42:19 -03:00
oobabooga
2dfb999bf1 Add greed parameter 2023-02-25 01:31:01 -03:00
oobabooga
7a527a5581 Move "send picture" into an extension
I am not proud of how I did it for now.
2023-02-25 00:23:51 -03:00
oobabooga
e51ece21c0 Add ui() function to extensions 2023-02-24 19:00:11 -03:00
oobabooga
77f58e5dab Remove a space 2023-02-24 17:32:34 -03:00
oobabooga
c5066f1192 Rename some variables, be consistent about ' and " 2023-02-24 17:31:23 -03:00
oobabooga
78ad55641b Remove duplicate max_new_tokens parameter 2023-02-24 17:19:42 -03:00
oobabooga
65326b545a Move all gradio elements to shared (so that extensions can use them) 2023-02-24 16:46:50 -03:00
oobabooga
0a3590da8c Add a progress bar 2023-02-24 14:19:27 -03:00
oobabooga
3b8cecbab7 Reload the default chat on page refresh 2023-02-23 19:50:23 -03:00
oobabooga
f1914115d3 Fix minor issue with chat logs 2023-02-23 16:04:47 -03:00
oobabooga
2e86a1ec04 Move chat history into shared module 2023-02-23 15:11:18 -03:00
oobabooga
c87800341c Move function to extensions module 2023-02-23 14:55:21 -03:00
oobabooga
7224343a70 Improve the imports 2023-02-23 14:41:42 -03:00
oobabooga
364529d0c7 Further refactor 2023-02-23 14:31:28 -03:00
oobabooga
e46c43afa6 Move some stuff from server.py to modules 2023-02-23 13:42:23 -03:00
oobabooga
1dacd34165 Further refactor 2023-02-23 13:28:30 -03:00
oobabooga
ce7feb3641 Further refactor 2023-02-23 13:03:52 -03:00
oobabooga
98af4bfb0d Refactor the code to make it more modular 2023-02-23 12:05:25 -03:00
oobabooga
18e0ec955e Improve some descriptions in --help 2023-02-23 10:11:58 -03:00
oobabooga
c72892835a Don't show *-np models in the list of choices 2023-02-22 11:38:16 -03:00
oobabooga
044b963987 Add stop parameter for flexgen (#105) 2023-02-22 11:23:36 -03:00
oobabooga
ea21a22940 Remove redundant preset 2023-02-22 01:01:26 -03:00
oobabooga
b8b3d4139c Add --compress-weight parameter 2023-02-22 00:43:21 -03:00
oobabooga
eef6fc3cbf Add a preset for FlexGen 2023-02-21 23:33:15 -03:00
oobabooga
311404e258 Reuse disk-cache-dir parameter for flexgen 2023-02-21 22:11:05 -03:00
oobabooga
f3c75bbd64 Add --percent flag for flexgen 2023-02-21 22:08:46 -03:00
oobabooga
b83f51ee04 Add FlexGen support #92 (experimental) 2023-02-21 21:00:06 -03:00
oobabooga
444cd69c67 Fix regex bug in loading character jsons with special characters 2023-02-20 19:38:19 -03:00
oobabooga
d7a738fb7a Load any 13b/20b/30b model in 8-bit mode when no flags are supplied 2023-02-20 15:44:10 -03:00
oobabooga
77846ceef3 Minor change 2023-02-20 15:05:48 -03:00
oobabooga
e195377050 Deprecate torch dumps, move to safetensors (they load even faster) 2023-02-20 15:03:19 -03:00
oobabooga
14ffa0b418 Fix line breaks in --chat mode 2023-02-20 13:25:46 -03:00
SillyLossy
ded890c378 Escape regexp in message extraction 2023-02-19 12:55:45 +02:00
oobabooga
8c9dd95d55
Print the softprompt metadata when it is loaded 2023-02-19 01:48:23 -03:00
oobabooga
f79805f4a4
Change a comment 2023-02-18 22:58:40 -03:00
oobabooga
d58544a420 Some minor formatting changes 2023-02-18 11:07:55 -03:00
oobabooga
0dd41e4830 Reorganize the sliders some more 2023-02-17 16:33:27 -03:00
oobabooga
6b9ac2f88e Reorganize the generation parameters 2023-02-17 16:18:01 -03:00
oobabooga
596732a981 The soft prompt length must be considered here too 2023-02-17 12:35:30 -03:00
oobabooga
edc0262889 Minor file uploading fixes 2023-02-17 10:27:41 -03:00
oobabooga
243244eeec Attempt at fixing greyed out files on iphone 2023-02-17 10:17:15 -03:00
oobabooga
a226f4cddb No change, so reverting 2023-02-17 09:27:17 -03:00
oobabooga
40cb9f63f6 Try making Colab happy (tensorflow warnings) 2023-02-17 09:23:11 -03:00
oobabooga
aeddf902ec Make the refresh button prettier 2023-02-16 21:55:20 -03:00
oobabooga
21512e2790 Make the Stop button work more reliably 2023-02-16 21:21:45 -03:00
oobabooga
08805b3374 Force "You" in impersonate too 2023-02-16 13:24:13 -03:00
oobabooga
d7db04403f Fix --chat chatbox height 2023-02-16 12:45:05 -03:00
oobabooga
589069e105 Don't regenerate if no message has been sent 2023-02-16 12:32:35 -03:00
oobabooga
405dfbf57c Force your name to be "You" for pygmalion (properly) 2023-02-16 12:16:12 -03:00
oobabooga
7bd2ae05bf Force your name to be "You" for pygmalion
This allows you to customize your displayed name.
2023-02-15 21:32:53 -03:00
oobabooga
3746d72853 More style fixes 2023-02-15 21:13:12 -03:00
oobabooga
6f213b8c14 Style fix 2023-02-15 20:58:17 -03:00
oobabooga
ccf10db60f Move stuff into tabs in chat mode 2023-02-15 20:55:32 -03:00
oobabooga
a55e8836f6 Bump gradio version
It looks uglier, but the old one was bugged and unstable.
2023-02-15 20:20:56 -03:00
oobabooga
0e89ff4b13 Clear the persistent history after clicking on "Clear history" 2023-02-15 16:49:52 -03:00
oobabooga
b3bcd2881d Implement regenerate/impersonate the proper way (fixes #78) 2023-02-15 14:39:26 -03:00
oobabooga
5ee9283cae Mention BLIP 2023-02-15 13:53:38 -03:00
oobabooga
8d3b3959e7 Document --picture option 2023-02-15 13:50:18 -03:00
oobabooga
2eea0f4edb Minor change 2023-02-15 12:58:11 -03:00
oobabooga
3c31fa7079 Simplifications 2023-02-15 12:46:11 -03:00
oobabooga
80fbc584f7 Readability 2023-02-15 11:38:44 -03:00
oobabooga
b397bea387 Make chat history persistent 2023-02-15 11:30:38 -03:00
oobabooga
7be372829d Set chat prompt size in tokens 2023-02-15 10:18:50 -03:00
oobabooga
8c3ef58e00 Use BLIP directly + some simplifications 2023-02-14 23:55:46 -03:00
SillyLossy
a7d98f494a Use BLIP to send a picture to model 2023-02-15 01:38:21 +02:00
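Captioning an image with BLIP via Hugging Face transformers looks roughly like this; the checkpoint name is the commonly used public one, and how the caption is spliced into the chat prompt is not shown here:

```python
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

def caption_image(path):
    # Turn the uploaded picture into a short text description that can be
    # inserted into the chat prompt, e.g. "*You send a picture of ...*".
    image = Image.open(path).convert("RGB")
    inputs = processor(image, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=30)
    return processor.decode(output_ids[0], skip_special_tokens=True)
```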
oobabooga
d910d435cd Consider the softprompt in the maximum prompt length calculation 2023-02-14 12:06:47 -03:00
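The prompt-budget arithmetic behind this entry: the tokens reserved for the soft prompt must be subtracted along with max_new_tokens before the context is truncated. A hedged sketch; the 2048 limit and names are illustrative:

```python
def get_max_prompt_length(max_new_tokens, truncation_length=2048, soft_prompt_len=0):
    # Tokens available for the actual prompt text after reserving room for
    # the generated continuation and for the soft prompt embeddings.
    return truncation_length - max_new_tokens - soft_prompt_len

# e.g. 2048 - 200 - 32 = 1816 prompt tokens
print(get_max_prompt_length(200, soft_prompt_len=32))
```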
oobabooga
8b3bb512ef Minor bug fix (soft prompt was being loaded twice) 2023-02-13 23:34:04 -03:00
oobabooga
7739a29524 Some simplifications 2023-02-13 18:48:32 -03:00
oobabooga
3277b751f5 Add softprompt support (for real this time)
Is this too much voodoo for our purposes?
2023-02-13 15:25:16 -03:00
oobabooga
aa1177ff15 Send last internal reply to input rather than visible 2023-02-13 03:29:23 -03:00
oobabooga
2c3abcf57a Add support for rosey/chip/joi instruct models 2023-02-12 09:46:34 -03:00
oobabooga
7ef7bba6e6 Add progress bar for model loading 2023-02-12 09:36:27 -03:00
oobabooga
5d3f15b915 Use the CPU if no GPU is detected 2023-02-11 23:17:06 -03:00
oobabooga
b3c4657c47 Remove commas from preset files 2023-02-11 14:54:29 -03:00
oobabooga
0dd1409f24 Add penalty_alpha parameter (contrastive search) 2023-02-11 14:48:12 -03:00
oobabooga
2ed0386d87 Fix replace last reply in --chat mode (for #69) 2023-02-11 07:59:54 -03:00
oobabooga
316e07f06a Auto-assign GPU memory with --auto-devices alone 2023-02-10 16:36:06 -03:00
oobabooga
219366342b Sort imports according to PEP8 (based on #67) 2023-02-10 15:40:03 -03:00
81300
20dbef9623
Extend bfloat16 support 2023-02-09 20:00:03 +02:00
oobabooga
cadd100405 min_length has to be 0 when streaming is on 2023-02-08 00:23:35 -03:00
oobabooga
6be571cff7 Better variable names 2023-02-08 00:19:20 -03:00
oobabooga
58b07cca81 length_penalty can be negative (apparently) 2023-02-07 23:33:02 -03:00
oobabooga
7e4c25691d Repetition penalty has to be < 5 2023-02-07 23:23:39 -03:00
oobabooga
1c30e1b49a Add even more sliders 2023-02-07 23:11:04 -03:00
oobabooga
24dc705eca Add lots of sliders 2023-02-07 22:08:21 -03:00
Martin J
06a4664805 Fix a regex issue in tokenize_dialogue.
The existing regex would fail if using character names that start with
numbers, for example: 9S or 2B.
2023-02-05 07:42:57 +01:00
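A hedged illustration of the general idea (the project's actual pattern and fix are not reproduced here): building the dialogue-splitting regex with re.escape and explicit line anchors keeps it robust to unusual character names such as 9S or 2B.

```python
import re

def split_dialogue(text, name1='You', name2='9S'):
    # Split an example dialogue on lines that start with either name,
    # escaping the names so digits or punctuation in them cannot change
    # the meaning of the pattern.
    pattern = rf"(?m)^({re.escape(name1)}|{re.escape(name2)}):"
    return [chunk.strip() for chunk in re.split(pattern, text) if chunk.strip()]

example = "You: Hello there.\n9S: Hi! Ready to go?"
print(split_dialogue(example))
```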
oobabooga
2fe235738e Reorganize chat buttons 2023-02-04 22:53:42 -03:00
oobabooga
2207d44986 Windows doesn't like : in filenames 2023-02-04 20:07:39 -03:00
oobabooga
65266f3349 Fix loading official colab chat logs 2023-02-03 22:43:02 -03:00
oobabooga
44e8c671f9 Fix API documentation formatting in chat mode 2023-02-03 10:00:05 -03:00
oobabooga
a28f0d8bd7 Show it/s in the same units with or without streaming
Closes #49
2023-02-03 09:11:11 -03:00
oobabooga
4e4cd67223 Save chat history with name/date in filename
closes #50
2023-02-03 09:02:35 -03:00
oobabooga
3af3ffeb90 Make --help output more readable 2023-02-02 23:36:28 -03:00
oobabooga
638495b633 Simplify generate() function 2023-02-02 13:47:08 -03:00
oobabooga
3f05cf5ddd Simplify encode() function 2023-02-02 13:31:32 -03:00
oobabooga
2583bc5840 Simplify deepspeed implementation (#40) 2023-02-02 12:15:44 -03:00
oobabooga
f38c9bf428 Fix deepspeed (oops) 2023-02-02 10:39:37 -03:00
oobabooga
90f1067598 Move deepspeed parameters to another file 2023-02-02 10:25:09 -03:00
81300
248ec4fa21
Merge branch 'oobabooga:main' into ds 2023-02-01 20:50:51 +02:00
81300
a6f4760772
Add arg for bfloat16 2023-02-01 20:22:07 +02:00
81300
c515282f5c
no_split_module_classes not needed 2023-02-01 19:47:26 +02:00
81300
0a0d289537
Fix issue with generating on multiple GPUs 2023-02-01 19:02:07 +02:00
81300
a97afa6965
Add DeepSpeed ZeRO-3 integration 2023-02-01 18:48:13 +02:00
oobabooga
6b13816c47 Change default --disk behavior 2023-02-01 10:43:28 -03:00
oobabooga
119be56390 Add back low_cpu_mem_usage=True
Removing it didn't help with anything, so I am adding it back on a purely superstitious basis.
2023-02-01 10:01:44 -03:00
oobabooga
d4a0b377ab Allow standalone --cpu-memory
I think that what I am doing probably makes sense, but I could be wrong.
2023-01-31 21:23:16 -03:00
oobabooga
8ef89df746 Try to leave at least 1GiB free to prevent oom errors 2023-01-31 20:47:05 -03:00
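The headroom arithmetic hinted at here: take the card's total VRAM, subtract a safety margin, and floor to an integer so generation spikes don't trigger out-of-memory errors. A hedged sketch using torch; the rounding policy is an assumption:

```python
import torch

def suggested_gpu_memory_gib(reserve_gib=1):
    # Total VRAM of GPU 0 in GiB, minus a safety margin, floored to an int.
    if not torch.cuda.is_available():
        return 0
    total = torch.cuda.get_device_properties(0).total_memory / (1024 ** 3)
    return max(int(total) - reserve_gib, 0)

# e.g. a 24 GiB card yields 23, to be used like --gpu-memory 23GiB
print(suggested_gpu_memory_gib())
```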
oobabooga
bb77f20a6c Don't use low_cpu_mem_usage and device_map together 2023-01-31 13:24:05 -03:00
oobabooga
001ecf95b2
Update server.py 2023-01-31 08:14:16 -03:00
Silver267
a85bb5e9a2
Fix an error
Fixes "UnboundLocalError: local variable 'substring_found' referenced before assignment" when loading non-pygmalion models in cai chat mode.
2023-01-31 01:34:10 -05:00
oobabooga
5b0bbfa6e8 Clean up 2023-01-30 14:17:12 -03:00
oobabooga
2dadf42cb5 Print the tokenized example dialogue in a prettier way 2023-01-30 08:29:49 -03:00
oobabooga
161cae001b I needed this 2023-01-29 23:20:22 -03:00
oobabooga
3ebca480f6 Minor fix 2023-01-29 23:05:17 -03:00
oobabooga
00707a0b3b Add "Impersonate" button 2023-01-29 22:56:23 -03:00
oobabooga
de72e83508 Reorganize things 2023-01-29 14:27:22 -03:00
oobabooga
6fbfee9e6d Remove some bloat 2023-01-29 12:05:18 -03:00
oobabooga
9c9bd1074f Add option to replace the bot's last reply 2023-01-29 12:02:44 -03:00
oobabooga
e5ff4ddfc8 Add bot prefix modifier option in extensions 2023-01-29 10:11:59 -03:00
oobabooga
b6d01bb704 Enable extensions in all modes, not just chat 2023-01-29 09:48:18 -03:00