Commit Graph

1135 Commits

Author SHA1 Message Date
oobabooga
4dc1d8c091
Update README.md 2023-03-12 12:46:53 -03:00
oobabooga
901dcba9b4
Merge pull request #263 from HideLord/main
Fixing compatibility with GPTQ repository
2023-03-12 12:42:08 -03:00
oobabooga
fda376d9c3 Use os.path.abspath() instead of str() 2023-03-12 12:41:04 -03:00
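The difference behind fda376d9c3 is worth a sketch: calling `str()` on a `pathlib.Path` keeps the path exactly as constructed (possibly relative), while `os.path.abspath()` resolves it against the current working directory. A minimal illustration, with a hypothetical model path:

```python
import os
from pathlib import Path

checkpoint = Path("models/llama-7b")  # hypothetical relative path

# str() preserves the path as constructed, which here is relative:
print(str(checkpoint))

# os.path.abspath() resolves it against the current working directory,
# so downstream code that expects an absolute path gets one:
print(os.path.abspath(checkpoint))
```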
HideLord
8403152257 Fixing compatibility with GPTQ repo commit 2f667f7da051967566a5fb0546f8614bcd3a1ccd. Expects string and breaks on 2023-03-12 17:28:15 +02:00
HideLord
a27f98dbbc Merge branch 'main' of https://github.com/HideLord/text-generation-webui 2023-03-12 16:51:04 +02:00
oobabooga
f3b00dd165
Merge pull request #224 from ItsLogic/llama-bits
Allow users to load 2, 3 and 4 bit llama models
2023-03-12 11:23:50 -03:00
oobabooga
89e9493509 Update README 2023-03-12 11:23:20 -03:00
oobabooga
65dda28c9d Rename --llama-bits to --gptq-bits 2023-03-12 11:19:07 -03:00
oobabooga
fed3617f07 Move LLaMA 4-bit into a separate file 2023-03-12 11:12:34 -03:00
oobabooga
0ac562bdba Add a default prompt for OpenAssistant oasst-sft-1-pythia-12b #253 2023-03-12 10:46:16 -03:00
oobabooga
78901d522b Remove unused imports 2023-03-12 08:59:05 -03:00
oobabooga
35c14f31b2
Merge pull request #259 from hieultp/patch-1
Fix typo error in LLaMa prompts
2023-03-12 08:52:02 -03:00
oobabooga
3c25557ef0 Add tqdm to requirements.txt 2023-03-12 08:48:16 -03:00
Phuoc-Hieu Le
781c09235c
Fix typo error in script.py 2023-03-12 15:21:50 +07:00
Xan
9276af3561 clean up 2023-03-12 19:06:24 +11:00
Xan
b3e10e47c0 Fix merge conflict in text_generation
- Need to update `shared.still_streaming = False` before the final `yield formatted_outputs`, shifted the position of some yields.
2023-03-12 18:56:35 +11:00
Xan
d4afed4e44 Fixes and polish
- Change wav naming to be completely unique using timestamp instead of message ID, stops browser using cached audio when new audio is made with the same file name (eg after regenerate or clear history).
- Make the autoplay setting actually disable autoplay.
- Make Settings panel a bit more compact.
- Hide html errors when audio file of chat history is missing.
- Add button to permanently convert TTS history to normal text messages.
- Changed the "show message text" toggle to affect the chat history.
2023-03-12 17:56:57 +11:00
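The timestamp-based wav naming from the first bullet can be sketched as below; the helper name and output directory are hypothetical, not the extension's actual code. A millisecond timestamp is unique per generation, so the browser cannot reuse a cached file the way it can when names repeat after a regenerate or a cleared history:

```python
import time
from pathlib import Path

def new_wav_path(output_dir: str = "extensions/silero_tts/outputs") -> Path:
    # Millisecond timestamps give every generated clip a fresh name;
    # message-ID names repeat after "regenerate" or "clear history",
    # which makes the browser serve stale cached audio.
    return Path(output_dir) / f"{int(time.time() * 1000)}.wav"
```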
oobabooga
ad14f0e499 Fix regenerate (provisory way) 2023-03-12 03:42:29 -03:00
oobabooga
6e12068ba2
Merge pull request #258 from lxe/lxe/utf8
Load and save character files and chat history in UTF-8
2023-03-12 03:28:49 -03:00
oobabooga
e2da6b9685 Fix You You You appearing in chat mode 2023-03-12 03:25:56 -03:00
oobabooga
bcf0075278
Merge pull request #235 from xanthousm/Quality_of_life-main
--auto-launch and "Is typing..."
2023-03-12 03:12:56 -03:00
Aleksey Smolenchuk
3f7c3d6559
No need to set encoding on binary read 2023-03-11 22:10:57 -08:00
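The point of 3f7c3d6559 is that a text encoding only applies when Python decodes bytes to `str`; in binary mode `open()` returns raw bytes and actually rejects an `encoding` argument. A minimal sketch with a throwaway temp file:

```python
import os
import tempfile

fd, path = tempfile.mkstemp(suffix=".json")
os.close(fd)
with open(path, "wb") as f:
    f.write("héllo".encode("utf-8"))

# Binary mode yields raw bytes; passing encoding= here raises
# ValueError ("binary mode doesn't take an encoding argument"):
try:
    open(path, "rb", encoding="utf-8")
except ValueError as e:
    print(e)

with open(path, "rb") as f:  # correct: no encoding on a binary read
    data = f.read()
os.remove(path)
```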
oobabooga
3437de686c
Merge pull request #189 from oobabooga/new-streaming
New streaming method (much faster)
2023-03-12 03:01:26 -03:00
oobabooga
341e135036 Various fixes in chat mode 2023-03-12 02:53:08 -03:00
Aleksey Smolenchuk
3baf5fc700
Load and save chat history in utf-8 2023-03-11 21:40:01 -08:00
oobabooga
b0e8cb8c88 Various fixes in chat mode 2023-03-12 02:31:45 -03:00
unknown
433f6350bc Load and save character files in UTF-8 2023-03-11 21:23:05 -08:00
oobabooga
0bd5430988 Use 'with' statement to better handle streaming memory 2023-03-12 02:04:28 -03:00
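The idea behind 0bd5430988 can be illustrated generically (this is a hypothetical stand-in, not the repository's actual streaming code): wrapping the per-generation state in a `with` block guarantees it is released deterministically, even if the caller abandons the stream mid-iteration, instead of waiting for garbage collection:

```python
from contextlib import contextmanager

@contextmanager
def generation_buffer():
    # Hypothetical stand-in for whatever state the streamer allocates;
    # the finally clause runs even if the consumer stops iterating early.
    buf = []
    try:
        yield buf
    finally:
        buf.clear()  # deterministic cleanup instead of relying on GC

def stream_tokens(tokens):
    with generation_buffer() as buf:
        for t in tokens:
            buf.append(t)
            yield "".join(buf)
```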
oobabooga
37f0166b2d Fix memory leak in new streaming (second attempt) 2023-03-11 23:14:49 -03:00
HideLord
def97f658c Small patch to fix loading of character jsons. Now it correctly reads non-ascii characters on Windows. 2023-03-12 02:54:22 +02:00
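The Windows issue fixed in def97f658c comes from `open()` defaulting to the locale's code page (e.g. cp1252 on Windows) when no encoding is given, which garbles or rejects non-ASCII characters in character JSON files. A minimal sketch, with a hypothetical character card:

```python
import json
import os
import tempfile

def load_character(path):
    # Explicit UTF-8: without encoding="utf-8", open() falls back to the
    # locale's code page on Windows and non-ASCII text breaks.
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

fd, path = tempfile.mkstemp(suffix=".json")
os.close(fd)
character = {"name": "Chiharu Yamada", "greeting": "こんにちは!"}  # hypothetical card
with open(path, "w", encoding="utf-8") as f:
    json.dump(character, f, ensure_ascii=False)

print(load_character(path))
os.remove(path)
```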
oobabooga
92fe947721 Merge branch 'main' into new-streaming 2023-03-11 19:59:45 -03:00
oobabooga
195e99d0b6 Add llama_prompts extension 2023-03-11 16:11:15 -03:00
oobabooga
501afbc234 Add requests to requirements.txt 2023-03-11 14:47:30 -03:00
oobabooga
8f8da6707d Minor style changes to silero_tts 2023-03-11 11:17:13 -03:00
oobabooga
2743dd736a Add *Is typing...* to impersonate as well 2023-03-11 10:50:18 -03:00
Xan
96c51973f9 --auto-launch and "Is typing..."
- Added `--auto-launch` arg to open web UI in the default browser when ready.
- Changed chat.py to display user input immediately and "*Is typing...*" as a temporary reply while generating text. Most noticeable when using `--no-stream`.
2023-03-11 22:50:59 +11:00
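The `--auto-launch` flag from the first bullet can be sketched as a plain boolean argparse switch (a minimal sketch, not the webui's actual argument parser; the Gradio call in the comment is only indicative):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--auto-launch", action="store_true",
                    help="Open the web UI in the default browser once the server is ready.")
args = parser.parse_args(["--auto-launch"])

# The flag would then be forwarded to the UI framework, e.g. with Gradio:
#   demo.launch(inbrowser=args.auto_launch)
print(args.auto_launch)
```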
Xan
33df4bd91f Merge remote-tracking branch 'upstream/main' 2023-03-11 22:40:47 +11:00
Xan
b8f7d34c1d Undo changes to requirements
Needing to manually install tensorboard might be a Windows-only problem; it can easily be solved manually.
2023-03-11 17:05:09 +11:00
Xan
0dfac4b777 Working html autoplay, clean up, improve wav naming
- New autoplay using html tag, removed from old message when new input provided
- Add voice pitch and speed control
- Group settings together
- Use name + conversation history to match wavs to messages, minimize problems when changing characters

Current minor bugs:
- Gradio seems to cache the audio files, so using "clear history" and generating new messages will play the old audio (the new messages are saving correctly). Gradio will clear cache and use correct audio after a few messages or after a page refresh.
- Switching characters does not immediately update the message ID used for the audio. The ID is updated after the first new message, but that message will use the wrong ID.
2023-03-11 16:34:59 +11:00
draff
28fd4fc970 Change wording to be consistent with other args 2023-03-10 23:34:13 +00:00
draff
001e638b47 Make it actually work 2023-03-10 23:28:19 +00:00
draff
804486214b Re-implement --load-in-4bit and update --llama-bits arg description 2023-03-10 23:21:01 +00:00
ItsLogic
9ba8156a70
remove unnecessary Path() 2023-03-10 22:33:58 +00:00
draff
e6c631aea4 Replace --load-in-4bit with --llama-bits
Replaces --load-in-4bit with a more flexible --llama-bits arg to allow for 2 and 3 bit models as well. This commit also fixes a loading issue with .pt files which are not in the root of the models folder
2023-03-10 21:36:45 +00:00
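The replacement described in e6c631aea4 swaps a boolean flag for an integer one so 2- and 3-bit models work too. A minimal argparse sketch (the `choices` list and default are assumptions; per 65dda28c9d further up the log, the flag is later renamed `--gptq-bits`):

```python
import argparse

parser = argparse.ArgumentParser()
# One integer flag covers 2-, 3-, and 4-bit quantized models,
# replacing the boolean --load-in-4bit (0 here means "not quantized").
parser.add_argument("--llama-bits", type=int, default=0, choices=[0, 2, 3, 4],
                    help="Load a pre-quantized model with this many bits.")

args = parser.parse_args(["--llama-bits", "4"])
print(args.llama_bits)
```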
oobabooga
026d60bd34 Remove default preset that didn't do anything 2023-03-10 14:01:02 -03:00
oobabooga
e01da4097c
Merge pull request #210 from rohvani/pt-path-changes
Add llama-65b-4bit.pt support
2023-03-10 11:04:56 -03:00
oobabooga
e9dbdafb14
Merge branch 'main' into pt-path-changes 2023-03-10 11:03:42 -03:00
oobabooga
706a03b2cb Minor changes 2023-03-10 11:02:25 -03:00
oobabooga
de7dd8b6aa Add comments 2023-03-10 10:54:08 -03:00
oobabooga
113b791aa5
Merge pull request #219 from deepdiffuser/4bit-multigpu
add multi-gpu support for 4bit gptq LLaMA
2023-03-10 10:52:45 -03:00