oobabooga
4066ab4c0c
Reorder the imports
2023-03-12 13:36:18 -03:00
Xan
9276af3561
clean up
2023-03-12 19:06:24 +11:00
Xan
b3e10e47c0
Fix merge conflict in text_generation
...
- Needed to update `shared.still_streaming = False` before the final `yield formatted_outputs`, so the position of some yields was shifted.
2023-03-12 18:56:35 +11:00
Xan
d4afed4e44
Fixes and polish
...
- Change wav naming to be completely unique using timestamp instead of message ID, stops browser using cached audio when new audio is made with the same file name (eg after regenerate or clear history).
- Make the autoplay setting actually disable autoplay.
- Make Settings panel a bit more compact.
- Hide html errors when audio file of chat history is missing.
- Add button to permanently convert TTS history to normal text messages.
- Changed the "show message text" toggle to affect the chat history.
2023-03-12 17:56:57 +11:00
oobabooga
ad14f0e499
Fix regenerate (provisory way)
2023-03-12 03:42:29 -03:00
oobabooga
6e12068ba2
Merge pull request #258 from lxe/lxe/utf8
...
Load and save character files and chat history in UTF-8
2023-03-12 03:28:49 -03:00
oobabooga
e2da6b9685
Fix You You You appearing in chat mode
2023-03-12 03:25:56 -03:00
oobabooga
bcf0075278
Merge pull request #235 from xanthousm/Quality_of_life-main
...
--auto-launch and "Is typing..."
2023-03-12 03:12:56 -03:00
Aleksey Smolenchuk
3f7c3d6559
No need to set encoding on binary read
2023-03-11 22:10:57 -08:00
oobabooga
3437de686c
Merge pull request #189 from oobabooga/new-streaming
...
New streaming method (much faster)
2023-03-12 03:01:26 -03:00
oobabooga
341e135036
Various fixes in chat mode
2023-03-12 02:53:08 -03:00
Aleksey Smolenchuk
3baf5fc700
Load and save chat history in utf-8
2023-03-11 21:40:01 -08:00
oobabooga
b0e8cb8c88
Various fixes in chat mode
2023-03-12 02:31:45 -03:00
unknown
433f6350bc
Load and save character files in UTF-8
2023-03-11 21:23:05 -08:00
oobabooga
0bd5430988
Use 'with' statement to better handle streaming memory
2023-03-12 02:04:28 -03:00
oobabooga
37f0166b2d
Fix memory leak in new streaming (second attempt)
2023-03-11 23:14:49 -03:00
oobabooga
92fe947721
Merge branch 'main' into new-streaming
2023-03-11 19:59:45 -03:00
oobabooga
195e99d0b6
Add llama_prompts extension
2023-03-11 16:11:15 -03:00
oobabooga
501afbc234
Add requests to requirements.txt
2023-03-11 14:47:30 -03:00
oobabooga
8f8da6707d
Minor style changes to silero_tts
2023-03-11 11:17:13 -03:00
oobabooga
2743dd736a
Add *Is typing...* to impersonate as well
2023-03-11 10:50:18 -03:00
Xan
96c51973f9
--auto-launch and "Is typing..."
...
- Added `--auto-launch` arg to open web UI in the default browser when ready.
- Changed chat.py to display user input immediately and "*Is typing...*" as a temporary reply while generating text. Most noticeable when using `--no-stream`.
2023-03-11 22:50:59 +11:00
Xan
33df4bd91f
Merge remote-tracking branch 'upstream/main'
2023-03-11 22:40:47 +11:00
Xan
b8f7d34c1d
Undo changes to requirements
...
Needing to manually install tensorboard might be a Windows-only problem, and it can easily be solved manually.
2023-03-11 17:05:09 +11:00
Xan
0dfac4b777
Working html autoplay, clean up, improve wav naming
...
- New autoplay using html tag, removed from old message when new input provided
- Add voice pitch and speed control
- Group settings together
- Use name + conversation history to match wavs to messages, minimize problems when changing characters
Current minor bugs:
- Gradio seems to cache the audio files, so using "clear history" and generating new messages will play the old audio (the new messages are saved correctly). Gradio will clear the cache and use the correct audio after a few messages or after a page refresh.
- Switching characters does not immediately update the message ID used for the audio. The ID is updated after the first new message, but that message will use the wrong ID.
2023-03-11 16:34:59 +11:00
oobabooga
026d60bd34
Remove default preset that didn't do anything
2023-03-10 14:01:02 -03:00
oobabooga
e01da4097c
Merge pull request #210 from rohvani/pt-path-changes
...
Add llama-65b-4bit.pt support
2023-03-10 11:04:56 -03:00
oobabooga
e9dbdafb14
Merge branch 'main' into pt-path-changes
2023-03-10 11:03:42 -03:00
oobabooga
706a03b2cb
Minor changes
2023-03-10 11:02:25 -03:00
oobabooga
de7dd8b6aa
Add comments
2023-03-10 10:54:08 -03:00
oobabooga
113b791aa5
Merge pull request #219 from deepdiffuser/4bit-multigpu
...
add multi-gpu support for 4bit gptq LLaMA
2023-03-10 10:52:45 -03:00
oobabooga
e461c0b7a0
Move the import to the top
2023-03-10 10:51:12 -03:00
deepdiffuser
9fbd60bf22
add no_split_module_classes to prevent tensor split error
2023-03-10 05:30:47 -08:00
deepdiffuser
ab47044459
add multi-gpu support for 4bit gptq LLaMA
2023-03-10 04:52:45 -08:00
rohvani
2ac2913747
fix reference issue
2023-03-09 20:13:23 -08:00
oobabooga
1d7e893fa1
Merge pull request #211 from zoidbb/add-tokenizer-to-hf-downloads
...
download tokenizer when present
2023-03-10 00:46:21 -03:00
oobabooga
875847bf88
Consider tokenizer a type of text
2023-03-10 00:45:28 -03:00
oobabooga
8ed214001d
Merge branch 'main' of github.com:oobabooga/text-generation-webui
2023-03-10 00:42:09 -03:00
oobabooga
249c268176
Fix the download script for long lists of files on HF
2023-03-10 00:41:10 -03:00
Ber Zoidberg
ec3de0495c
download tokenizer when present
2023-03-09 19:08:09 -08:00
rohvani
5ee376c580
add LLaMA preset
2023-03-09 18:31:41 -08:00
rohvani
826e297b0e
add llama-65b-4bit support & multiple pt paths
2023-03-09 18:31:32 -08:00
oobabooga
7c3d1b43c1
Merge pull request #204 from MichealC0/patch-1
...
Update README.md
2023-03-09 23:04:09 -03:00
oobabooga
9849aac0f1
Don't show .pt models in the list
2023-03-09 21:54:50 -03:00
oobabooga
1a3d25f75d
Merge pull request #206 from oobabooga/llama-4bit
...
Add LLaMA 4-bit support
2023-03-09 21:07:32 -03:00
oobabooga
eb0cb9b6df
Update README
2023-03-09 20:53:52 -03:00
oobabooga
74102d5ee4
Insert to the path instead of appending
2023-03-09 20:51:22 -03:00
oobabooga
2965aa1625
Check if the .pt file exists
2023-03-09 20:48:51 -03:00
oobabooga
d41e3c233b
Update README.md
2023-03-09 18:02:44 -03:00
oobabooga
fd540b8930
Use new LLaMA implementation (this will break stuff. I am sorry)
...
https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model
2023-03-09 17:59:15 -03:00