oobabooga
8d788874d7
Add support for characters
2023-01-19 16:46:46 -03:00
oobabooga
3121f4788e
Fix uploading chat log in --chat mode
2023-01-19 15:05:42 -03:00
oobabooga
849e4c7f90
Better way of finding the generated reply in the output string
2023-01-19 14:57:01 -03:00
oobabooga
d03b0ad7a8
Implement saving/loading chat logs ( #9 )
2023-01-19 14:03:47 -03:00
oobabooga
39bfea5a22
Add a progress bar
2023-01-19 12:20:57 -03:00
oobabooga
5390fc87c8
Add auto-devices when disk is used
2023-01-19 12:11:44 -03:00
oobabooga
759da435e3
Release 8-bit models' memory
2023-01-19 12:03:16 -03:00
oobabooga
f9faad4cfa
Add low VRAM guide
2023-01-19 11:25:17 -03:00
oobabooga
7ace04864a
Implement sending layers to disk with --disk ( #10 )
2023-01-19 11:09:24 -03:00
oobabooga
1ce95ee817
Mention text streaming
2023-01-19 10:46:41 -03:00
oobabooga
93fa9bbe01
Clean up the streaming implementation
2023-01-19 10:43:05 -03:00
oobabooga
c90310e40e
Small simplification
2023-01-19 00:41:57 -03:00
oobabooga
99536ef5bf
Add no-stream option
2023-01-18 23:56:42 -03:00
oobabooga
116299b3ad
Manual eos_token implementation
2023-01-18 22:57:39 -03:00
oobabooga
3cb30bed0a
Add a "stop" button
2023-01-18 22:44:47 -03:00
oobabooga
8f27d33034
Fix another bug
2023-01-18 22:08:23 -03:00
oobabooga
6c7f187586
Minor change
2023-01-18 21:59:23 -03:00
oobabooga
b3cba0b330
Fix a bug
2023-01-18 21:54:44 -03:00
oobabooga
df2e910421
Stop generating in chat mode when \nYou: is generated
2023-01-18 21:51:18 -03:00
oobabooga
022960a087
This is the correct way of sampling 1 token at a time
2023-01-18 21:37:21 -03:00
oobabooga
0f01a3b1fa
Implement text streaming ( #10 )
Still experimental. There might be bugs.
2023-01-18 19:06:50 -03:00
oobabooga
ca13acdfa0
Ensure that the chat prompt will always contain < 2048 tokens
This way, we can keep the context string at the top of the prompt
even if you keep talking to the bot for hours.
Before this commit, the prompt would be simply truncated and the
context string would eventually be lost.
2023-01-17 20:16:23 -03:00
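
The commit above describes the prompt-trimming approach: the character context stays at the top of the prompt, and the oldest chat turns are dropped first so the total stays under the 2048-token limit. A minimal Python sketch of that idea follows; count_tokens, the turn format, and the "You:"/"Bot:" labels are assumptions for illustration, not the repository's actual code.

def count_tokens(text, tokenizer):
    # Assumed helper: number of tokens the model's tokenizer produces for `text`.
    return len(tokenizer.encode(text))

def build_chat_prompt(context, history, tokenizer, max_tokens=2048):
    # Keep `context` at the top and add chat turns newest-first until the
    # token budget runs out, so long conversations never push it out.
    budget = max_tokens - count_tokens(context, tokenizer)
    rows = []
    for user_msg, bot_msg in reversed(history):
        turn = f"You: {user_msg}\nBot: {bot_msg}\n"
        cost = count_tokens(turn, tokenizer)
        if cost > budget:
            break  # older turns no longer fit
        rows.append(turn)
        budget -= cost
    # Oldest surviving turn first, context always on top.
    return context + "".join(reversed(rows))
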
oobabooga
6456777b09
Clean things up
2023-01-16 16:35:45 -03:00
oobabooga
3a99b2b030
Change a truncation parameter
2023-01-16 13:53:30 -03:00
oobabooga
54bf55372b
Truncate prompts to 2048 characters
2023-01-16 13:43:23 -03:00
oobabooga
99d24bdbfe
Update README.md
2023-01-16 11:23:45 -03:00
oobabooga
ed1d2c0d38
Update README.md
2023-01-16 11:19:23 -03:00
oobabooga
82c50a09b2
Update README.md
2023-01-16 10:31:13 -03:00
oobabooga
c7a2818665
Fix grammar
2023-01-16 10:10:09 -03:00
oobabooga
3a28865eb2
Update README.md
2023-01-16 10:07:00 -03:00
oobabooga
d973897021
Fix a typo
2023-01-16 01:52:28 -03:00
oobabooga
18626c4788
Update README.md
2023-01-16 01:51:59 -03:00
oobabooga
47a20638de
Don't need this
2023-01-15 23:15:30 -03:00
oobabooga
b55486fa00
Reorganize things
2023-01-15 23:01:51 -03:00
oobabooga
a3bb18002a
Update README
2023-01-15 22:34:56 -03:00
oobabooga
ebf4d5f506
Add --max-gpu-memory parameter for #7
2023-01-15 22:33:35 -03:00
oobabooga
bb1a172da0
Fix a bug in cai chat mode
2023-01-15 19:41:25 -03:00
oobabooga
e6691bd920
Make chat mode more like cai
2023-01-15 18:16:46 -03:00
oobabooga
9f1b6e0398
Update README
2023-01-15 16:50:58 -03:00
oobabooga
e04ecd4bce
Minor improvements
2023-01-15 16:43:31 -03:00
oobabooga
c6083f3dca
Fix the template
2023-01-15 15:57:00 -03:00
oobabooga
99b35dd48d
Update README
2023-01-15 15:46:09 -03:00
oobabooga
027c3dd27d
Allow jpg profile images
2023-01-15 15:45:25 -03:00
oobabooga
ada2e556d0
Update README
2023-01-15 15:41:36 -03:00
oobabooga
26d8ec3188
Update README
2023-01-15 15:40:03 -03:00
oobabooga
afe9f77f96
Reorder parameters
2023-01-15 15:30:39 -03:00
oobabooga
88d67427e1
Implement default settings customization using a JSON file
2023-01-15 15:23:41 -03:00
oobabooga
cf0cdae9c6
Add italics to *actions*
2023-01-15 13:32:24 -03:00
oobabooga
56be7149aa
Update README
2023-01-15 12:39:31 -03:00
oobabooga
6136da419c
Add --cai-chat option that mimics Character.AI's interface
2023-01-15 12:20:04 -03:00