oobabooga
6c6a52aaad
Change the filenames for caches and histories
2023-08-09 07:47:19 -07:00
oobabooga
d8fb506aff
Add RoPE scaling support for transformers (including dynamic NTK)
https://github.com/huggingface/transformers/pull/24653
2023-08-08 21:25:48 -07:00
Friedemann Lipphardt
901b028d55
Add option for named Cloudflare tunnels ( #3364 )
2023-08-08 22:20:27 -03:00
oobabooga
bf08b16b32
Fix disappearing profile picture bug
2023-08-08 14:09:01 -07:00
Gennadij
0e78f3b4d4
Fixed a typo in "rms_norm_eps", incorrectly set as n_gqa ( #3494 )
2023-08-08 00:31:11 -03:00
oobabooga
37fb719452
Increase the Context/Greeting boxes sizes
2023-08-08 00:09:00 -03:00
oobabooga
584dd33424
Fix missing example_dialogue when uploading characters
2023-08-07 23:44:59 -03:00
oobabooga
412f6ff9d3
Change alpha_value maximum and step
2023-08-07 06:08:51 -07:00
oobabooga
a373c96d59
Fix a bug in modules/shared.py
2023-08-06 20:36:35 -07:00
oobabooga
3d48933f27
Remove ancient deprecation warnings
2023-08-06 18:58:59 -07:00
oobabooga
c237ce607e
Move characters/instruction-following to instruction-templates
2023-08-06 17:50:32 -07:00
oobabooga
65aa11890f
Refactor everything ( #3481 )
2023-08-06 21:49:27 -03:00
oobabooga
d4b851bdc8
Credit turboderp
2023-08-06 13:43:15 -07:00
oobabooga
0af10ab49b
Add Classifier Free Guidance (CFG) for Transformers/ExLlama ( #3325 )
2023-08-06 17:22:48 -03:00
missionfloyd
5134878344
Fix chat message order ( #3461 )
2023-08-05 13:53:54 -03:00
jllllll
44f31731af
Create logs dir if missing when saving history ( #3462 )
2023-08-05 13:47:16 -03:00
Forkoz
9dcb37e8d4
Fix: Mirostat fails on models split across multiple GPUs
2023-08-05 13:45:47 -03:00
oobabooga
8df3cdfd51
Add SSL certificate support ( #3453 )
2023-08-04 13:57:31 -03:00
missionfloyd
2336b75d92
Remove unnecessary chat.js ( #3445 )
2023-08-04 01:58:37 -03:00
oobabooga
4b3384e353
Handle unfinished lists during markdown streaming
2023-08-03 17:15:18 -07:00
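The commit above repairs unfinished markdown constructs during streaming. A minimal sketch of the same idea, shown here for the simpler case of an unterminated code fence (illustrative only, not the project's actual code):

```python
FENCE = "`" * 3  # a markdown code-fence marker

def close_unfinished_fences(partial_md: str) -> str:
    # An odd number of fence markers means a block is still open;
    # temporarily close it so each partial render stays valid.
    if partial_md.count(FENCE) % 2 == 1:
        return partial_md + "\n" + FENCE
    return partial_md

streamed = "Some text\n" + FENCE + "python\nprint('hi')"
print(close_unfinished_fences(streamed).endswith(FENCE))  # True
```

The repair is applied per render pass, so the stored text itself is never modified.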
Pete
f4005164f4
Fix llama.cpp truncation ( #3400 )
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-08-03 20:01:15 -03:00
oobabooga
87dab03dc0
Add the --cpu option for llama.cpp to prevent CUDA from being used ( #3432 )
2023-08-03 11:00:36 -03:00
oobabooga
3e70bce576
Properly format exceptions in the UI
2023-08-03 06:57:21 -07:00
oobabooga
32c564509e
Fix loading session in chat mode
2023-08-02 21:13:16 -07:00
oobabooga
0e8f9354b5
Add direct download for session/chat history JSONs
2023-08-02 19:43:39 -07:00
oobabooga
32a2bbee4a
Implement auto_max_new_tokens for ExLlama
2023-08-02 11:03:56 -07:00
oobabooga
e931844fe2
Add auto_max_new_tokens parameter ( #3419 )
2023-08-02 14:52:20 -03:00
Pete
6afc1a193b
Add a scrollbar to notebook/default, improve chat scrollbar style ( #3403 )
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-08-02 12:02:36 -03:00
oobabooga
b53ed70a70
Make llamacpp_HF 6x faster
2023-08-01 13:18:20 -07:00
oobabooga
8d46a8c50a
Change the default chat style and the default preset
2023-08-01 09:35:17 -07:00
oobabooga
959feba602
When saving model settings, only save the settings for the current loader
2023-08-01 06:10:09 -07:00
oobabooga
f094330df0
When saving a preset, only save params that differ from the defaults
2023-07-31 19:13:29 -07:00
oobabooga
84297d05c4
Add a "Filter by loader" menu to the Parameters tab
2023-07-31 19:09:02 -07:00
oobabooga
7de7b3d495
Fix newlines in exported character yamls
2023-07-31 10:46:02 -07:00
oobabooga
5ca37765d3
Only replace {{user}} and {{char}} at generation time
2023-07-30 11:42:30 -07:00
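Deferring placeholder substitution to generation time, as the commit above does, can be sketched as follows (function name is illustrative, not the project's API):

```python
def substitute_placeholders(template: str, user: str, char: str) -> str:
    # Stored templates keep {{user}} and {{char}} literal; substitution
    # happens only when the actual prompt is built, so renaming the
    # character later never corrupts saved data.
    return template.replace("{{user}}", user).replace("{{char}}", char)

print(substitute_placeholders("{{char}}: Hi {{user}}!", "Alice", "Bot"))
# Bot: Hi Alice!
```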
oobabooga
6e16af34fd
Save uploaded characters as yaml
Also allow yaml characters to be uploaded directly
2023-07-30 11:25:38 -07:00
oobabooga
b31321c779
Define visible_text before applying chat_input extensions
2023-07-26 07:27:14 -07:00
oobabooga
b17893a58f
Revert "Add tensor split support for llama.cpp ( #3171 )"
This reverts commit 031fe7225e.
2023-07-26 07:06:01 -07:00
oobabooga
28779cd959
Use dark theme by default
2023-07-25 20:11:57 -07:00
oobabooga
c2e0d46616
Add credits
2023-07-25 15:49:04 -07:00
oobabooga
77d2e9f060
Remove flexgen 2
2023-07-25 15:18:25 -07:00
oobabooga
75c2dd38cf
Remove flexgen support
2023-07-25 15:15:29 -07:00
Foxtr0t1337
85b3a26e25
Ignore values which are not string in training.py ( #3287 )
2023-07-25 19:00:25 -03:00
Shouyi
031fe7225e
Add tensor split support for llama.cpp ( #3171 )
2023-07-25 18:59:26 -03:00
Eve
f653546484
README updates and improvements ( #3198 )
2023-07-25 18:58:13 -03:00
oobabooga
ef8637e32d
Add extension example, replace input_hijack with chat_input_modifier ( #3307 )
2023-07-25 18:49:56 -03:00
oobabooga
a07d070b6c
Add llama-2-70b GGML support ( #3285 )
2023-07-24 16:37:03 -03:00
jllllll
1141987a0d
Add checks for ROCm and unsupported architectures to llama_cpp_cuda loading ( #3225 )
2023-07-24 11:25:36 -03:00
Ikko Eltociear Ashimine
b2d5433409
Fix typo in deepspeed_parameters.py ( #3222 )
configration -> configuration
2023-07-24 11:17:28 -03:00
oobabooga
4b19b74e6c
Add CUDA wheels for llama-cpp-python by jllllll
2023-07-19 19:33:43 -07:00
oobabooga
913e060348
Change the default preset to Divine Intellect
It seems to reduce hallucination while using instruction-tuned models.
2023-07-19 08:24:37 -07:00
randoentity
a69955377a
[GGML] Support for customizable RoPE ( #3083 )
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-07-17 22:32:37 -03:00
appe233
89e0d15cf5
Use 'torch.backends.mps.is_available' to check if mps is supported ( #3164 )
2023-07-17 21:27:18 -03:00
oobabooga
8c1c2e0fae
Increase max_new_tokens upper limit
2023-07-17 17:08:22 -07:00
oobabooga
b1a6ea68dd
Disable "autoload the model" by default
2023-07-17 07:40:56 -07:00
oobabooga
a199f21799
Optimize llamacpp_hf a bit
2023-07-16 20:49:48 -07:00
oobabooga
6a3edb0542
Clean up llamacpp_hf.py
2023-07-15 22:40:55 -07:00
oobabooga
27a84b4e04
Make AutoGPTQ the default again
Purely for compatibility with more models.
You should still use ExLlama_HF for LLaMA models.
2023-07-15 22:29:23 -07:00
oobabooga
5e3f7e00a9
Create llamacpp_HF loader ( #3062 )
2023-07-16 02:21:13 -03:00
oobabooga
94dfcec237
Make it possible to evaluate exllama perplexity ( #3138 )
2023-07-16 01:52:55 -03:00
oobabooga
b284f2407d
Make ExLlama_HF the new default for GPTQ
2023-07-14 14:03:56 -07:00
Morgan Schweers
6d1e911577
Add support for logits processors in extensions ( #3029 )
2023-07-13 17:22:41 -03:00
oobabooga
e202190c4f
lint
2023-07-12 11:33:25 -07:00
FartyPants
9b55d3a9f9
More robust and less error-prone training ( #3058 )
2023-07-12 15:29:43 -03:00
oobabooga
30f37530d5
Add back .replace('\r', '')
2023-07-12 09:52:20 -07:00
Fernando Tarin Morales
987d0fe023
Fix: Fixed the tokenization process of a raw dataset and improved its efficiency ( #3035 )
2023-07-12 12:05:37 -03:00
kabachuha
3f19e94c93
Add Tensorboard/Weights and biases integration for training ( #2624 )
2023-07-12 11:53:31 -03:00
kizinfo
5d513eea22
Add ability to load all text files from a subdirectory for training ( #1997 )
* Update utils.py: return individual txt files and subdirectories to getdatasets to allow training from a directory of text files
* Update training.py: minor tweak to training on raw datasets to detect if a directory is selected and, if so, load all the txt files in that directory for training
* Update put-trainer-datasets-here.txt: document the feature
* Minor change
* Use pathlib, sort by natural keys
* Space
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-07-12 11:44:30 -03:00
practicaldreamer
73a0def4af
Add Feature to Log Sample of Training Dataset for Inspection ( #1711 )
2023-07-12 11:26:45 -03:00
oobabooga
b6ba68eda9
Merge remote-tracking branch 'refs/remotes/origin/dev' into dev
2023-07-12 07:19:34 -07:00
oobabooga
a17b78d334
Disable wandb during training
2023-07-12 07:19:12 -07:00
Gabriel Pena
eedb3bf023
Add low vram mode on llama cpp ( #3076 )
2023-07-12 11:05:13 -03:00
Axiom Wolf
d986c17c52
Chat history download creates more detailed file names ( #3051 )
2023-07-12 00:10:36 -03:00
Salvador E. Tropea
324e45b848
[Fixed] wbits and groupsize values from model not shown ( #2977 )
2023-07-11 23:27:38 -03:00
oobabooga
e3810dff40
Style changes
2023-07-11 18:49:06 -07:00
Ricardo Pinto
3e9da5a27c
Changed FormComponent to IOComponent ( #3017 )
Co-authored-by: Ricardo Pinto <1-ricardo.pinto@users.noreply.gitlab.cognitage.com>
2023-07-11 18:52:16 -03:00
Forkoz
74ea7522a0
Lora fixes for AutoGPTQ ( #2818 )
2023-07-09 01:03:43 -03:00
oobabooga
5ac4e4da8b
Make --model work with argument like models/folder_name
2023-07-08 10:22:54 -07:00
oobabooga
b6643e5039
Add decode functions to llama.cpp/exllama
2023-07-07 09:11:30 -07:00
oobabooga
1ba2e88551
Add truncation to exllama
2023-07-07 09:09:23 -07:00
oobabooga
c21b73ff37
Minor change to ui.py
2023-07-07 09:09:14 -07:00
oobabooga
de994331a4
Merge remote-tracking branch 'refs/remotes/origin/main'
2023-07-06 22:25:43 -07:00
oobabooga
9aee1064a3
Block a Cloudflare request
2023-07-06 22:24:52 -07:00
Fernando Tarin Morales
d7e14e1f78
Fixed the param name when loading a LoRA using a model loaded in 4 or 8 bits ( #3036 )
2023-07-07 02:24:07 -03:00
Xiaojian "JJ" Deng
ff45317032
Update models.py ( #3020 )
Hopefully fixed error with "ValueError: Tokenizer class GPTNeoXTokenizer does not exist or is not currently imported."
2023-07-05 21:40:43 -03:00
oobabooga
8705eba830
Remove universal llama tokenizer support
Instead, replace it with a warning if the tokenizer files look off.
2023-07-04 19:43:19 -07:00
oobabooga
333075e726
Fix #3003
2023-07-04 11:38:35 -03:00
oobabooga
463ddfffd0
Fix start_with
2023-07-03 23:32:02 -07:00
oobabooga
373555c4fb
Fix loading some histories (thanks kaiokendev)
2023-07-03 22:19:28 -07:00
Panchovix
10c8c197bf
Add Support for Static NTK RoPE scaling for exllama/exllama_hf ( #2955 )
2023-07-04 01:13:16 -03:00
oobabooga
7e8340b14d
Make greetings appear in --multi-user mode
2023-07-03 20:08:14 -07:00
oobabooga
4b1804a438
Implement sessions + add basic multi-user support ( #2991 )
2023-07-04 00:03:30 -03:00
FartyPants
1f8cae14f9
Update training.py - correct use of lora_names ( #2988 )
2023-07-03 17:41:18 -03:00
FartyPants
c23c88ee4c
Update LoRA.py - avoid potential error ( #2953 )
2023-07-03 17:40:22 -03:00
FartyPants
33f56fd41d
Update models.py to clear LORA names after unload ( #2951 )
2023-07-03 17:39:06 -03:00
FartyPants
48b11f9c5b
Training: added trainable parameters info ( #2944 )
2023-07-03 17:38:36 -03:00
Turamarth14
847f70b694
Update html_generator.py ( #2954 )
With version 10.0.0 of Pillow, the constant Image.ANTIALIAS has been removed; Image.LANCZOS should be used instead.
2023-07-02 01:43:58 -03:00
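The Pillow API change described above, as a minimal sketch (assumes Pillow ≥ 9.1, where Image.LANCZOS is an alias of Image.Resampling.LANCZOS):

```python
from PIL import Image

# Pillow 10.0.0 removed Image.ANTIALIAS; Image.LANCZOS is the
# equivalent high-quality resampling filter for downscaling.
img = Image.new("RGB", (256, 256))
thumb = img.resize((64, 64), Image.LANCZOS)
print(thumb.size)  # (64, 64)
```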
ardfork
3c076c3c80
Disable half2 for ExLlama when using HIP ( #2912 )
2023-06-29 15:03:16 -03:00
missionfloyd
ac0f96e785
Some more character import tweaks. ( #2921 )
2023-06-29 14:56:25 -03:00
oobabooga
79db629665
Minor bug fix
2023-06-29 13:53:06 -03:00
oobabooga
3443219cbc
Add repetition penalty range parameter to transformers ( #2916 )
2023-06-29 13:40:13 -03:00
oobabooga
20740ab16e
Revert "Fix exllama_hf gibberish above 2048 context, and works >5000 context. ( #2913 )"
This reverts commit 37a16d23a7.
2023-06-28 18:10:34 -03:00
Panchovix
37a16d23a7
Fix exllama_hf gibberish above 2048 context, and works >5000 context. ( #2913 )
2023-06-28 12:36:07 -03:00
FartyPants
ab1998146b
Training update - backup the existing adapter before training on top of it ( #2902 )
2023-06-27 18:24:04 -03:00
oobabooga
22d455b072
Add LoRA support to ExLlama_HF
2023-06-26 00:10:33 -03:00
oobabooga
c52290de50
ExLlama with long context ( #2875 )
2023-06-25 22:49:26 -03:00
oobabooga
9290c6236f
Keep ExLlama_HF if already selected
2023-06-25 19:06:28 -03:00
oobabooga
75fd763f99
Fix chat saving issue ( closes #2863 )
2023-06-25 18:14:57 -03:00
FartyPants
21c189112c
Several Training Enhancements ( #2868 )
2023-06-25 15:34:46 -03:00
oobabooga
95212edf1f
Update training.py
2023-06-25 12:13:15 -03:00
oobabooga
f31281a8de
Fix loading instruction templates containing literal '\n'
2023-06-25 02:13:26 -03:00
oobabooga
f0fcd1f697
Sort some imports
2023-06-25 01:44:36 -03:00
oobabooga
365b672531
Minor change to prevent future bugs
2023-06-25 01:38:54 -03:00
jllllll
bef67af23c
Use pre-compiled python module for ExLlama ( #2770 )
2023-06-24 20:24:17 -03:00
oobabooga
cec5fb0ef6
Failed attempt at evaluating exllama_hf perplexity
2023-06-24 12:02:25 -03:00
快乐的我531
e356f69b36
Make stop_everything work with non-streamed generation ( #2848 )
2023-06-24 11:19:16 -03:00
oobabooga
ec482f3dae
Apply input extensions after yielding *Is typing...*
2023-06-24 11:07:11 -03:00
oobabooga
3e80f2aceb
Apply the output extensions only once
Relevant for Google Translate and Silero.
2023-06-24 10:59:07 -03:00
missionfloyd
51a388fa34
Organize chat history/character import menu ( #2845 )
* Organize character import menu
* Move Chat history upload/download labels
2023-06-24 09:55:02 -03:00
oobabooga
8bb3bb39b3
Implement stopping string search in string space ( #2847 )
2023-06-24 09:43:00 -03:00
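Searching for stopping strings in string space, as the commit above does, avoids token-space matching missing a stop string that spans token boundaries. A minimal sketch (names are illustrative, not the project's functions):

```python
def truncate_at_stop_string(decoded_text, stop_strings):
    # Match against the decoded text rather than token ids, and
    # truncate at the earliest stop string found; None means no match.
    best = None
    for stop in stop_strings:
        idx = decoded_text.find(stop)
        if idx != -1 and (best is None or idx < best):
            best = idx
    return decoded_text[:best] if best is not None else None

print(truncate_at_stop_string("Hello\nUser: next turn", ["\nUser:", "</s>"]))
# Hello
```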
oobabooga
3ae9af01aa
Add --no_use_cuda_fp16 param for AutoGPTQ
2023-06-23 12:22:56 -03:00
Panchovix
5646690769
Fix some models not loading on exllama_hf ( #2835 )
2023-06-23 11:31:02 -03:00
oobabooga
383c50f05b
Replace old presets with the results of Preset Arena ( #2830 )
2023-06-23 01:48:29 -03:00
Panchovix
b4a38c24b7
Fix Multi-GPU not working on exllama_hf ( #2803 )
2023-06-22 16:05:25 -03:00
LarryVRH
580c1ee748
Implement a demo HF wrapper for exllama to utilize existing HF transformers decoding. ( #2777 )
2023-06-21 15:31:42 -03:00
EugeoSynthesisThirtyTwo
7625c6de89
fix usage of self in classmethod ( #2781 )
2023-06-20 16:18:42 -03:00
MikoAL
c40932eb39
Added Falcon LoRA training support ( #2684 )
I am 50% sure this will work
2023-06-20 01:03:44 -03:00
FartyPants
ce86f726e9
Added saving of training logs to training_log.json ( #2769 )
2023-06-20 00:47:36 -03:00
Cebtenzzre
59e7ecb198
llama.cpp: implement ban_eos_token via logits_processor ( #2765 )
2023-06-19 21:31:19 -03:00
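The ban_eos_token mechanism above can be sketched as a logits processor that forces the EOS token's score to negative infinity so sampling can never select it. The (input_ids, scores) -> scores shape mirrors the logits_processor convention; this standalone version uses plain lists for clarity and is not the project's actual code:

```python
import math

def make_ban_token_processor(banned_id):
    # Returns a processor that zeroes out one token's chance of
    # being sampled by pinning its logit to -inf.
    def processor(input_ids, scores):
        scores = list(scores)
        scores[banned_id] = -math.inf
        return scores
    return processor

eos_id = 2
proc = make_ban_token_processor(eos_id)
scores = proc([1, 5, 7], [0.1, 0.5, 3.2, 0.3])
print(scores[eos_id])                          # -inf
print(max(range(4), key=scores.__getitem__))   # 1 (EOS can no longer win)
```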
oobabooga
eb30f4441f
Add ExLlama+LoRA support ( #2756 )
2023-06-19 12:31:24 -03:00
oobabooga
5f418f6171
Fix a memory leak (credits for the fix: Ph0rk0z)
2023-06-19 01:19:28 -03:00
ThisIsPIRI
def3b69002
Fix loading condition for universal llama tokenizer ( #2753 )
2023-06-18 18:14:06 -03:00
oobabooga
09c781b16f
Add modules/block_requests.py
This has become unnecessary, but it could be useful in the future for other libraries.
2023-06-18 16:31:14 -03:00
Forkoz
3cae1221d4
Update exllama.py - Respect model dir parameter ( #2744 )
2023-06-18 13:26:30 -03:00
oobabooga
c5641b65d3
Handle leading spaces properly in ExLlama
2023-06-17 19:35:12 -03:00
oobabooga
05a743d6ad
Make llama.cpp use tfs parameter
2023-06-17 19:08:25 -03:00
oobabooga
e19cbea719
Add a variable to modules/shared.py
2023-06-17 19:02:29 -03:00
oobabooga
cbd63eeeff
Fix repeated tokens with exllama
2023-06-17 19:02:08 -03:00
oobabooga
766c760cd7
Use gen_begin_reuse in exllama
2023-06-17 18:00:10 -03:00
oobabooga
b27f83c0e9
Make exllama stoppable
2023-06-16 22:03:23 -03:00
oobabooga
7f06d551a3
Fix streaming callback
2023-06-16 21:44:56 -03:00
oobabooga
5f392122fd
Add gpu_split param to ExLlama
Adapted from code created by Ph0rk0z. Thank you Ph0rk0z.
2023-06-16 20:49:36 -03:00
oobabooga
9f40032d32
Add ExLlama support ( #2444 )
2023-06-16 20:35:38 -03:00
oobabooga
dea43685b0
Add some clarifications
2023-06-16 19:10:53 -03:00
oobabooga
7ef6a50e84
Reorganize model loading UI completely ( #2720 )
2023-06-16 19:00:37 -03:00
Tom Jobbins
646b0c889f
AutoGPTQ: Add UI and command line support for disabling fused attention and fused MLP ( #2648 )
2023-06-15 23:59:54 -03:00
oobabooga
2b9a6b9259
Merge remote-tracking branch 'refs/remotes/origin/main'
2023-06-14 18:45:24 -03:00
oobabooga
4d508cbe58
Add some checks to AutoGPTQ loader
2023-06-14 18:44:43 -03:00
FartyPants
56c19e623c
Add LORA name instead of "default" in PeftModel ( #2689 )
2023-06-14 18:29:42 -03:00
oobabooga
474dc7355a
Allow API requests to use parameter presets
2023-06-14 11:32:20 -03:00
oobabooga
e471919e6d
Make llava/minigpt-4 work with AutoGPTQ
2023-06-11 17:56:01 -03:00
oobabooga
f4defde752
Add a menu for installing extensions
2023-06-11 17:11:06 -03:00
oobabooga
ac122832f7
Make dropdown menus more similar to automatic1111
2023-06-11 14:20:16 -03:00
oobabooga
6133675e0f
Add menus for saving presets/characters/instruction templates/prompts ( #2621 )
2023-06-11 12:19:18 -03:00
brandonj60
b04e18d10c
Add Mirostat v2 sampling to transformer models ( #2571 )
2023-06-09 21:26:31 -03:00
oobabooga
6015616338
Style changes
2023-06-06 13:06:05 -03:00
oobabooga
f040073ef1
Handle the case of older autogptq install
2023-06-06 13:05:05 -03:00
oobabooga
bc58dc40bd
Fix a minor bug
2023-06-06 12:57:13 -03:00
oobabooga
00b94847da
Remove softprompt support
2023-06-06 07:42:23 -03:00
oobabooga
0aebc838a0
Don't save the history for 'None' character
2023-06-06 07:21:07 -03:00
oobabooga
9f215523e2
Remove some unused imports
2023-06-06 07:05:46 -03:00
oobabooga
0f0108ce34
Never load the history for default character
2023-06-06 07:00:11 -03:00
oobabooga
11f38b5c2b
Add AutoGPTQ LoRA support
2023-06-05 23:32:57 -03:00
oobabooga
3a5cfe96f0
Increase chat_prompt_size_max
2023-06-05 17:37:37 -03:00
oobabooga
f276d88546
Use AutoGPTQ by default for GPTQ models
2023-06-05 15:41:48 -03:00
oobabooga
9b0e95abeb
Fix "regenerate" when "Start reply with" is set
2023-06-05 11:56:03 -03:00
oobabooga
19f78684e6
Add "Start reply with" feature to chat mode
2023-06-02 13:58:08 -03:00
GralchemOz
f7b07c4705
Fix the missing Chinese character bug ( #2497 )
2023-06-02 13:45:41 -03:00
oobabooga
2f6631195a
Add desc_act checkbox to the UI
2023-06-02 01:45:46 -03:00
LaaZa
9c066601f5
Extend AutoGPTQ support for any GPTQ model ( #1668 )
2023-06-02 01:33:55 -03:00
oobabooga
a83f9aa65b
Update shared.py
2023-06-01 12:08:39 -03:00
oobabooga
b6c407f51d
Don't stream at more than 24 fps
This is a performance optimization
2023-05-31 23:41:42 -03:00
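The 24 fps cap above amounts to rate-limiting UI updates during streaming. A minimal, illustrative sketch (not the project's code):

```python
import time

def throttle_stream(chunks, max_fps=24):
    # Emit the accumulated text at most max_fps times per second,
    # always emitting the final state so no output is lost.
    min_interval = 1.0 / max_fps
    last_emit = 0.0
    text = ""
    for chunk in chunks:
        text += chunk
        now = time.monotonic()
        if now - last_emit >= min_interval:
            last_emit = now
            yield text
    yield text  # final state

# With a very low cap, intermediate updates are skipped:
print(list(throttle_stream(["a", "b", "c"], max_fps=0.001)))
# ['a', 'abc']
```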
Forkoz
9ab90d8b60
Fix warning for qlora ( #2438 )
2023-05-30 11:09:18 -03:00
oobabooga
3578dd3611
Change a warning message
2023-05-29 22:40:54 -03:00
oobabooga
3a6e194bc7
Change a warning message
2023-05-29 22:39:23 -03:00
Luis Lopez
9e7204bef4
Add tail-free and top-a sampling ( #2357 )
2023-05-29 21:40:01 -03:00
oobabooga
1394f44e14
Add triton checkbox for AutoGPTQ
2023-05-29 15:32:45 -03:00
oobabooga
f34d20922c
Minor fix
2023-05-29 13:31:17 -03:00
oobabooga
983eef1e29
Attempt at evaluating falcon perplexity (failed)
2023-05-29 13:28:25 -03:00
Honkware
204731952a
Falcon support (trust-remote-code and autogptq checkboxes) ( #2367 )
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-29 10:20:18 -03:00
Forkoz
60ae80cf28
Fix hang in tokenizer for AutoGPTQ llama models. ( #2399 )
2023-05-28 23:10:10 -03:00
oobabooga
2f811b1bdf
Change a warning message
2023-05-28 22:48:20 -03:00
oobabooga
9ee1e37121
Fix return message when no model is loaded
2023-05-28 22:46:32 -03:00
oobabooga
00ebea0b2a
Use YAML for presets and settings
2023-05-28 22:34:12 -03:00
oobabooga
acfd876f29
Some qol changes to "Perplexity evaluation"
2023-05-25 15:06:22 -03:00
oobabooga
8efdc01ffb
Better default for compute_dtype
2023-05-25 15:05:53 -03:00
oobabooga
37d4ad012b
Add a button for rendering markdown for any model
2023-05-25 11:59:27 -03:00
DGdev91
cf088566f8
Make llama.cpp read prompt size and seed from settings ( #2299 )
2023-05-25 10:29:31 -03:00
oobabooga
361451ba60
Add --load-in-4bit parameter ( #2320 )
2023-05-25 01:14:13 -03:00
oobabooga
63ce5f9c28
Add back a missing bos token
2023-05-24 13:54:36 -03:00
Alex "mcmonkey" Goodwin
3cd7c5bdd0
LoRA Trainer: train_only_after option to control which part of your input to train on ( #2315 )
2023-05-24 12:43:22 -03:00
flurb18
d37a28730d
Beginning of multi-user support ( #2262 )
Adds a lock to generate_reply
2023-05-24 09:38:20 -03:00
Gabriel Terrien
7aed53559a
Support of the --gradio-auth flag ( #2283 )
2023-05-23 20:39:26 -03:00
oobabooga
fb6a00f4e5
Small AutoGPTQ fix
2023-05-23 15:20:01 -03:00
oobabooga
cd3618d7fb
Add support for RWKV in Hugging Face format
2023-05-23 02:07:28 -03:00
oobabooga
75adc110d4
Fix "perplexity evaluation" progress messages
2023-05-23 01:54:52 -03:00
oobabooga
4d94a111d4
memoize load_character to speed up the chat API
2023-05-23 00:50:58 -03:00
Gabriel Terrien
0f51b64bb3
Add a "dark_theme" option to settings.json ( #2288 )
2023-05-22 19:45:11 -03:00
oobabooga
c0fd7f3257
Add mirostat parameters for llama.cpp ( #2287 )
2023-05-22 19:37:24 -03:00
oobabooga
d63ef59a0f
Apply LLaMA-Precise preset to Vicuna by default
2023-05-21 23:00:42 -03:00