oobabooga
30f37530d5
Add back .replace('\r', '')
2023-07-12 09:52:20 -07:00
Fernando Tarin Morales
987d0fe023
Fix the tokenization process for raw datasets and improve its efficiency (#3035)
2023-07-12 12:05:37 -03:00
kabachuha
3f19e94c93
Add TensorBoard/Weights & Biases integration for training (#2624)
2023-07-12 11:53:31 -03:00
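For context on the integration above: with a transformers Trainer, reporting to both backends is usually wired through TrainingArguments. A minimal sketch, assuming the PR follows that convention (the output path is hypothetical):

```python
# Minimal sketch: enable TensorBoard and Weights & Biases logging via
# transformers' TrainingArguments; training.py's actual wiring may differ.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="loras/my-adapter",        # hypothetical output path
    logging_steps=5,                      # log metrics every 5 steps
    report_to=["tensorboard", "wandb"],   # both integrations, if installed
)
```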
kizinfo
5d513eea22
Add ability to load all text files from a subdirectory for training (#1997)
* Update utils.py
Return individual .txt files and subdirectories from get_datasets to allow training from a directory of text files (sketched after this entry)
* Update training.py
Minor tweak to raw-dataset training: detect whether a directory is selected and, if so, load all the .txt files in that directory for training
* Update put-trainer-datasets-here.txt
Document the new behavior
* Minor change
* Use pathlib, sort by natural keys
* Space
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-07-12 11:44:30 -03:00
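A minimal sketch of the approach described in the entry above, assuming the helper gathers *.txt files with pathlib and sorts them by natural keys (function names here are illustrative, not necessarily the PR's):

```python
# Gather every .txt file in a dataset directory, sorted by natural keys
# so "file2.txt" precedes "file10.txt".
import re
from pathlib import Path

def natural_keys(name):
    # split digit runs out so they compare as integers, not strings
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r'(\d+)', name)]

def load_txt_directory(directory):
    paths = sorted(Path(directory).glob('*.txt'), key=lambda p: natural_keys(p.name))
    return '\n'.join(p.read_text(encoding='utf-8') for p in paths)
```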
practicaldreamer
73a0def4af
Add Feature to Log Sample of Training Dataset for Inspection (#1711)
2023-07-12 11:26:45 -03:00
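A hedged sketch of what such inspection logging can look like: decode a handful of tokenized rows and write them to JSON (names and path are illustrative, not the PR's actual code):

```python
# Dump a few decoded training examples so the prompt format can be
# checked by eye before a long training run.
import json

def log_train_dataset_sample(dataset, tokenizer, path="logs/train_dataset_sample.json", n=10):
    sample = [tokenizer.decode(row["input_ids"]) for row in list(dataset)[:n]]
    with open(path, "w", encoding="utf-8") as f:
        json.dump(sample, f, indent=2, ensure_ascii=False)
```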
oobabooga
a17b78d334
Disable wandb during training
2023-07-12 07:19:12 -07:00
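One common way to do this, though whether the commit uses the environment variable or report_to="none" is not shown here:

```python
# Keep the wandb library from initializing runs during training; this must
# be set before wandb/Trainer initialize.
import os
os.environ["WANDB_MODE"] = "disabled"
```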
oobabooga
e3810dff40
Style changes
2023-07-11 18:49:06 -07:00
FartyPants
1f8cae14f9
Update training.py - correct use of lora_names (#2988)
2023-07-03 17:41:18 -03:00
FartyPants
48b11f9c5b
Training: added trainable parameters info (#2944)
2023-07-03 17:38:36 -03:00
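The usual form of such a summary, as a minimal sketch (PEFT also ships model.print_trainable_parameters(), which the commit may or may not use):

```python
# Report how many parameters the LoRA actually trains versus the full model.
def trainable_params_info(model):
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return f"Trainable params: {trainable:,} / {total:,} ({100 * trainable / total:.4f}%)"
```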
FartyPants
ab1998146b
Training update - backup the existing adapter before training on top of it (#2902)
2023-06-27 18:24:04 -03:00
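A hedged sketch of the backup step described above; the trigger condition and naming scheme here are assumptions:

```python
# Copy an existing adapter directory aside before training overwrites it.
import shutil
from datetime import datetime
from pathlib import Path

def backup_adapter(lora_dir):
    src = Path(lora_dir)
    if (src / "adapter_model.bin").exists():  # an adapter was already trained here
        dest = src.with_name(f"{src.name}-backup-{datetime.now():%Y%m%d-%H%M%S}")
        shutil.copytree(src, dest)
```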
FartyPants
21c189112c
Several Training Enhancements (#2868)
2023-06-25 15:34:46 -03:00
oobabooga
95212edf1f
Update training.py
2023-06-25 12:13:15 -03:00
oobabooga
f0fcd1f697
Sort some imports
2023-06-25 01:44:36 -03:00
MikoAL
c40932eb39
Added Falcon LoRA training support (#2684)
I am 50% sure this will work
2023-06-20 01:03:44 -03:00
FartyPants
ce86f726e9
Added saving of training logs to training_log.json (#2769)
2023-06-20 00:47:36 -03:00
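One plausible shape for this, sketched with a transformers TrainerCallback (the fields actually written by the PR may differ):

```python
# Persist the Trainer's accumulated log history to training_log.json.
import json
from transformers import TrainerCallback

class SaveLogCallback(TrainerCallback):
    def __init__(self, path="training_log.json"):
        self.path = path

    def on_log(self, args, state, control, logs=None, **kwargs):
        # state.log_history holds loss/learning-rate entries per logging step
        with open(self.path, "w", encoding="utf-8") as f:
            json.dump(state.log_history, f, indent=2)
```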
Forkoz
9ab90d8b60
Fix warning for QLoRA (#2438)
2023-05-30 11:09:18 -03:00
oobabooga
3a6e194bc7
Change a warning message
2023-05-29 22:39:23 -03:00
oobabooga
acfd876f29
Some QoL changes to "Perplexity evaluation"
2023-05-25 15:06:22 -03:00
oobabooga
63ce5f9c28
Add back a missing bos token
2023-05-24 13:54:36 -03:00
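In sketch form, assuming the fix prepends the tokenizer's BOS id when it is missing:

```python
# Ensure each tokenized example starts with the BOS token.
def ensure_bos(input_ids, tokenizer):
    if tokenizer.bos_token_id is not None and input_ids[:1] != [tokenizer.bos_token_id]:
        input_ids = [tokenizer.bos_token_id] + input_ids
    return input_ids
```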
Alex "mcmonkey" Goodwin
3cd7c5bdd0
LoRA Trainer: train_only_after option to control which part of your input to train on (#2315)
2023-05-24 12:43:22 -03:00
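The standard way an option like train_only_after works is to mask labels before a marker with -100 so the loss only covers what follows; a hedged sketch (the actual marker handling in the PR may differ):

```python
# Mask everything up to and including the marker so only the remainder
# contributes to the training loss.
def mask_before_marker(text, marker, tokenizer, cutoff_len=256):
    ids = tokenizer(text, truncation=True, max_length=cutoff_len)["input_ids"]
    prefix = text.split(marker)[0] + marker
    prefix_len = len(tokenizer(prefix)["input_ids"])
    labels = [-100] * prefix_len + ids[prefix_len:]  # -100 is ignored by the loss
    return {"input_ids": ids, "labels": labels[:len(ids)]}
```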
oobabooga
e116d31180
Prevent unwanted log messages from modules
2023-05-21 22:42:34 -03:00
Alex "mcmonkey" Goodwin
50c70e28f0
Lora Trainer improvements, part 6 - slightly better raw text inputs (#2108)
2023-05-19 12:58:54 -03:00
Forkoz
d205ec9706
Fix training failing when an evaluation dataset is selected (#2099)
Fixes https://github.com/oobabooga/text-generation-webui/issues/2078 from Googulator
2023-05-16 13:40:19 -03:00
oobabooga
56f6b7052a
Sort dropdowns numerically
2023-05-05 23:14:56 -03:00
oobabooga
95d04d6a8d
Better warning messages
2023-05-03 21:43:17 -03:00
practicaldreamer
e3968f7dd0
Fix Training Pad Token (#1678)
Previously, padding used the character "0" instead of token id 0 (<unk> in the case of LLaMA)
2023-05-02 23:16:08 -03:00
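The bug in one line: batches were padded with the character "0" rather than with token id 0. A minimal sketch of the corrected padding:

```python
# Pad the token-id list with the pad token id (0 == <unk> for LLaMA-family
# tokenizers), not with "0" characters appended to a string.
def pad_ids(input_ids, pad_token_id, cutoff_len):
    return input_ids + [pad_token_id] * (cutoff_len - len(input_ids))
```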
Alex "mcmonkey" Goodwin
312cb7dda6
LoRA trainer improvements part 5 (#1546)
* Full dynamic model type support on modern PEFT
* Remove the shuffle option
2023-04-25 21:27:30 -03:00
Alex "mcmonkey" Goodwin
459e725af9
Lora trainer docs (#1493)
2023-04-23 12:54:41 -03:00
oobabooga
d46b9b7c50
Fix evaluate comment saving
2023-04-21 12:34:08 -03:00
oobabooga
c4f4f41389
Add an "Evaluate" tab to calculate the perplexities of models (#1322)
2023-04-21 00:20:33 -03:00
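For reference, the usual stride-based perplexity loop that a tab like this is built around, as a hedged sketch (window and stride handling in the webui may differ):

```python
# Slide a window over the text, score only the tokens not yet counted,
# and exponentiate the mean negative log-likelihood.
import math
import torch

def perplexity(model, input_ids, max_length=1024, stride=512):
    nlls, scored = [], 0
    for begin in range(0, input_ids.size(1), stride):
        end = min(begin + max_length, input_ids.size(1))
        target_len = end - scored               # new tokens in this window
        ids = input_ids[:, begin:end]
        labels = ids.clone()
        labels[:, :-target_len] = -100          # ignore already-scored tokens
        with torch.no_grad():
            nlls.append(model(ids, labels=labels).loss * target_len)
        scored = end
        if end == input_ids.size(1):
            break
    return math.exp(torch.stack(nlls).sum() / scored)
```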
Alex "mcmonkey" Goodwin
ee30625cd1
4-Bit LoRA training + several new training options and fixes
2023-04-19 19:39:03 -03:00
Tynan Burke
6a810b16b2
Fix typo in training.py (#1329)
2023-04-17 21:40:46 -03:00
oobabooga
b705b4210c
Minor changes to training.py
2023-04-16 03:08:37 -03:00
oobabooga
5c513a5f5c
Make training.py more readable
2023-04-16 02:46:27 -03:00
Alex "mcmonkey" Goodwin
a3eec62b50
Lora trainer improvements part 3 (#1098)
* Add support for other model types
Dependent on future PEFT changes, but with a fallback so it functions now
* Use encoding=utf8 for the training format
* Make shuffling optional
and describe dropout in a bit more detail
* Add eval_steps to control evaluation (sketched after this entry)
* Make callbacks not depend on globals
* Make save steps controllable
* Placeholder for initial load-existing-model support
and variable name cleanup
* Save/load parameters
* Last bit of cleanup
* Remove the `gptq_bits` reference, as the main branch removed that setting
* Add a higher_rank_limit option
A rank of 2048 is basically unreachable due to VRAM, but I trained at 1536 with batch size 1 on a 7B model.
Note that it lives in the do_train input just so it gets saved as a parameter
* Fix the math on save_steps
2023-04-16 02:35:13 -03:00
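Two of the bullets above (eval_steps and controllable save steps) map directly onto transformers' TrainingArguments; a minimal sketch with hypothetical values:

```python
# Evaluate every eval_steps optimizer steps and checkpoint every save_steps.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="loras/my-adapter",   # hypothetical output path
    evaluation_strategy="steps",
    eval_steps=100,
    save_strategy="steps",
    save_steps=500,
)
```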
oobabooga
abef355ed0
Remove deprecated flag
2023-04-15 01:21:19 -03:00
Lukas
5ad92c940e
LoRA training fixes (#970)
Fix the wrong input format being picked
Fix a crash when an entry in the dataset has an attribute with value None
2023-04-12 11:38:01 -03:00
IggoOnCode
09d8119e3c
Add CPU LoRA training (#938)
(It's very slow)
2023-04-10 17:29:00 -03:00
oobabooga
768354239b
Change training file encoding
2023-04-07 11:15:52 -03:00
oobabooga
ea6e77df72
Make the code more PEP8-compliant for readability (#862)
2023-04-07 00:15:45 -03:00
Alex "mcmonkey" Goodwin
0c7ef26981
Lora trainer improvements (#763)
2023-04-06 02:04:11 -03:00
oobabooga
2a267011dc
Use Path.stem for simplicity
2023-04-03 00:56:14 -03:00
oobabooga
58349f44a0
Handle training exception for unsupported models
2023-03-29 11:55:34 -03:00
oobabooga
a6d0373063
Fix training dataset loading #636
2023-03-29 11:48:17 -03:00
Alex "mcmonkey" Goodwin
e817fac542
Better defaults
2023-03-27 22:29:23 -07:00
Alex "mcmonkey" Goodwin
2e08af4edf
Implement initial Raw Text File Input
Also bump the default Rank & Alpha to values that will make sense in testing for users who don't know what they're doing and leave the defaults.
2023-03-27 22:15:32 -07:00
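A hedged sketch of the usual raw-text preparation this implies: tokenize the file and cut it into fixed-length, overlapping blocks (parameter names are illustrative):

```python
# Split one long token sequence into overlapping training blocks so context
# carries across block boundaries.
def chunk_tokens(token_ids, cutoff_len=256, overlap=128):
    step = cutoff_len - overlap
    return [token_ids[i:i + cutoff_len] for i in range(0, len(token_ids), step)]
```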
Alex "mcmonkey" Goodwin
b749952fe3
Change number minimums to 0
Gradio calculates 'step' relative to the minimum, so with a minimum of 1 the step values were all awkwardly offset. 0 isn't a valid value, but just don't slam the slider all the way to the left.
2023-03-27 21:22:43 -07:00
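For context on the quirk described above, a minimal Gradio sketch: with minimum=1 and step=32 the stops would land on 1, 33, 65, ...; with minimum=0 they align on clean multiples (the label and ranges are illustrative):

```python
# Gradio computes the step grid relative to `minimum`, so 0 keeps the
# stops on 0, 32, 64, ... (just don't actually select 0).
import gradio as gr

lora_rank = gr.Slider(minimum=0, maximum=1024, step=32, value=32, label="LoRA Rank")
```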
Alex "mcmonkey" Goodwin
ec6224f556
Use the new shared.args.lora_dir
2023-03-27 20:04:16 -07:00
Alex "mcmonkey" Goodwin
8a97f6ba29
Corrections per the PR comments
2023-03-27 18:39:06 -07:00
Alex "mcmonkey" Goodwin
7fab7ea1b6
Fix a couple of missed camelCase names
2023-03-27 18:19:06 -07:00