cebtenzzre | 4338e72a51 | MPT: use upstream llama.cpp implementation (#1515) | 2023-10-19 15:25:17 -04:00
Cebtenzzre | f9deb87d20 | convert scripts: add feed-forward length for better compatibility | 2023-10-05 18:16:19 -04:00
This GGUF key is used by all llama.cpp models with upstream support.
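For context, the feed-forward length is written as a per-architecture GGUF metadata key. A minimal sketch of how a convert script could emit it with the gguf Python package that ships with llama.cpp; the output path, hidden size, and the expansion ratio of 4 are illustrative assumptions, not values taken from the actual scripts:

    import gguf

    d_model = 4096                    # hidden size from the HF config (assumed value)
    ffn_hidden = 4 * d_model          # MPT-style default expansion ratio of 4 (assumed)

    writer = gguf.GGUFWriter("mpt.gguf", "mpt")
    writer.add_embedding_length(d_model)
    writer.add_feed_forward_length(ffn_hidden)  # the key upstream llama.cpp reads

    writer.write_header_to_file()
    writer.write_kv_data_to_file()
    writer.close()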
Cebtenzzre | 0493e6eb07 | convert scripts: use bytes_to_unicode from transformers | 2023-10-05 18:16:19 -04:00
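The bytes_to_unicode change amounts to importing GPT-2's byte-to-unicode table from transformers instead of re-implementing it. A small sketch under that assumption, with an illustrative token; this is not the scripts' exact code:

    from transformers.models.gpt2.tokenization_gpt2 import bytes_to_unicode

    byte_encoder = bytes_to_unicode()                     # {byte value: unicode char}
    byte_decoder = {ch: b for b, ch in byte_encoder.items()}

    # Map a byte-level BPE token string back to the raw bytes it represents.
    token = "Ġhello"                                      # illustrative token
    raw = bytes(byte_decoder[ch] for ch in token)
    print(raw)                                            # b' hello'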
Cebtenzzre | 4219c0e2e7 | convert scripts: make them directly executable | 2023-10-05 18:16:19 -04:00
Cebtenzzre | cca9e6ce81 | convert_mpt_hf_to_gguf.py: better tokenizer decoding | 2023-10-05 18:16:19 -04:00
Cebtenzzre | 25297786db | convert scripts: load model as late as possible | 2023-10-05 18:16:19 -04:00
Cebtenzzre | fd47088f2b | conversion scripts: cleanup | 2023-10-05 18:16:19 -04:00
Cebtenzzre | 7c67262a13 | backend: port MPT to GGUF | 2023-10-05 18:16:19 -04:00