| Author | Commit | Message | Date |
|--------|--------|---------|------|
| oobabooga | 16e1696071 | Minor QoL change | 2023-09-12 10:44:26 -07:00 |
| oobabooga | 9331ab4798 | Read GGUF metadata (#3873) | 2023-09-11 18:49:30 -03:00 |
| oobabooga | ed86878f02 | Remove GGML support | 2023-09-11 07:44:00 -07:00 |
| jllllll | 4a999e3bcd | Use separate llama-cpp-python packages for GGML support | 2023-08-26 10:40:08 -05:00 |
| oobabooga | 83640d6f43 | Replace ggml occurrences with gguf | 2023-08-26 01:06:59 -07:00 |
| oobabooga | d6934bc7bc | Implement CFG for ExLlama_HF (#3666) | 2023-08-24 16:27:36 -03:00 |
| oobabooga | 65aa11890f | Refactor everything (#3481) | 2023-08-06 21:49:27 -03:00 |
| oobabooga | 959feba602 | When saving model settings, only save the settings for the current loader | 2023-08-01 06:10:09 -07:00 |
| oobabooga | 75c2dd38cf | Remove flexgen support | 2023-07-25 15:15:29 -07:00 |
| oobabooga | 27a84b4e04 | Make AutoGPTQ the default again, purely for compatibility with more models; you should still use ExLlama_HF for LLaMA models | 2023-07-15 22:29:23 -07:00 |
| oobabooga | b284f2407d | Make ExLlama_HF the new default for GPTQ | 2023-07-14 14:03:56 -07:00 |
| Salvador E. Tropea | 324e45b848 | [Fixed] wbits and groupsize values from model not shown (#2977) | 2023-07-11 23:27:38 -03:00 |
| oobabooga | 9290c6236f | Keep ExLlama_HF if already selected | 2023-06-25 19:06:28 -03:00 |
| oobabooga | 9f40032d32 | Add ExLlama support (#2444) | 2023-06-16 20:35:38 -03:00 |
| oobabooga | 7ef6a50e84 | Reorganize model loading UI completely (#2720) | 2023-06-16 19:00:37 -03:00 |
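The GGML-to-GGUF migration is the main thread of this period: `83640d6f43` renames the format references, `ed86878f02` drops the old loader, and `9331ab4798` reads GGUF metadata. For context on what "Read GGUF metadata" involves, here is a minimal sketch of parsing the fixed GGUF file header, based on the public GGUF specification (v2+, 64-bit counts), not the repository's actual implementation:

```python
import struct

def read_gguf_header(path):
    """Read the fixed GGUF header: magic, version, tensor and metadata counts."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file (magic={magic!r})")
        # Header integers are little-endian; GGUF v2+ uses 64-bit counts.
        version, = struct.unpack("<I", f.read(4))
        tensor_count, metadata_kv_count = struct.unpack("<QQ", f.read(16))
    return {"version": version,
            "tensor_count": tensor_count,
            "metadata_kv_count": metadata_kv_count}
```

The metadata key/value pairs that follow this header carry fields such as the model's context length, which a loader UI can use to pre-fill its settings.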
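Similarly, jllllll's `4a999e3bcd` kept legacy GGML models loadable after upstream llama-cpp-python moved to GGUF, by installing two builds side by side. A hedged sketch of that dual-package pattern (the `llama_cpp_ggml` module name here is a hypothetical placeholder, not the actual wheel name):

```python
# Module names are illustrative; the real GGML-pinned package may differ.
try:
    import llama_cpp            # current build, loads GGUF models
except ImportError:
    llama_cpp = None

try:
    import llama_cpp_ggml       # pinned older build, loads legacy GGML models
except ImportError:
    llama_cpp_ggml = None

def pick_backend(model_path: str):
    """Return the llama-cpp-python variant matching the model file format."""
    backend = llama_cpp if model_path.endswith(".gguf") else llama_cpp_ggml
    if backend is None:
        raise RuntimeError("No llama-cpp-python build installed for this format")
    return backend
```

Dispatching on the file extension lets one codebase serve both formats until the GGML path is removed, which is exactly what `ed86878f02` later does.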