Author | Commit | Message | Date
Forkoz | 5c5ef4cef7 | UI: change n_gpu_layers maximum to 256 for larger models. (#5262) | 2024-01-17 17:13:16 -03:00
oobabooga | cbf6f9e695 | Update some UI messages | 2023-12-30 21:31:17 -08:00
oobabooga | 0e54a09bcb | Remove exllamav1 loaders (#5128) | 2023-12-31 01:57:06 -03:00
oobabooga | e83e6cedbe | Organize the model menu | 2023-12-19 13:18:26 -08:00
oobabooga | de138b8ba6 | Add llama-cpp-python wheels with tensor cores support (#5003) | 2023-12-19 17:30:53 -03:00
oobabooga | 0a299d5959 | Bump llama-cpp-python to 0.2.24 (#5001) | 2023-12-19 15:22:21 -03:00
oobabooga | f6d701624c | UI: mention that QuIP# does not work on Windows | 2023-12-18 18:05:02 -08:00
Water | 674be9a09a | Add HQQ quant loader (#4888) (Co-authored-by: oobabooga) | 2023-12-18 21:23:16 -03:00
oobabooga | f1f2c4c3f4 | Add --num_experts_per_token parameter (ExLlamav2) (#4955) | 2023-12-17 12:08:33 -03:00
oobabooga | 3bbf6c601d | AutoGPTQ: Add --disable_exllamav2 flag (Mixtral CPU offloading needs this) | 2023-12-15 06:46:13 -08:00
oobabooga | 7f1a6a70e3 | Update the llamacpp_HF comment | 2023-12-12 21:04:20 -08:00
Morgan Schweers | 602b8c6210 | Make new browser reloads recognize current model. (#4865) | 2023-12-11 02:51:01 -03:00
oobabooga | 2a335b8aa7 | Cleanup: set shared.model_name only once | 2023-12-08 06:35:23 -08:00
oobabooga | 7fc9033b2e | Recommend ExLlama_HF and ExLlamav2_HF | 2023-12-04 15:28:46 -08:00
oobabooga | e0ca49ed9c | Bump llama-cpp-python to 0.2.18 (2nd attempt) (#4637): update requirements*.txt; add back seed | 2023-11-18 00:31:27 -03:00
oobabooga | 9d6f79db74 | Revert "Bump llama-cpp-python to 0.2.18 (#4611)" (reverts commit 923c8e25fb) | 2023-11-17 05:14:25 -08:00
oobabooga | 8b66d83aa9 | Set use_fast=True by default, create --no_use_fast flag (increases tokens/second for HF loaders) | 2023-11-16 19:55:28 -08:00
oobabooga | 923c8e25fb | Bump llama-cpp-python to 0.2.18 (#4611) | 2023-11-16 22:55:14 -03:00
oobabooga | cd41f8912b | Warn users about n_ctx / max_seq_len | 2023-11-15 18:56:42 -08:00
oobabooga | af3d25a503 | Disable logits_all in llamacpp_HF (makes processing 3x faster) | 2023-11-07 14:35:48 -08:00
oobabooga | ec17a5d2b7 | Make OpenAI API the default API (#4430) | 2023-11-06 02:38:29 -03:00
feng lui | 4766a57352 | transformers: add use_flash_attention_2 option (#4373) | 2023-11-04 13:59:33 -03:00
wouter van der plas | add359379e | fixed two links in the ui (#4452) | 2023-11-04 13:41:42 -03:00
oobabooga | 45fcb60e7a | Make truncation_length_max apply to max_seq_len/n_ctx | 2023-11-03 11:29:31 -07:00
oobabooga | fcb7017b7a | Remove a checkbox | 2023-11-02 12:24:09 -07:00
Julien Chaumond | fdcaa955e3 | transformers: Add a flag to force load from safetensors (#4450) | 2023-11-02 16:20:54 -03:00
oobabooga | c0655475ae | Add cache_8bit option | 2023-11-02 11:23:04 -07:00
Mehran Ziadloo | aaf726dbfb | Updating the shared settings object when loading a model (#4425) | 2023-11-01 01:29:57 -03:00
Abhilash Majumder | 778a010df8 | Intel Gpu support initialization (#4340) | 2023-10-26 23:39:51 -03:00
oobabooga | 92691ee626 | Disable trust_remote_code by default | 2023-10-23 09:57:44 -07:00
oobabooga | df90d03e0b | Replace --mul_mat_q with --no_mul_mat_q | 2023-10-22 12:23:03 -07:00
oobabooga | 773c17faec | Fix a warning | 2023-10-10 20:53:38 -07:00
oobabooga | 3a9d90c3a1 | Download models with 4 threads by default | 2023-10-10 13:52:10 -07:00
cal066 | cc632c3f33 | AutoAWQ: initial support (#3999) | 2023-10-05 13:19:18 -03:00
oobabooga | b6fe6acf88 | Add threads_batch parameter | 2023-10-01 21:28:00 -07:00
jllllll | 41a2de96e5 | Bump llama-cpp-python to 0.2.11 | 2023-10-01 18:08:10 -05:00
oobabooga | f2d82f731a | Add recommended NTKv1 alpha values | 2023-09-29 13:48:38 -07:00
oobabooga | 96da2e1c0d | Read more metadata (config.json & quantize_config.json) | 2023-09-29 06:14:16 -07:00
oobabooga | f931184b53 | Increase truncation limits to 32768 | 2023-09-28 19:28:22 -07:00
StoyanStAtanasov | 7e6ff8d1f0 | Enable NUMA feature for llama_cpp_python (#4040) | 2023-09-26 22:05:00 -03:00
oobabooga | 1ca54faaf0 | Improve --multi-user mode | 2023-09-26 06:42:33 -07:00
oobabooga | d0d221df49 | Add --use_fast option (closes #3741) | 2023-09-25 12:19:43 -07:00
oobabooga | b973b91d73 | Automatically filter by loader (closes #4072) | 2023-09-25 10:28:35 -07:00
oobabooga | 36c38d7561 | Add disable_exllama to Transformers loader (for GPTQ LoRA training) | 2023-09-24 20:03:11 -07:00
oobabooga | 7a3ca2c68f | Better detect EXL2 models | 2023-09-23 13:05:55 -07:00
oobabooga | 37e2980e05 | Recommend mul_mat_q for llama.cpp | 2023-09-17 08:27:11 -07:00
kalomaze | 7c9664ed35 | Allow full model URL to be used for download (#3919) (Co-authored-by: oobabooga) | 2023-09-16 10:06:13 -03:00
Johan | fdcee0c215 | Allow custom tokenizer for llamacpp_HF loader (#3941) | 2023-09-15 12:38:38 -03:00
oobabooga | 9331ab4798 | Read GGUF metadata (#3873) | 2023-09-11 18:49:30 -03:00
oobabooga | ed86878f02 | Remove GGML support | 2023-09-11 07:44:00 -07:00