Commit Graph

1996 Commits

Author  SHA1  Message  Date
Luis Lopez  9e7204bef4  Add tail-free and top-a sampling (#2357)  2023-05-29 21:40:01 -03:00
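
As a rough illustration of what the commit above adds, here is a minimal NumPy sketch of tail-free and top-a filtering applied to a token probability distribution. The function names, the padding at the cutoff, and the example values are illustrative; this is not the code from #2357.

```python
import numpy as np

def tail_free(probs, z):
    """Tail-free sampling: drop the flat 'tail' of the sorted distribution."""
    order = np.argsort(probs)[::-1]           # token ids sorted by descending probability
    sorted_probs = probs[order]
    d2 = np.abs(np.diff(sorted_probs, n=2))   # curvature of the sorted probability curve
    weights = d2 / d2.sum()                   # normalize curvature into a distribution
    cum = np.cumsum(weights)
    # Keep the head of the distribution up to the point where the cumulative
    # curvature mass reaches z; always keep the top token, always drop the last.
    keep = np.concatenate(([True], cum <= z, [False]))
    mask = np.zeros_like(probs, dtype=bool)
    mask[order[keep]] = True
    filtered = np.where(mask, probs, 0.0)
    return filtered / filtered.sum()

def top_a(probs, a):
    """Top-a sampling: keep tokens with probability >= a * p_max**2."""
    filtered = np.where(probs >= a * probs.max() ** 2, probs, 0.0)
    return filtered / filtered.sum()

probs = np.array([0.5, 0.25, 0.15, 0.07, 0.02, 0.01])
print(tail_free(probs, z=0.9))
print(top_a(probs, a=0.2))
```
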
oobabooga  b4662bf4af  Download gptq_model*.py using download-model.py  2023-05-29 16:12:54 -03:00
oobabooga  540a161a08  Update GPTQ-models-(4-bit-mode).md  2023-05-29 15:45:40 -03:00
oobabooga  b8d2f6d876  Merge remote-tracking branch 'refs/remotes/origin/main'  2023-05-29 15:33:05 -03:00
oobabooga  1394f44e14  Add triton checkbox for AutoGPTQ  2023-05-29 15:32:45 -03:00
oobabooga  166a0d9893  Update GPTQ-models-(4-bit-mode).md  2023-05-29 15:07:59 -03:00
oobabooga  962d05ca7e  Update README.md  2023-05-29 14:56:55 -03:00
oobabooga  4a190a98fd  Update GPTQ-models-(4-bit-mode).md  2023-05-29 14:56:05 -03:00
matatonic  2b7ba9586f  Fixes #2326, KeyError: 'assistant' (#2382)  2023-05-29 14:19:57 -03:00
oobabooga  6de727c524  Improve Eta Sampling preset  2023-05-29 13:56:15 -03:00
oobabooga  f34d20922c  Minor fix  2023-05-29 13:31:17 -03:00
oobabooga  983eef1e29  Attempt at evaluating falcon perplexity (failed)  2023-05-29 13:28:25 -03:00
Honkware  204731952a  Falcon support (trust-remote-code and autogptq checkboxes) (#2367)  2023-05-29 10:20:18 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
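
For context on the Falcon commit above: at the time, Falcon checkpoints shipped their own modeling code, so loading them through Transformers required opting in with trust_remote_code=True, which is what the new checkbox exposes. A hedged sketch of that kind of load, not the webui's actual loader code; the model name is only an example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example checkpoint; Falcon models carried custom modeling code at the time.
model_name = "tiiuae/falcon-7b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,   # opt in to executing the checkpoint's custom Python code
    device_map="auto",        # requires the accelerate package
)
```
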
Forkoz  60ae80cf28  Fix hang in tokenizer for AutoGPTQ llama models. (#2399)  2023-05-28 23:10:10 -03:00
oobabooga  2f811b1bdf  Change a warning message  2023-05-28 22:48:20 -03:00
oobabooga  9ee1e37121  Fix return message when no model is loaded  2023-05-28 22:46:32 -03:00
oobabooga  f27135bdd3  Add Eta Sampling preset  2023-05-28 22:44:35 -03:00
    Also remove some presets that I do not consider relevant
oobabooga  00ebea0b2a  Use YAML for presets and settings  2023-05-28 22:34:12 -03:00
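
The "Use YAML for presets and settings" change above means presets become plain YAML mappings. A minimal sketch of reading one with PyYAML; the keys shown are typical sampling parameters and are only illustrative, not necessarily the exact preset schema.

```python
import yaml  # pip install pyyaml

# Example preset contents (illustrative keys):
preset_text = """
temperature: 0.7
top_p: 0.9
top_k: 40
repetition_penalty: 1.15
"""

preset = yaml.safe_load(preset_text)
print(preset["temperature"], preset["top_p"])

# Loading from disk would look the same:
# with open("presets/MyPreset.yaml") as f:
#     preset = yaml.safe_load(f)
```
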
Elias Vincent Simon  2cf711f35e  update SpeechRecognition dependency (#2345)  2023-05-26 00:34:57 -03:00
jllllll  78dbec4c4e  Add 'scipy' to requirements.txt #2335 (#2343)  2023-05-25 23:26:25 -03:00
    Unlisted dependency of bitsandbytes
Luis Lopez  0dbc3d9b2c  Fix get_documents_ids_distances return error when n_results = 0 (#2347)  2023-05-25 23:25:36 -03:00
jllllll  07a4f0569f  Update README.md to account for BnB Windows wheel (#2341)  2023-05-25 18:44:26 -03:00
oobabooga  acfd876f29  Some qol changes to "Perplexity evaluation"  2023-05-25 15:06:22 -03:00
oobabooga  8efdc01ffb  Better default for compute_dtype  2023-05-25 15:05:53 -03:00
oobabooga  fc33216477  Small fix for n_ctx in llama.cpp  2023-05-25 13:55:51 -03:00
oobabooga  35009c32f0  Beautify all CSS  2023-05-25 13:12:34 -03:00
oobabooga  231305d0f5  Update README.md  2023-05-25 12:05:08 -03:00
oobabooga  37d4ad012b  Add a button for rendering markdown for any model  2023-05-25 11:59:27 -03:00
oobabooga  9a43656a50  Add bitsandbytes note  2023-05-25 11:21:52 -03:00
oobabooga  548f05e106  Add windows bitsandbytes wheel by jllllll  2023-05-25 10:48:22 -03:00
DGdev91  cf088566f8  Make llama.cpp read prompt size and seed from settings (#2299)  2023-05-25 10:29:31 -03:00
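
Regarding the llama.cpp commit above: in llama-cpp-python the context size and RNG seed are constructor arguments, so wiring them to settings is essentially a matter of passing two values through. A small sketch under that assumption; the path and values are placeholders.

```python
from llama_cpp import Llama

# Values that would normally come from the UI or a settings file (illustrative).
settings = {"n_ctx": 2048, "seed": 1234}

llm = Llama(
    model_path="models/ggml-model.bin",  # placeholder path to a GGML model
    n_ctx=settings["n_ctx"],             # prompt/context size in tokens
    seed=settings["seed"],               # seed for reproducible sampling
)

output = llm("Q: What is the capital of France? A:", max_tokens=16)
print(output["choices"][0]["text"])
```
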
Luis Lopez  ee674afa50  Add superbooga time weighted history retrieval (#2080)  2023-05-25 10:22:45 -03:00
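
Time-weighted history retrieval, as in the superbooga commit above, generally means discounting a chunk's similarity score by its age so that recent history wins ties. The decay formula below is a generic illustration, not the extension's actual scoring.

```python
import math

def time_weighted_scores(hits, decay=0.01):
    """Re-rank retrieved chunks by similarity discounted by age.

    hits: list of (chunk_text, similarity, age_in_turns) tuples (illustrative shape).
    """
    ranked = []
    for text, similarity, age in hits:
        recency = math.exp(-decay * age)   # newer chunks decay less
        ranked.append((similarity * recency, text))
    return [text for score, text in sorted(ranked, reverse=True)]

hits = [
    ("old but very relevant", 0.92, 200),
    ("recent and relevant", 0.85, 3),
    ("recent but off-topic", 0.40, 1),
]
print(time_weighted_scores(hits))
```
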
oobabooga  a04266161d  Update README.md  2023-05-25 01:23:46 -03:00
oobabooga  361451ba60  Add --load-in-4bit parameter (#2320)  2023-05-25 01:14:13 -03:00
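
The --load-in-4bit flag above corresponds to bitsandbytes 4-bit loading through Transformers, and the earlier "Better default for compute_dtype" commit concerns the compute dtype used for those 4-bit matmuls. A hedged sketch of that style of load; the model name and the particular defaults are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Example checkpoint; any decoder-only model loads the same way.
model_name = "facebook/opt-1.3b"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # what --load-in-4bit toggles
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for the dequantized matmuls
    bnb_4bit_quant_type="nf4",
)

# Requires bitsandbytes, accelerate, and a CUDA GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
```
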
oobabooga  63ce5f9c28  Add back a missing bos token  2023-05-24 13:54:36 -03:00
Alex "mcmonkey" Goodwin  3cd7c5bdd0  LoRA Trainer: train_only_after option to control which part of your input to train on (#2315)  2023-05-24 12:43:22 -03:00
eiery  9967e08b1f  update llama-cpp-python to v0.1.53 for ggml v3, fixes #2245 (#2264)  2023-05-24 10:25:28 -03:00
Gabriel Terrien  e50ade438a  FIX silero_tts/elevenlabs_tts activation/deactivation (#2313)  2023-05-24 10:06:38 -03:00
Gabriel Terrien  fc116711b0  FIX save_model_settings function to also update shared.model_config (#2282)  2023-05-24 10:01:07 -03:00
flurb18  d37a28730d  Beginning of multi-user support (#2262)  2023-05-24 09:38:20 -03:00
    Adds a lock to generate_reply
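
The multi-user commit above notes that a lock was added to generate_reply: concurrent requests are serialized so only one generation runs on the model at a time. A minimal sketch of that pattern with threading; the names and the fake model call are illustrative.

```python
import threading
import time

generation_lock = threading.Lock()  # one generation at a time, shared across users

def generate_reply(prompt):
    # Only one thread may run the (GPU-bound) model call at a time;
    # other requests block here until the lock is released.
    with generation_lock:
        time.sleep(0.1)  # stand-in for the actual model call
        print(f"finished generating for: {prompt}")

threads = [threading.Thread(target=generate_reply, args=(f"user {i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```
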
Anthony K  7dc87984a2  Fix spelling mistake in new name var of chat api (#2309)  2023-05-23 23:03:03 -03:00
oobabooga  1490c0af68  Remove RWKV from requirements.txt  2023-05-23 20:49:20 -03:00
Gabriel Terrien  7aed53559a  Support of the --gradio-auth flag (#2283)  2023-05-23 20:39:26 -03:00
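
The --gradio-auth flag above presumably forwards credentials to Gradio's built-in basic auth. A tiny sketch of what that looks like at the launch() call; the demo app and the credentials are placeholders.

```python
import gradio as gr

def echo(message):
    return message

demo = gr.Interface(fn=echo, inputs="text", outputs="text")

# Gradio's built-in auth: the page prompts for these credentials.
# A "user:password" string from a CLI flag would be split and passed like this.
demo.launch(auth=("user", "password"))
```
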
Atinoda  4155aaa96a  Add mention to alternative docker repository (#2145)  2023-05-23 20:35:53 -03:00
matatonic  9714072692  [extensions/openai] use instruction templates with chat_completions (#2291)  2023-05-23 19:58:41 -03:00
oobabooga  74aae34beb  Allow passing your name to the chat API  2023-05-23 19:39:18 -03:00
oobabooga  fb6a00f4e5  Small AutoGPTQ fix  2023-05-23 15:20:01 -03:00
oobabooga  c2d2ef7c13  Update Generation-parameters.md  2023-05-23 02:11:28 -03:00
oobabooga  b0845ae4e8  Update RWKV-model.md  2023-05-23 02:10:08 -03:00
oobabooga  cd3618d7fb  Add support for RWKV in Hugging Face format  2023-05-23 02:07:28 -03:00