From 23d3f6909a8ddb722bb7b81c9a67f64e950af449 Mon Sep 17 00:00:00 2001
From: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date: Thu, 11 May 2023 10:21:20 -0300
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 3dc8d883..6724f196 100644
--- a/README.md
+++ b/README.md
@@ -99,7 +99,7 @@ pip install -r requirements.txt
 
 #### 4. Install GPTQ-for-LLaMa and the monkey patch
 
-The base installation covers regular transformers models as well as llama.cpp (GGML) models.
+The base installation covers [transformers](https://github.com/huggingface/transformers) models (`AutoModelForCausalLM` and `AutoModelForSeq2SeqLM` specifically) and [llama.cpp](https://github.com/ggerganov/llama.cpp) (GGML) models.
 
 To use 4-bit GPU models, the additional installation steps below are necessary:
 
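The new README wording names the two transformers auto classes covered by the base installation. A minimal sketch of what that distinction means in practice, assuming the `transformers` package is installed (the checkpoint names in the comments are illustrative, not prescribed by the patch):

```python
# The base install covers models loadable through these two auto classes.
from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM

# Decoder-only checkpoints (GPT/LLaMA-style text generation) would load via:
#   model = AutoModelForCausalLM.from_pretrained("gpt2")          # illustrative
# Encoder-decoder checkpoints (T5-style sequence-to-sequence) would load via:
#   model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")     # illustrative

# Both classes expose the same from_pretrained entry point:
print(hasattr(AutoModelForCausalLM, "from_pretrained"))
print(hasattr(AutoModelForSeq2SeqLM, "from_pretrained"))
```

GGML models go through llama.cpp's own loader instead of transformers, which is why the patch lists llama.cpp separately.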