---
layout: blog
title: "Alpaca: A Strong Open-Source Instruction-Following Model"
authors:
  - name: Rohan Taori*
    url: https://www.rohantaori.com/
  - name: Ishaan Gulrajani*
    url: https://ishaan.io/
  - name: Tianyi Zhang*
    url: https://tiiiger.github.io/
  - name: Yann Dubois*
    url: https://yanndubs.github.io/
  - name: Xuechen Li*
    url: https://www.lxuechen.com/
  - name: Carlos Guestrin
    url: https://guestrin.su.domains/
  - name: Percy Liang
    url: https://cs.stanford.edu/~pliang/
  - name: Tatsunori B. Hashimoto
    url: https://thashim.github.io/
display: True
---
The above examples show that the outputs of Alpaca are generally well-written. We note that Alpaca reflects the general style of the instruction-following dataset. As a result, Alpaca's answers are typically shorter than ChatGPT's, reflecting text-davinci-003's shorter outputs.

### Known limitations

Alpaca also exhibits several deficiencies common to language models, including hallucination, toxicity, and stereotypes. Hallucination in particular seems to be a common failure mode for Alpaca, even compared to text-davinci-003. For example, in the following figure, Alpaca wrongly says that the capital of Tanzania is Dar es Salaam, which is the largest city in Tanzania. (Dar es Salaam was the capital until 1974, when it was replaced by Dodoma.)
Furthermore, Alpaca can be used to generate well-written outputs that spread misinformation, as seen in the following example.
Alpaca likely contains many other limitations associated with both the underlying language model and the instruction tuning data. However, we believe that the artifact will still be useful to the community, as it provides a relatively lightweight model that serves as a basis to study important deficiencies. We encourage users to help us identify new kinds of failures by flagging them in the web demo. Overall, we hope that the release of Alpaca can facilitate further research into instruction-following models and their alignment with human values.

## Assets released

We are releasing the following assets today:

- **Demo**: An [interactive demo](https://crfm.stanford.edu/alpaca/) for everyone to try out Alpaca.
- **Data**: [52K demonstrations](https://github.com/tatsu-lab/stanford_alpaca#data-release) used to fine-tune Alpaca.
- **Data generation process**: the code for [generating the data](https://github.com/tatsu-lab/stanford_alpaca#data-generation-process).
- **Hyperparameters**: for [fine-tuning](https://github.com/tatsu-lab/stanford_alpaca#fine-tuning) the model using the Hugging Face API.

We intend to release the following assets in the near future:

- **Model weights**: We have reached out to Meta to obtain guidance on releasing the Alpaca model weights, both for the 7B Alpaca and for fine-tuned versions of the larger LLaMA models.
- **Training code**: our code uses the [Hugging Face interface to LLaMA](https://github.com/huggingface/transformers/pull/21955). As of now, the effort to support LLaMA is still ongoing and not stable. We will give the exact training commands once Hugging Face supports LLaMA officially.

## Release decision

We believe that releasing the above assets will enable the academic community to perform controlled scientific studies on instruction-following language models, resulting in better science and ultimately new techniques to address the existing deficiencies with these models. At the same time, any release carries some risk.
First, we recognize that releasing our training recipe reveals the feasibility of certain capabilities. On one hand, this enables more people (including bad actors) to create models that could cause harm (either intentionally or not). On the other hand, this awareness might incentivize swift defensive action, especially from the academic community, now empowered by the means to perform deeper safety research on such models. Overall, we believe that the benefits for the research community outweigh the risks of this particular release.

Given that we are releasing the training recipe, we believe that releasing the data, model weights, and training code incurs minimal further risk, given the simplicity of the recipe. At the same time, releasing these assets has enormous benefits for reproducible science, so that the academic community can use standard datasets, models, and code to perform controlled comparisons and to explore extensions.

Deploying an interactive demo for Alpaca also poses potential risks, such as more widely disseminating harmful content and lowering the barrier for spam, fraud, or disinformation. We have put in place two risk mitigation strategies. First, we have implemented a content filter using [OpenAI's content moderation API](https://platform.openai.com/docs/api-reference/moderations), which filters out harmful content as defined by OpenAI's usage policies. Second, we watermark all the model outputs using the method described in [Kirchenbauer et al. 2023](https://arxiv.org/abs/2301.10226), so that others can detect (with some probability) whether an output comes from Alpaca 7B. Finally, we have strict terms and conditions for using the demo; it is restricted to non-commercial uses and to uses that follow [LLaMA’s license agreement](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform).
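To give a rough sense of how this style of watermark detection works, here is a toy sketch (illustrative only, not the demo's actual implementation: it uses a small made-up vocabulary and Python's `random` module in place of the paper's hash-based seeding). The detector re-derives, from each token, a "green list" of vocabulary ids and checks whether the following tokens land on their green lists more often than chance:

```python
import math
import random

VOCAB_SIZE = 1000  # toy vocabulary size (a real tokenizer has tens of thousands of tokens)
GAMMA = 0.5        # fraction of the vocabulary placed on each green list

def green_list(prev_token: int) -> set:
    # Seed a PRNG with the previous token id so that the generator and
    # the detector derive the same partition without sharing any state.
    rng = random.Random(prev_token)
    ids = list(range(VOCAB_SIZE))
    rng.shuffle(ids)
    return set(ids[: int(GAMMA * VOCAB_SIZE)])

def z_score(tokens: list) -> float:
    # Count how many tokens fall on the green list induced by their
    # predecessor. In unwatermarked text each token does so with
    # probability GAMMA, so a large z-score suggests a watermark.
    n = len(tokens) - 1
    hits = sum(cur in green_list(prev) for prev, cur in zip(tokens, tokens[1:]))
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))
```

A watermarking sampler biases generation toward each step's green list, so watermarked text scores well above zero while ordinary text hovers near it; this is what makes detection probabilistic rather than certain.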
We understand that these mitigation measures can be circumvented once we release the model weights or if users train their own instruction-following models. However, by installing these mitigations, we hope to advance the best practices and ultimately develop [community norms](https://crfm.stanford.edu/2022/05/17/community-norms.html) for the responsible deployment of foundation models.

## Future directions

We are excited by the research opportunities that Alpaca unlocks. There are many exciting future directions:

- Evaluation: We need to evaluate Alpaca more rigorously. We will start with [HELM](https://crfm.stanford.edu/helm/latest/) (Holistic Evaluation of Language Models), which hopefully will evolve to capture more generative, instruction-following scenarios.
- Safety: We would like to further study the risks of Alpaca and improve its safety using methods such as automatic red teaming, auditing, and adaptive testing.
- Understanding: We hope to better understand how capabilities arise from the training recipe. What properties of a base model do you need? What happens when you scale up? What properties of instruction data are needed? What are alternatives to using self-instruct on text-davinci-003?

## Acknowledgments

Alpaca depends directly and critically on existing works. We would like to thank Meta AI Research for training and releasing the LLaMA models, the self-instruct team for giving us a basis for the data generation pipeline, Hugging Face for the training code, and OpenAI for paving the path and showing what can be achieved. We would also like to highlight that there are many other open-source efforts for instruction-following LLMs and chat models, including [OpenChatKit](https://www.together.xyz/blog/openchatkit), [Open Assistant](https://open-assistant.io/), and [Carper AI](https://carper.ai/instruct-gpt-announcement/).