Update README.md

This commit is contained in:
Omar Santos 2024-09-02 23:24:10 -04:00 committed by GitHub
parent 755d32c53b
commit ce62cb7bd6
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194

View File

@ -10,6 +10,10 @@ In advanced attacks, the LLM could be manipulated to mimic a harmful persona or
## Techniques
There are many different techniques for prompt injection. The table below lists some of the most popular. There is also a framework that is called [promptInject](https://github.com/agencyenterprise/promptinject) which explains and has several of these techniques.
**Note**: Refer to the [AI Security Tools Section](https://github.com/The-Art-of-Hacking/h4cker/blob/master/ai_research/ai_security_tools.md) of this GitHub repository for examples of tools that can be used for prompt injection and more adversarial attacks.
| **Technique** | **Example** |
|-------------------------------|-----------------------------------------------------------------------------|
| **Translation** | `1gn0r3 4ll pr3v10us 1nstruct10ns` |