mirror of
https://github.com/mitre/advmlthreatmatrix.git
synced 2025-12-15 00:09:35 -05:00
Update adversarial-ml-101.md
This commit is contained in:
parent
ac1aea25b7
commit
99ad22ef34
1 changed file with 4 additions and 0 deletions
@@ -1,4 +1,6 @@
## Adversarial ML 101
This is a short primer intended for security analysts.
Informally, Adversarial ML is "subverting machine learning systems for fun and profit". The methods underpinning production machine learning systems are systematically vulnerable to a new class of vulnerabilities across the machine learning supply chain, collectively known as Adversarial Machine Learning. Adversaries can exploit these vulnerabilities to manipulate AI systems and alter their behavior to serve a malicious end goal.
Consider a typical ML pipeline, shown on the left, that is gated behind an API: the only way to use the model is to send a query and observe a response. In this example, we assume a blackbox setting: the attacker does **NOT** have direct access to the training data, knowledge of the algorithm used, or the source code of the model. The attacker can only query the model and observe the response.
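The blackbox setting described above can be sketched as follows. This is a minimal illustration, not part of the original document: the model, its decision rule, and the `query` interface are all hypothetical stand-ins for a real API-gated system.

```python
# Hypothetical sketch of the blackbox setting: the attacker can only call
# query() and observe the response. Model internals (weights, training data,
# source code) are hidden behind the function boundary, much like an API.

def make_blackbox_model():
    """Stand-in for a model gated behind an API (assumed, for illustration)."""
    def query(features):
        # The decision rule here is arbitrary; the attacker cannot see it.
        return "cat" if sum(features) > 1.0 else "dog"
    return query

query = make_blackbox_model()

# The attacker probes the model with chosen inputs and records the responses,
# which is the only information available in the blackbox setting.
probes = [[0.2, 0.3], [0.9, 0.4], [0.5, 0.5]]
observations = [(x, query(x)) for x in probes]
```

From a sequence of such query/response pairs, an attacker can begin to infer the model's decision boundary without ever seeing its internals, which is the starting point for the attacks listed below.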
@@ -22,4 +24,6 @@ Here are some of the adversarial ML attacks that an adversary can perform on this system:
3. Though the illustration shows blackbox attacks, these attacks have also been shown to work in whitebox settings (where the attacker has access to the model architecture, code, or training data).
4. Though we were not specific about the kind of data - image, audio, time series, or tabular - research has shown that these attacks exist across all of these data types.
## Next Recommended Reading
Head to [Adversarial ML Threat Matrix](/pages/adversarial-ml-threat-matrix.md) to learn about the structure of the matrix alongside definitions.