Mirror of https://github.com/mitre/advmlthreatmatrix.git (synced 2025-12-10 14:06:44 -05:00)
Update adversarial-ml-101.md
This commit is contained in:
parent
3ce4a43300
commit
33bd9f1a66
1 changed file with 3 additions and 1 deletion
@@ -3,7 +3,9 @@ This is a short primer intended for security analysts

 Informally, Adversarial ML is "subverting machine learning systems for fun and profit". The methods underpinning production machine learning systems are systematically vulnerable to a new class of vulnerabilities across the machine learning supply chain, collectively known as Adversarial Machine Learning. Adversaries can exploit these vulnerabilities to manipulate AI systems and alter their behavior to serve a malicious end goal.

-Consider a typical ML pipeline, shown on the left, that is gated behind an API, wherein the only way to use the model is to send a query and observe a response. In this example, we assume a blackbox setting: the attacker does **NOT** have direct access to the training data, knowledge of the algorithm used, or the source code of the model. The attacker can only query the model and observe the response.
+Consider a typical ML pipeline, shown below, that is gated behind an API, wherein the only way to use the model is to send a query and observe a response. In this example, we assume a blackbox setting: the attacker does **NOT** have direct access to the training data, knowledge of the algorithm used, or the source code of the model. The attacker can only query the model and observe the response.
+
+

 Here are some of the adversarial ML attacks that an adversary can perform on this system:

 | Attack | Overview |
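To make the blackbox setting above concrete, here is a minimal Python sketch, not part of the primer itself: the `query_model` function and the hidden linear model standing in for the gated API are assumptions invented for this illustration. It shows the only capability the attacker has, sending queries and observing labels, and a naive probe that watches for label flips under small input perturbations.

```python
import numpy as np

# --- Stand-in for the gated ML API (illustrative; hidden from the attacker) ---
rng = np.random.default_rng(0)
_hidden_weights = rng.normal(size=8)  # model internals the attacker never sees

def query_model(x: np.ndarray) -> int:
    """Opaque endpoint: accepts a feature vector, returns only a class label."""
    return int(_hidden_weights @ x > 0)

# --- Blackbox interaction: the attacker only sends queries and observes labels ---
x = rng.normal(size=8)          # an input the attacker controls
base_label = query_model(x)

# Naive probe: nudge each feature and check whether the predicted label flips.
# Flips under small perturbations reveal directions the model is sensitive to.
for i in range(len(x)):
    for step in (+0.5, -0.5):
        x_probe = x.copy()
        x_probe[i] += step
        if query_model(x_probe) != base_label:
            print(f"feature {i}: label flips when perturbed by {step:+.1f}")
```

Even this crude query-and-observe loop leaks information about the model's decision boundary, which is the starting point for the attacks summarized in the table above.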