All images.
This commit is contained in: parent 65e182611b, commit 817d092955
1 changed file with 7 additions and 8 deletions
- In this scenario, a security researcher gained initial access to the system via a "Valid Account" that was created through a misconfiguration. No adversarial ML techniques were used.
- These kinds of attacks illustrate that any attempt to secure an ML system should be built on top of "traditional" good cybersecurity hygiene, such as locking down the system with least privilege, multi-factor authentication, and monitoring and auditing.
<img src="/images/ClearviewAI.png" alt="ClearviewAI" width="300" height="400"/>
### GPT-2 Model Replication
**Summary of Incident:** OpenAI built GPT-2, a powerful natural language model, and, calling it "too dangerous to release," adopted a staged-release process to incrementally release the 1.5 billion parameter model. Before the full 1.5B parameter model could be released by OpenAI, two ML researchers replicated the model and released it to the public. *Note: this is a model replication attack. Here, the attacker recovers a functionally equivalent model (but generally with lower fidelity), perhaps to do reconnaissance (see the ProofPoint attack below). In model stealing, the fidelity of the stolen model is comparable to that of the original victim model.*
- The researchers then trained the modified Grover on the dataset they curated, using Grover's initial hyperparameters, which resulted in their replicated model.
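At a conceptual level, model replication boils down to re-training a comparable architecture on similarly distributed public data. Below is a minimal, toy-scale sketch of that idea using the open-source Hugging Face `transformers` library; the tiny config, placeholder corpus, and hyperparameters are illustrative assumptions, not the researchers' actual setup.

```python
# Toy-scale sketch of "model replication": train a small GPT-2-style language
# model from scratch on a public text corpus. The corpus, config, and
# hyperparameters are illustrative placeholders, not the incident's setup.
import torch
from transformers import GPT2Config, GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")   # reuse a public tokenizer
config = GPT2Config(n_layer=2, n_head=2, n_embd=128)    # tiny stand-in for the 1.5B model
model = GPT2LMHeadModel(config)

corpus = ["example sentence one.", "example sentence two."]  # placeholder "scraped" data
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)

model.train()
for epoch in range(3):
    for text in corpus:
        batch = tokenizer(text, return_tensors="pt")
        loss = model(**batch, labels=batch["input_ids"]).loss  # causal-LM loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```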
<img src="/images/OpenAI.png" alt="ClearviewAI" width="300" height="400"/>
### ProofPoint Evasion
**Summary of Incident:** CVE-2019-20634 describes how ML researchers evaded ProofPoint's email protection system by first building a copy-cat email protection ML model and then using the insights gained to evade the live system.
- Finally, these insights from the offline model allowed the researchers to create malicious emails that received preferable scores from the real ProofPoint email protection system, hence bypassing it.
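The general pattern here, training an offline "copy-cat" scorer and then probing it for features that lower the score, can be sketched with a toy text classifier. The training data and the greedy word-dropping heuristic below are assumptions for illustration only, not the researchers' actual method.

```python
# Toy sketch: fit an offline surrogate scorer, then use it to find a variant
# of a flagged email that the surrogate rates as benign. The training data
# and the greedy word-dropping heuristic are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = ["win free money now", "meeting notes attached",
          "claim your free prize now", "quarterly report draft"]
labels = [1, 0, 1, 0]            # 1 = flagged, 0 = allowed (stand-ins for observed verdicts)

surrogate = make_pipeline(TfidfVectorizer(), LogisticRegression())
surrogate.fit(emails, labels)

def flagged_score(text: str) -> float:
    """Surrogate's probability that the email would be flagged."""
    return surrogate.predict_proba([text])[0][1]

candidate = "win free money now"
# Greedily drop whichever word lowers the surrogate's "flagged" score the most.
while flagged_score(candidate) > 0.5 and len(candidate.split()) > 1:
    words = candidate.split()
    candidate = min((" ".join(words[:i] + words[i + 1:]) for i in range(len(words))),
                    key=flagged_score)

print(candidate, flagged_score(candidate))
```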
<img src="/images/ProofPoint.png" alt="ClearviewAI" width="300" height="400"/>
### Tay Poisoning
**Summary of Incident:** Microsoft created Tay, a Twitter chatbot aimed at 18- to 24-year-olds in the U.S. for entertainment purposes. Within 24 hours of its deployment, Tay had to be decommissioned because it tweeted reprehensible words.
- Average users of Twitter coordinated with the intent of defacing the Tay bot by exploiting this feedback loop.
- As a result of this coordinated attack, Tay's training data was poisoned, which led its conversation algorithms to generate more reprehensible material.
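The underlying failure mode, a model that keeps learning from unvetted user interactions, can be illustrated with an online classifier that is incrementally updated on whatever users send it. Everything in this sketch (data, labels, feature hashing) is invented for illustration.

```python
# Minimal sketch of a poisoned feedback loop: an online text classifier keeps
# updating on user-supplied examples with no moderation step, so a coordinated
# group can shift its behaviour. All data here is invented for illustration.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**12)
model = SGDClassifier()

# Curated seed data: 1 = acceptable output, 0 = unacceptable output.
seed_texts = ["have a nice day", "that movie was fun", "some offensive phrase"]
seed_labels = [1, 1, 0]
model.partial_fit(vectorizer.transform(seed_texts), seed_labels, classes=[0, 1])

# "Repeat after me"-style feedback loop: user messages flow straight back into
# training as positive examples.
poisoned_stream = ["some offensive phrase"] * 50    # coordinated users repeat the same content
for message in poisoned_stream:
    model.partial_fit(vectorizer.transform([message]), [1])   # implicitly labelled acceptable

print(model.predict(vectorizer.transform(["some offensive phrase"])))   # drifts toward 1
```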
<img src="/images/Tay.png" alt="ClearviewAI" width="300" height="400"/>
### Microsoft - Azure Service
- Via an exposed API, the team performed an online evasion attack by replaying the adversarial examples, which helped achieve this goal.
- This operation was a combination of traditional ATT&CK enterprise techniques, such as finding a valid account and executing code via an API, interleaved with adversarial-ML-specific steps such as offline and online evasion examples.
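A rough sketch of the offline-craft-then-replay pattern is shown below. The surrogate model, the FGSM-style perturbation, and the scoring endpoint are all hypothetical placeholders; no details of the real service are reproduced.

```python
# Sketch of the evasion chain: craft an adversarial image offline against a
# local surrogate model (single FGSM-style step), then replay it against an
# exposed scoring API. The surrogate, endpoint, and payload schema are all
# hypothetical placeholders, not details of the real service.
import requests
import torch
import torchvision.models as models

surrogate = models.resnet18(weights=None).eval()          # stand-in for the offline model copy
image = torch.rand(1, 3, 224, 224, requires_grad=True)    # placeholder input
true_label = torch.tensor([0])                            # assumed correct class of the input

loss = torch.nn.functional.cross_entropy(surrogate(image), true_label)
loss.backward()
adversarial = (image + 0.03 * image.grad.sign()).clamp(0, 1).detach()  # push away from true_label

# Online evasion: replay the crafted example against the (hypothetical) exposed API.
response = requests.post(
    "https://example.invalid/v1/score",                   # placeholder endpoint
    json={"instances": adversarial.flatten().tolist()},
)
print(response.status_code)
```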
<img src="/images/Msft1.PNG" alt="ClearviewAI" width="300" height="400"/>
### Bosch - EdgeAI
**Summary of Incident:** The Bosch team performed a research exercise on an internal edge AI system with the dual intention of extracting the model and crafting adversarial examples to evade it.
- Offline and online evasion examples.
- The team was also able to reconstruct the edge AI system with the extracted model.
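Model extraction against an exposed inference interface typically follows a query-and-distill pattern: send probing inputs, record the victim's outputs, and fit a local "student" on the pairs. In the sketch below, `query_edge_device` is a hypothetical stand-in for whatever interface the device exposes; the data and student model are illustrative only.

```python
# Sketch of query-based model extraction: label synthetic probe inputs with the
# victim's predictions, then fit a local "student" that mimics them.
# `query_edge_device` is a hypothetical stand-in for the device's interface.
import numpy as np
from sklearn.neural_network import MLPClassifier

def query_edge_device(batch: np.ndarray) -> np.ndarray:
    """Placeholder for the victim model exposed by the edge AI system."""
    return (batch.sum(axis=1) > 0).astype(int)       # pretend decision boundary

rng = np.random.default_rng(0)
probes = rng.normal(size=(2000, 16))                 # synthetic probing inputs
victim_labels = query_edge_device(probes)            # victim's answers

student = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
student.fit(probes, victim_labels)                   # the extracted / replicated model

agreement = (student.predict(probes) == victim_labels).mean()
print(f"agreement with victim on the probe set: {agreement:.2%}")
```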
<img src="/images/Bosch1.PNG" alt="ClearviewAI" width="300" height="400"/>
### Microsoft – EdgeAI
**Summary of Incident:** The Azure Red Team performed a red team exercise on a new Microsoft product designed for running AI workloads at the Edge.
- By feeding this perturbed image to the system, the red team was able to cause the ML model to misclassify the input image.
- This operation used one traditional MITRE ATT&CK technique, reconnaissance on the ML model used in the product; the remaining steps were offline evasion, followed by online evasion of the targeted product.
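The offline-evasion step can be pictured as an iterative perturbation loop against a local copy of the model: keep nudging the image in the direction that raises the model's loss until the predicted class flips, then submit the result to the live product. The model and input below are placeholders, not the actual product.

```python
# Toy sketch of offline evasion: iteratively perturb an image against a local
# copy of the classifier until its prediction flips; the resulting image would
# then be submitted to the live product (online evasion). The model and input
# are placeholders, not the actual Microsoft product.
import torch
import torchvision.models as models

model = models.mobilenet_v2(weights=None).eval()     # stand-in for the recon'd model
image = torch.rand(1, 3, 224, 224)
original_label = model(image).argmax(dim=1)

adv = image.clone()
epsilon, step = 0.05, 0.01                           # perturbation budget and step size
for _ in range(20):
    adv.requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(adv), original_label)
    loss.backward()
    with torch.no_grad():
        adv = adv + step * adv.grad.sign()                      # move toward higher loss
        adv = image + (adv - image).clamp(-epsilon, epsilon)    # stay within the budget
        adv = adv.clamp(0, 1)
    if model(adv).argmax(dim=1) != original_label:
        print("prediction flipped; candidate adversarial example found")
        break
```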
<img src="/images/msft2.png" alt="ClearviewAI" width="300" height="400"/>
### MITRE Physical Adversarial Examples
- TBD