From 09f67d2eeb235c01a0933d4bbcf4ae96488749da Mon Sep 17 00:00:00 2001
From: ramtherunner <34756719+ramtherunner@users.noreply.github.com>
Date: Fri, 16 Oct 2020 00:53:18 -0700
Subject: [PATCH] Update case-studies-page.md

---
 pages/case-studies-page.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/pages/case-studies-page.md b/pages/case-studies-page.md
index 7393553..949557c 100644
--- a/pages/case-studies-page.md
+++ b/pages/case-studies-page.md
@@ -18,6 +18,8 @@ We chose **real world attacks on production ML systems** based on:
 - In this scenario, a security researcher gained initial access via a "Valid Account" that was created through a misconfiguration. No Adversarial ML techniques were used.
 - These kinds of attacks illustrate that any attempt to secure an ML system should be built on top of "traditional" good cybersecurity hygiene, such as locking down the system with least privileges, multi-factor authentication, and monitoring and auditing.
 
+![ClearviewAI](images/ClearviewAI.png)
+
 ### GPT-2 Model Replication
 **Summary of Incident:** OpenAI built GPT-2, a powerful natural language model, and, calling it "too dangerous to release," adopted a staged-release process to incrementally release the 1.5 billion parameter model. Before the 1.5B parameter model could be released by OpenAI, two ML researchers replicated the model and released it to the public. *Note this is a model replication attack: here, the attacker is able to recover a functionally equivalent model (but generally with lower fidelity), perhaps to do reconnaissance (see the ProofPoint attack). In model stealing, the fidelity of the stolen model is comparable to that of the original, victim model.*
@@ -33,6 +35,8 @@ We chose **real world attacks on production ML systems** based on:
 GPT-2's objective function,
 - The researchers then trained the modified Grover on the dataset they curated, using Grover's initial hyperparameters, which resulted in their replicated model.
 
+
+![OpenAI](images/OpenAI.png)
 ### ProofPoint Evasion
 **Summary of Incident:** CVE-2019-20634 describes how ML researchers evaded ProofPoint's email protection system by first building a copy-cat email protection ML model, and then using the insights to evade the live system.