From d90baac3f619f5a12f932b6204c193c8f10de344 Mon Sep 17 00:00:00 2001
From: ramtherunner <34756719+ramtherunner@users.noreply.github.com>
Date: Fri, 16 Oct 2020 00:26:43 -0700
Subject: [PATCH] Update adversarial-ml-101.md

---
 pages/adversarial-ml-101.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pages/adversarial-ml-101.md b/pages/adversarial-ml-101.md
index 78f7e6e..d69ed0c 100644
--- a/pages/adversarial-ml-101.md
+++ b/pages/adversarial-ml-101.md
@@ -5,7 +5,7 @@ Informally, Adversarial ML is "subverting machine learning systems for fun and p
 Consider a typical ML pipeline shown below that is gated behind an API, wherein the only way to use the model is to send a query and observe an response. In this example, we assume a blackbox setting: the attacker does **NOT** have direct access to the training data, no knowledge of the algorithm used and no source code of the model. The attacker only queries the model and observes the response.
 
-![alt text](images/AdvML101)
+![Adversarial ML 101](/images/AdvML101.PNG)
 
 Here are some of the adversarial ML attacks that an adversary can perform on this system:
 
 | Attack | Overview |
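
For context, here is a minimal sketch of the black-box, query-only access that the patched paragraph describes: the attacker can only send inputs to a gated API and observe the model's responses. The endpoint URL, request schema, and response fields below are hypothetical illustrations, not taken from the repository.

```python
# Sketch of the black-box threat model: no access to training data,
# algorithm, or source code; only query -> response.
# The API URL and JSON shapes are assumptions for illustration.
import requests

API_URL = "https://example.com/api/v1/predict"  # hypothetical gated ML API


def query_model(sample: list[float]) -> dict:
    """Send one input to the model API and return its response.

    This function is the attacker's entire capability in the
    black-box setting: one request in, one response out.
    """
    resp = requests.post(API_URL, json={"input": sample}, timeout=10)
    resp.raise_for_status()
    return resp.json()  # e.g. {"label": "cat", "score": 0.93}


if __name__ == "__main__":
    # Probe the model with a candidate input and observe the response --
    # the starting point for the attacks listed in the table.
    print(query_model([0.1, 0.2, 0.3]))
```

Every attack in the table that follows is built on repeated calls to an interface like this, which is why rate limiting and query auditing are common first-line defenses for gated models.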