From 7461a10876f1f746a60c5fafcf86425305fc5cf9 Mon Sep 17 00:00:00 2001
From: "Joshua D. Harguess"
Date: Thu, 15 Oct 2020 21:27:42 -0700
Subject: [PATCH] fixed links and added some edits to why section

---
 pages/why-adversarial-ml-threat-matrix.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/pages/why-adversarial-ml-threat-matrix.md b/pages/why-adversarial-ml-threat-matrix.md
index 9c40b3b..b44cd9b 100644
--- a/pages/why-adversarial-ml-threat-matrix.md
+++ b/pages/why-adversarial-ml-threat-matrix.md
@@ -1,7 +1,7 @@
 ## Why Adversarial ML Threat Matrix?
 
-1. In the last three years, major companies such as [Google](https://www.zdnet.com/article/googles-best-image-recognition-system-flummoxed-by-fakes/), [Amazon] (https://www.fastcompany.com/90240975/alexa-can-be-hacked-by-chirping-birds), [Microsoft](https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter), and [Tesla](https://spectrum.ieee.org/cars-that-think/transportation/self-driving/three-small-stickers-on-road-can-steer-tesla-autopilot-into-oncoming-lane), have had their ML systems tricked, evaded, or misled.
+1. In the last three years, major companies such as [Google](https://www.zdnet.com/article/googles-best-image-recognition-system-flummoxed-by-fakes/), [Amazon](https://www.fastcompany.com/90240975/alexa-can-be-hacked-by-chirping-birds), [Microsoft](https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter), and [Tesla](https://spectrum.ieee.org/cars-that-think/transportation/self-driving/three-small-stickers-on-road-can-steer-tesla-autopilot-into-oncoming-lane), have had their ML systems tricked, evaded, or misled.
 2. This trend is only set to rise: According to [Gartner report](https://www.gartner.com/doc/3939991). 30% of cyberattacks by 2022 will involve data poisoning, model theft or adversarial examples.
-3. However, industry is underprepared. In a [survey](https://arxiv.org/pdf/2002.05646.pdf) of 28 organizations spanning small as well as large organizations, 25 organizations did not know how to secure their ML systems.
+3. However, industry is underprepared. In a [survey](https://arxiv.org/pdf/2002.05646.pdf) of 28 organizations spanning small as well as large organizations, 25 of the 28 organizations did not know how to secure their ML systems.
 
 Unlike traditional cybersecurity vulnerabilities that are tied to specific software and hardware systems, adversarial ML vulnerabilities are enabled by inherent limitations underlying ML algorithms. As a result, data can now be weaponized in new ways requiring that we extend the way we model cyber adversary behavior, reflecting emerging threat vectors and the rapidly evolving adversarial machine learning attack lifecycle.