diff --git a/pages/case-studies-page.md b/pages/case-studies-page.md
index c0bfd97..70a6a67 100644
--- a/pages/case-studies-page.md
+++ b/pages/case-studies-page.md
@@ -7,6 +7,7 @@
  - [Microsoft - Azure Service - Evasion](/pages/case-studies-page.md#microsoft---azure-service)
  - [Microsoft Edge AI - Evasion](/pages/case-studies-page.md#microsoft---edge-ai)
  - [MITRE - Physical Adversarial Attack on Face Identification](/pages/case-studies-page.md#mitre---physical-adversarial-attack-on-face-identification)
+ - [Imitating and Attacking Machine Translation Systems - Google, Bing, and Systran](/pages/case-studies-page.md#imitating-and-attacking-machine-translation-systems---google-bing-and-systran)

 Attacks on machine learning (ML) systems are being developed and released with increasing regularity. Historically, attacks against ML systems have been performed in controlled academic settings, but as these case studies demonstrate, attacks are now being seen in the wild. In production settings, ML systems are trained on personally identifiable information (PII), trusted to make critical decisions with little oversight, and often have little to no logging and alerting attached to their use. These case studies were selected because of their impact on production ML systems, and each demonstrates one of the following characteristics.