added link

Keith Manville 2020-11-10 12:53:52 -05:00
parent be5210bf40
commit 56f2b77357


@ -7,6 +7,7 @@
- [Microsoft - Azure Service - Evasion](/pages/case-studies-page.md#microsoft---azure-service)
- [Microsoft Edge AI - Evasion](/pages/case-studies-page.md#microsoft---edge-ai)
- [MITRE - Physical Adversarial Attack on Face Identification](/pages/case-studies-page.md#mitre---physical-adversarial-attack-on-face-identification)
- [Imitating and Attacking Machine Translation Systems - Google, Bing, and Systran](/pages/case-studies-page.md#imitating-and-attacking-machine-translation-systems---google-bing-and-systran)
Attacks on machine learning (ML) systems are being developed and released with increasing regularity. Historically, attacks against ML systems have been performed in controlled academic settings, but as these case studies demonstrate, attacks are now being seen in the wild. In production settings, ML systems are trained on personally identifiable information (PII), trusted to make critical decisions with little oversight, and have little to no logging and alerting attached to their use. The case studies were selected because of their impact on production ML systems, and each demonstrates one of the following characteristics.