added Cylance case study

keithmanville 2020-12-03 14:44:54 -05:00 committed by GitHub
parent 2868f011ad
commit 2a9a4494c3
3 changed files with 22 additions and 1 deletion

BIN  images/cylance.png  (new binary file, 81 KiB)
pages/case-studies-page.md

@ -9,6 +9,7 @@
- [MITRE - Physical Adversarial Attack on Face Identification](/pages/case-studies-page.md#mitre---physical-adversarial-attack-on-face-identification)
- [Attack on Machine Translation Service - Google Translate, Bing Translator, and Systran Translate](/pages/case-studies-page.md#attack-on-machine-translation-service---google-translate-bing-translator-and-systran-translate)
- [VirusTotal Poisoning](/pages/case-studies-page.md#virustotal-poisoning)
- [Bypassing Cylance's AI Malware Detection](/pages/case-studies-page.md#bypassing-cylances-ai-malware-detection)
Attacks on machine learning (ML) systems are being developed and released with increasing regularity. Historically, attacks against ML systems have been performed in controlled academic settings, but as these case studies demonstrate, attacks are now being seen in the wild. In production settings, ML systems are trained on personally identifiable information (PII), trusted to make critical decisions with little oversight, and often have little to no logging and alerting attached to their use. These case studies were selected because of their impact on production ML systems, and each demonstrates one of the following characteristics.
@ -208,6 +209,25 @@ Machine translation services (such as Google Translate, Bing Translator, and Sys
**Source:**
- McAfee Advanced Threat Research
----
## Bypassing Cylance's AI Malware Detection
**Summary of Incident:** Researchers at Skylight Cyber created a universal bypass string that, when appended to a malicious file, evades detection by Cylance's AI-based malware detector.
**Mapping to Adversarial Threat Matrix:**
- The researchers read publicly available information and enabled verbose logging to understand the inner workings of the ML model, particularly around reputation scoring.
- The researchers reverse-engineered the ML model to determine which attributes conferred what level of positive or negative reputation. Along the way, they discovered a secondary model that serves as an override for the first: positive assessments from the secondary model override the decision of the core ML model.
- Using this knowledge, the researchers fused attributes of known-good files with malware. Because the secondary model overrides the primary, they were able to effectively bypass the ML model (a minimal illustrative sketch of this override pattern follows the diagram below).
<img src="/images/cylance.png" alt="Cylance" height="150"/>
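The following sketch makes the override pattern concrete. It is a minimal, hypothetical illustration: the function names, feature strings, and thresholds below are invented for this example and do not reflect Cylance's actual model, features, or scoring. It shows only the decision structure described above, in which a primary classifier flags the file but a satisfied secondary reputation check overrides that verdict, so appending strings the reputation check trusts flips the final decision.

```python
# Hypothetical illustration of the reported bypass pattern -- NOT Cylance's code.

def primary_model_score(file_bytes: bytes) -> float:
    """Toy stand-in for the core ML classifier: higher score = more suspicious."""
    suspicious_markers = [b"CreateRemoteThread", b"VirtualAllocEx"]
    hits = sum(file_bytes.count(m) for m in suspicious_markers)
    return min(1.0, 0.25 * hits)

def positive_reputation(file_bytes: bytes) -> bool:
    """Toy stand-in for the secondary reputation model: trusts files that
    contain enough strings associated with known-good software."""
    trusted_strings = [b"TrustedGameEngine", b"Copyright (c) GoodGame Studios"]
    hits = sum(1 for s in trusted_strings if s in file_bytes)
    return hits >= 2

def verdict(file_bytes: bytes) -> str:
    # The flaw the researchers exploited: a positive reputation
    # assessment overrides the primary model's decision.
    if positive_reputation(file_bytes):
        return "allow"
    return "block" if primary_model_score(file_bytes) >= 0.5 else "allow"

malicious = b"...payload..." + b"VirtualAllocEx" * 4   # flagged on its own
benign_strings = b"TrustedGameEngine Copyright (c) GoodGame Studios"

print(verdict(malicious))                    # "block"
print(verdict(malicious + benign_strings))   # "allow" -- appended strings trip the override
```

The exact attributes and files the researchers used are described in the Skylight write-up linked below; the sketch mirrors only the decision structure they report.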
**Reported by:**
Research and work by Adi Ashkenazy, Shahar Zini, and the Skylight Cyber team. Reported to us by Ken Luu (@devianz_).
**Source:**
- https://skylightcyber.com/2019/07/18/cylance-i-kill-you/
----
# Contributing New Case Studies

README.md

@ -37,7 +37,7 @@ To see the Matrix in action, we recommend seeing the curated case studies
- [MITRE - Physical Adversarial Attack on Face Identification](/pages/case-studies-page.md#mitre---physical-adversarial-attack-on-face-identification)
- [Attack on Machine Translation Service - Google Translate, Bing Translator, and Systran Translate](/pages/case-studies-page.md#attack-on-machine-translation-service---google-translate-bing-translator-and-systran-translate)
- [VirusTotal Poisoning](/pages/case-studies-page.md#virustotal-poisoning)
- [Bypassing Cylance's AI Malware Detection](/pages/case-studies-page.md#bypassing-cylances-ai-malware-detection)
![Adversarial ML Threat Matrix](images/AdvMLThreatMatrix.jpg)
@ -64,6 +64,7 @@ To see the Matrix in action, we recommend seeing the curated case studies
| Berryville Institute of Machine Learning | Gary McGraw, Harold Figueroa, Victor Shepardson, Richie Bonett|
| Citadel AI | Kenny Song |
| McAfee | Christiaan Beek |
| Unaffiliated | Ken Luu |
## Feedback and Getting Involved