Table of Contents

  1. Adversarial ML 101
  2. Adversarial ML Threat Matrix
  3. Case Studies Page
  4. Contributors
  5. Feedback and Contact Information

The goal of this project is to position attacks on ML systems in an ATT&CK-style framework so that security analysts can orient themselves to these new and emerging threats.

Contributors

Want to get involved? See Feedback and Contact Information

Organization Contributors

  • Microsoft: Ram Shankar Siva Kumar, Hyrum Anderson, Will Pearce, Suzy Shapperle, Blake Strom, Madeline Carmichael, Matt Swann, Mark Russinovich, Nick Beede, Kathy Vu, Andi Comissioneru, Sharon Xia, Mario Goertzel, Jeffrey Snover, Derek Adam, Deepak Manohar, Bhairav Mehta, Peter Waxman, Abhishek Gupta, Ann Johnson
  • MITRE: Mikel D. Rodriguez, Christina E. Liaghati, Keith R. Manville, Michael R. Krumdick
  • Bosch: Manojkumar Parmar
  • IBM: Pin-Yu Chen
  • NVIDIA: David Reber Jr., Keith Kozo, Christopher Cottrell, Daniel Rohrer
  • Airbus: Adam Wedgbury
  • Deep Instinct: Nadav Maman
  • TwoSix: David Slater
  • University of Toronto: Adelin Travers, Jonas Guan, Nicolas Papernot
  • Cardiff University: Pete Burnap
  • Software Engineering Institute/Carnegie Mellon University: Nathan M. VanHoudnos
  • Berryville Institute of Machine Learning: Gary McGraw, Harold Figueroa, Victor Shepardson, Richie Bonett

Feedback and Contact Information

The Adversarial ML Threat Matrix is a first-cut attempt at collating a knowledge base of how ML systems can be attacked. We need your help to make it holistic and fill in the gaps!

Please submit a Pull Request with suggested changes! We are excited to make this system better with you!

Join our Adversarial ML Threat Matrix Google Group

  • For discussions around the Adversarial ML Threat Matrix, we invite everyone to join our Google Group here.
  • If you want to access this forum using your corporate email (as opposed to your Gmail address):
    • Open your browser in Incognito mode.
    • Sign up with your corporate email and complete the captcha. You may see an error at the end; you can safely ignore it.
    • Note that emails from Google Groups often land in the "Other"/"Spam" folder, so you may want to create a rule to route them to your inbox instead.

Want to work with us on the next iteration of the framework?

  • We are partnering with DEF CON's AI Village to open up the framework to all community members, gather feedback, and make it better. We currently plan to hold this event circa Jan/Feb 2021.
    • Please register here for the workshop for a more hands-on feedback session.
