diff --git a/.DS_Store b/.DS_Store
new file mode 100644
index 0000000..c604f6a
Binary files /dev/null and b/.DS_Store differ
diff --git a/Notes/ML_A_Probabilistic_Perspective_Murphy.md b/Notes/ML_A_Probabilistic_Perspective_Murphy.md
deleted file mode 100644
index 99688c3..0000000
--- a/Notes/ML_A_Probabilistic_Perspective_Murphy.md
+++ /dev/null
@@ -1,8 +0,0 @@
-# Intro
-
-* Predictive or Supervised: learn a mapping from inputs x to outputs y, given a labeled set of input-output pairs (the training set).
- - The training inputs x_i are called features, attributes, or covariates.
- - If y_i takes a value from a finite set, it is called categorical or nominal, and the problem is classification or pattern recognition. If y_i is a real-valued scalar, the problem is regression.
-
-* Descriptive or unsupervised learning: find patterns in the data (knowledge discovery).
-
diff --git a/Notes/.DS_Store b/Papers/.DS_Store
similarity index 96%
rename from Notes/.DS_Store
rename to Papers/.DS_Store
index 4f675b7..efe8c90 100644
Binary files a/Notes/.DS_Store and b/Papers/.DS_Store differ
diff --git a/Papers/Adversarial_examples_1607.02533v1.md b/Papers/Adversarial_examples_1607.02533v1.md
deleted file mode 100644
index 5364552..0000000
--- a/Papers/Adversarial_examples_1607.02533v1.md
+++ /dev/null
@@ -1,12 +0,0 @@
-# Adversarial Examples in the Physical World
-
-## Kurakin, Goodfellow, Bengio
-http://arxiv.org/pdf/1607.02533v1.pdf
-
-* An adversarial example is a sample of input data that has been modified
-very slightly in a way that is intended to cause a machine learning classifier
-to misclassify it.
-
-* Adversarial examples pose security concerns because they could be
-used to perform an attack on machine learning systems, even if the adversary has
-no access to the underlying model.
\ No newline at end of file
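
Note (not part of the diff above): the deleted adversarial-examples summary describes inputs perturbed very slightly so that a classifier misclassifies them. A minimal sketch of that idea using the fast gradient sign method is shown below, assuming PyTorch; `model`, `image`, `label`, and `fgsm_example` are hypothetical placeholders for illustration, not names from this repository or the paper's code.

```python
# Illustrative FGSM sketch only; `model`, `image`, and `label` are
# hypothetical placeholders (a classifier, a batched input tensor in
# [0, 1], and its true class index), not objects from this repository.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.01):
    """Return a slightly perturbed copy of `image` intended to be misclassified."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # loss on the clean input
    loss.backward()                              # gradient of the loss w.r.t. the input
    # Take one small step in the direction that increases the loss,
    # then keep pixel values in the valid [0, 1] range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```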