mirror of
https://github.com/autistic-symposium/tensorflow-for-deep-learning-py.git
synced 2025-05-10 10:45:04 -04:00
Move notes to evernote
This commit is contained in:
parent
43ec206144
commit
7c4d6e0e3b
4 changed files with 0 additions and 20 deletions
BIN
.DS_Store
vendored
Normal file
Binary file not shown.
@@ -1,8 +0,0 @@
# Intro
* Predictive or supervised learning: learn a mapping from inputs x to outputs y, given a labeled set of input-output pairs (the training set).
- The training inputs x_i are called features, attributes, or covariates.
- If y_i takes a value from a finite set, it is called categorical or nominal, and the problem is classification or pattern recognition. If y_i is a real-valued scalar, the problem is regression (a short sketch of both cases follows these notes).
* Descriptive or unsupervised learning: find patterns in the data (knowledge discovery).
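A minimal sketch of the two supervised cases above, assuming TensorFlow/Keras (the repo's framework); the random data, layer sizes, and three-class setup are illustrative assumptions, not from the notes:

```python
import numpy as np
import tensorflow as tf

# Toy labeled training set (x_i, y_i): 100 examples with 4 features each.
x = np.random.rand(100, 4).astype("float32")
y_class = np.random.randint(0, 3, size=(100,))   # y_i from a finite set {0,1,2} -> classification
y_reg = np.random.rand(100).astype("float32")    # y_i a real-valued scalar -> regression

# Classification: map x to a distribution over the 3 categories.
clf = tf.keras.Sequential([tf.keras.layers.Dense(3, activation="softmax")])
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
clf.fit(x, y_class, epochs=1, verbose=0)

# Regression: map x to a real-valued scalar.
reg = tf.keras.Sequential([tf.keras.layers.Dense(1)])
reg.compile(optimizer="adam", loss="mse")
reg.fit(x, y_reg, epochs=1, verbose=0)
```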
BIN
Notes/.DS_Store → Papers/.DS_Store
vendored
Binary file not shown.
@@ -1,12 +0,0 @@
# Adversarial Examples in the Physical World
## Kurakin, Goodfellow, Bengio
http://arxiv.org/pdf/1607.02533v1.pdf
* An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it.
* Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model.
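A hedged sketch of the fast gradient sign method the paper builds on, one way to produce the "very slight" modification described above; the trained classifier `model`, the [0, 1] input range, and the epsilon value are assumptions, not from the notes:

```python
import tensorflow as tf

def fgsm(model, x, y, epsilon=0.01):
    """Perturb x slightly in the direction that increases the classifier's loss."""
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.sparse_categorical_crossentropy(y, model(x))
    grad = tape.gradient(loss, x)
    # x_adv = x + epsilon * sign(grad_x J(x, y)); clipping keeps inputs valid.
    return tf.clip_by_value(x + epsilon * tf.sign(grad), 0.0, 1.0)
```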