Update machine learning note

pe3zx 2018-01-28 15:57:30 +07:00
parent 20305c3571
commit 6c9ad03a64
3 changed files with 117 additions and 142 deletions


@@ -1,131 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Machine Learning by Standford University\n",
"\n",
"## Week 1\n",
"\n",
"### Introduction\n",
"\n",
"#### What is Machine Learning?\n",
"\n",
"- Definition of machine learning defined by many computer scientists:\n",
" - Arthur Samuel (1959): Machine learning is field of study that gives computers the ability to learn without being explicitly programmed.\n",
" - Tom Mitchell (1998): Well-posed learning problem: A computer program is said to *learn* from experience $E$ with respect to some task $T$ and some performance measure $P$, if its performance on $T$, as measured by $P$, improves with experience $E$.\n",
"- Types of machine learning algorithms:\n",
" - **Supervised learning**: teach the computer how to do something\n",
" - **Unsupervices learning**: let computer learn but itself\n",
" - Others:\n",
" - Reinforcement learning\n",
" - Recommender systems\n",
"\n",
"#### Supervised Learning\n",
"\n",
"- **Definition**: Give the computer a data set in which the right answer were given. Computer then resposible for producing *more* right answer from what we were given.\n",
"- Type of problems on supervised learning\n",
" - **Regression problem**: try to predict continuous (real) valued output e.g. house pricing.\n",
" - **Classification problem**: discrete valued output(s) e.g. probability of breast cancer (nalignant, benign) based on tumor size as attribute or feature. \n",
"\n",
"#### Unsupervised Learning\n",
"\n",
"- **Definition**: Data have the same labels or no labels. Let computer find the structure of data\n",
"- By: **clustering algorithm** and **non-clustering algorithm**\n",
"\n",
"### Model and Cost Function\n",
"\n",
"#### Model Representation\n",
"\n",
"- This training set will be used in the following section:\n",
"\n",
"| Size in feet^2 (x) \t| Price ($) in 1000's (y) \t|\n",
"|:------------------:\t|:-----------------------:\t|\n",
"| 2104 \t| 460 \t|\n",
"| 1416 \t| 232 \t|\n",
"| 1534 \t| 315 \t|\n",
"| 852 \t| 178 \t|\n",
"| ... \t| ... \t|\n",
"\n",
"- To represent the model, these are basic description of notation:\n",
" - $m$ = Number of training exmaples\n",
" - $x$'s = input variable/features\n",
" - $y$'s = output variable/\"target\" variable\n",
" - $(x, y)$ = one training example for corresponding $x$ and $y$\n",
" - $(x^i, y^i); i=1,...,m$ = training examples from row on table when $i$ is an index into the training set\n",
" - $X$ = space of input values, for example: $X = R$\n",
" - $Y$ = space of output values, for example: $Y = R$\n",
"- Supervised learning (on house pricing problem) is consists of\n",
" - Training set or data set $(x^i, y^i); i=1,...,m$\n",
" - Learning algorithm, to output $h$ or *hypothesis function*\n",
" - $h$ or *hypothesis function* takes input and try to output the estimated value of $y$, corresponding to $x$ or $h: X \\rightarrow Y$\n",
"- There are many ways to represent $h$ based on learning algorithm, for example, for house pricing problem, supervised, regression problem, the hypothesis can be described as \n",
"\n",
"$$h_\\theta(x) = \\theta_0 + \\theta_1x$$\n",
"\n",
"which is called *linear regression model with one variable* or *univariate linear regression*.\n",
"\n",
"#### Cost Function\n",
"\n",
"Cost function is the function that tell *accuracy* of hypothesis.\n",
"\n",
"According to the training set of house pricing problem below where $m = 47$\n",
"\n",
"| Size in feet^2 (x) \t| Price ($) in 1000's (y) \t|\n",
"|:------------------:\t|:-----------------------:\t|\n",
"| 2104 \t| 460 \t|\n",
"| 1416 \t| 232 \t|\n",
"| 1534 \t| 315 \t|\n",
"| 852 \t| 178 \t|\n",
"| ... \t| ... \t|\n",
"\n",
"The hypothesis of this linear regression problem can be notated as:\n",
"\n",
"$$h_\\theta(x) = \\theta_0 + \\theta_1x$$\n",
"\n",
"For house pricing linear regression problem, we need to choose $\\theta_0$ and $\\theta_1$ so that the hyopothesis $h_\\theta(x_i)$ (predicted value) is close to $y$ (actual value), or $h_\\theta(x_i) - y_i$ must be small. In this situation, **mean squared error (MSE)** or **mean squared division (MSD)** can be used to measure the average of the squares of the errors or deviations. The cost function of this problem can be described by the MSE as:\n",
"\n",
"$$J(\\theta_0, \\theta_1) = \\dfrac {1}{2m} \\displaystyle \\sum _{i=1}^m \\left ( \\hat{y}_{i}- y_{i} \\right)^2 = \\dfrac {1}{2m} \\displaystyle \\sum _{i=1}^m \\left (h_\\theta (x_{i}) - y_{i} \\right)^2$$\n",
"\n",
"#### Cost Function Intuition I\n",
"\n",
"To find the best hypothesis, best straight line from linear equation that can be used to predict an output, for house pricing problem, result from the cost function of best fit hypothesis must closer to zero or ideally zero.\n",
"\n",
"#### Cost Function Intuition II\n",
"\n",
"This section explains about contour plot which use to conviniently describe more complex hypothesis.\n",
"\n",
"![Example of hypothesis with contour plots to find the best hypothesis based on result of cost function](images/1.png)\n",
"\n",
"\n",
"### Parameter Learning\n",
"\n",
"#### Gradient Descent\n",
"\n",
"#### Gradient Descent Intuition"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

Binary file not shown.



@@ -138,17 +138,123 @@
"#### Gradient Descent for Linear Regression\n",
"\n",
"- Linear regression => convex function => bowl-shaped cost function => no local optimum (only one left)\n",
"- *Batch* gradient descennt: each step of gradient descent uses all the training examples"
"- *Batch* gradient descennt: each step of gradient descent uses all the training examples\n",
"\n",
"### Linear Algebra Review\n",
"\n",
"#### Matrices and Vector\n",
"\n",
"- Matrix: rectangular array of numbers\n",
"- Dimension of matrix: number of rows * number of columns\n",
"- $A_ij$ refers to the element in the $i_{th}$ row and $j_{th}$ column of matrix A.\n",
"- A vector with 'n' rows is referred to as an 'n'-dimensional vector. Only one column.\n",
"- $v_i$ refers to the element in the $i_{th}$ row of the vector.\n",
"- In general, all our vectors and matrices will be 1-indexed. Note that for some programming languages, the arrays are 0-indexed.\n",
"- Matrices are usually denoted by uppercase names while vectors are lowercase.\n",
"- \"Scalar\" means that an object is a single value, not a vector or matrix.\n",
"- $$ refers to the set of scalar real numbers.\n",
"- $^𝕟$ refers to the set of n-dimensional vectors of real numbers.\n",
"\n",
"#### Addition and Scalar Multiplication\n",
"\n",
"- Addition and substraction on matrix can be done by taking each $A_ij$ and $B_ij$ and add together, but $A$ and $B$ must be same diemnsion.\n",
"- Scalar multiplication and division can be done by taking the number and multiply each element on $A$ one at a time.\n",
"\n",
"#### Multi-vector Multiplication\n",
"\n",
"We map the column of the vector onto each row of the matrix, multiplying each element and summing the result.\n",
"\n",
"$$\\begin{bmatrix} a & b \\newline c & d \\newline e & f \\end{bmatrix} *\\begin{bmatrix} x \\newline y \\newline \\end{bmatrix} =\\begin{bmatrix} a*x + b*y \\newline c*x + d*y \\newline e*x + f*y\\end{bmatrix}$$\n",
"\n",
"The result is a vector. The number of columns of the matrix must equal the number of rows of the vector. An m x n matrix multiplied by an n x 1 vector results in an m x 1 vector.\n",
"\n",
"#### Matrix-matrix multiplication\n",
"\n",
"We multiply two matrices by breaking it into several vector multiplications and concatenating the result.\n",
"\n",
"$$\\begin{bmatrix} a & b \\newline c & d \\newline e & f \\end{bmatrix} *\\begin{bmatrix} w & x \\newline y & z \\newline \\end{bmatrix} =\\begin{bmatrix} a*w + b*y & a*x + b*z \\newline c*w + d*y & c*x + d*z \\newline e*w + f*y & e*x + f*z\\end{bmatrix}$$\n",
"\n",
"An m x n matrix multiplied by an n x o matrix results in an m x o matrix. In the above example, a 3 x 2 matrix times a 2 x 2 matrix resulted in a 3 x 2 matrix.\n",
"\n",
"To multiply two matrices, the number of columns of the first matrix must equal the number of rows of the second matrix.\n",
"\n",
"#### Matrix Multiplication Properties\n",
"\n",
"- Matrices are not commutative: $AB \\neq BA$\n",
"- Matrices are associative: $(AB)C = A(BC)$\n",
"\n",
"The **identity matrix**, when multiplied by any matrix of the same dimensions, results in the original matrix. It's just like multiplying numbers by 1. The identity matrix simply has 1's on the diagonal (upper left to lower right diagonal) and 0's elsewhere.\n",
"\n",
"$$\\begin{bmatrix} 1 & 0 & 0 \\newline 0 & 1 & 0 \\newline 0 & 0 & 1 \\newline \\end{bmatrix}$$\n",
"\n",
"When multiplying the identity matrix after some matrix (AI), the square identity matrix's dimension should match the other matrix's columns. When multiplying the identity matrix before some other matrix (IA), the square identity matrix's dimension should match the other matrix's rows.\n",
"\n",
"#### Inverse and Transpose\n",
"\n",
"The **inverse** of a matrix A is denoted $A^{-1}$. Multiplying by the inverse results in the identity matrix.\n",
"\n",
"A non square matrix does not have an inverse matrix. We can compute inverses of matrices in octave with the $pinv(A)$ function and in Matlab with the $inv(A)$ function. Matrices that don't have an inverse are *singular* or *degenerate*.\n",
"\n",
"The **transposition** of a matrix is like rotating the matrix 90° in clockwise direction and then reversing it. We can compute transposition of matrices in matlab with the transpose(A) function or A':\n",
"\n",
"$$A = \\begin{bmatrix} a & b \\newline c & d \\newline e & f \\end{bmatrix}; A^T = \\begin{bmatrix} a & c & e \\newline b & d & f \\newline \\end{bmatrix}$$\n",
"\n",
"In other words:\n",
"\n",
"$$A_{ij} = A^T_{ji}$$\n",
"\n",
"## Week 2\n",
"\n",
"### Multivariate Linear Regression\n",
"\n",
"#### Multiple Features\n",
"\n",
"Linear regression with multiple variables is also known as \"multivariate linear regression\".\n",
"\n",
"We now introduce notation for equations where we can have any number of input variables.\n",
"\n",
"$$\\begin{align*}x_j^{(i)} &= \\text{value of feature } j \\text{ in the }i^{th}\\text{ training example} \\newline x^{(i)}& = \\text{the input (features) of the }i^{th}\\text{ training example} \\newline m &= \\text{the number of training examples} \\newline n &= \\text{the number of features} \\end{align*}$$\n",
"\n",
"The multivariable form of the hypothesis function accommodating these multiple features is as follows:\n",
"\n",
"$$h_\\theta (x) = \\theta_0 + \\theta_1 x_1 + \\theta_2 x_2 + \\theta_3 x_3 + \\cdots + \\theta_n x_n$$\n",
"\n",
"In order to develop intuition about this function, we can think about $θ_0$ as the basic price of a house, $θ_1$ as the price per square meter, $θ_2$ as the price per floor, etc. $x_1$ will be the number of square meters in the house, $x_2$ the number of floors, etc.\n",
"\n",
"Using the definition of matrix multiplication, our multivariable hypothesis function can be concisely represented as:\n",
"\n",
"$$\\begin{align*}h_\\theta(x) =\\begin{bmatrix}\\theta_0 \\hspace{2em} \\theta_1 \\hspace{2em} ... \\hspace{2em} \\theta_n\\end{bmatrix}\\begin{bmatrix}x_0 \\newline x_1 \\newline \\vdots \\newline x_n\\end{bmatrix}= \\theta^T x\\end{align*}$$\n",
"\n",
"This is a vectorization of our hypothesis function for one training example; see the lessons on vectorization to learn more.\n",
"\n",
"Remark: Note that for convenience reasons in this course we assume $x(i)0=1 for (i∈1,…,m)$. This allows us to do matrix operations with theta and $x$. Hence making the two vectors '$θ$' and $x^(i)$ match each other element-wise (that is, have the same number of elements: $n+1$).]\n",
"\n",
"#### Gradient Descent for Multiple Variables\n",
"\n",
"The gradient descent equation itself is generally the same form; we just have to repeat it for our 'n' features:\n",
"\n",
"$$\\begin{align*} & \\text{repeat until convergence:} \\; \\lbrace \\newline \\; & \\theta_0 := \\theta_0 - \\alpha \\frac{1}{m} \\sum\\limits_{i=1}^{m} (h_\\theta(x^{(i)}) - y^{(i)}) \\cdot x_0^{(i)}\\newline \\; & \\theta_1 := \\theta_1 - \\alpha \\frac{1}{m} \\sum\\limits_{i=1}^{m} (h_\\theta(x^{(i)}) - y^{(i)}) \\cdot x_1^{(i)} \\newline \\; & \\theta_2 := \\theta_2 - \\alpha \\frac{1}{m} \\sum\\limits_{i=1}^{m} (h_\\theta(x^{(i)}) - y^{(i)}) \\cdot x_2^{(i)} \\newline & \\cdots \\newline \\rbrace \\end{align*}$$\n",
"\n",
"In other words:\n",
"\n",
"$$\\begin{align*}& \\text{repeat until convergence:} \\; \\lbrace \\newline \\; & \\theta_j := \\theta_j - \\alpha \\frac{1}{m} \\sum\\limits_{i=1}^{m} (h_\\theta(x^{(i)}) - y^{(i)}) \\cdot x_j^{(i)} \\; & \\text{for j := 0...n}\\newline \\rbrace\\end{align*}$$\n",
"\n",
"The following image compares gradient descent with one variable to gradient descent with multiple variables:\n",
"\n",
"![Gradien Descent for Multiple Variables](images/5.png)\n",
"\n",
"#### Gradient Descent in Practice I - Feature Scaling\n",
"\n",
"#### Gradient Descent in Practice II - Learning Rate\n",
"\n",
"#### Features and Polynomial Regression\n",
"\n",
"### Computing Parameters Analytically\n",
"\n",
"#### Normal Equation\n",
"\n",
"#### Normal Equation Noninvertibility"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -167,7 +273,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.3"
"version": "3.6.4"
}
},
"nbformat": 4,