Add week 1 machine learning course note

This commit is contained in:
pe3zx 2018-01-17 13:44:13 +07:00
parent 6278ccb444
commit 7ab36a12aa
6 changed files with 306 additions and 0 deletions

Binary file not shown.


Binary file not shown.


Binary file not shown.


Binary file not shown.


View File

@ -0,0 +1,175 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Machine Learning by Standford University\n",
"\n",
"## Week 1\n",
"\n",
"### Introduction\n",
"\n",
"#### What is Machine Learning?\n",
"\n",
"- Definition of machine learning defined by many computer scientists:\n",
" - Arthur Samuel (1959): Machine learning is field of study that gives computers the ability to learn without being explicitly programmed.\n",
" - Tom Mitchell (1998): Well-posed learning problem: A computer program is said to *learn* from experience $E$ with respect to some task $T$ and some performance measure $P$, if its performance on $T$, as measured by $P$, improves with experience $E$.\n",
"- Types of machine learning algorithms:\n",
" - **Supervised learning**: teach the computer how to do something\n",
" - **Unsupervices learning**: let computer learn but itself\n",
" - Others:\n",
" - Reinforcement learning\n",
" - Recommender systems\n",
"\n",
"#### Supervised Learning\n",
"\n",
"- **Definition**: Give the computer a data set in which the right answer were given. Computer then resposible for producing *more* right answer from what we were given.\n",
"- Type of problems on supervised learning\n",
" - **Regression problem**: try to predict continuous (real) valued output e.g. house pricing.\n",
" - **Classification problem**: discrete valued output(s) e.g. probability of breast cancer (nalignant, benign) based on tumor size as attribute or feature. \n",
"\n",
"#### Unsupervised Learning\n",
"\n",
"- **Definition**: Data have the same labels or no labels. Let computer find the structure of data\n",
"- By: **clustering algorithm** and **non-clustering algorithm**\n",
"\n",
"### Model and Cost Function\n",
"\n",
"#### Model Representation\n",
"\n",
"- This training set will be used in the following section:\n",
"\n",
"| Size in feet^2 (x) \t| Price ($) in 1000's (y) \t|\n",
"|:------------------:\t|:-----------------------:\t|\n",
"| 2104 \t| 460 \t|\n",
"| 1416 \t| 232 \t|\n",
"| 1534 \t| 315 \t|\n",
"| 852 \t| 178 \t|\n",
"| ... \t| ... \t|\n",
"\n",
"- To represent the model, these are basic description of notation:\n",
" - $m$ = Number of training exmaples\n",
" - $x$'s = input variable/features\n",
" - $y$'s = output variable/\"target\" variable\n",
" - $(x, y)$ = one training example for corresponding $x$ and $y$\n",
" - $(x^i, y^i); i=1,...,m$ = training examples from row on table when $i$ is an index into the training set\n",
" - $X$ = space of input values, for example: $X = R$\n",
" - $Y$ = space of output values, for example: $Y = R$\n",
"- Supervised learning (on house pricing problem) is consists of\n",
" - Training set or data set $(x^i, y^i); i=1,...,m$\n",
" - Learning algorithm, to output $h$ or *hypothesis function*\n",
" - $h$ or *hypothesis function* takes input and try to output the estimated value of $y$, corresponding to $x$ or $h: X \\rightarrow Y$\n",
"- There are many ways to represent $h$ based on learning algorithm, for example, for house pricing problem, supervised, regression problem, the hypothesis can be described as \n",
"\n",
"$$h_\\theta(x) = \\theta_0 + \\theta_1x$$\n",
"\n",
"which is called *linear regression model with one variable* or *univariate linear regression*.\n",
"\n",
"#### Cost Function\n",
"\n",
"Cost function is the function that tell *accuracy* of hypothesis.\n",
"\n",
"According to the training set of house pricing problem below where $m = 47$\n",
"\n",
"| Size in feet^2 (x) \t| Price ($) in 1000's (y) \t|\n",
"|:------------------:\t|:-----------------------:\t|\n",
"| 2104 \t| 460 \t|\n",
"| 1416 \t| 232 \t|\n",
"| 1534 \t| 315 \t|\n",
"| 852 \t| 178 \t|\n",
"| ... \t| ... \t|\n",
"\n",
"The hypothesis of this linear regression problem can be notated as:\n",
"\n",
"$$h_\\theta(x) = \\theta_0 + \\theta_1x$$\n",
"\n",
"For house pricing linear regression problem, we need to choose $\\theta_0$ and $\\theta_1$ so that the hyopothesis $h_\\theta(x_i)$ (predicted value) is close to $y$ (actual value), or $h_\\theta(x_i) - y_i$ must be small. In this situation, **mean squared error (MSE)** or **mean squared division (MSD)** can be used to measure the average of the squares of the errors or deviations. The cost function of this problem can be described by the MSE as:\n",
"\n",
"$$J(\\theta_0, \\theta_1) = \\dfrac {1}{2m} \\displaystyle \\sum _{i=1}^m \\left ( \\hat{y}_{i}- y_{i} \\right)^2 = \\dfrac {1}{2m} \\displaystyle \\sum _{i=1}^m \\left (h_\\theta (x_{i}) - y_{i} \\right)^2$$\n",
"\n",
"#### Cost Function Intuition I\n",
"\n",
"To find the best hypothesis, best straight line from linear equation that can be used to predict an output, for house pricing problem, result from the cost function of best fit hypothesis must closer to zero or ideally zero.\n",
"\n",
"#### Cost Function Intuition II\n",
"\n",
"This section explains about contour plot which use to conviniently describe more complex hypothesis.\n",
"\n",
"![Example of hypothesis with contour plots to find the best hypothesis based on result of cost function](images/1.png)\n",
"\n",
"\n",
"### Parameter Learning\n",
"\n",
"#### Gradient Descent\n",
"\n",
"![Gradient descent algorithm](images/2.png)\n",
"\n",
"Gradient descent is algorithm which can be used to minimize cost function $J$, and other type of problems. The basic concept of gradent descent algorithm is:\n",
"\n",
"- Start with some $\\theta_0$, $\\theta_1$\n",
"- Keep changing $\\theta_0$, $\\theta_1$ to reduce $J(\\theta_0, \\theta_1)$ until minimum\n",
"\n",
"The gradeint descent algorithm is:\n",
"\n",
"$$Repeat\\, until\\, convergence\\, for\\, (j=0\\, and\\, j=1)\\, \\{\\theta_j := \\theta_j - \\alpha \\frac{\\partial}{\\partial \\theta_j} J(\\theta_0, \\theta_1)\\}$$\n",
"\n",
"Unpacking algorithm:\n",
"\n",
"- $:=$ is assignment operator\n",
"- $=$ is truth assertion\n",
"- $\\alpha$ is learning rate, or simply *the big of step we take downhill with creating descent*\n",
"- $\\frac{\\partial}{\\partial \\theta_j} J(\\theta_0, \\theta_1)$ is derivative term\n",
"- $for\\, (j=0\\, and\\, j=1)$ is updater\n",
"- Assignment to $\\theta_j$ must be simaltaneously happened from $\\theta_0$ and $\\theta_1$ \n",
"\n",
"![Gradient descent: correct way](images/3.png)\n",
"\n",
"#### Gradient Descent Intuition\n",
"\n",
"The derivative term in gradient descent $\\frac{\\partial}{\\partial \\theta_j} J(\\theta_0, \\theta_1)$ is responsible to finding the slope of specfic point on $J$ until found the minimum point.\n",
"\n",
"![The derivative term explanation](images/4.png)\n",
"\n",
"The learning rate $\\alpha$ is responsible to define move rate until convergence. If $\\alpha$ is too small, gredient descent can be slow, but if $\\alpha$ is too large, gradient descent can overshoot the minimum. It may fail to converge, or even diverge.\n",
"\n",
"As we approach a local minimum, gradient descent will automatically take smaller steps. So, no need to decrease learning rate $\\alpha$ over time.\n",
"\n",
"#### Gradient Descent for Linear Regression\n",
"\n",
"- Linear regression => convex function => bowl-shaped cost function => no local optimum (only one left)\n",
"- *Batch* gradient descennt: each step of gradient descent uses all the training examples"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}