Machine Learning: Linear Regression

Notes:

  • Hypothesis function:

$latex h_\theta(x)=\theta_0+\theta_1 x&s=1$
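
As a quick sketch (the notes don't name a language, so Python/NumPy is an assumption here), the hypothesis is just a line in x:

```python
import numpy as np

def hypothesis(theta0, theta1, x):
    # h_theta(x) = theta0 + theta1 * x, for a scalar or an array of inputs
    return theta0 + theta1 * np.asarray(x)

print(hypothesis(1.0, 2.0, 3.0))  # 7.0
```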

  • Cost function:

$latex J(\theta_0,\theta_1)=\frac{1}{2m}\sum\limits_{i=1}^m (h_\theta(x^{(i)}) - y^{(i)})^2&s=1$
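
A minimal sketch of the squared-error cost in the same assumed NumPy setup; the arrays X and y below are made-up example data, not anything from the notes:

```python
import numpy as np

def cost(theta0, theta1, X, y):
    # J(theta0, theta1) = (1 / 2m) * sum_i (h_theta(x_i) - y_i)^2
    m = len(y)
    predictions = theta0 + theta1 * X
    return np.sum((predictions - y) ** 2) / (2 * m)

X = np.array([1.0, 2.0, 3.0])
y = np.array([3.0, 5.0, 7.0])   # generated by y = 1 + 2x, so the cost at (1, 2) is 0
print(cost(1.0, 2.0, X, y))     # 0.0
```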

  • Gradient descent for linear regression:

repeat until convergence $latex \{&s=1$
$latex \theta_0:=\theta_0-\alpha\frac{1}{m}\sum\limits_{i=1}^m (h_\theta(x^{(i)}) - y^{(i)})&s=1$
$latex \theta_1:=\theta_1-\alpha\frac{1}{m}\sum\limits_{i=1}^m (h_\theta(x^{(i)}) - y^{(i)})x^{(i)}&s=1$
$latex \}&s=1$
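
Putting the two updates into a loop gives batch gradient descent; both gradients are computed before either parameter changes, so the updates are simultaneous. The sketch below assumes NumPy again, and the learning rate, iteration count, and toy data are illustrative values, not from the notes:

```python
import numpy as np

def gradient_descent(X, y, alpha=0.01, iterations=1000):
    # batch gradient descent for h_theta(x) = theta0 + theta1 * x
    m = len(y)
    theta0, theta1 = 0.0, 0.0
    for _ in range(iterations):
        error = (theta0 + theta1 * X) - y     # h_theta(x^(i)) - y^(i)
        grad0 = np.sum(error) / m             # partial derivative w.r.t. theta0
        grad1 = np.sum(error * X) / m         # partial derivative w.r.t. theta1
        # simultaneous update: both gradients above are computed from the old thetas
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1

# toy data generated by y = 1 + 2x; the loop should recover theta0 ≈ 1, theta1 ≈ 2
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 1.0 + 2.0 * X
print(gradient_descent(X, y, alpha=0.05, iterations=5000))
```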

 

