Machine Learning: Multivariate Linear Regression

More notes..

$latex n&s=1$ = number of features

$latex x^{(i)}&s=1$ = input (features) of the $latex i^{th}&s=1$ training example

$latex x_j^{(i)}&s=1$ = value of feature $latex j&s=1$ in the $latex i^{th}&s=1$ training example

  • Hypothesis function:

$latex h_\theta(x)=\theta_0+\theta_1x_1+\theta_2x_2+\dots+\theta_nx_n&s=1$
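The hypothesis above is just a dot product once we prepend $latex x_0=1&s=1$ to the feature vector. A minimal sketch in NumPy, with made-up values for $latex \theta&s=1$ and $latex x&s=1$:

```python
import numpy as np

# Vectorized hypothesis h_theta(x) = theta^T x.
# theta and x below are illustrative values, not from the notes.
theta = np.array([1.0, 2.0, 3.0])   # [theta_0, theta_1, theta_2]
x = np.array([0.5, -1.0])           # raw features x_1, x_2

x_aug = np.concatenate(([1.0], x))  # prepend x_0 = 1 for the intercept
h = theta @ x_aug                   # 1*1 + 2*0.5 + 3*(-1) = -1.0
```

For a whole training set, stacking the augmented rows into a matrix $latex X&s=1$ gives all predictions at once as `X @ theta`.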

  • Gradient descent:

repeat until convergence $latex \{&s=1$
$latex \theta_j:=\theta_j-\alpha\frac{1}{m}\sum\limits_{i=1}^m (h_\theta(x^{(i)}) - y^{(i)})x_j^{(i)}&s=1$
$latex \}&s=1$

(simultaneously update $latex \theta_j&s=1$ for $latex j=0,1,\dots,n&s=1$)
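The update rule above can be sketched as batch gradient descent in NumPy. The simultaneous update falls out naturally from the vectorized form. The learning rate, iteration count, and toy data are illustrative choices, not values from the notes:

```python
import numpy as np

def gradient_descent(X, y, alpha=0.1, iters=1000):
    """Repeat the update theta_j := theta_j - alpha/m * sum(...) until done.

    X must already contain a leading column of ones (x_0 = 1)."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        # Gradient of the cost; updates every theta_j simultaneously.
        grad = (X.T @ (X @ theta - y)) / m
        theta = theta - alpha * grad
    return theta

# Hypothetical toy data generated from y = 1 + 2*x_1.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])
theta = gradient_descent(X, y)   # converges near [1.0, 2.0]
```

A fixed iteration count stands in for a real convergence check (e.g. stopping when the cost decreases by less than some tolerance).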

  • Feature Scaling
  • Mean Normalization
  • Normal Equation

$latex \theta=(X^TX)^{-1}X^Ty&s=1$
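The three bullets above can be tied together in one sketch: mean-normalize the raw features (subtract the mean, divide by the range), then solve the normal equation directly. The housing-style numbers are made up for illustration:

```python
import numpy as np

# Illustrative raw features (e.g. size, number of bedrooms) and targets.
X_raw = np.array([[2104.0, 3.0],
                  [1600.0, 3.0],
                  [2400.0, 4.0],
                  [1416.0, 2.0]])
y = np.array([400.0, 330.0, 369.0, 232.0])

# Mean normalization: (x - mean) / range, per feature column.
mu = X_raw.mean(axis=0)
rng = X_raw.max(axis=0) - X_raw.min(axis=0)
X_norm = (X_raw - mu) / rng

# Normal equation theta = (X^T X)^{-1} X^T y, with an intercept column.
# Solving the linear system is preferred over forming the inverse explicitly.
X = np.column_stack([np.ones(len(X_norm)), X_norm])
theta = np.linalg.solve(X.T @ X, X.T @ y)
```

Note that the normal equation needs no feature scaling to work; scaling mainly matters for gradient descent, where very different feature ranges slow convergence. It is shown here only to demonstrate both techniques in one place.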

  • Gradient Descent vs Normal Equation
