Machine Learning: Multivariate Linear Regression

More notes.

m = number of training examples

n = number of features

x^{(i)} = input (features) of the i^{th} training example

x_j^{(i)} = value of feature j in the i^{th} training example

  • Hypothesis function:

h_\theta(x)=\theta_0+\theta_1x_1+\theta_2x_2+...+\theta_nx_n

(using the convention x_0 = 1, so that h_\theta(x) = \theta^T x)
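With x_0 = 1 prepended, the hypothesis for every training example can be computed in one matrix product. A minimal sketch with made-up data (m = 3 examples, n = 2 features):

```python
import numpy as np

# Hypothetical training inputs; a column of ones is included so x_0 = 1.
X = np.array([[1.0, 2104.0, 3.0],
              [1.0, 1600.0, 3.0],
              [1.0, 2400.0, 4.0]])
theta = np.array([1.0, 0.5, 2.0])  # theta_0, theta_1, theta_2 (made up)

# Vectorized hypothesis: h_theta(x^(i)) = theta^T x^(i) for every row at once.
h = X @ theta
```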

  • Gradient descent:

repeat until convergence \{
\theta_j:=\theta_j-\alpha\frac{1}{m}\sum\limits_{i=1}^m (h_\theta(x^{(i)}) - y^{(i)})x_j^{(i)}
\}

(simultaneously update \theta_j for j=0,1,…,n)
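The update rule above can be sketched in vectorized form; the data, learning rate, and iteration count below are made-up examples, not values from the notes:

```python
import numpy as np

def gradient_descent(X, y, alpha=0.1, iters=1000):
    """Batch gradient descent for linear regression (minimal sketch)."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        error = X @ theta - y               # h_theta(x^(i)) - y^(i) for all i
        theta -= alpha * (X.T @ error) / m  # simultaneous update of every theta_j
    return theta

# Toy data with x_0 = 1 already prepended; y = 2 + 3*x_1 exactly.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([2.0, 5.0, 8.0, 11.0])
theta = gradient_descent(X, y)
```

Because `X.T @ error` computes all partial derivatives at once, every theta_j is updated simultaneously, matching the note in parentheses above.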

  • Feature Scaling: get every feature into a similar range (roughly -1 \le x_j \le 1) so gradient descent converges faster.
  • Mean Normalization: x_j := \frac{x_j-\mu_j}{s_j}, where \mu_j is the mean of feature j and s_j its range (or standard deviation).
  • Normal Equation: solves for \theta analytically, with no iterations and no learning rate:

\theta=(X^TX)^{-1}X^Ty
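Mean normalization and the normal equation can be sketched together; the data below is made up, and the standard deviation is used as s_j (one common choice):

```python
import numpy as np

def mean_normalize(X):
    """x_j := (x_j - mu_j) / s_j, with s_j = standard deviation of feature j."""
    mu = X.mean(axis=0)
    s = X.std(axis=0)
    return (X - mu) / s, mu, s

def normal_equation(X, y):
    # theta = (X^T X)^{-1} X^T y; solving the linear system is
    # numerically safer than explicitly inverting X^T X.
    return np.linalg.solve(X.T @ X, X.T @ y)

# Made-up training set: two raw features, then x_0 = 1 prepended after scaling.
raw = np.array([[2104.0, 3.0], [1600.0, 3.0], [2400.0, 4.0], [1416.0, 2.0]])
y = np.array([400.0, 330.0, 369.0, 232.0])

Xs, mu, s = mean_normalize(raw)
X = np.hstack([np.ones((Xs.shape[0], 1)), Xs])
theta = normal_equation(X, y)
```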

  • Gradient Descent vs Normal Equation

Gradient descent requires choosing \alpha and running many iterations, but it works well even when n is large. The normal equation needs no \alpha and no iterations, but computing (X^TX)^{-1} costs roughly O(n^3), so it becomes slow when n is very large.
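On a small problem the two methods should agree. A quick check with made-up data:

```python
import numpy as np

# Same toy training set for both methods (x_0 = 1 already prepended).
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 3.0, 7.0, 9.0])

# Normal equation: theta = (X^T X)^{-1} X^T y
theta_ne = np.linalg.solve(X.T @ X, X.T @ y)

# Batch gradient descent with the update rule from the notes
theta_gd = np.zeros(2)
m = len(y)
for _ in range(5000):
    theta_gd -= 0.1 * (X.T @ (X @ theta_gd - y)) / m
```

After enough iterations `theta_gd` matches `theta_ne` to several decimal places, which is the point of the comparison: both minimize the same cost, they just get there differently.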
