Python Forum

How to build linear regression by implementing Gradient Descent using only linear algebra
I'm studying this dataset: https://archive.ics.uci.edu/ml/datasets/...ower+Plant
These are the features (I put a rough loading sketch right after this list):
- AT: Ambient Temperature
- V: Exhaust Vacuum
- AP: Ambient Pressure
- RH: Relative Humidity
- PE: Energy Output (the label to predict)
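Here is roughly how I load the features and the label into NumPy arrays (the file path below is just a placeholder for whatever the UCI archive contains):

import numpy as np
import pandas as pd

# Placeholder path: replace with the actual file extracted from the UCI download
data = pd.read_excel("CCPP/Folds5x2_pp.xlsx")

X_raw = data[["AT", "V", "AP", "RH"]].to_numpy()   # feature columns, shape (m, n)
y = data[["PE"]].to_numpy()                        # label column, shape (m, 1)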

We have the following parameters (I sketched how I understand the cost and its gradient right after this list):
- m: number of observations in the training set
- n: number of features (not counting the offset term)
- w: the vector of weights of the model (w0, w1, …, wn)
- x: an observation vector (features plus the offset x0 = 1). Dimension: (n+1, 1)
- X: matrix of observations. Dimension: (m, n+1)
- y: vector of labels (‘answers’). Dimension: (m, 1)
- cost: the cost function J
- delta: the stop condition
- iterations: max number of iterations of the gradient descent (default value = 1000)
- alpha: learning rate of the gradient descent (default value = 0.03)
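To make sure I have the math right, here is a minimal sketch of the cost J and its gradient in NumPy terms (assuming a mean-squared-error cost and that X already contains the offset column x0 = 1):

import numpy as np

def cost(X, y, w):
    # J(w) = (1 / (2m)) * sum((X @ w - y) ** 2)   (mean squared error)
    m = X.shape[0]
    residual = X @ w - y                   # shape (m, 1)
    return (residual ** 2).sum() / (2 * m)

def gradient(X, y, w):
    # dJ/dw = (1 / m) * X^T (X w - y), shape (n+1, 1)
    m = X.shape[0]
    return (X.T @ (X @ w - y)) / m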
So I initialize all the vectors first. However, how do I fill these vectors with values? And how do I do matrix multiplication and transposition in Python?

import numpy as np

w = np.array([])   # weight vector (still empty)
x = np.array([])   # single observation vector
X = np.array([])   # observation matrix
y = np.array([])   # label vector
I don't really know what to do next.
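From what I've read, NumPy already covers the linear algebra: X @ w (or np.dot(X, w)) is matrix multiplication and X.T is the transpose. This is my rough guess at what the gradient descent loop should look like using only those operations (the delta default below is just a guess) — is this the right direction?

import numpy as np

def gradient_descent(X, y, alpha=0.03, iterations=1000, delta=1e-6):
    # X: (m, n+1) observation matrix, first column all ones (x0 = 1)
    # y: (m, 1) label vector
    m, n_plus_1 = X.shape
    w = np.zeros((n_plus_1, 1))            # start with all weights at zero
    previous_cost = np.inf

    for _ in range(iterations):
        residual = X @ w - y               # predictions minus labels, shape (m, 1)
        grad = (X.T @ residual) / m        # gradient of J, shape (n+1, 1)
        w -= alpha * grad                  # gradient descent step
        current_cost = (residual ** 2).sum() / (2 * m)
        if abs(previous_cost - current_cost) < delta:   # stop condition
            break
        previous_cost = current_cost

    return w

The offset column can be added to the raw features with np.hstack([np.ones((m, 1)), X_raw]) so that X has the (m, n+1) shape described above.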