LSTM Model accuracy caps and I can't improve it
I am trying to build a proof-of-concept LSTM model for forex prediction.

After lots of reading I came up with the following model (I believe it's called a stacked LSTM):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM

model = Sequential()
# Stacked LSTM: every layer returns the full sequence, so the next layer
# receives a 3-D (batch, timesteps, features) input.
model.add(LSTM(64, return_sequences=True, input_shape=(None, x_train.shape[2])))
model.add(LSTM(128, return_sequences=True))
model.add(LSTM(64, return_sequences=True))
# Final layer emits n_features values per timestep as the prediction.
model.add(LSTM(n_features, return_sequences=True))

model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train, y_train, epochs=100, batch_size=1, verbose=2, validation_data=(x_test, y_test))
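For reference, here is a minimal sketch of the same fit call with an EarlyStopping callback (assuming tensorflow.keras; the patience value is an arbitrary example):

from tensorflow.keras.callbacks import EarlyStopping

# Stop once validation loss has not improved for 10 epochs in a row,
# and roll back to the best weights seen so far.
early_stop = EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)
model.fit(x_train, y_train, epochs=100, batch_size=1, verbose=2,
          validation_data=(x_test, y_test), callbacks=[early_stop])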
Everything else I tried performed worse.

Loss stops improving around epoch 70, and after that further training has no effect. I use MinMaxScaler on the data:
    # MinMaxScaler is from sklearn.preprocessing; fit_transform combines
    # fitting the scaler and scaling every feature into (0, 1).
    self.scaler = MinMaxScaler(feature_range=(0, 1))
    self.raw = self.scaler.fit_transform(self.raw)
Without scaling, the variation in the predictions becomes so small that the predicted line looks like a straight horizontal line.
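Since the model trains on scaled values, its predictions also come out in the (0, 1) range; this is a minimal sketch of mapping them back to price units with the same scaler (assuming the final layer's n_features matches the columns the scaler was fitted on; variable names are illustrative):

# Predictions are 3-D (batch, timesteps, features) because of return_sequences=True
preds_scaled = model.predict(x_test)
# Take the last timestep and undo the (0, 1) scaling to get price units back
preds = self.scaler.inverse_transform(preds_scaled[:, -1, :])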

Is there anything I can do to improve the model? How do I choose the right number of LSTM layers and the hidden size for each of them? I tried adding Dropout layers, as several online resources suggested, but there was no improvement (see the sketch below).
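Roughly the kind of Dropout variant I mean (a sketch; the 0.2 rate is just an example):

from tensorflow.keras.layers import Dropout

model = Sequential()
model.add(LSTM(64, return_sequences=True, input_shape=(None, x_train.shape[2])))
model.add(Dropout(0.2))  # randomly zero 20% of activations during training
model.add(LSTM(128, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(64, return_sequences=True))
model.add(LSTM(n_features, return_sequences=True))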

If I need to provide other parts of the code, just let me know.
It may not be possible. The data you are training on will have random, unpredictable variation - otherwise everyone would be able to predict the price and there would be no market.
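One way to test that is to compare the model against a naive persistence forecast that simply predicts the previous value; if the LSTM cannot beat it, the series is behaving like a random walk. A minimal sketch (assuming y_test is the 3-D array of scaled targets from above; names are illustrative):

import numpy as np

# Persistence baseline: predict that the next value equals the current one,
# i.e. shift the target series by one step within each sequence.
naive_pred = y_test[:, :-1, :]   # value at step t ...
naive_true = y_test[:, 1:, :]    # ... used as the forecast for step t+1
naive_mse = np.mean((naive_true - naive_pred) ** 2)

model_mse = model.evaluate(x_test, y_test, verbose=0)
print(f"persistence MSE: {naive_mse:.6f}  model MSE: {model_mse:.6f}")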