Hi all,
I'm working my way through François Chollet's 'Deep Learning with Python', which teaches deep learning through the Keras frontend for TensorFlow, using Jupyter Notebook. I'm running into a bit of an issue with one of the early practical examples: after successfully training a neural network on some data (a dataset of IMDB reviews), the Jupyter kernel crashes when I try to plot the resulting accuracy and loss with matplotlib.
I have successfully plotted graphs in other, separate code, and the neural network trains with no problem on its own, but when I do one after the other it seems to overload something and crash. This may be a processing issue with my laptop (a fairly decent MacBook Pro), but could somebody with experience run the code for themselves and see if it works? At least then I would know whether the issue is my hardware rather than the code itself.
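To be clear about what I mean by "separate code": a simple standalone plotting cell like the rough sketch below (made-up numbers, nothing to do with the book's example) runs fine in its own notebook, so matplotlib on its own seems to work for me.

import matplotlib.pyplot as plt

# placeholder values only, just to show the kind of plot that works on its own
epochs = range(1, 21)
dummy_loss = [1.0 / e for e in epochs]

plt.plot(epochs, dummy_loss, 'bo', label='Dummy Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()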
Thanks in advance if anyone can help. The code is below; sorry if it should be formatted or shared differently, I'm new to all this.
# setup
import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import imdb
from keras import models
from keras import layers
from keras import optimizers
from keras import losses
from keras import metrics

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

# check training data
train_data[0]

# check training labels
train_labels[0]

# test that maximum number of unique words is 10000
max([max(sequence) for sequence in train_data])

# read an original review for kicks
word_index = imdb.get_word_index()
reverse_word_index = dict(
    [(value, key) for (key, value) in word_index.items()])
decoded_review = ' '.join(
    [reverse_word_index.get(i - 3, '?') for i in train_data[950]])
decoded_review

# multi-hot encode the integer sequences into 10000-dimensional vectors
def vectorise_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results

x_train = vectorise_sequences(train_data)
x_test = vectorise_sequences(test_data)
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')

# view the transformed samples
x_train[0]

# view the labels
y_train[0]

# build and compile the model
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

model.compile(optimizer=optimizers.RMSprop(lr=0.001),
              loss=losses.binary_crossentropy,
              metrics=['acc'])

# set aside a validation set
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]

history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=20,
                    batch_size=512,
                    validation_data=(x_val, y_val))

# plot training and validation loss
history_dict = history.history
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']

# one point per recorded epoch (this was len('acc') before, which is always 3
# and makes the x and y lengths disagree when plotting)
epochs = range(1, len(loss_values) + 1)

plt.plot(epochs, loss_values, 'bo', label='Training Loss')
plt.plot(epochs, val_loss_values, 'b', label='Validation Loss')
plt.title('Training & Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
Thanks buran for the tags prompt, will make sure to do this properly from now on.