
 Regarding an index out of bounds error when implementing the backpropagation algorithm
#1
I am fairly new to Python. I have spent some time trying to implement this and wanted to ask the community for some advice.

I am receiving an index out of bounds error. I would like help addressing that, as well as any advice on writing this code more efficiently, because mine is very rough and I am not sure it is doing what I think it is doing.

I have attached the program I am trying and the output I got.

I have implemented a backpropagation neural network that uses the ReLU activation function. I am getting an error in it and, despite many attempts, have not found a solution.

Thanks in advance.

import numpy as np
import math

X = np.array(([65,   65,   77,  76,   80,   69,  78,  69,   84,  68,   68,   76,   71,   75,   65,   73,   75,   83,  68,  67,   73,   64,   71,   66,   75,   73,  65,   65,   70,   59,   70,   72,   83,  76,   77,  67],
              [195,  200,  188, 187,  204,  203, 183, 175,  176, 203,  206,  190,  180,  193,  199,  196,  203,  198, 202, 208,  192,  176,  179,  180,  183,  198, 199,  184,  195,  188,  177,  177,  167, 160,  170, 198],
              [19,   19,   19,  19,   20,   20,  20,  20,   19,  19,   19,   19,   19,   20,   20,   20,   20,   19,  19,  19,   19,   20,   20,   20,   20,   19,  19,   19,   19,   19,   20,   20,   20,  20,   19,  20],
              [223,  207,  175, 185,  180,  204, 194, 196,  216, 176,  220,  219,  204,  183,  221,  212,  194,  212, 129, 198,  226,  220,  185,  208,  199,  187, 213,  132,  157,  182,  220,  194,  181, 167,  187, 173]), dtype=float)
y = np.array(([10], [12], [15], [17]), dtype=float)
xPredicted = np.array(([19,   19,   19,  19,   20,   20,  20,  20,   19,  19,   19,   19,   19,   20,   20,   20,   20,   19,  19,  19,   19,   20,   20,   20,   20,   19,  19,   19,   19,   19,   20,   20,   20,  20,   19,  20]), dtype=float)

# scale units
X = X/np.amax(X, axis=0)
xPredicted = xPredicted/np.amax(xPredicted, axis=0)
y = y/20

class Neural_Network(object):
  def __init__(self):
    self.inputSize = 36
    self.outputSize = 1
    self.hiddenSize = 18
    self.W1 = np.random.randn(self.inputSize, self.hiddenSize)
    self.W2 = np.random.randn(self.hiddenSize, self.outputSize)

  def forward(self, X):

    self.z = np.dot(X, self.W1)
    self.z2 = self.relu(self.z)
    self.z3 = np.dot(self.z2, self.W2)
    o = self.relu(self.z3)
    return o



  def relu(self, s):

    for i in range(0, len(str(s))):
            for k in range(len(str(s[i]))):
                if s[i][k] > 0:
                    s[i][k] = 1
                else:
                    s[i][k] = 0
                    return s

  def reluPrime(self, s) :

     for i in range(0, len(str(s))):
        for k in range(0, len(str(s[i]))):
            if s[i][k] > 0:
                pass  # do nothing since it would be effectively replacing x with x
            else:
                s[i][k] = 0
                return s


  def backward(self, X, y, o):

    self.o_error = y - o
    self.o_delta = self.o_error*self.reluPrime(o)

    self.z2_error = self.o_delta.dot(self.W2.T)
    self.z2_delta = self.z2_error*self.reluPrime(self.z2)

    self.W1 += X.T.dot(self.z2_delta)
    self.W2 += self.z2.T.dot(self.o_delta)

  def train (self, X, y):
    o = self.forward(X)
    self.backward(X, y, o)

  def saveWeights(self):
    np.savetxt("w1.txt", self.W1, fmt="%s")
    np.savetxt("w2.txt", self.W2, fmt="%s")

  def predict(self) :
    print("Predicted data based on trained weights: ")
    print("Input (scaled): \n" + str(xPredicted))
    print("Output: \n" + str(self.forward(xPredicted)))


NN = Neural_Network()


for i in range(120):
  print("Input: \n" + str(X))
  print("Actual Output: \n" + str(y))
  print("Predicted Output: \n" + str(NN.forward(X)))
  print("Loss: \n" + str(np.mean(np.square(y - NN.forward(X)))))
  print("\n")
  NN.train(X, y)
  NN.predict()
  NN.saveWeights()
Output:

$python main.py
Input: 
[[ 0.29147982  0.31400966  0.40957447  0.40641711  0.39215686  0.33823529
   0.40206186  0.35204082  0.38888889  0.33497537  0.30909091  0.34703196
   0.34803922  0.38860104  0.29411765  0.34433962  0.36945813  0.39150943
   0.33663366  0.32211538  0.32300885  0.29090909  0.38378378  0.31730769
   0.37688442  0.36868687  0.30516432  0.35326087  0.35897436  0.31382979
   0.31818182  0.37113402  0.45856354  0.45508982  0.41176471  0.33838384]
 [ 0.87443946  0.96618357  1.          1.          1.          0.99509804
   0.94329897  0.89285714  0.81481481  1.          0.93636364  0.86757991
   0.88235294  1.          0.90045249  0.9245283   1.          0.93396226
   1.          1.          0.84955752  0.8         0.96756757  0.86538462
   0.91959799  1.          0.9342723   1.          1.          1.
   0.80454545  0.91237113  0.92265193  0.95808383  0.90909091  1.        ]
 [ 0.08520179  0.09178744  0.10106383  0.10160428  0.09803922  0.09803922
   0.10309278  0.10204082  0.08796296  0.09359606  0.08636364  0.08675799
   0.09313725  0.10362694  0.09049774  0.09433962  0.09852217  0.08962264
   0.09405941  0.09134615  0.0840708   0.09090909  0.10810811  0.09615385
   0.10050251  0.0959596   0.08920188  0.10326087  0.0974359   0.10106383
   0.09090909  0.10309278  0.11049724  0.11976048  0.10160428  0.1010101 ]
 [ 1.          1.          0.93085106  0.98930481  0.88235294  1.          1.
   1.          1.          0.86699507  1.          1.          1.
   0.94818653  1.          1.          0.95566502  1.          0.63861386
   0.95192308  1.          1.          1.          1.          1.
   0.94444444  1.          0.7173913   0.80512821  0.96808511  1.          1.
   1.          1.          1.          0.87373737]]
Actual Output: 
[[ 0.5 ]
 [ 0.6 ]
 [ 0.75]
 [ 0.85]]
Traceback (most recent call last):
  File "main.py", line 86, in <module>
    print("Predicted Output: \n" + str(NN.forward(X)))
  File "main.py", line 29, in forward
    o = self.relu(self.z3)
  File "main.py", line 38, in relu
    if s[i][k] > 0:
IndexError: index 1 is out of bounds for axis 0 with size 1
#2
Quote: "I am receiving an index out of bounds error."
Please post the error traceback in its entirety.
It contains valuable information.
#3
Can you please explain what exactly that means?
#4
The complete error message (Python includes a 'traceback' showing the several steps leading up to the error).
#5
Here is the complete error message (Python includes a 'traceback' showing the several steps leading up to the error):

import numpy as np
# The aim is to train the neural network using backpropagation algorithm to determine a class as output.

#Here 4 classes are taken as input


X = np.array(([65,   65,   77,  76,   80,   69,  78,  69,   84,  68,   68,   76,   71,   75,   65,   73,   75,   83,  68,  67,   73,   64,   71,   66,   75,   73,  65,   65,   70,   59,   70,   72,   83,  76,   77,  67],
              [195,  200,  188, 187,  204,  203, 183, 175,  176, 203,  206,  190,  180,  193,  199,  196,  203,  198, 202, 208,  192,  176,  179,  180,  183,  198, 199,  184,  195,  188,  177,  177,  167, 160,  170, 198],
              [19,   19,   19,  19,   20,   20,  20,  20,   19,  19,   19,   19,   19,   20,   20,   20,   20,   19,  19,  19,   19,   20,   20,   20,   20,   19,  19,   19,   19,   19,   20,   20,   20,  20,   19,  20],
             [223,  207,  175, 185,  180,  204, 194, 196,  216, 176,  220,  219,  204,  183,  221,  212,  194,  212, 129, 198,  226,  220,  185,  208,  199,  187, 213,  132,  157,  182,  220,  194,  181, 167,  187, 173]), dtype=float)

# As this is supervised learning I am providing output values already to train the network
y = np.array(([10], [12], [15], [17]), dtype=float)

#This is the input for which the network will determine a class as output.
xPredicted = np.array(([19,   19,   19,  19,   20,   20,  20,  20,   19,  19,   19,   19,   19,   20,   20,   20,   20,   19,  19,  19,   19,   20,   20,   20,   20,   19,  19,   19,   19,   19,   20,   20,   20,  20,   19,  20]), dtype=float)

# scale units
#Here we scale the arrays to the range [0, 1]
X = X/np.amax(X, axis=0)
xPredicted = xPredicted/np.amax(xPredicted, axis=0)
y = y/20

#Here we define the forward propagation of the neural network. The weights are initialized randomly.
class Neural_Network(object):
  def __init__(self):
    self.inputSize = 36
    self.outputSize = 1
    self.hiddenSize = 18
    self.W1 = np.random.randn(self.inputSize, self.hiddenSize)
    self.W2 = np.random.randn(self.hiddenSize, self.outputSize)

  def forward(self, X):

    self.z = np.dot(X, self.W1)
    self.z2 = self.relu(self.z)
    self.z3 = np.dot(self.z2, self.W2)
    o = self.relu(self.z3)
    return o

# This is the code defining the activation function, which is applied to self.z in forward().
    #s = np.array(self.z)
  def relu(self, s):
      for i in range(0, len(s)):
          for k in range(0, len(s[i])):
              if s[i][k] > 0:
                  pass
              else:
                  s[i][k] = 0
      return s

# This is the derivative of the activation function, which we apply in the backpropagation step below.
  def reluPrime(self, s):
      for i in range(0, len(s)):
          for k in range(0, len(s[i])):
              if s[i][k] > 0:
                  s[i][k] = 1
              else:
                  s[i][k] = 0
      return s


# The code below represents the backpropagation function, which tries to minimize the error.
  def backward(self, X, y, o):

    self.o_error = y - o
    self.o_delta = self.o_error*self.reluPrime(o)

    self.z2_error = self.o_delta.dot(self.W2.T)
    self.z2_delta = self.z2_error*self.reluPrime(self.z2)

    self.W1 += X.T.dot(self.z2_delta)
    self.W2 += self.z2.T.dot(self.o_delta)

#The randomly initialized weights are saved to w1.txt and w2.txt for future reference.
  def saveWeights(self):
      np.savetxt("w1.txt", self.W1, fmt="%s")
      np.savetxt("w2.txt", self.W2, fmt="%s")

# This step defines the training function for neural network.
  def train (self, X, y):
    o = self.forward(X)
    self.backward(X, y, o)

#It determines the output for xPredicted defined above.
  def predict(self) :
    print("Predicted data based on trained weights: ")
    print("Input (scaled): \n" + str(xPredicted))
    print("Output: \n" + str(self.forward(xPredicted)))


NN = Neural_Network()

#These are all the values that we want to print, and the training loop range.
for i in range(1000):
  print("Input: \n" + str(X))
  print("Actual Output: \n" + str(y))
  print("Predicted Output: \n" + str(NN.forward(X)))
  print("Loss: \n" + str(np.mean(np.square(y - NN.forward(X)))))
  print("\n")
  NN.train(X, y)
  NN.saveWeights()
  NN.predict()
The main problem lies in the for loop, since other activation functions such as sigmoid produce correct output.

The error I am getting after running the program is as follows:

C:\Users\Admin\PycharmProjects\backp\venv\Scripts\python.exe C:/Users/Admin/Desktop/mini/project/af_mini/relu2.py
Input:
[[0.29147982 0.31400966 0.40957447 0.40641711 0.39215686 0.33823529
  0.40206186 0.35204082 0.38888889 0.33497537 0.30909091 0.34703196
  0.34803922 0.38860104 0.29411765 0.34433962 0.36945813 0.39150943
  0.33663366 0.32211538 0.32300885 0.29090909 0.38378378 0.31730769
  0.37688442 0.36868687 0.30516432 0.35326087 0.35897436 0.31382979
  0.31818182 0.37113402 0.45856354 0.45508982 0.41176471 0.33838384]
 [0.87443946 0.96618357 1.         1.         1.         0.99509804
  0.94329897 0.89285714 0.81481481 1.         0.93636364 0.86757991
  0.88235294 1.         0.90045249 0.9245283  1.         0.93396226
  1.         1.         0.84955752 0.8        0.96756757 0.86538462
  0.91959799 1.         0.9342723  1.         1.         1.
  0.80454545 0.91237113 0.92265193 0.95808383 0.90909091 1.        ]
 [0.08520179 0.09178744 0.10106383 0.10160428 0.09803922 0.09803922
  0.10309278 0.10204082 0.08796296 0.09359606 0.08636364 0.08675799
  0.09313725 0.10362694 0.09049774 0.09433962 0.09852217 0.08962264
  0.09405941 0.09134615 0.0840708  0.09090909 0.10810811 0.09615385
  0.10050251 0.0959596  0.08920188 0.10326087 0.0974359  0.10106383
  0.09090909 0.10309278 0.11049724 0.11976048 0.10160428 0.1010101 ]
 [1.         1.         0.93085106 0.98930481 0.88235294 1.
  1.         1.         1.         0.86699507 1.         1.
  1.         0.94818653 1.         1.         0.95566502 1.
  0.63861386 0.95192308 1.         1.         1.         1.
  1.         0.94444444 1.         0.7173913  0.80512821 0.96808511
  1.         1.         1.         1.         1.         0.87373737]]
Actual Output:
[[0.5 ]
 [0.6 ]
 [0.75]
 [0.85]]
Predicted Output:
[[13.17178417]
 [36.60589828]
 [ 3.73935079]
 [39.75698654]]
Loss:
744.9221610514712


Predicted data based on trained weights:
Input (scaled):
[0.95 0.95 0.95 0.95 1.   1.   1.   1.   0.95 0.95 0.95 0.95 0.95 1.
 1.   1.   1.   0.95 0.95 0.95 0.95 1.   1.   1.   1.   0.95 0.95 0.95
 0.95 0.95 1.   1.   1.   1.   0.95 1.  ]
Traceback (most recent call last):
  File "C:/Users/Admin/Desktop/mini/project/af_mini/relu2.py", line 89, in <module>
    NN.predict()
  File "C:/Users/Admin/Desktop/mini/project/af_mini/relu2.py", line 75, in predict
    print("Output: \n" + str(self.forward(xPredicted)))
  File "C:/Users/Admin/Desktop/mini/project/af_mini/relu2.py", line 26, in forward
    self.z2 = self.relu(self.z)
  File "C:/Users/Admin/Desktop/mini/project/af_mini/relu2.py", line 35, in relu
    for k in range(0, len(s[i])):
TypeError: object of type 'numpy.float64' has no len()

Process finished with exit code 1
#6
(Apr-19-2018, 05:03 AM)Larz60+ Wrote: the complete error message (Python includes a 'traceback' showing the several steps leading up to the error).

Can you please check?
#7
The line numbers in the program don't match the error: the error is on line 45, not 35, so there are about 10 lines of code missing from the listing.
Add the following before the 2nd loop:
      for i in range(0, len(s)):
          print('Before 2nd loop, i: {}, len(s): {}'.format(i, len(s)))
          for k in range(0, len(s[i])):
and show the printout just before the failure.
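
For what it's worth, the loops can be avoided entirely, which also sidesteps both errors. The TypeError in post #5 happens because xPredicted is a 1-D array of shape (36,), so inside relu each s[i] is a scalar float with no len(). A minimal sketch (not the poster's exact code) using vectorized NumPy operations and a reshaped input:

```python
import numpy as np

def relu(s):
    # Elementwise ReLU; works the same for 1-D and 2-D arrays.
    return np.maximum(s, 0)

def relu_prime(s):
    # Derivative of ReLU: 1 where s > 0, else 0.
    return (s > 0).astype(float)

# xPredicted in the posted code is 1-D (shape (36,)). Reshaping it into a
# (1, 36) row vector keeps every array 2-D throughout forward(), which is
# what the looped relu implicitly assumed.
xPredicted = np.ones(36)          # stand-in for the posted values
xPredicted = xPredicted.reshape(1, -1)
```

With these definitions, relu and reluPrime need no index bookkeeping at all, so neither the IndexError nor the TypeError can occur.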


