Python Forum

Issue displaying a summary of a whole Keras CNN model in TensorFlow with Python
I have been trying to view the filters and feature maps of a trained image classification model, built on the 'mobilenet v2' architecture with 'imagenet' weights, but I have been running into problems. I am fairly sure I know the reason; I just don't know how to work around it.

I was originally following an example from TensorFlow (https://www.tensorflow.org/tutorials/ima...r_learning), in which I built a classification model, and I wanted to view the filters and feature maps once it was trained on my own image set.

I found several examples online of how to view the layers, the best being (https://www.kaggle.com/arpitjain007/guid...aps-in-cnn).

Sadly, when I try to view the filters and feature maps of my trained model, I am unable to find any convolutional layers. When I summarise my model:

model_1 = tf.keras.models.load_model('saved_model/my_model')
model_1.summary()

which prints:

Model: "model1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_2 (InputLayer)         [(None, 160, 160, 3)]     0         
_________________________________________________________________
sequential (Sequential)      (None, 160, 160, 3)       0         
_________________________________________________________________
tf.math.truediv (TFOpLambda) (None, 160, 160, 3)       0         
_________________________________________________________________
tf.math.subtract (TFOpLambda (None, 160, 160, 3)       0         
_________________________________________________________________
mobilenetv2_1.00_160 (Functi (None, 5, 5, 1280)        2257984   
_________________________________________________________________
global_average_pooling2d (Gl (None, 1280)              0         
_________________________________________________________________
dropout (Dropout)            (None, 1280)              0         
_________________________________________________________________
dense (Dense)                (None, 1)                 1281      
=================================================================
Total params: 2,259,265
Trainable params: 1,862,721
Non-trainable params: 396,544
____________________________________
and when I try to inspect the layers I get nothing:

for layer in model_1.layers:
    if 'conv' not in layer.name:
        continue
    filters, bias = layer.get_weights()
    print(layer.name, filters.shape)
This should print every conv layer, of which the MobileNet model has many, yet it returns nothing.
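If I understand my own problem correctly, the conv layers never show up because the whole MobileNetV2 network sits inside model_1 as a single nested layer, so a flat loop over model_1.layers never reaches them. A minimal sketch of a recursive walk (the iter_layers helper name is just something I made up) that descends into any layer that is itself a model:

def iter_layers(model):
    """Yield every leaf layer, descending into nested sub-models
    (e.g. the 'mobilenetv2_1.00_160' layer inside model_1)."""
    for layer in model.layers:
        if hasattr(layer, 'layers') and layer.layers:  # nested model
            yield from iter_layers(layer)
        else:
            yield layer

# On the real model, the failing loop above would then become:
# for layer in iter_layers(model_1):
#     if 'conv' in layer.name.lower():
#         weights = layer.get_weights()
#         if weights:  # some conv layers carry no bias
#             print(layer.name, weights[0].shape)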

I can view the MobileNet layers on their own by calling something like this:


base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                               include_top=False,
                                               weights='imagenet')
image_batch, label_batch = next(iter(train_dataset))
feature_batch = base_model(image_batch)
print(feature_batch.shape)
base_model.trainable = False
base_model.summary()

    Model: "mobilenetv2_1.00_160"
    __________________________________________________________________________________________________
    Layer (type)                    Output Shape         Param #     Connected to                     
    ==================================================================================================
    input_1 (InputLayer)            [(None, 160, 160, 3) 0                                            
    __________________________________________________________________________________________________
    Conv1 (Conv2D)                  (None, 80, 80, 32)   864         input_1[0][0]                    
    __________________________________________________________________________________________________
    bn_Conv1 (BatchNormalization)   (None, 80, 80, 32)   128         Conv1[0][0]                      
    __________________________________________________________________________________________________
    Conv1_relu (ReLU)               (None, 80, 80, 32)   0           bn_Conv1[0][0]                   
    __________________________________________________________________________________________________
    expanded_conv_depthwise (Depthw (None, 80, 80, 32)   288         Conv1_relu[0][0]                 
    __________________________________________________________________________________________________
    expanded_conv_depthwise_BN (Bat (None, 80, 80, 32)   128         expanded_conv_depthwise[0][0]    
    __________________________________________________________________________________________________
    expanded_conv_depthwise_relu (R (None, 80, 80, 32)   0           expanded_conv_depthwise_BN[0][0] 
    __________________________________________________________________________________________________
    expanded_conv_project (Conv2D)  (None, 80, 80, 16)   512         expanded_conv_depthwise_relu[0][0
    __________________________________________________________________________________________________
    expanded_conv_project_BN (Batch (None, 80, 80, 16)   64          expanded_conv_project[0][0]      
    __________________________________________________________________________________________________
    block_1_expand (Conv2D)         (None, 80, 80, 96)   1536        expanded_conv_project_BN[0][0]   
    __________________________________________________________________________________________________
    block_1_expand_BN (BatchNormali (None, 80, 80, 96)   384         block_1_expand[0][0]             
    __________________________________________________________________________________________________
    block_1_expand_relu (ReLU)      (None, 80, 80, 96)   0           block_1_expand_BN[0][0]          
    __________________________________________________________________________________________________
    block_1_pad (ZeroPadding2D)     (None, 81, 81, 96)   0           block_1_expand_relu[0][0]        
    __________________________________________________________________________________________________
    block_1_depthwise (DepthwiseCon (None, 40, 40, 96)   864         block_1_pad[0][0]                
    __________________________________________________________________________________________________
    block_1_depthwise_BN (BatchNorm (None, 40, 40, 96)   384         block_1_depthwise[0][0]          

Total params: 2,257,984
Trainable params: 0
Non-trainable params: 2,257,984
(Please note that I left out a lot of the layers from the MobileNet summary, as there were many and I don't believe they are relevant to this issue.)

I believe the issue lies in how the model is defined and summarised. I need to define the model so that its summary shows all the layers from 'model_1' as well as the MobileNet layers from the base_model summary.

I hoped it would be as simple as calling 'model2 = model_1 + base_model', but this did not work.
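One thing I have since realised: Keras models don't support '+'. Since the MobileNet network is apparently already nested inside model_1 as a single layer, it may not need combining at all; it can be pulled back out by the name shown in the summary with get_layer. A minimal sketch, using a small stand-in model since I can't attach my saved one:

import tensorflow as tf

# Stand-in for the real structure: a conv sub-model nested inside an
# outer model, mirroring how 'mobilenetv2_1.00_160' sits inside model_1.
inner = tf.keras.Sequential(
    [tf.keras.layers.Conv2D(4, 3, name='Conv1')], name='mobilenet_stub')
outer = tf.keras.Sequential([inner], name='model_stub')

# Pull the nested model back out by the name shown in its parent's summary.
# On the real model this would presumably be:
#   base = model_1.get_layer('mobilenetv2_1.00_160')
#   base.summary()
base = outer.get_layer('mobilenet_stub')
print([layer.name for layer in base.layers])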

I hope this makes sense and that someone can help!