Reduce the number of trainable parameters in a CNN

I created a 1D CNN with two branches (its inputs are two different types of data). However, I'm not convinced by the resulting number of trainable parameters, which drags the accuracy down to about 70%. I ran the same data through a very simple MLP with 45,000 trainable parameters and it did better (about 82% accuracy). I would like to reduce the number of parameters in my CNN, but I can't decide which layer to remove: one study I found suggests deleting a dense layer, while another says the two dense layers should be kept. This is my CNN summary:

Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input1 (InputLayer)             [(None, 8, 4)]       0                                            
__________________________________________________________________________________________________
input2 (InputLayer)             [(None, 8, 8)]       0                                            
__________________________________________________________________________________________________
conv1_input1 (Conv1D)           (None, 6, 128)       1664        input1[0][0]                     
__________________________________________________________________________________________________
conv1_input2 (Conv1D)           (None, 6, 128)       3200        input2[0][0]                     
__________________________________________________________________________________________________
bn1_input1 (BatchNormalization) (None, 6, 128)       512         conv1_input1[0][0]               
__________________________________________________________________________________________________
bn1_input2 (BatchNormalization) (None, 6, 128)       512         conv1_input2[0][0]               
__________________________________________________________________________________________________
dropOut1_input1 (Dropout)       (None, 6, 128)       0           bn1_input1[0][0]                 
__________________________________________________________________________________________________
dropOut1_input2 (Dropout)       (None, 6, 128)       0           bn1_input2[0][0]                 
__________________________________________________________________________________________________
conv2_input1 (Conv1D)           (None, 4, 128)       49280       dropOut1_input1[0][0]            
__________________________________________________________________________________________________
conv2_input2 (Conv1D)           (None, 4, 128)       49280       dropOut1_input2[0][0]            
__________________________________________________________________________________________________
bn2_input1 (BatchNormalization) (None, 4, 128)       512         conv2_input1[0][0]               
__________________________________________________________________________________________________
bn2_input2 (BatchNormalization) (None, 4, 128)       512         conv2_input2[0][0]               
__________________________________________________________________________________________________
dropOut2_input1 (Dropout)       (None, 4, 128)       0           bn2_input1[0][0]                 
__________________________________________________________________________________________________
dropOut2_input2 (Dropout)       (None, 4, 128)       0           bn2_input2[0][0]                 
__________________________________________________________________________________________________
conv3_input1 (Conv1D)           (None, 2, 256)       98560       dropOut2_input1[0][0]            
__________________________________________________________________________________________________
conv3_input2 (Conv1D)           (None, 2, 256)       98560       dropOut2_input2[0][0]            
__________________________________________________________________________________________________
bn3_input1 (BatchNormalization) (None, 2, 256)       1024        conv3_input1[0][0]               
__________________________________________________________________________________________________
bn3_input2 (BatchNormalization) (None, 2, 256)       1024        conv3_input2[0][0]               
__________________________________________________________________________________________________
dropOut3_input1 (Dropout)       (None, 2, 256)       0           bn3_input1[0][0]                 
__________________________________________________________________________________________________
dropOut3_input2 (Dropout)       (None, 2, 256)       0           bn3_input2[0][0]                 
__________________________________________________________________________________________________
conv4_input1 (Conv1D)           (None, 2, 256)       65792       dropOut3_input1[0][0]            
__________________________________________________________________________________________________
conv4_input2 (Conv1D)           (None, 2, 256)       65792       dropOut3_input2[0][0]            
__________________________________________________________________________________________________
bn4_input1 (BatchNormalization) (None, 2, 256)       1024        conv4_input1[0][0]               
__________________________________________________________________________________________________
bn4_input2 (BatchNormalization) (None, 2, 256)       1024        conv4_input2[0][0]               
__________________________________________________________________________________________________
dropOut4_input1 (Dropout)       (None, 2, 256)       0           bn4_input1[0][0]                 
__________________________________________________________________________________________________
dropOut4_input2 (Dropout)       (None, 2, 256)       0           bn4_input2[0][0]                 
__________________________________________________________________________________________________
global_average_pooling1d (Globa (None, 256)          0           dropOut4_input1[0][0]            
__________________________________________________________________________________________________
global_average_pooling1d_1 (Glo (None, 256)          0           dropOut4_input2[0][0]            
__________________________________________________________________________________________________
concat_layer (Concatenate)      (None, 512)          0           global_average_pooling1d[0][0]   
                                                                 global_average_pooling1d_1[0][0] 
__________________________________________________________________________________________________
dense (Dense)                   (None, 512)          262656      concat_layer[0][0]               
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 512)          262656      dense[0][0]                      
__________________________________________________________________________________________________
dense_2 (Dense)                 (None, 7)            3591        dense_1[0][0]                    
==================================================================================================
Total params: 967,175
Trainable params: 964,103
Non-trainable params: 3,072
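For what it's worth, the summary itself shows where the parameters go. Here is a quick back-of-the-envelope check in pure Python (kernel sizes 3/3/3/1 inferred from the output shapes, since the layer configs aren't shown). It confirms the two hidden Dense(512) layers plus the output layer account for more than half of the total, so the dense head, not the conv stack, is the natural place to slim down. The 128-unit alternative at the end is just one illustrative option, not a recommendation:

```python
# Parameter-count formulas for the layer types in the summary:
#   Conv1D:    kernel * in_channels * filters + filters (bias)
#   BatchNorm: 4 * channels (gamma, beta, moving mean, moving variance)
#   Dense:     in_units * out_units + out_units (bias)

def conv1d_params(kernel, in_ch, filters):
    return kernel * in_ch * filters + filters

def bn_params(channels):
    return 4 * channels

def dense_params(in_units, out_units):
    return in_units * out_units + out_units

# Convolutional stack per branch; the branches differ only in conv1's input
# channels (4 vs 8). Kernel sizes 3, 3, 3, 1 are an assumption inferred from
# the output shapes (8->6->4->2->2 with no padding).
branch1_conv = (conv1d_params(3, 4, 128) + conv1d_params(3, 128, 128)
                + conv1d_params(3, 128, 256) + conv1d_params(1, 256, 256))
branch2_conv = (conv1d_params(3, 8, 128) + conv1d_params(3, 128, 128)
                + conv1d_params(3, 128, 256) + conv1d_params(1, 256, 256))
bn_total = 2 * (2 * bn_params(128) + 2 * bn_params(256))

# Classification head: Dense(512) -> Dense(512) -> Dense(7) on the
# concatenated (512-dim) global-average-pooled features.
head = dense_params(512, 512) + dense_params(512, 512) + dense_params(512, 7)

total = branch1_conv + branch2_conv + bn_total + head
print(total)  # 967175, matching the summary

print(head, round(100 * head / total))  # 528903, ~55% of all parameters

# One hypothetical slimming option: drop one Dense(512) and shrink the
# remaining hidden layer to 128 units.
slim_head = dense_params(512, 128) + dense_params(128, 7)
print(total - head + slim_head)  # 504839
```

This suggests that whichever study you follow, removing a conv layer buys relatively little: each branch's conv stack is about 216k parameters, while a single Dense(512) layer is 262,656 on its own.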

Source: Python Questions
