Keras-vis gives the following error: AttributeError: multiple inbound nodes

I am trying to run this example with my own model, which looks like this:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_2 (InputLayer)         (None, 150, 150, 3)       0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 150, 150, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 150, 150, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 75, 75, 64)        0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 75, 75, 128)       73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 75, 75, 128)       147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 37, 37, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 37, 37, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 37, 37, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 37, 37, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 18, 18, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 18, 18, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 18, 18, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 18, 18, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 9, 9, 512)         0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 9, 9, 512)         2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 9, 9, 512)         2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 9, 9, 512)         2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 4, 4, 512)         0         
_________________________________________________________________
sequential_1 (Sequential)    (None, 1)                 2097665   
=================================================================

But I get this error:

AttributeError: Layer sequential_2 has multiple inbound nodes, hence the notion of "layer output" is ill-defined. Use `get_output_at(node_index)` instead.

I don't know where to start. After some searching, I think this happens because my last layer is a nested Sequential model rather than the Dense layer that the VGG16 model in the example ends with.
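
To check that suspicion, the nested layer can be inspected directly; as far as I understand, it carries more than one node, which is why its .output attribute is ambiguous. A quick diagnostic sketch, using the layer name from the summary above (the _inbound_nodes attribute is spelled inbound_nodes, without the underscore, in older Keras releases):

# Diagnostic sketch: the nested Sequential layer has several nodes,
# so `.output` is ambiguous and triggers the error above.
nested = model.get_layer('sequential_1')
print(len(nested._inbound_nodes))   # more than one node in my case
print(nested.get_output_at(0))      # output tensor of the first node
print(nested.get_output_at(-1))     # output tensor of the last node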

The model was built following the Keras cats vs. dogs fine-tuning example.
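
For context, this is roughly how that model is put together (a minimal sketch of the fine-tuning recipe as I understand it; the layer sizes match the summary above, and the top classifier is what shows up as the single sequential_1 layer):

# Sketch: VGG16 convolutional base with a small Sequential classifier on top.
from keras.applications import VGG16
from keras.models import Model, Sequential
from keras.layers import Flatten, Dense, Dropout

# Convolutional base, matching the 150x150x3 input in the summary.
conv_base = VGG16(weights='imagenet', include_top=False,
                  input_shape=(150, 150, 3))

# Attaching this Sequential model as a single layer is what produces
# the `sequential_1 (Sequential)` row in the summary above.
top_model = Sequential()
top_model.add(Flatten(input_shape=conv_base.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(1, activation='sigmoid'))

model = Model(inputs=conv_base.input, outputs=top_model(conv_base.output))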

Any help or ideas on how I could proceed from here would be greatly appreciated!

EDIT: In case it helps, here is the code:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from keras import activations
from keras.models import load_model
from vis.utils import utils
from vis.visualization import visualize_cam, overlay

model = load_model('final_finetuned_model.h5')

layer_idx = utils.find_layer_idx(model, 'sequential_1')

# Swap the final activation for a linear one, as recommended by keras-vis.
model.layers[layer_idx].activation = activations.linear
model = utils.apply_modifications(model)

plt.rcParams['figure.figsize'] = (18, 6)

img1 = utils.load_img('test1/cat/5.jpg', target_size=(150, 150))
img2 = utils.load_img('test1/cat/6.jpg', target_size=(150, 150))

for modifier in [None, 'guided', 'relu']:
    plt.figure()
    f, ax = plt.subplots(1, 2)
    plt.suptitle("vanilla" if modifier is None else modifier)
    for i, img in enumerate([img1, img2]):
        # 20 is the imagenet index corresponding to `ouzel`
        grads = visualize_cam(model, layer_idx, filter_indices=20,
                              seed_input=img, backprop_modifier=modifier)
        # Let's overlay the heatmap onto the original image.
        jet_heatmap = np.uint8(cm.jet(grads)[..., :3] * 255)
        ax[i].imshow(overlay(jet_heatmap, img))

plt.show()

The layer ends up with two output nodes, dense_1_1/Relu:0 and sequential_2/dense_1/Relu:0, so keras-vis cannot resolve self.layer.output unambiguously. In keras-vis's losses.py, change layer_output = self.layer.output to layer_output = self.layer.get_output_at(-1), which takes the output of the last node; layer_output = self.layer.get_output_at(0) takes the first node instead.
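
For anyone hitting the same thing, the change described above looks roughly like this (a sketch against my copy of keras-vis; the surrounding code in losses.py may differ between versions):

# In keras-vis's losses.py, the problematic line reads:
#     layer_output = self.layer.output
# Replacing it with an explicit node index avoids the ambiguity:
layer_output = self.layer.get_output_at(-1)   # last node; use 0 for the first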


Source: https://habr.com/ru/post/1689704/

