Difference in activation maps calculated using convolution2dLayer and the conv2 function
Activations calculated with convolution2dLayer (Path 1) and with conv2 (Path 2) differ from each other for the same image and the same filter weights.
Path 1 : image1 -> convolution2dLayer (using the AlexNet conv1 layer configuration: weights, biases, stride = 4, padding = 0) -> Output 1
Path 2 : image1 -> mean2 (zero-centered normalization) -> conv2 ('valid', using the AlexNet conv1 layer weights and biases) -> stride operation -> Output 2
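For clarity, here is a minimal sketch of the two paths as described above (the image file and variable names are only placeholders; it assumes the Deep Learning Toolbox Model for AlexNet support package):

net = alexnet;                                     % requires the AlexNet support package
im  = imread('peppers.png');                       % placeholder test image
im  = imresize(im, net.Layers(1).InputSize(1:2));  % 227x227x3

% Path 1: activations of the conv1 layer (input normalization applied by the network)
out1 = activations(net, im, 'conv1');

% Path 2: manual conv2 using the same weights and biases
W = net.Layers(2).Weights;   % 11x11x3x96
b = net.Layers(2).Bias;      % 1x1x96
stride = 4;

imz  = double(im) - mean2(double(im));   % scalar zero-centering, as described above
out2 = zeros(55, 55, 96);
for k = 1:size(W, 4)
    acc = zeros(size(imz,1) - size(W,1) + 1, size(imz,2) - size(W,2) + 1);
    for c = 1:size(W, 3)
        % conv2 performs true convolution (it flips the kernel), whereas
        % convolution2dLayer performs cross-correlation; rotating the weights
        % by 180 degrees makes the two operations comparable.
        acc = acc + conv2(imz(:,:,c), rot90(W(:,:,c,k), 2), 'valid');
    end
    acc = acc + b(1, 1, k);
    out2(:,:,k) = acc(1:stride:end, 1:stride:end);  % stride-4 subsampling
end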
The following steps have been compared separately, and they give the same results:
- Zero-centered normalization using mean2 and the normalization used with convolution2dLayer
- Stride and pooling operations used in convolution2dLayer and in Path 2
The following are observations about Output 1 and Output 2 (a short comparison sketch follows this list):
- Same dimensionality (55, 55, 96)
- Certain differences in the values (in magnitude as well as spatially)
- The value distributions (histograms) of Output 1 and Output 2 differ, but they have the same mean
- When Output 1 and Output 2 are sent through a ReLU layer, more sparsity is seen in Output 1
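A small sketch of how these observations can be checked, assuming out1 and out2 from the sketch above:

fprintf('max |Output1 - Output2| = %g\n', max(abs(out1(:) - out2(:))));

figure; histogram(out1(:)); hold on; histogram(out2(:));
legend('Output 1', 'Output 2'); title('Value distributions');

% ReLU sparsity: fraction of activations that become zero after max(x, 0)
relu1 = max(out1, 0);
relu2 = max(out2, 0);
sparsity1 = nnz(relu1 == 0) / numel(relu1);
sparsity2 = nnz(relu2 == 0) / numel(relu2);
fprintf('Sparsity after ReLU: Output 1 = %.3f, Output 2 = %.3f\n', sparsity1, sparsity2);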
In addition, at the first layer the differences are trivial, but as the activations computed with conv2 propagate through the network, the differences increase.
Are there any reasons for these kinds of value variations under these test conditions?