How can I adapt the output size of a given feature map in the Deep Learning Toolbox using the "pool" operation?

I understand that the Deep Learning Toolbox currently provides layers such as "averagePooling2dLayer", "maxPooling2dLayer", "globalAveragePooling2dLayer", and "globalMaxPooling2dLayer", as well as directly callable functions such as "maxpool", but nothing like PyTorch's "adaptive_max_pool2d". Is there a function that lets me directly specify the output size of the feature map for a pool operation?
The following simple example is PyTorch code. How can I achieve the same result in MATLAB?
import torch
import torch.nn.functional as F

input = torch.rand(8, 3, 224, 224)              # input tensor, NCHW
outSize = (20, 20)                              # desired output size, H_out x W_out
output = F.adaptive_max_pool2d(input, outSize)  # adaptive pooling to the target size
print(output.shape)                             # result --> torch.Size([8, 3, 20, 20])

Accepted Answer

KaSyow Riyuu on 15 Apr 2022
Stride = floor( InputSize / OutputSize )
KernelSize = InputSize - ( OutputSize - 1 ) * Stride
Padding = 0
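For the example above (a 224-by-224 input pooled to 20-by-20), this gives Stride = floor(224/20) = 11 and KernelSize = 224 - 19*11 = 15; a 15-by-15 window with stride 11 and no padding yields floor((224 - 15)/11) + 1 = 20 outputs along each spatial dimension.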
With these, you can implement an adaptive max pool like this:
function Output = AdaptiveMaxPool(Input,OutputSize)
% Input is assumed to be a formatted dlarray with the two spatial
% dimensions first (e.g. "SSCB"); OutputSize is [H_out W_out].
InputSize = [size(Input,1) size(Input,2)];              % spatial size of the input
Stride = floor( InputSize ./ OutputSize );
KernelSize = InputSize - ( OutputSize - 1 ) .* Stride;
Output = maxpool(Input, KernelSize, "Stride", Stride);
end
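
As a quick check, here is a minimal usage sketch (the "SSCB" dlarray format and the random test data are my assumptions, chosen to mirror the NCHW example in the question):
X = dlarray(rand(224,224,3,8,"single"),"SSCB");   % same shape as the NCHW example, in SSCB order
Y = AdaptiveMaxPool(X,[20 20]);
size(Y)                                           % returns 20 20 3 8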

More Answers (0)
