The DepthwiseConv2D API in MXNet
In https://blog.csdn.net/zhqh100/article/details/90376732, which introduced MobileNet, DepthwiseConv2D came up. That is a Keras API; so what is the corresponding API called in MXNet?
After tracing through the code and printing the model summary, the answer became clear: in MXNet, both a depthwise convolution and an ordinary convolution use the same layer, nn.Conv2D. See the following calling code (from /home/luke/miniconda3/lib/python3.6/site-packages/gluoncv/model_zoo/mobilenet.py):
```python
# pylint: disable= too-many-arguments
def _add_conv(out, channels=1, kernel=1, stride=1, pad=0,
              num_group=1, active=True, relu6=False,
              norm_layer=BatchNorm, norm_kwargs=None):
    out.add(nn.Conv2D(channels, kernel, stride, pad,
                      groups=num_group, use_bias=False))
    print("kernel = %d, groups = %d, channels = %d" % (kernel, num_group, channels))
    out.add(norm_layer(scale=True, **({} if norm_kwargs is None else norm_kwargs)))
    if active:
        out.add(RELU6() if relu6 else nn.Activation('relu'))
```
The difference lies in the groups parameter; see http://mxnet.incubator.apache.org/api/python/gluon/nn.html:
- groups (int) – Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated.
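The effect of groups on the layer's weights can be sketched in plain Python. The helper below is hypothetical (not part of mxnet), but it mirrors the (out_channels, in_channels // groups, kernel, kernel) weight layout that grouped 2D convolutions use:

```python
def conv2d_weight_shape(channels, in_channels, kernel, groups=1):
    """Weight-tensor shape of a bias-free 2D convolution with grouped connections.

    Each group only sees in_channels // groups input channels, so the
    per-filter depth shrinks as groups grows.
    """
    assert in_channels % groups == 0 and channels % groups == 0
    return (channels, in_channels // groups, kernel, kernel)

# groups=1: every filter spans all 32 input channels
print(conv2d_weight_shape(64, 32, 3))             # (64, 32, 3, 3)
# groups=2: two side-by-side convs, each seeing half the input channels
print(conv2d_weight_shape(64, 32, 3, groups=2))   # (64, 16, 3, 3)
# depthwise: channels == in_channels == groups, one filter per channel
print(conv2d_weight_shape(32, 32, 3, groups=32))  # (32, 1, 3, 3)
```

The last case is exactly the depthwise setting used below: each of the 32 filters convolves a single input channel.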
So when calling it here, if you want a depthwise (channel-separated) convolution, just set the channels value equal to the groups value:
```python
_add_conv(self.out, in_channels * t, kernel=3, stride=stride, pad=1,
          num_group=in_channels * t, relu6=True,
          norm_layer=BatchNorm, norm_kwargs=None)
```
Both values are in_channels * t.
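The payoff of setting channels equal to groups is the parameter saving, which a small sketch makes concrete (the helper and the example channel count of 144, i.e. in_channels=24 and t=6 as in MobileNetV2, are illustrative assumptions, not values from the source):

```python
def conv_params(channels, in_channels, kernel, groups=1):
    """Number of weights in a bias-free 2D convolution (weight tensor only)."""
    return channels * (in_channels // groups) * kernel * kernel

in_ch = 144  # hypothetical: in_channels * t with in_channels=24, t=6

standard = conv_params(in_ch, in_ch, 3)                 # groups=1
depthwise = conv_params(in_ch, in_ch, 3, groups=in_ch)  # groups == channels

print(standard)   # 186624
print(depthwise)  # 1296
```

A depthwise layer thus uses in_channels times fewer weights than a standard convolution of the same shape, which is the core of MobileNet's efficiency.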