ResNeXt

Today we introduce ResNeXt. Its name suggests that it is a combination of ResNet and Inception. ResNeXt builds its blocks from grouped convolution (group convolution) modules and achieves very good results.

Introduction: Grouped Convolution

Let us start from the fully connected layer. As everyone knows, a fully connected layer simply computes a weighted sum of its inputs.

Figure 1. The formula of a fully connected layer and a visual illustration of it.

Here each x_i is usually a real number; the authors generalize it to an arbitrary network output (for example, a multi-dimensional tensor).
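
Written out, the fully connected layer in Figure 1 computes the inner product

    y = \sum_{i=1}^{D} w_i x_i,

and the ResNeXt paper replaces each product w_i x_i with a more general transformation T_i(x) (a small network), giving the aggregated transformation and, with the skip connection, the block output

    F(x) = \sum_{i=1}^{C} T_i(x), \qquad y = x + \sum_{i=1}^{C} T_i(x),

where C is the cardinality, i.e. the number of parallel branches.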

Figure 2. A ResNet block (left) compared with a ResNeXt block (right).

As the figure shows, compared with ResNet, ResNeXt inserts many parallel branches in the middle of the block and merges their outputs right before the skip connection. The node that merges the branch outputs plays the same role as the fully connected layer in Figure 1. The "split-transform-merge" strategy of grouped convolution means exactly this: the input x is first split into several groups of features, each group goes through its own transformation, and the results are finally aggregated to produce the output.
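
To make this concrete, here is a minimal PyTorch sketch (the channel sizes and variable names are my own, chosen only for illustration): a Conv2d with groups=C performs exactly this split-transform-merge, splitting the input channels into C groups, convolving each group with its own filters, and concatenating the results.

import torch
import torch.nn as nn

C, in_ch, out_ch = 4, 8, 16   # cardinality and total in/out channels (illustrative values)
grouped = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, groups=C, bias=False)

x = torch.randn(1, in_ch, 32, 32)
y_grouped = grouped(x)

# Rebuild the same computation as C independent convolutions: split -> transform -> merge.
outs = []
for i in range(C):
    conv_i = nn.Conv2d(in_ch // C, out_ch // C, kernel_size=3, padding=1, bias=False)
    with torch.no_grad():
        # reuse the weights of the i-th group so both computations match exactly
        conv_i.weight.copy_(grouped.weight[i * (out_ch // C):(i + 1) * (out_ch // C)])
    x_i = x[:, i * (in_ch // C):(i + 1) * (in_ch // C)]   # split
    outs.append(conv_i(x_i))                              # transform
y_split = torch.cat(outs, dim=1)                          # merge

print(torch.allclose(y_grouped, y_split, atol=1e-6))      # -> True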

Both in this splitting idea and in the concrete structure, ResNeXt is quite similar to Inception v4. The difference is that the parallel branches of an Inception module use convolution kernels of different sizes, whereas all of ResNeXt's branches share the same topology. Generally speaking, Inception v4 achieves somewhat better accuracy, while ResNeXt is faster to compute.

Code

The code below comes from https://github.com/weiaicunzai/pytorch-cifar100/blob/master/models/resnext.py

"""resnext in pytorch
[1] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He.
Aggregated Residual Transformations for Deep Neural Networks
https://arxiv.org/abs/1611.05431
"""

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

#only implements ResNext bottleneck c


#"""This strategy exposes a new dimension, which we call “cardinality”
#(the size of the set of transformations), as an essential factor
#in addition to the dimensions of depth and width."""
CARDINALITY = 32
DEPTH = 4
BASEWIDTH = 64

#"""The grouped convolutional layer in Fig. 3(c) performs 32 groups
#of convolutions whose input and output channels are 4-dimensional.
#The grouped convolutional layer concatenates them as the outputs
#of the layer."""

class ResNextBottleNeckC(nn.Module):

    def __init__(self, in_channels, out_channels, stride):
        super().__init__()

        C = CARDINALITY #how many groups the feature map is split into

        #"""We note that the input/output width of the template is fixed as
        #256-d (Fig. 3), and all widths are doubled each time the feature map
        #is subsampled (see Table 1)."""
        D = int(DEPTH * out_channels / BASEWIDTH) #channels per group, e.g. out_channels=64 -> D=4, C*D=128
        self.split_transforms = nn.Sequential(
            nn.Conv2d(in_channels, C * D, kernel_size=1, groups=C, bias=False),
            nn.BatchNorm2d(C * D),
            nn.ReLU(inplace=True),
            nn.Conv2d(C * D, C * D, kernel_size=3, stride=stride, groups=C, padding=1, bias=False),
            nn.BatchNorm2d(C * D),
            nn.ReLU(inplace=True),
            nn.Conv2d(C * D, out_channels * 4, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels * 4),
        )

        self.shortcut = nn.Sequential()

        #projection shortcut when the spatial size or channel count changes
        if stride != 1 or in_channels != out_channels * 4:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels * 4, stride=stride, kernel_size=1, bias=False),
                nn.BatchNorm2d(out_channels * 4)
            )

    def forward(self, x):
        return F.relu(self.split_transforms(x) + self.shortcut(x))

class ResNext(nn.Module):

    def __init__(self, block, num_blocks, class_names=100):
        super().__init__()
        self.in_channels = 64

        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True)
        )

        self.conv2 = self._make_layer(block, num_blocks[0], 64, 1)
        self.conv3 = self._make_layer(block, num_blocks[1], 128, 2)
        self.conv4 = self._make_layer(block, num_blocks[2], 256, 2)
        self.conv5 = self._make_layer(block, num_blocks[3], 512, 2)
        self.avg = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512 * 4, class_names)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.conv5(x)
        x = self.avg(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x

    def _make_layer(self, block, num_block, out_channels, stride):
        """Build one resnext stage
        Args:
            block: block type (default resnext bottleneck c)
            num_block: number of blocks in this stage
            out_channels: output channels per block
            stride: stride of the first block in the stage
        Returns:
            a resnext stage as an nn.Sequential
        """
        strides = [stride] + [1] * (num_block - 1)
        layers = []
        for stride in strides:
            layers.append(block(self.in_channels, out_channels, stride))
            self.in_channels = out_channels * 4

        return nn.Sequential(*layers)

def resnext50():
    """ return a resnext50(c32x4d) network
    """
    return ResNext(ResNextBottleNeckC, [3, 4, 6, 3])

def resnext101():
    """ return a resnext101(c32x4d) network
    """
    return ResNext(ResNextBottleNeckC, [3, 4, 23, 3])

def resnext152():
    """ return a resnext152(c32x4d) network
    """
    return ResNext(ResNextBottleNeckC, [3, 4, 36, 3])
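
As a quick sanity check, the following snippet is my own addition (not part of the repository file); appended at the bottom of the same file, it runs one forward pass on CIFAR-sized input:

if __name__ == '__main__':
    net = resnext50()                        # ResNeXt-50 (32x4d) sized for CIFAR-100
    img = torch.randn(2, 3, 32, 32)          # a batch of two 32x32 RGB images
    out = net(img)
    print(out.shape)                         # torch.Size([2, 100])
    print(sum(p.numel() for p in net.parameters()) / 1e6, 'M parameters')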

Happy Lunar New Year to everyone, and best wishes for your studies. See you next time.