DenseNet

Today we introduce DenseNet. It won the CVPR 2017 Best Paper Award and draws on ideas from both ResNet and Inception: through dense skip connections it encourages feature reuse, improving network performance with fewer parameters.

Introduction

ResNet alleviates the vanishing-gradient problem of very deep networks, and its route to better performance is to go deeper; Inception's route is to go wider (an Inception module applies convolution kernels of several sizes in parallel). DenseNet instead strengthens the skip connections found in ResNet: each layer receives the features of all preceding layers, which improves feature propagation and encourages feature reuse. This lets it mitigate vanishing gradients and reduce the parameter count while still achieving better accuracy.
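
To make the difference concrete, here is a minimal PyTorch sketch of my own (the tensor shapes are arbitrary and only for illustration): a ResNet shortcut adds the new features to its input, while a DenseNet layer concatenates them along the channel dimension, so earlier features remain directly available downstream.

    import torch

    # Dense connectivity in a nutshell: instead of adding the new features to the
    # input (as a ResNet shortcut does), each layer concatenates them, so the
    # feature maps of all preceding layers stay available to later layers.
    x = torch.randn(1, 64, 32, 32)              # input with 64 channels
    new_features = torch.randn(1, 32, 32, 32)   # output of one layer, k = 32 channels

    dense_out = torch.cat([x, new_features], dim=1)
    print(dense_out.shape)                      # torch.Size([1, 96, 32, 32]): 64 + 32 channels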

Figure 1. A Dense Block in DenseNet. Each layer takes the outputs of all preceding layers as its input.

Note, however, that these dense skip connections only appear inside a Dense Block; the overall network structure is shown in the figure below.

Figure 2. The overall architecture. There are no dense skip connections between blocks; the batch normalization + convolution + pooling layers between two blocks are called transition layers.

Figure 3. Detailed configurations of the DenseNet variants.

Because each layer in a Dense Block concatenates its output with the features of all preceding layers before passing them on, the number of channels grows quickly after a few layers. To keep this growth in check, the authors use 1×1 convolutions (bottleneck layers) before each 3×3 convolution to reduce the number of input feature maps, and a convolution plus pooling in the transition layers to compress the channels and halve the spatial resolution between blocks.
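
To see why this compression matters, here is a small back-of-the-envelope calculation of my own, using the DenseNet-121 configuration that appears in the code below (growth rate k = 32, compression θ = 0.5, four dense blocks):

    # Rough sketch of how the channel count evolves in DenseNet-121.
    growth_rate, reduction = 32, 0.5
    nblocks = [6, 12, 24, 16]                  # layers per dense block

    channels = 2 * growth_rate                 # stem convolution: 2k = 64 channels
    for i, n_layers in enumerate(nblocks):
        channels += n_layers * growth_rate     # each layer concatenates k new feature maps
        print(f"after dense block {i}: {channels}")
        if i < len(nblocks) - 1:               # a transition layer follows all but the last block
            channels = int(reduction * channels)
            print(f"after transition {i}: {channels}")
    # prints 256, 128, 512, 256, 1024, 512, 1024

Without the transition layers' compression, the channel count entering the last block would already be over a thousand.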

Code

The code below is taken from https://github.com/weiaicunzai/pytorch-cifar100/.

"""dense net in pytorch
[1] Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger.
Densely Connected Convolutional Networks
https://arxiv.org/abs/1608.06993v5
"""

import torch
import torch.nn as nn



#"""Bottleneck layers. Although each layer only produces k
#output feature-maps, it typically has many more inputs. It
#has been noted in [37, 11] that a 1×1 convolution can be in-
#troduced as bottleneck layer before each 3×3 convolution
#to reduce the number of input feature-maps, and thus to
#improve computational efficiency."""
class Bottleneck(nn.Module):
def __init__(self, in_channels, growth_rate):
super().__init__()
#"""In our experiments, we let each 1×1 convolution
#produce 4k feature-maps."""
inner_channel = 4 * growth_rate

#"""We find this design especially effective for DenseNet and
#we refer to our network with such a bottleneck layer, i.e.,
#to the BN-ReLU-Conv(1×1)-BN-ReLU-Conv(3×3) version of H ` ,
#as DenseNet-B."""
self.bottle_neck = nn.Sequential(
nn.BatchNorm2d(in_channels),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels, inner_channel, kernel_size=1, bias=False),
nn.BatchNorm2d(inner_channel),
nn.ReLU(inplace=True),
nn.Conv2d(inner_channel, growth_rate, kernel_size=3, padding=1, bias=False)
)

def forward(self, x):
return torch.cat([x, self.bottle_neck(x)], 1)

#"""We refer to layers between blocks as transition
#layers, which do convolution and pooling."""
class Transition(nn.Module):
def __init__(self, in_channels, out_channels):
super().__init__()
#"""The transition layers used in our experiments
#consist of a batch normalization layer and an 1×1
#convolutional layer followed by a 2×2 average pooling
#layer""".
self.down_sample = nn.Sequential(
nn.BatchNorm2d(in_channels),
nn.Conv2d(in_channels, out_channels, 1, bias=False),
nn.AvgPool2d(2, stride=2)
)

def forward(self, x):
return self.down_sample(x)

#DesneNet-BC
#B stands for bottleneck layer(BN-RELU-CONV(1x1)-BN-RELU-CONV(3x3))
#C stands for compression factor(0<=theta<=1)
class DenseNet(nn.Module):
def __init__(self, block, nblocks, growth_rate=12, reduction=0.5, num_class=100):
super().__init__()
self.growth_rate = growth_rate

#"""Before entering the first dense block, a convolution
#with 16 (or twice the growth rate for DenseNet-BC)
#output channels is performed on the input images."""
inner_channels = 2 * growth_rate

#For convolutional layers with kernel size 3×3, each
#side of the inputs is zero-padded by one pixel to keep
#the feature-map size fixed.
self.conv1 = nn.Conv2d(3, inner_channels, kernel_size=3, padding=1, bias=False)

self.features = nn.Sequential()

for index in range(len(nblocks) - 1):
self.features.add_module("dense_block_layer_{}".format(index), self._make_dense_layers(block, inner_channels, nblocks[index]))
inner_channels += growth_rate * nblocks[index]

#"""If a dense block contains m feature-maps, we let the
#following transition layer generate θm output feature-
#maps, where 0 < θ ≤ 1 is referred to as the compression
#fac-tor.
out_channels = int(reduction * inner_channels) # int() will automatic floor the value
self.features.add_module("transition_layer_{}".format(index), Transition(inner_channels, out_channels))
inner_channels = out_channels

self.features.add_module("dense_block{}".format(len(nblocks) - 1), self._make_dense_layers(block, inner_channels, nblocks[len(nblocks)-1]))
inner_channels += growth_rate * nblocks[len(nblocks) - 1]
self.features.add_module('bn', nn.BatchNorm2d(inner_channels))
self.features.add_module('relu', nn.ReLU(inplace=True))

self.avgpool = nn.AdaptiveAvgPool2d((1, 1))

self.linear = nn.Linear(inner_channels, num_class)

def forward(self, x):
output = self.conv1(x)
output = self.features(output)
output = self.avgpool(output)
output = output.view(output.size()[0], -1)
output = self.linear(output)
return output

def _make_dense_layers(self, block, in_channels, nblocks):
dense_block = nn.Sequential()
for index in range(nblocks):
dense_block.add_module('bottle_neck_layer_{}'.format(index), block(in_channels, self.growth_rate))
in_channels += self.growth_rate
return dense_block

def densenet121():
return DenseNet(Bottleneck, [6,12,24,16], growth_rate=32)

def densenet169():
return DenseNet(Bottleneck, [6,12,32,32], growth_rate=32)

def densenet201():
return DenseNet(Bottleneck, [6,12,48,32], growth_rate=32)

def densenet161():
return DenseNet(Bottleneck, [6,12,36,24], growth_rate=48)
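
As a quick sanity check, here is a small usage sketch of my own (assuming the code above has been run or imported): it instantiates DenseNet-121 and passes a CIFAR-sized batch through it.

    # Minimal usage sketch: run a CIFAR-sized batch through densenet121.
    model = densenet121()
    dummy = torch.randn(4, 3, 32, 32)   # batch of 4 RGB images, 32x32 (CIFAR-100 size)
    logits = model(dummy)
    print(logits.shape)                 # torch.Size([4, 100]); num_class defaults to 100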

See you next time.