If I want to use the BatchNormalization function in Keras, do I just need to call it once at the beginning?
I read the documentation for it: http://keras.io/layers/normalization/
I don't see where I'm supposed to call it. Below is my code attempting to use it:
model = Sequential()
keras.layers.normalization.BatchNormalization(epsilon=1e-06, mode=0, momentum=0.9, weights=None)
model.add(Dense(64, input_dim=14, init='uniform'))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(64, init='uniform'))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(2, init='uniform'))
model.add(Activation('softmax'))
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy', optimizer=sgd)
model.fit(X_train, y_train, nb_epoch=20, batch_size=16, show_accuracy=True, validation_split=0.2, verbose = 2)
I ask because if I run the code with the second line, which includes the batch normalization, and if I run the code without the second line, I get similar outputs. So either I'm not calling the function in the right place, or I guess it doesn't make much of a difference.
There has been considerable debate in this thread about whether BN should be applied before the non-linearity of the current layer, or to the activations of the previous layer.
Although there is no single correct answer, the authors of Batch Normalization say that it should be applied immediately before the non-linearity of the current layer. The reason (quoted from the original paper):
"We add the BN transform immediately before the
nonlinearity, by normalizing x = Wu+b. We could have
also normalized the layer inputs u, but since u is likely
the output of another nonlinearity, the shape of its distribution
is likely to change during training, and constraining
its first and second moments would not eliminate the covariate
shift. In contrast, Wu + b is more likely to have
a symmetric, non-sparse distribution, that is “more Gaussian”
(Hyv¨arinen & Oja, 2000); normalizing it is likely to
produce activations with a stable distribution."
To answer this question in a little more detail, and as Pavel said, Batch Normalization is just another layer, so you can use it as such to create your desired network architecture.
The general use case is to apply BN between the linear and non-linear layers of your network, because it normalizes the input to your activation function, so that you stay centered in the linear region of the activation function (such as Sigmoid). There is a small discussion of it here.
In your case above, this might look like:
# import BatchNormalization and the other layers used below
from keras.layers.normalization import BatchNormalization
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
# instantiate model
model = Sequential()
# we can think of this chunk as the input layer
model.add(Dense(64, input_dim=14, init='uniform'))
model.add(BatchNormalization())
model.add(Activation('tanh'))
model.add(Dropout(0.5))
# we can think of this chunk as the hidden layer
model.add(Dense(64, init='uniform'))
model.add(BatchNormalization())
model.add(Activation('tanh'))
model.add(Dropout(0.5))
# we can think of this chunk as the output layer
model.add(Dense(2, init='uniform'))
model.add(BatchNormalization())
model.add(Activation('softmax'))
# setting up the optimization of our weights
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy', optimizer=sgd)
# running the fitting
model.fit(X_train, y_train, nb_epoch=20, batch_size=16, show_accuracy=True, validation_split=0.2, verbose = 2)
Hope this gives you a bit more clarity.
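Note that the argument names in the code above come from the older Keras 1 API (init, nb_epoch, show_accuracy). If your Keras version no longer accepts them, a minimal sketch of roughly the same model using the Keras 2-style names (kernel_initializer, epochs, metrics=['accuracy']) might look like this:
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, BatchNormalization
from keras.optimizers import SGD
model = Sequential()
model.add(Dense(64, input_dim=14, kernel_initializer='uniform'))
model.add(BatchNormalization())
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(64, kernel_initializer='uniform'))
model.add(BatchNormalization())
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(2, kernel_initializer='uniform'))
model.add(BatchNormalization())
model.add(Activation('softmax'))
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])
model.fit(X_train, y_train, epochs=20, batch_size=16, validation_split=0.2, verbose=2)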
Batch Normalization normalizes the input layer as well as the hidden layers by adjusting the mean and scaling of the activations. Because of this normalizing effect of the additional layer, deep neural networks can use a higher learning rate without vanishing or exploding gradients. Furthermore, batch normalization regularizes the network so that it generalizes more easily, making it less necessary to use dropout to mitigate overfitting.
Right after computing the linear function with Dense() or Conv2D() in Keras, we use BatchNormalization() to normalize that linear output, and then add the non-linearity to the layer using Activation().
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.layers.normalization import BatchNormalization
from keras.optimizers import SGD
model = Sequential()
model.add(Dense(64, input_dim=14, init='uniform'))
model.add(BatchNormalization(epsilon=1e-06, mode=0, momentum=0.9, weights=None))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(64, init='uniform'))
model.add(BatchNormalization(epsilon=1e-06, mode=0, momentum=0.9, weights=None))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(2, init='uniform'))
model.add(BatchNormalization(epsilon=1e-06, mode=0, momentum=0.9, weights=None))
model.add(Activation('softmax'))
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy', optimizer=sgd)
model.fit(X_train, y_train, nb_epoch=20, batch_size=16, show_accuracy=True,
validation_split=0.2, verbose = 2)
How is Batch Normalization applied?
Suppose we have input a[l-1] to a layer l, and we have weights W[l] and a bias unit b[l] for layer l. Let a[l] be the activation vector of layer l (i.e., after the non-linearity is added) and z[l] be the vector before the non-linearity is added.
1. Using a[l-1] and W[l] we can calculate z[l] for the layer l.
2. Usually in feed-forward propagation we would add the bias unit to z[l] at this stage, as z[l] + b[l], but in Batch Normalization this addition of b[l] is not required and no b[l] parameter is used.
3. Calculate the mean of z[l] and subtract it from each element.
4. Divide (z[l] - mean) by the standard deviation. Call the result Z_temp[l].
5. Now define new parameters γ and β that change the scale of the hidden layer as follows:
z_norm[l] = γ * Z_temp[l] + β
In the code excerpt above, the Dense() layer takes a[l-1], uses W[l] and calculates z[l]. The BatchNormalization() that immediately follows performs the steps above to produce z_norm[l]. The Activation() that follows then computes tanh(z_norm[l]) to give a[l], i.e.
a[l] = tanh(z_norm[l])
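To make these steps concrete, here is a minimal NumPy sketch of the same Dense → BatchNormalization → Activation computation for a single mini-batch in training mode (the names a_prev, W, gamma, beta and eps are only illustrative, and the moving averages that Keras tracks for inference are left out):
import numpy as np

def dense_bn_tanh(a_prev, W, gamma, beta, eps=1e-6):
    # Dense() without a bias term: z[l] = a[l-1] . W[l]
    z = a_prev @ W
    # BatchNormalization(): normalize each unit over the mini-batch ...
    mean = z.mean(axis=0)
    var = z.var(axis=0)
    z_temp = (z - mean) / np.sqrt(var + eps)
    # ... then rescale and shift with the learned parameters gamma and beta
    z_norm = gamma * z_temp + beta
    # Activation('tanh'): add the non-linearity, a[l] = tanh(z_norm[l])
    return np.tanh(z_norm)

# toy usage: a mini-batch of 16 samples with 14 features feeding a 64-unit layer
a_prev = np.random.randn(16, 14)
W = np.random.randn(14, 64)
gamma = np.ones(64)   # initialized to 1, learned during training
beta = np.zeros(64)   # initialized to 0, learned during training
a = dense_bn_tanh(a_prev, W, gamma, beta)   # shape (16, 64)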