If I want to use the BatchNormalization function in Keras, do I only need to call it once at the beginning?

I read the documentation for it: http://keras.io/layers/normalization/

I don't see where I'm supposed to call it. Below is the code where I'm trying to use it:

model = Sequential()
keras.layers.normalization.BatchNormalization(epsilon=1e-06, mode=0, momentum=0.9, weights=None)
model.add(Dense(64, input_dim=14, init='uniform'))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(64, init='uniform'))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(2, init='uniform'))
model.add(Activation('softmax'))

sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy', optimizer=sgd)
model.fit(X_train, y_train, nb_epoch=20, batch_size=16, show_accuracy=True, validation_split=0.2, verbose = 2)

I ask because if I run the code with the second line that includes the batch normalization, and if I run it without that second line, I get similar outputs. So either I'm not calling the function in the right place, or I guess it doesn't make much of a difference.


Current answer

It is just another type of layer, so you should add it as a layer at the appropriate place in your model:

model.add(keras.layers.normalization.BatchNormalization())

For an example, see: https://github.com/fchollet/keras/blob/master/examples/kaggle_otto_nn.py
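
Applied to the code in the question, that means adding the layer to the model rather than just instantiating it. A minimal sketch, reusing the question's own layer sizes:

model = Sequential()
model.add(Dense(64, input_dim=14, init='uniform'))
model.add(keras.layers.normalization.BatchNormalization())  # added to the model, not merely constructed
model.add(Activation('tanh'))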

Other answers

Keras now supports the use_bias=False option, so we can save some computation by writing the model like this:

model.add(Dense(64, use_bias=False))
model.add(BatchNormalization(axis=bn_axis))
model.add(Activation('tanh'))

or

model.add(Convolution2D(64, 3, 3, use_bias=False))
model.add(BatchNormalization(axis=bn_axis))
model.add(Activation('relu'))
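
As a side note on why the bias becomes redundant: the BatchNormalization layer's beta parameter already provides a learned per-feature offset. Here is a minimal sketch (assuming a Keras 2-style API, where use_bias is available; the parameter counts in the comments follow from 14 inputs and 64 units) that makes the saving visible via model.summary():

from keras.models import Sequential
from keras.layers import Dense, BatchNormalization, Activation

with_bias = Sequential([
    Dense(64, input_dim=14),                   # 14*64 weights + 64 biases = 960 params
    BatchNormalization(),                      # gamma, beta, moving mean/var = 256 params
    Activation('tanh'),
])

without_bias = Sequential([
    Dense(64, input_dim=14, use_bias=False),   # 14*64 weights = 896 params
    BatchNormalization(),                      # beta already supplies the per-feature offset
    Activation('tanh'),
])

with_bias.summary()
without_bias.summary()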

This thread is misleading. I tried commenting on Lucas Ramadan's answer, but I don't have the right privileges yet, so I'm putting this here.

Batch normalization works best after the activation function, and here is why: it was developed to prevent internal covariate shift. Internal covariate shift occurs when the distribution of the activations of a layer shifts significantly throughout training. Batch normalization is used so that the distribution of the inputs (and these inputs are literally the result of an activation function) to a specific layer doesn't change over time due to parameter updates from each batch (or at least, allows it to change in an advantageous way). It uses batch statistics to do the normalizing, and then uses the batch normalization parameters (gamma and beta in the original paper) "to make sure that the transformation inserted in the network can represent the identity transform" (quote from original paper). But the point is that we're trying to normalize the inputs to a layer, so it should always go immediately before the next layer in the network. Whether or not that's after an activation function is dependent on the architecture in question.
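
A minimal sketch of that "after the activation" placement, written with the same older Keras API as the rest of this thread and the question's layer sizes (whether it actually helps is architecture-dependent, as noted above):

model = Sequential()
model.add(Dense(64, input_dim=14, init='uniform'))
model.add(Activation('tanh'))
model.add(BatchNormalization())   # normalizes the activation's output, i.e. the input to the next layer
model.add(Dropout(0.5))
model.add(Dense(64, init='uniform'))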

To answer this question in a bit more detail: as Pavel said, batch normalization is just another layer, so you can use it as such to create your desired network architecture.

The general use case is to use BN between the linear and non-linear layers in your network, because it normalizes the input to your activation function, so that you're centered in the linear section of the activation function (such as Sigmoid). There's a small discussion of it here.

In your case above, this might look like:


# imports (so the example is self-contained)
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.layers.normalization import BatchNormalization
from keras.optimizers import SGD

# instantiate model
model = Sequential()

# we can think of this chunk as the input layer
model.add(Dense(64, input_dim=14, init='uniform'))
model.add(BatchNormalization())
model.add(Activation('tanh'))
model.add(Dropout(0.5))

# we can think of this chunk as the hidden layer    
model.add(Dense(64, init='uniform'))
model.add(BatchNormalization())
model.add(Activation('tanh'))
model.add(Dropout(0.5))

# we can think of this chunk as the output layer
model.add(Dense(2, init='uniform'))
model.add(BatchNormalization())
model.add(Activation('softmax'))

# setting up the optimization of our weights 
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy', optimizer=sgd)

# running the fitting
model.fit(X_train, y_train, nb_epoch=20, batch_size=16, show_accuracy=True, validation_split=0.2, verbose = 2)

Hope this clears things up a bit more.

Adding to the debate about whether batch normalization should be applied before or after the non-linear activation:

In addition to the original paper, which uses batch normalization before the activation, Bengio's book Deep Learning, section 8.7.1, gives some reasoning for why applying batch normalization after the activation (or directly before the input to the next layer) may cause some problems:

It is natural to wonder whether we should apply batch normalization to the input X, or to the transformed value XW+b. Ioffe and Szegedy (2015) recommend the latter. More specifically, XW+b should be replaced by a normalized version of XW. The bias term should be omitted because it becomes redundant with the β parameter applied by the batch normalization reparameterization. The input to a layer is usually the output of a nonlinear activation function such as the rectified linear function in a previous layer. The statistics of the input are thus more non-Gaussian and less amenable to standardization by linear operations.

In other words, if we use a ReLU activation, all negative values are mapped to zero. This will likely result in a mean that is already very close to zero, but the distribution of the remaining data will be heavily skewed to the right. Trying to normalize that data into a nice bell curve probably won't give the best results. For activations outside the ReLU family this may not be as big of an issue.
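
To make that concrete, here is a small sketch (assuming NumPy is available) of what a ReLU does to roughly zero-mean Gaussian pre-activations:

import numpy as np

rng = np.random.default_rng(0)
pre = rng.standard_normal(100000)   # symmetric, mean ~0.0, std ~1.0
post = np.maximum(pre, 0.0)         # ReLU maps every negative value to exactly 0

print(pre.mean(), pre.std())        # ~0.00, ~1.00
print(post.mean(), post.std())      # ~0.40, ~0.58 -- half the mass is piled up at zero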

Keep in mind that some models get better results when batch normalization is used after the activation, while others get their best results with batch normalization before the activation. It is best to test your model with both configurations, and if batch normalization after the activation gives a significant decrease in validation loss, use that configuration instead.
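
A rough sketch of how such a comparison might look, reusing the older Keras API and the X_train / y_train data from the question (the builder function and its flag are only illustrative):

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.layers.normalization import BatchNormalization
from keras.optimizers import SGD

def build_model(bn_before_activation=True):
    model = Sequential()
    model.add(Dense(64, input_dim=14, init='uniform'))
    if bn_before_activation:
        model.add(BatchNormalization())
        model.add(Activation('tanh'))
    else:
        model.add(Activation('tanh'))
        model.add(BatchNormalization())
    model.add(Dropout(0.5))
    model.add(Dense(2, init='uniform'))
    model.add(Activation('softmax'))
    sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
    model.compile(loss='binary_crossentropy', optimizer=sgd)
    return model

# train each variant the same way and compare the validation losses
for flag in (True, False):
    model = build_model(bn_before_activation=flag)
    model.fit(X_train, y_train, nb_epoch=20, batch_size=16,
              validation_split=0.2, verbose=2)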