What is the difference between 'SAME' and 'VALID' padding in tf.nn.max_pool of TensorFlow?

In my opinion, 'VALID' means there will be no zero padding outside the edges when we do max pool.

According to A guide to convolution arithmetic for deep learning, there is no padding in the pool operator, i.e. it just uses 'VALID' of TensorFlow. But what is 'SAME' padding of max pool in TensorFlow?
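For concreteness, this is the kind of minimal shape check I have in mind (just a sketch with an arbitrary 5x5 input and a 2x2 pool; the numbers are only for illustration):

import numpy as np
import tensorflow as tf

# an arbitrary 1x5x5x1 input (batch, height, width, channels)
x = tf.constant(np.arange(25, dtype=np.float32).reshape(1, 5, 5, 1))

# 2x2 max pool with stride 2 under both padding modes
valid = tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
same = tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

print(valid.shape)  # (1, 2, 2, 1) -> ceil((5 - 2 + 1) / 2) = 2
print(same.shape)   # (1, 3, 3, 1) -> ceil(5 / 2) = 3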


Current answer

I will quote the answer from the official TensorFlow docs https://www.tensorflow.org/api_guides/python/nn#Convolution. For the 'SAME' padding, the output height and width are computed as:

out_height = ceil(float(in_height) / float(strides[1]))
out_width  = ceil(float(in_width) / float(strides[2]))

And the padding on the top, bottom, left and right is computed as:

pad_along_height = max((out_height - 1) * strides[1] +
                    filter_height - in_height, 0)
pad_along_width = max((out_width - 1) * strides[2] +
                   filter_width - in_width, 0)
pad_top = pad_along_height // 2
pad_bottom = pad_along_height - pad_top
pad_left = pad_along_width // 2
pad_right = pad_along_width - pad_left
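As a quick sanity check, here is a small sketch that plugs assumed numbers into these formulas (in_height = 5, filter_height = 3, stride = 2; these values are purely illustrative):

import math

in_height, filter_height, stride = 5, 3, 2  # assumed example values

out_height = math.ceil(in_height / stride)                                        # ceil(5/2) = 3
pad_along_height = max((out_height - 1) * stride + filter_height - in_height, 0)  # 2
pad_top = pad_along_height // 2                                                   # 1
pad_bottom = pad_along_height - pad_top                                           # 1

print(out_height, pad_along_height, pad_top, pad_bottom)  # 3 2 1 1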

For the 'VALID' padding, the output height and width are computed as:

out_height = ceil(float(in_height - filter_height + 1) / float(strides[1]))
out_width  = ceil(float(in_width - filter_width + 1) / float(strides[2]))

The padding values are always zero.
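The same kind of check for 'VALID', with the same assumed values (in_height = 5, filter_height = 3, stride = 2):

import math

in_height, filter_height, stride = 5, 3, 2  # assumed example values

out_height = math.ceil((in_height - filter_height + 1) / stride)  # ceil(3/2) = 2
print(out_height)  # 2, and nothing is padded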

Other answers

In summary, 'VALID' padding means no padding. The output size of the convolutional layer shrinks depending on the input size and kernel size.

On the contrary, 'SAME' padding means using padding. When the stride is set to 1, the output size of the convolutional layer stays the same as the input size, because a certain amount of '0-border' is appended around the input data when computing the convolution.

Hopefully this intuitive description helps.
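To make the '0-border' idea concrete, here is a small NumPy sketch (not TensorFlow's internal code, just an illustration of adding a one-pixel zero border around a 3x3 input):

import numpy as np

x = np.arange(1, 10).reshape(3, 3)      # a 3x3 input
padded = np.pad(x, 1, mode='constant')  # append a one-pixel 0-border on every side

print(padded)
# [[0 0 0 0 0]
#  [0 1 2 3 0]
#  [0 4 5 6 0]
#  [0 7 8 9 0]
#  [0 0 0 0 0]]

Sliding a 3x3 kernel with stride 1 over this padded 5x5 array produces a 3x3 output, i.e. the same size as the original input.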

The TensorFlow Convolution example gives an overview of the difference between SAME and VALID:

For the SAME padding, the output height and width are computed as:

out_height = ceil(float(in_height) / float(strides[1]))
out_width  = ceil(float(in_width) / float(strides[2]))

And

For the VALID padding, the output height and width are computed as:

out_height = ceil(float(in_height - filter_height + 1) / float(strides[1]))
out_width  = ceil(float(in_width - filter_width + 1) / float(strides[2]))

Based on the explanation here and following up on Tristan's answer, I usually use these quick functions for sanity checks.

import numpy as np

# a function to help us stay clean
def getPaddings(pad_along_height, pad_along_width):
    # if even.. easy..
    if pad_along_height % 2 == 0:
        pad_top = pad_along_height / 2
        pad_bottom = pad_top
    # if odd, the bottom gets the extra pixel
    else:
        pad_top = np.floor(pad_along_height / 2)
        pad_bottom = np.floor(pad_along_height / 2) + 1
    # check if width padding is odd or even
    # if even.. easy..
    if pad_along_width % 2 == 0:
        pad_left = pad_along_width / 2
        pad_right = pad_left
    # if odd, the right gets the extra pixel
    else:
        pad_left = np.floor(pad_along_width / 2)
        pad_right = np.floor(pad_along_width / 2) + 1
    return pad_top, pad_bottom, pad_left, pad_right

# strides [image index, y, x, depth]
# padding 'SAME' or 'VALID'
# bottom and right sides always get the one additional padded pixel (if padding is odd)
def getOutputDim(inputWidth, inputHeight, filterWidth, filterHeight, strides, padding):
    if padding == 'SAME':
        out_height = np.ceil(float(inputHeight) / float(strides[1]))
        out_width = np.ceil(float(inputWidth) / float(strides[2]))

        pad_along_height = ((out_height - 1) * strides[1] + filterHeight - inputHeight)
        pad_along_width = ((out_width - 1) * strides[2] + filterWidth - inputWidth)

        # now get padding
        pad_top, pad_bottom, pad_left, pad_right = getPaddings(pad_along_height, pad_along_width)

        print('output height', out_height)
        print('output width', out_width)
        print('total pad along height', pad_along_height)
        print('total pad along width', pad_along_width)
        print('pad at top', pad_top)
        print('pad at bottom', pad_bottom)
        print('pad at left', pad_left)
        print('pad at right', pad_right)

    elif padding == 'VALID':
        out_height = np.ceil(float(inputHeight - filterHeight + 1) / float(strides[1]))
        out_width = np.ceil(float(inputWidth - filterWidth + 1) / float(strides[2]))

        print('output height', out_height)
        print('output width', out_width)
        print('no padding')


# use like so
getOutputDim(80, 80, 4, 4, [1, 1, 1, 1], 'SAME')

To add to YvesgereY's answer, I found this visualization extremely helpful:

Padding 'valid' is the first figure. The filter window stays inside the image.

Padding 'same' is the third figure. The output is the same size.


Found it in this article.

Visualization credit: vdumoulin@GitHub

Padding on/off. Determines the effective size of your input.

VALID: No padding. Convolution etc. ops are only performed at locations that are "valid", i.e. not too close to the borders of your tensor. With a kernel of 3x3 and image of 10x10, you would be performing convolution on the 8x8 area inside the borders.

SAME: Padding is provided. Whenever your operation references a neighborhood (no matter how big), zero values are provided when that neighborhood extends outside the original tensor to allow that operation to work also on border values. With a kernel of 3x3 and image of 10x10, you would be performing convolution on the full 10x10 area.
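A quick way to verify the 10x10 / 3x3 numbers above (a minimal sketch; the random image and the kernel of ones are just assumptions for the demo):

import numpy as np
import tensorflow as tf

x = tf.constant(np.random.rand(1, 10, 10, 1).astype(np.float32))  # a 10x10 image
k = tf.constant(np.ones((3, 3, 1, 1), dtype=np.float32))          # a 3x3 kernel

valid = tf.nn.conv2d(x, k, strides=[1, 1, 1, 1], padding='VALID')
same = tf.nn.conv2d(x, k, strides=[1, 1, 1, 1], padding='SAME')

print(valid.shape)  # (1, 8, 8, 1): only the "valid" 8x8 area inside the borders
print(same.shape)   # (1, 10, 10, 1): zero padding lets the kernel cover the full image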