What does .view() do to the tensor x? What do negative values mean?

x = x.view(-1, 16 * 5 * 5)

Current answer

What does the parameter -1 mean?

You can read -1 as a dynamically inferred size, or "whatever fits". Because of that, there can be only one -1 argument in a view() call.

If you write x.view(-1, 1), the output tensor will have shape [anything, 1], where "anything" depends on the number of elements in x. For example:

import torch
x = torch.tensor([1, 2, 3, 4])
print(x,x.shape)
print("...")
print(x.view(-1,1), x.view(-1,1).shape)
print(x.view(1,-1), x.view(1,-1).shape)

This will output:

tensor([1, 2, 3, 4]) torch.Size([4])
...
tensor([[1],
        [2],
        [3],
        [4]]) torch.Size([4, 1])
tensor([[1, 2, 3, 4]]) torch.Size([1, 4])

Other answers

Let's work through some examples, from simpler to more difficult.

The view method returns a tensor with the same data as the self tensor (which means that the returned tensor has the same number of elements), but with a different shape. For example:

    a = torch.arange(1, 17)  # a's shape is (16,)

    a.view(4, 4)  # output below
      1   2   3   4
      5   6   7   8
      9  10  11  12
     13  14  15  16
    [torch.FloatTensor of size 4x4]

    a.view(2, 2, 4)  # output below
    (0 ,.,.) =
      1   2   3   4
      5   6   7   8

    (1 ,.,.) =
      9  10  11  12
     13  14  15  16
    [torch.FloatTensor of size 2x2x4]

Assuming that -1 is not one of the parameters, when you multiply them together, the result must be equal to the number of elements in the tensor. If you do a.view(3, 3), it will raise a RuntimeError because shape (3 x 3) is invalid for an input with 16 elements. In other words: 3 x 3 equals 9, not 16.

You can use -1 as one of the parameters that you pass to the function, but only once. All that happens is that the method will do the math for you on how to fill that dimension. For example, a.view(2, -1, 4) is equivalent to a.view(2, 2, 4) [16 / (2 x 4) = 2].

Notice that the returned tensor shares the same data. If you make a change in the "view" you are changing the original tensor's data:

    b = a.view(4, 4)
    b[0, 2] = 2
    a[2] == 3.0  # False

Now, for a more complex use case. The documentation says that each new view dimension must either be a subspace of an original dimension, or only span dimensions d, d + 1, ..., d + k that satisfy the following contiguity-like condition: for all i = 0, ..., k - 1, stride[i] = stride[i + 1] x size[i + 1]. Otherwise, contiguous() needs to be called before the tensor can be viewed. For example:

    a = torch.rand(5, 4, 3, 2)   # size (5, 4, 3, 2)
    a_t = a.permute(0, 2, 3, 1)  # size (5, 3, 2, 4)

    # The commented line below would raise a RuntimeError, because one dimension
    # spans across two contiguous subspaces
    # a_t.view(-1, 4)

    # instead do:
    a_t.contiguous().view(-1, 4)

    # To see why the first one does not work and the second does,
    # compare a.stride() and a_t.stride()
    a.stride()    # (24, 6, 2, 1)
    a_t.stride()  # (24, 2, 1, 6)

Notice that for a_t, stride[0] != stride[1] x size[1], since 24 != 2 x 3.

view() reshapes the tensor without copying memory, similar to numpy's reshape().

Given a tensor a with 16 elements:

import torch
a = torch.arange(1., 17.)  # torch.range(1, 16) is deprecated; this gives the same 16 floats, 1.0 through 16.0

To reshape this tensor and make it a 4 x 4 tensor, use:

a = a.view(4, 4)

Now a is a 4 x 4 tensor. Note that after reshaping, the total number of elements must stay the same, so reshaping a into a 3 x 5 tensor would not work (3 x 5 = 15, not 16).
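As a quick sketch of what happens when the element counts do not match (the exact error text may differ between PyTorch versions):

    import torch

    a = torch.arange(1., 17.)   # 16 elements
    try:
        a.view(3, 5)            # 3 * 5 = 15, but a has 16 elements
    except RuntimeError as e:
        print(e)                # e.g. shape '[3, 5]' is invalid for input of size 16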

What does the parameter -1 mean?

If there is ever a situation where you don't know how many rows you want but are sure of the number of columns, you can pass -1 for the rows. (Note that this extends to tensors with more dimensions; only one of the axis values can be -1.) It is a way of telling the library: "give me a tensor with this many columns, and you compute the appropriate number of rows needed to make that happen."

This can be seen in the model definition code. After the line x = self.pool(F.relu(self.conv2(x))) in the forward function, you end up with a 16-channel feature map. You have to flatten it before passing it to the fully connected layer, so you tell PyTorch to reshape the tensor you obtained so that it has a specific number of columns, and let it work out the number of rows by itself.
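As a rough sketch of that pattern (the layer sizes below are assumptions, loosely following the PyTorch CIFAR-10 tutorial network that the question's snippet appears to come from, not necessarily the exact model):

    import torch.nn as nn
    import torch.nn.functional as F

    class Net(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv1 = nn.Conv2d(3, 6, 5)
            self.pool = nn.MaxPool2d(2, 2)
            self.conv2 = nn.Conv2d(6, 16, 5)
            self.fc1 = nn.Linear(16 * 5 * 5, 120)

        def forward(self, x):                      # x: (batch, 3, 32, 32)
            x = self.pool(F.relu(self.conv1(x)))   # -> (batch, 6, 14, 14)
            x = self.pool(F.relu(self.conv2(x)))   # -> (batch, 16, 5, 5)
            x = x.view(-1, 16 * 5 * 5)             # flatten to (batch, 400); -1 infers the batch size
            return self.fc1(x)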

weights.reshape(a, b) will return a new tensor with the same data as weights, with size (a, b); it copies the data to another part of memory when a plain view is not possible.

weights.resize_(a, b) returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, the new elements will be uninitialized in memory.

weights.view(a, b) will return a new tensor with the same data as weights, with size (a, b).
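As a minimal sketch of the difference (the name weights is just carried over from the explanation above, and the values are made up):

    import torch

    weights = torch.arange(1., 7.)   # tensor([1., 2., 3., 4., 5., 6.])

    v = weights.view(2, 3)           # shares storage with weights
    v[0, 0] = 100.
    print(weights[0])                # tensor(100.) -- the change shows up in weights

    t = weights.view(2, 3).t()       # transposed, so no longer contiguous
    # t.view(6) would raise a RuntimeError here; reshape falls back to copying instead
    r = t.reshape(6)
    print(r)                         # tensor([100., 4., 2., 5., 3., 6.])

    weights.resize_(2, 2)            # in place: keeps only the first 4 elements
    print(weights)                   # tensor([[100., 2.], [3., 4.]])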

Let's understand view through the following examples:

    a = torch.arange(1., 17.)   # torch.range(1, 16) is deprecated; this gives the same 16 floats

print(a)

    tensor([ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9., 10., 11., 12., 13., 14.,
            15., 16.])

print(a.view(-1,2))

    tensor([[ 1.,  2.],
            [ 3.,  4.],
            [ 5.,  6.],
            [ 7.,  8.],
            [ 9., 10.],
            [11., 12.],
            [13., 14.],
            [15., 16.]])

print(a.view(2,-1,4))   #3d tensor

    tensor([[[ 1.,  2.,  3.,  4.],
             [ 5.,  6.,  7.,  8.]],

            [[ 9., 10., 11., 12.],
             [13., 14., 15., 16.]]])

print(a.view(2,-1,2))

    tensor([[[ 1.,  2.],
             [ 3.,  4.],
             [ 5.,  6.],
             [ 7.,  8.]],

            [[ 9., 10.],
             [11., 12.],
             [13., 14.],
             [15., 16.]]])

print(a.view(4,-1,2))

    tensor([[[ 1.,  2.],
             [ 3.,  4.]],

            [[ 5.,  6.],
             [ 7.,  8.]],

            [[ 9., 10.],
             [11., 12.]],

            [[13., 14.],
             [15., 16.]]])

Passing -1 as one of the size values is simply an easy way of computing, say, x when we already know y and z (or vice versa) in the 3-D case, and likewise of computing x when we know y in the 2-D case.
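In other words, the inferred dimension is just the total number of elements divided by the product of the sizes you did specify. A minimal sketch of that arithmetic:

    import torch

    a = torch.arange(1., 17.)          # 16 elements
    print(a.view(2, -1, 4).shape)      # torch.Size([2, 2, 4]); inferred 16 / (2 * 4) = 2
    print(a.view(-1, 8).shape)         # torch.Size([2, 8]);    inferred 16 / 8 = 2
    # a.view(-1, -1, 4)                # RuntimeError: only one dimension can be inferred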
