How do I print a summary of a model in PyTorch, like model.summary() does in Keras?
Model Summary:
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 1, 15, 27) 0
____________________________________________________________________________________________________
convolution2d_1 (Convolution2D) (None, 8, 15, 27) 872 input_1[0][0]
____________________________________________________________________________________________________
maxpooling2d_1 (MaxPooling2D) (None, 8, 7, 27) 0 convolution2d_1[0][0]
____________________________________________________________________________________________________
flatten_1 (Flatten) (None, 1512) 0 maxpooling2d_1[0][0]
____________________________________________________________________________________________________
dense_1 (Dense) (None, 1) 1513 flatten_1[0][0]
====================================================================================================
Total params: 2,385
Trainable params: 2,385
Non-trainable params: 0
The torchinfo package (formerly torchsummary) produces output similar to Keras's, for a given input shape:
from torchinfo import summary
model = ConvNet()
batch_size = 16
summary(model, input_size=(batch_size, 1, 28, 28))
==========================================================================================
Layer (type:depth-idx) Output Shape Param #
==========================================================================================
├─Conv2d (conv1): 1-1                    [16, 10, 24, 24]          260
├─Conv2d (conv2): 1-2                    [16, 20, 8, 8]            5,020
├─Dropout2d (conv2_drop): 1-3            [16, 20, 8, 8]            --
├─Linear (fc1): 1-4                      [16, 50]                  16,050
├─Linear (fc2): 1-5                      [16, 10]                  510
==========================================================================================
Total params: 21,840
Trainable params: 21,840
Non-trainable params: 0
Total mult-adds (M): 7.69
==========================================================================================
Input size (MB): 0.05
Forward/backward pass size (MB): 0.91
Params size (MB): 0.09
Estimated Total Size (MB): 1.05
==========================================================================================
Notes:
Torchinfo provides information complementary to what print(your_model) gives you in PyTorch, similar to TensorFlow's model.summary().
Unlike Keras, PyTorch uses a dynamic computational graph that can adapt to any compatible input shape across multiple calls, e.g. any sufficiently large image size for a fully convolutional network.
As such, it cannot report an inherent set of input/output shapes for each layer, since these are input-dependent, which is why the package requires you to specify the input dimensions.
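For reference, the summary above matches the classic MNIST ConvNet from the PyTorch examples; a sketch of a matching definition (the layer names conv1, conv2, conv2_drop, fc1, fc2 correspond to the rows in the table):

```python
import torch.nn as nn
import torch.nn.functional as F


class ConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)    # 1*10*5*5 + 10  = 260 params
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)   # 10*20*5*5 + 20 = 5,020 params
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)                   # 320*50 + 50 = 16,050 params
        self.fc2 = nn.Linear(50, 10)                    # 50*10 + 10  = 510 params

    def forward(self, x):
        # 28x28 -> conv (24x24) -> pool (12x12)
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        # 12x12 -> conv (8x8) -> pool (4x4)
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)  # flatten 20 channels * 4 * 4
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        return self.fc2(x)
```

The per-layer parameter counts in the comments add up to the 21,840 total shown in the summary.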
Alternatively, you can use the older torchsummary package:
from torchsummary import summary
You can specify the device:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
Then create your network; if you are working with the MNIST dataset, the following will print the summary:
model = Network().to(device)
summary(model, (1, 28, 28))
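If you only need the parameter counts (Total / Trainable) and not the per-layer shapes, you can compute them directly from model.parameters() without any extra package. A minimal sketch, using a small stand-in model for illustration:

```python
import torch.nn as nn


def count_params(model: nn.Module):
    """Return (total, trainable) parameter counts for a module."""
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return total, trainable


# Stand-in model (hypothetical, just for demonstration):
# Conv2d: 1*8*3*3 + 8 = 80 params; Linear: 8*26*26*1 + 1 = 5,409 params
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3),
    nn.Flatten(),
    nn.Linear(8 * 26 * 26, 1),
)
total, trainable = count_params(model)
print(f"Total params: {total:,}  Trainable params: {trainable:,}")
```

Freezing layers (setting requires_grad = False) will make the trainable count drop below the total, mirroring the Non-trainable params line in the summaries above.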