How do I initialize the weights and biases of a network (e.g., with He or Xavier initialization)?


Current answer

Apologies for the late reply; I hope my answer helps.

Initialize the weights with a normal distribution:

torch.nn.init.normal_(tensor, mean=0, std=1)

Or with a constant value:

torch.nn.init.constant_(tensor, value)

Or with a uniform distribution:

torch.nn.init.uniform_(tensor, a=0, b=1) # a: lower_bound, b: upper_bound

You can use the other methods in torch.nn.init to initialize tensors as well.
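Since the question asks about He and Xavier initialization specifically, here is a minimal sketch using the corresponding torch.nn.init functions (the tensor shape is illustrative):

```python
import torch
import torch.nn as nn

# example weight tensor of shape (out_features, in_features)
w = torch.empty(64, 128)

# Xavier (Glorot) initialization
nn.init.xavier_uniform_(w, gain=1.0)
nn.init.xavier_normal_(w, gain=1.0)

# He (Kaiming) initialization, suited to ReLU networks
nn.init.kaiming_uniform_(w, mode='fan_in', nonlinearity='relu')
nn.init.kaiming_normal_(w, mode='fan_in', nonlinearity='relu')
```

All of these operate in place (note the trailing underscore) and return the tensor they modified.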

Other answers

Iterating over parameters

If you cannot use apply, for instance because the model does not implement Sequential directly:

Same initializer for all parameters

# see UNet at https://github.com/milesial/Pytorch-UNet/tree/master/unet


def init_all(model, init_func, *params, **kwargs):
    for p in model.parameters():
        init_func(p, *params, **kwargs)

model = UNet(3, 10)
init_all(model, torch.nn.init.normal_, mean=0., std=1) 
# or
init_all(model, torch.nn.init.constant_, 1.) 

Depending on the shape

def init_all(model, init_funcs):
    for p in model.parameters():
        init_func = init_funcs.get(len(p.shape), init_funcs["default"])
        init_func(p)

model = UNet(3, 10)
init_funcs = {
    1: lambda x: torch.nn.init.normal_(x, mean=0., std=1.), # can be bias
    2: lambda x: torch.nn.init.xavier_normal_(x, gain=1.), # can be weight
    3: lambda x: torch.nn.init.xavier_uniform_(x, gain=1.), # can be conv1D filter
    4: lambda x: torch.nn.init.xavier_uniform_(x, gain=1.), # can be conv2D filter
    "default": lambda x: torch.nn.init.constant(x, 1.), # everything else
}

init_all(model, init_funcs)

You can try torch.nn.init.constant_(x, len(x.shape)) to check that parameters are initialized according to their shape:

init_funcs = {
    "default": lambda x: torch.nn.init.constant_(x, len(x.shape))
}
import torch.nn as nn

in_features, h_size = 8, 16  # example sizes (not defined in the original answer)

# a simple network
rand_net = nn.Sequential(nn.Linear(in_features, h_size),
                         nn.BatchNorm1d(h_size),
                         nn.ReLU(),
                         nn.Linear(h_size, h_size),
                         nn.BatchNorm1d(h_size),
                         nn.ReLU(),
                         nn.Linear(h_size, 1),
                         nn.ReLU())

# initialization function, first checks the module type,
# then applies the desired changes to the weights
def init_uniform(m):
    if type(m) == nn.Linear:
        nn.init.uniform_(m.weight)

# use the modules apply function to recursively apply the initialization
rand_net.apply(init_uniform)

A better way is to pass in your whole model:

import torch.nn as nn
def initialize_weights(model):
    # Initializes weights according to the DCGAN paper
    for m in model.modules():
        if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d, nn.BatchNorm2d)):
            nn.init.normal_(m.weight.data, 0.0, 0.02)
        # add another elif branch if you also want to initialize linear layers
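If you do want linear layers covered as well, the extra branch could look like the sketch below. The 0.0/0.02 constants follow the DCGAN recipe above; extending the same constants to nn.Linear is my reading of what the comment intends, not something the DCGAN paper prescribes:

```python
import torch.nn as nn

def initialize_weights(model):
    # DCGAN-style init, extended with an elif for linear layers
    for m in model.modules():
        if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d, nn.BatchNorm2d)):
            nn.init.normal_(m.weight.data, 0.0, 0.02)
        elif isinstance(m, nn.Linear):
            nn.init.normal_(m.weight.data, 0.0, 0.02)
            nn.init.constant_(m.bias.data, 0.0)

net = nn.Sequential(nn.Linear(4, 4), nn.Conv2d(1, 1, 3))
initialize_weights(net)
```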

If you see a deprecation warning (@Fábio Perez)…

def init_weights(m):
    if type(m) == nn.Linear:
        torch.nn.init.xavier_uniform_(m.weight)
        m.bias.data.fill_(0.01)

net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
net.apply(init_weights)
