How to perform model weight regularization in PyTorch?

In PyTorch, you can use the parameters() method of torch.nn.Module to access a model's weight parameters and then apply regularization techniques to constrain them. The example below demonstrates how to apply L2 regularization to a model's weights by adding a penalty term to the loss.

import torch
import torch.nn as nn
import torch.optim as optim

# Define a simple neural network model
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(10, 5)
        self.fc2 = nn.Linear(5, 1)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        return x

# Create a model instance
model = Net()

# Define the L2 regularization coefficient
l2_lambda = 0.01

# Define the optimizer and loss function
optimizer = optim.Adam(model.parameters(), lr=0.01)
criterion = nn.MSELoss()

# Training loop (random tensors stand in for real data)
for epoch in range(100):
    optimizer.zero_grad()
    
    # Forward pass
    output = model(torch.randn(1, 10))
    loss = criterion(output, torch.randn(1, 1))
    
    # Add the L2 regularization term: the sum of squared parameter values
    # (this loop also penalizes biases; filter via named_parameters() to exclude them)
    l2_reg = torch.tensor(0.)
    for param in model.parameters():
        l2_reg = l2_reg + param.pow(2).sum()

    loss = loss + l2_lambda * l2_reg
    
    # Backward pass and parameter update
    loss.backward()
    optimizer.step()

In the example above, we first define a simple neural network model called Net and create an instance of it. In the training loop, optimizer.zero_grad() clears the gradients from the previous iteration, followed by the forward pass and the loss computation. We then compute the sum of the squared values of all model parameters, scale it by l2_lambda, and add it to the loss as the L2 regularization term. Finally, we run backpropagation and update the model parameters with optimizer.step().
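
Note that for a plain L2 penalty you usually do not need to write this loop by hand: PyTorch optimizers accept a weight_decay argument that applies L2 regularization during the parameter update. Below is a minimal sketch of the same setup using this built-in option; the hyperparameter values are illustrative, and the effective penalty may differ from the manual version by a constant scaling factor.

# L2 regularization via the optimizer's built-in weight_decay argument
optimizer = optim.Adam(model.parameters(), lr=0.01, weight_decay=0.01)

for epoch in range(100):
    optimizer.zero_grad()
    output = model(torch.randn(1, 10))
    loss = criterion(output, torch.randn(1, 1))  # no manual penalty term needed
    loss.backward()
    optimizer.step()  # weight decay is applied here, inside the update

For Adam in particular, weight_decay is coupled with the adaptive learning rates; torch.optim.AdamW implements decoupled weight decay, which is often preferred in practice.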
