How to use GPU acceleration in PyTorch?
GPU acceleration in PyTorch can be enabled through the following steps:
- Check whether a GPU device is available:
import torch
if torch.cuda.is_available():
    print("GPU is available!")
else:
    print("GPU is not available.")
- Move the Tensor object to the GPU device:
# Create a Tensor object
x = torch.randn(3, 3)
# Move the Tensor object to the GPU device
x = x.to('cuda')
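If you later need the data back on the CPU (for example, to convert it to NumPy), move it back with .cpu():
# Move the tensor back to the CPU, e.g. before calling .numpy()
y = x.cpu()
print(y.device)  # cpu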
- Define the model, then move it to the GPU device:
import torch.nn as nn
# Define a simple neural network model
class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(10, 5)
        self.fc2 = nn.Linear(5, 2)

    def forward(self, x):
        x = self.fc1(x)
        x = self.fc2(x)
        return x

# Create a model instance and move it to the GPU device
model = SimpleNN()
model = model.to('cuda')
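As an optional sanity check, you can confirm that the model's parameters now live on the GPU:
# Each parameter tensor reports the device it is stored on
print(next(model.parameters()).device)  # expected: cuda:0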
- Use the GPU during training by moving each batch of inputs and labels onto it (a self-contained sketch follows the snippet below):
# Move the input data and labels to the GPU device
inputs = inputs.to('cuda')
labels = labels.to('cuda')

# Clear old gradients, then run the forward and backward passes on the GPU
optimizer.zero_grad()
outputs = model(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
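For reference, here is a minimal, self-contained sketch of one full training step that reuses the SimpleNN model defined above. The dummy batch, CrossEntropyLoss, and SGD optimizer are illustrative assumptions, not requirements:
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = SimpleNN().to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Dummy batch: 4 samples with 10 features each, and 4 class labels in {0, 1}
inputs = torch.randn(4, 10).to(device)
labels = torch.randint(0, 2, (4,)).to(device)

optimizer.zero_grad()              # clear gradients from the previous step
outputs = model(inputs)            # forward pass on the selected device
loss = criterion(outputs, labels)  # compute the loss
loss.backward()                    # backward pass computes gradients
optimizer.step()                   # update the model parameters
print(f"loss: {loss.item():.4f}")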
By following these steps, one can use GPU acceleration in PyTorch to increase the speed and efficiency of training models.