How do I implement GPU acceleration in PyTorch?
To enable GPU acceleration in PyTorch, first make sure you have installed a CUDA-enabled PyTorch build (the CPU-only build cannot use the GPU). Then you can run PyTorch code on the GPU by following these steps:
- Check for available GPU devices.
import torch

# Select the GPU if one is available, otherwise fall back to the CPU
if torch.cuda.is_available():
    device = torch.device("cuda")
    print("GPU is available")
else:
    device = torch.device("cpu")
    print("GPU is not available, using CPU instead")
- Move the model and tensors to the GPU device.
model = YourModel().to(device)                          # move the model's parameters to the device
input_tensor = torch.randn(1, 3, 224, 224).to(device)   # move the input tensor as well
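Note that .to(device) behaves slightly differently for modules and tensors: on a module it moves the parameters in place (and returns the module), while on a tensor it returns a new tensor on the target device. A quick sanity check:

print(next(model.parameters()).device)  # e.g. cuda:0 once the model has been moved
print(input_tensor.device)              # the GPU copy returned by .to(device)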
- Run training and inference on the GPU:
output = model(input_tensor)  # the forward pass now executes on the GPU
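For inference specifically, it is common to switch the model to eval mode and disable gradient tracking, which saves memory and time. A minimal sketch:

model.eval()             # put layers like dropout and batchnorm into eval behavior
with torch.no_grad():    # skip gradient bookkeeping during inference
    output = model(input_tensor)
print(output.shape)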
- Set up an optimizer over the model's parameters and execute a training step:
# Hypothetical loss function and target for illustration, assuming a classification model
loss_function = torch.nn.CrossEntropyLoss()
target = torch.tensor([0], device=device)   # example class label on the same device

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
optimizer.zero_grad()                 # clear gradients from the previous step
output = model(input_tensor)          # forward pass
loss = loss_function(output, target)  # loss is computed on the GPU
loss.backward()                       # backward pass computes gradients on the GPU
optimizer.step()                      # update the model's parameters
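In a real training loop, batches produced by a DataLoader start out on the CPU and must be moved to the GPU on each iteration. A hedged sketch, assuming a hypothetical toy dataset of (inputs, labels) pairs and reusing the model, optimizer, and loss_function from above:

from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy dataset: 100 random images with integer class labels in [0, 10)
dataset = TensorDataset(torch.randn(100, 3, 224, 224), torch.randint(0, 10, (100,)))
loader = DataLoader(dataset, batch_size=16, shuffle=True)

for inputs, labels in loader:
    inputs = inputs.to(device)   # move the batch to the GPU
    labels = labels.to(device)
    optimizer.zero_grad()
    loss = loss_function(model(inputs), labels)
    loss.backward()
    optimizer.step()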
By following the steps above, you can implement GPU acceleration in PyTorch and leverage the parallel computing power of GPUs to speed up model training and inference.