How to use a GPU to accelerate computation in PyTorch?

Accelerating computation with a GPU in PyTorch is straightforward; just follow these steps:

1. Check whether a GPU is available.

First, check whether a CUDA-capable GPU is available and select the device to use:

import torch

# Use the GPU if CUDA is available, otherwise fall back to the CPU
if torch.cuda.is_available():
    device = torch.device('cuda')
    print('GPU is available')
else:
    device = torch.device('cpu')
    print('No GPU available, using CPU instead')
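
If the machine has more than one GPU, you can also enumerate the visible devices. Here is a minimal sketch using torch.cuda.device_count() and torch.cuda.get_device_name():

import torch

# Print the index and name of every visible CUDA device
for i in range(torch.cuda.device_count()):
    print(f'cuda:{i} -> {torch.cuda.get_device_name(i)}')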

2. Move the model and data to the GPU.

Next, move the model and the data to the selected device. You can move the model to the GPU with the following code:

model = model.to(device)
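
For example, here is a minimal sketch with a small illustrative torch.nn model (the architecture is a placeholder, not part of the original example); after the call to .to(device), its parameters live on the selected device:

import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# A tiny model used purely for illustration
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model = model.to(device)

# All parameters now report the selected device
print(next(model.parameters()).device)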

To move a tensor to the GPU, use the same method on the data:

data = data.to(device)
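
For instance, here is a short sketch that creates an illustrative tensor on the CPU and moves it over:

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Tensors are created on the CPU by default; .to(device) returns a copy on the GPU
data = torch.randn(4, 10)
data = data.to(device)
print(data.device)  # prints cuda:0 when a GPU is available

Note that .to() on a tensor returns a new tensor rather than moving it in place, so the result must be assigned back, as above.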

3. Perform calculations on the GPU.

Once both the model and the data are on the GPU, subsequent computations run there automatically. During training, move each batch to the device inside the loop:

model.train()
for batch in data_loader:
    # Move each batch to the same device as the model
    inputs, targets = batch[0].to(device), batch[1].to(device)
    optimizer.zero_grad()   # clear gradients accumulated from the previous step
    outputs = model(inputs)   # forward pass runs on the GPU
    loss = loss_function(outputs, targets)
    loss.backward()   # backward pass
    optimizer.step()   # update the parameters
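
Putting the steps together, here is a self-contained minimal sketch; the model, synthetic data, and hyperparameters are illustrative placeholders:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Synthetic two-class data, purely for illustration
X = torch.randn(256, 10)
y = torch.randint(0, 2, (256,))
data_loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

# A small illustrative classifier, moved to the selected device
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_function = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for inputs, targets in data_loader:
        # Each batch must be moved to the same device as the model
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = loss_function(outputs, targets)
        loss.backward()
        optimizer.step()
    print(f'epoch {epoch}: loss {loss.item():.4f}')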

By following the above steps, you can accelerate computations in PyTorch using the GPU.
