Posit AI Blog: torch 0.10.0


We are happy to announce that torch v0.10.0 is now on CRAN. In this blog post we
highlight some of the changes that have been introduced in this version. You can
check the full changelog here.

Automatic Mixed Precision

Automatic Mixed Precision (AMP) is a technique that enables faster training of deep learning models, while maintaining model accuracy by using a combination of single-precision (FP32) and half-precision (FP16) floating-point formats.

To use automatic mixed precision with torch, wrap your forward pass in the with_autocast
context switcher; this allows torch to dispatch to alternative implementations of operations
that can run in half-precision. It's also generally recommended to scale the loss in order to
preserve small gradients, which would otherwise underflow to zero in half-precision.

Here’s a minimal example, omitting the data generation process. You can find more information in the amp article.

...
loss_fn <- nn_mse_loss()$cuda()
net <- make_model(in_size, out_size, num_layers)
opt <- optim_sgd(net$parameters, lr = 0.1)
scaler <- cuda_amp_grad_scaler()

for (epoch in seq_len(epochs)) {
  for (i in seq_along(data)) {
    # Run the forward pass and loss computation in mixed precision.
    with_autocast(device_type = "cuda", {
      output <- net(data[[i]])
      loss <- loss_fn(output, targets[[i]])
    })

    # Scale the loss before backpropagating, then let the scaler
    # unscale gradients, step the optimizer, and update the scale factor.
    scaler$scale(loss)$backward()
    scaler$step(opt)
    scaler$update()
    opt$zero_grad()
  }
}

In this example, using mixed precision led to a speedup of around 40%. The speedup is
even larger if you are just running inference, i.e., don’t need to scale the loss.

Pre-built binaries

With pre-built binaries, installing torch gets a lot easier and faster, especially if
you are on Linux and use the CUDA-enabled builds. The pre-built binaries include
LibLantern and LibTorch, both external dependencies necessary to run torch. Additionally,
if you install the CUDA-enabled builds, the CUDA and
cuDNN libraries are already included.

To install the pre-built binaries, you can use:
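A sketch of the installation call, based on the approach described in the torch installation guide: you point R at a torch-specific package repository in addition to CRAN, then install as usual. The repository URL and the `kind` value (the CUDA variant) are assumptions here and may differ between releases; check the installation guide for the values matching your setup.

```r
options(timeout = 600) # the binaries are large, so allow a longer download

kind <- "cu118" # CUDA variant; use "cpu" for the CPU-only build
version <- available.packages()["torch", "Version"]

# Add the torch binaries repository (URL is illustrative) ahead of CRAN.
options(repos = c(
  torch = sprintf("https://torch-cdn.mlverse.org/packages/%s/%s/", kind, version),
  CRAN = "https://cloud.r-project.org"
))

install.packages("torch")
```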

Speedups

Thanks to an issue opened by @egillax, we were able to find and fix a bug that caused
torch functions returning a list of tensors to be very slow. The function in question
was torch_split().

This issue has been fixed in v0.10.0, and code relying on this behavior should be much
faster now. Here’s a minimal benchmark comparing v0.9.1 with v0.10.0:
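The benchmark itself can be sketched along these lines, timing a single torch_split() call with the bench package (an assumption here; any timing tool would do). Running it under each version of torch and comparing the median times shows the difference:

```r
library(torch)

x <- torch_randn(100000)

# torch_split() returns a list of tensors; before v0.10.0, building
# that list was the bottleneck. check = FALSE because the result is a
# list of tensors, which bench can't compare directly.
bench::mark(
  torch_split(x, split_size = 10),
  check = FALSE
)
```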

If you want to learn more about torch, take a look at the recently announced book ‘Deep Learning and Scientific Computing with R torch’.

If you want to start contributing to torch, feel free to reach out on GitHub and see our contributing guide.

The full changelog for this release can be found here.
