A loss function in Machine Learning and Deep Learning provides a statistical measure of how far a model's predictions are from the target values, which the training process uses to improve toward optimal results. While training a model, it is crucial to pick an appropriate loss function as an indicator to monitor its performance. PyTorch enables the user to build deep learning models and, alongside that, provides loss functions to keep an eye on performance as well.

Table of Contents

This guide explains the following sections:

  • What is L1/MAE Loss
  • How to Calculate L1/MAE Loss in PyTorch

What is L1/MAE Loss

The L1 loss, or Mean Absolute Error (MAE), takes the absolute difference between each predicted value and its actual value and averages these differences over the whole dataset to get the loss value. Because the absolute value converts negative differences into positive ones, the sign of the error does not matter. The mathematical representation of the Mean Absolute Error or L1 loss is as follows:

MAE = (1/n) · Σᵢ |yᵢ − xᵢ|

  • y: values predicted by the model on the data
  • x: actual values based on the historical data
  • n: total number of values
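As a quick check of the formula, the MAE can be computed by hand in plain Python (the sample values here are made up for illustration):

```python
# Hypothetical predicted (y) and actual (x) values
y = [2.5, 4.8, 6.9, 9.5]
x = [3.0, 5.0, 7.0, 9.0]

# MAE = (1/n) * sum of |y_i - x_i|
n = len(y)
mae = sum(abs(yi - xi) for yi, xi in zip(y, x)) / n
print(mae)  # ~0.325
```

The absolute differences here are 0.5, 0.2, 0.1, and 0.5, so their average is 0.325.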

How to Calculate L1/MAE Loss in PyTorch

Calculating the L1 loss in PyTorch requires installing the torch module from Python's package manager and then importing the library to use its methods. Torch offers a couple of methods to calculate the L1 loss, such as L1Loss() and SmoothL1Loss(), each with its own parameters. To learn the process of calculating the L1 loss in PyTorch, simply follow this guide:

Note: The Python notebook with the code used in this guide is attached here


Getting into the process of calculating the Mean Absolute Error requires Python code to install the module and import the library. These prerequisites are covered in the following sections; if any step is already done, move on to the next one:

  • Access Python Notebook
  • Install Modules
  • Import Libraries

Access Python Notebook

The first step in the prerequisite section is creating a notebook on Google Colab from its official website.

Install Modules

Once the notebook is created, install the torch module using the pip (Python package manager) command to access the libraries stored in it:

!pip install torch

Import Libraries

Now, import the torch library so its methods can be called in the Python code to calculate the L1 loss:

import torch

Verify the setup so far by printing the installed version of the torch framework before going on with the examples:

print(torch.__version__)

The printed version confirms that torch is ready to use in the rest of the guide.

Example 1: Using L1Loss() With Random Tensors

The first example imports the "nn" module from the torch library, which provides neural network layers and loss functions. Call the L1Loss() method via nn and store it in the loss variable so it can be invoked by name. Create a couple of tensors with random values for the input and target fields. Call the loss function on them, then apply backpropagation with backward() to compute the gradients needed to minimize the loss:

import torch.nn as nn

# Create the L1/MAE loss function
loss = nn.L1Loss()

# Random input (with gradients enabled) and target tensors
in_ten = torch.randn(3, 5, requires_grad=True)
tar_ten = torch.randn(3, 5)

# Compute the mean absolute error and backpropagate
result = loss(in_ten, tar_ten)
result.backward()

print('input: ', in_ten)
print('target: ', tar_ten)
print('output: ', result)

Running the code prints the input, target, and output tensors; the output tensor holds the Mean Absolute Error.
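To confirm what L1Loss() returns, the same number can be reproduced manually with torch.abs and torch.mean; this is only a sanity check, not part of the training loop:

```python
import torch
import torch.nn as nn

loss = nn.L1Loss()
in_ten = torch.randn(3, 5)
tar_ten = torch.randn(3, 5)

# L1Loss with the default reduction='mean' averages |input - target|
# over every element of the tensors
auto_mae = loss(in_ten, tar_ten)
manual_mae = torch.mean(torch.abs(in_ten - tar_ten))

print(auto_mae.item(), manual_mae.item())  # both values match
```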

Example 2: Using L1Loss() With Custom Dataset

Create two tensors storing predicted and actual values entered manually from a given dataset. Initialize a criterion variable with the L1Loss() method, which can be used to calculate the L1 loss value in PyTorch. Calculate the Mean Absolute Error by passing the predicted and actual values to the criterion() method:

# Manually entered predicted and actual values
predicted = torch.tensor([2.5, 4.8, 6.9, 9.5])
actual = torch.tensor([3.0, 5.0, 7.0, 9.0])

# Mean absolute error between the two tensors
criterion = nn.L1Loss()
loss = criterion(predicted, actual)

print("MAE Loss:", loss)
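L1Loss() also accepts a reduction parameter ('mean' by default, 'sum', or 'none') that controls how the per-element errors are combined. A brief sketch with the same values as above:

```python
import torch
import torch.nn as nn

predicted = torch.tensor([2.5, 4.8, 6.9, 9.5])
actual = torch.tensor([3.0, 5.0, 7.0, 9.0])

# 'none' keeps the per-element absolute errors,
# 'sum' adds them up, and 'mean' averages them
per_element = nn.L1Loss(reduction='none')(predicted, actual)
total = nn.L1Loss(reduction='sum')(predicted, actual)
average = nn.L1Loss(reduction='mean')(predicted, actual)

print(per_element)  # tensor([0.5000, 0.2000, 0.1000, 0.5000])
print(total)        # tensor(1.3000)
print(average)      # tensor(0.3250)
```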

Example 3: Using SmoothL1Loss()

Another method offered by the torch library to calculate this kind of loss is SmoothL1Loss(), which behaves like the L1 loss for large errors but switches to a squared (L2-like) term for small ones. Create two tensors with predicted and actual values, then call the SmoothL1Loss() method with a value for the beta argument: beta is the threshold on the absolute error at which the loss switches between the squared and the linear term. Call the method on the tensors to print the loss value on the screen:

# Predicted and actual values for the smooth L1 loss
predicted = torch.tensor([6.5, 5.8, 8.9, 9.5])
actual = torch.tensor([6.0, 5.0, 9.0, 10.0])

# beta sets the threshold between the squared and the linear term
criterion = nn.SmoothL1Loss(beta=1.0)
loss = criterion(predicted, actual)

print("Smooth L1 Loss:", loss)
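For reference, SmoothL1Loss() is piecewise: when the absolute error d is below beta, it uses the quadratic term 0.5 * d**2 / beta, and otherwise the linear term d - 0.5 * beta. The method's result can be checked against that definition with the same tensors:

```python
import torch
import torch.nn as nn

predicted = torch.tensor([6.5, 5.8, 8.9, 9.5])
actual = torch.tensor([6.0, 5.0, 9.0, 10.0])
beta = 1.0

auto = nn.SmoothL1Loss(beta=beta)(predicted, actual)

# Piecewise definition: quadratic below beta, linear at or above it
d = torch.abs(predicted - actual)
elementwise = torch.where(d < beta, 0.5 * d ** 2 / beta, d - 0.5 * beta)
manual = elementwise.mean()

print(auto.item(), manual.item())  # both values match
```

Since every absolute error in this dataset is below beta, the whole loss comes from the quadratic branch here.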

That’s all about the process of calculating L1 loss in PyTorch.


To calculate the L1 or Mean Absolute Error (MAE) loss in PyTorch, install the torch module and import its libraries, such as torch and nn, to use the methods offered by the framework. MAE is the average of the absolute differences between the predicted and target values. The L1Loss() and SmoothL1Loss() methods are used to calculate this loss in PyTorch, and this guide has elaborated on the process of calculating it with both methods offered by the torch library.