# Table of Contents

- Python: Can't call numpy() on Tensor that requires grad
- Using the `no_grad()` context manager to solve the error
- Getting the error when drawing a scatter plot in matplotlib
# Python: Can't call numpy() on Tensor that requires grad
The Python "RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead" occurs when you try to convert a tensor that tracks gradients to a NumPy array.

To solve the error, call `detach()` to get a tensor that doesn't require a gradient before converting it.
Here is an example of how the error occurs.

```python
import torch

t = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
print(t)        # tensor([1., 2., 3.], requires_grad=True)
print(type(t))  # <class 'torch.Tensor'>

# RuntimeError: Can't call numpy() on Tensor that requires grad.
# Use tensor.detach().numpy() instead
t = t.numpy()
```
When the `requires_grad` attribute is set to `True`, gradients need to be computed for the tensor.
To solve the error, use the `tensor.detach()` method to convert the tensor to one that doesn't require a gradient before calling `numpy()`.
```python
import torch

t = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
print(t)        # tensor([1., 2., 3.], requires_grad=True)
print(type(t))  # <class 'torch.Tensor'>

# detach the tensor from the graph before calling numpy()
t = t.detach().numpy()
print(t)        # [1. 2. 3.]
print(type(t))  # <class 'numpy.ndarray'>
```
The `tensor.detach()` method returns a new tensor that is detached from the current computation graph. The result never requires a gradient.

In other words, the method returns a new tensor that shares the same storage as the original but doesn't track gradients (`requires_grad` is set to `False`).

The new tensor can safely be converted to a NumPy `ndarray` by calling the `tensor.numpy()` method.
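Because the detached tensor shares storage with the original, an in-place change to one is visible through the other. Here is a quick sketch that demonstrates this:

```python
import torch

t = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
d = t.detach()

print(d.requires_grad)  # False

# d shares storage with t, so an in-place change to d shows up in t
d[0] = 10.0
print(t)  # tensor([10.,  2.,  3.], requires_grad=True)
```

If you need data that doesn't share storage with the original tensor, call `copy()` on the resulting NumPy array.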
If you have a list of tensors, use a list comprehension to iterate over the list and call `detach()` on each tensor.
```python
import torch

t1 = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
t2 = torch.tensor([4.0, 5.0, 6.0], requires_grad=True)

tensors = [t1, t2]

result = [t.detach().numpy() for t in tensors]
print(result)  # [array([1., 2., 3.], dtype=float32), array([4., 5., 6.], dtype=float32)]
```
We used a list comprehension to iterate over the list of tensors. List comprehensions perform an operation on every element, or select a subset of elements that meet a condition.

On each iteration, we call `detach()` before calling `numpy()`, so no error is raised.
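If the tensors all have the same shape, an alternative sketch (not shown in the examples above) is to stack them into a single tensor and detach once:

```python
import torch

t1 = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
t2 = torch.tensor([4.0, 5.0, 6.0], requires_grad=True)

# stack into one 2-D tensor, then detach and convert in a single call
arr = torch.stack([t1, t2]).detach().numpy()
print(arr.shape)  # (2, 3)
```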
# Using the `no_grad()` context manager to solve the error
You can also use the `no_grad()` context manager to solve the error. The context manager disables gradient calculation.
```python
import torch

t = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
print(t)        # tensor([1., 2., 3.], requires_grad=True)
print(type(t))  # <class 'torch.Tensor'>

with torch.no_grad():
    t = t.detach().numpy()

print(t)        # [1. 2. 3.]
print(type(t))  # <class 'numpy.ndarray'>
```
Inside the context manager (the indented block), the result of every computation has `requires_grad=False`, even if the inputs have `requires_grad=True`.
Calling the `numpy()` method on a tensor that is attached to a computation graph is not allowed. We first have to make sure that the tensor is detached before calling `numpy()`.
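For example, a tensor computed inside the `no_grad()` block never becomes part of a computation graph, so it can be converted directly:

```python
import torch

t = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

with torch.no_grad():
    doubled = t * 2               # computed inside no_grad()
    print(doubled.requires_grad)  # False

# doubled is not attached to a graph, so numpy() works on it directly
print(doubled.numpy())  # [2. 4. 6.]
```

Note that a leaf tensor created with `requires_grad=True` still needs `detach()` even inside the block, which is why the earlier example keeps the `detach()` call.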
# Getting the error when drawing a scatter plot in matplotlib
If you got the error when drawing a scatter plot in matplotlib, try using the `torch.no_grad()` context manager as we did in the previous subheading.
```python
import torch

t = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

with torch.no_grad():
    ...  # your plotting code goes here
```
Make sure to add your code to the indented block inside the `no_grad()` context manager.
The context manager will disable gradient calculation, which should resolve the error as long as your code is indented inside the `with torch.no_grad():` statement.
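Here is a minimal sketch of a scatter plot drawn inside the context manager. As in the earlier examples, the tensor is detached and converted to a NumPy array before being passed to matplotlib:

```python
import torch
import matplotlib.pyplot as plt

t = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

with torch.no_grad():
    # convert the tensor before handing it to matplotlib
    values = t.detach().numpy()
    plt.scatter(values, values)
    plt.show()
```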
If the error persists, try to add an import statement for the `fastai.basics` module.
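Assuming this refers to the `fastai` library, the import would look like this:

```python
# assumption: only applicable if your project uses fastai
from fastai.basics import *
```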