
I have a PyTorch tensor of shape [4, 3, 966, 1296]. I want to convert it to a numpy array using the following code:

imgs = imgs.numpy()[:, ::-1, :, :]

How does that code work?

Your question is extremely confusing. You already have a .numpy() call. What exactly are you confused about? Do you not understand slicing notation in Python or what? – Charlie Parker Jul 15, 2020 at 18:35
btw you might need to call .detach() before saving your data e.g. x.detach().numpy() if your tensors have grads... also you might need to call cpu(). I think this should work: x.detach().cpu().numpy() – Charlie Parker Jul 15, 2020 at 18:41
When converting to numpy you should call detach before cpu to prevent superfluous gradient copying. See discuss.pytorch.org/t/… – Charlie Parker Jul 15, 2020 at 18:45

I believe you also have to use .detach(). I had to convert my tensor to a numpy array on Colab, which uses CUDA and a GPU. I did it as follows:

import torch

# this is just my embedding matrix which is a Torch tensor object
# (`learn` is a pre-existing model/learner object in my notebook)
embedding = learn.model.u_weight
embedding_list = list(range(0, 64382))
input = torch.cuda.LongTensor(embedding_list)
tensor_array = embedding(input)
# the output of the line below is a numpy array
tensor_array.cpu().detach().numpy()
Of course you had to use detach because you originally created a PyTorch tensor on the GPU. That doesn't apply if it's created on the CPU, as seen in the original post. – rayryeng Feb 8, 2019 at 3:50
I think that even if your tensor is on the CPU, if you want a raw tensor, you have to .detach(). – muammar Mar 26, 2019 at 17:03
When converting to numpy you should call detach before cpu to prevent superfluous gradient copying. See discuss.pytorch.org/t/… – ZaydH Sep 14, 2019 at 8:10
I think there is a difference in the order; perhaps this is better? x.detach().cpu().numpy() – Charlie Parker Nov 15, 2021 at 18:11
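To make the ordering discussion above concrete, here is a minimal sketch (using a small stand-in tensor and assuming a GPU may or may not be available). Both orderings yield the same array, but calling detach() before cpu() is the order recommended in the linked discuss.pytorch.org thread, since it drops autograd bookkeeping before the device copy:

import torch

# Guard: fall back to CPU if no CUDA device is present (assumption for this sketch).
device = 'cuda' if torch.cuda.is_available() else 'cpu'
t = torch.ones((2, 3, 4, 4), device=device, requires_grad=True)  # small stand-in for [4, 3, 966, 1296]

a1 = t.detach().cpu().numpy()  # recommended: detach first, then move to host memory
a2 = t.cpu().detach().numpy()  # also works, but carries grad tracking through the copy
print((a1 == a2).all())        # True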

While other answers perfectly explain the question, I will add some real-life examples of converting tensors to numpy arrays:

Example: Shared storage

A PyTorch tensor residing on the CPU shares the same storage as the numpy array na:

import torch
a = torch.ones((1,2))
print(a)
na = a.numpy()
na[0][0]=10
print(na)
print(a)

Output:

tensor([[1., 1.]])
[[10.  1.]]
tensor([[10.,  1.]])

Example: Eliminate effect of shared storage, copy numpy array first

To avoid the effect of shared storage we need to copy() the numpy array na to a new numpy array nac. The numpy copy() method creates new, separate storage.

import torch
a = torch.ones((1,2))
print(a)
na = a.numpy()
nac = na.copy()
nac[0][0]=10
print(nac)
print(na)
print(a)

Output:

tensor([[1., 1.]])
[[10.  1.]]
[[1. 1.]]
tensor([[1., 1.]])

Now only the nac numpy array is altered by the line nac[0][0]=10; na and a remain as they were.

Example: CPU tensor with requires_grad=True

import torch
a = torch.ones((1,2), requires_grad=True)
print(a)
na = a.detach().numpy()
na[0][0]=10
print(na)
print(a)

Output:

tensor([[1., 1.]], requires_grad=True)
[[10.  1.]]
tensor([[10.,  1.]], requires_grad=True)

Here, if we had instead called:

na = a.numpy()

it would cause: RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead., because tensors with requires_grad=True are recorded by PyTorch autograd. Note that tensor.detach() is the new way of doing tensor.data.

This explains why we need to detach() them first before converting with numpy().
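As a minimal sketch (on the CPU, with the same small tensor as above), the error and the detach() fix look like this:

import torch

a = torch.ones((1, 2), requires_grad=True)
try:
    a.numpy()                  # raises because the tensor is tracked by autograd
except RuntimeError as e:
    print(e)                   # Can't call numpy() on Tensor that requires grad. ...
na = a.detach().numpy()        # detach() returns a grad-free view sharing the same storage
print(na)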

Example: CUDA tensor with requires_grad=False

a = torch.ones((1,2), device='cuda')
print(a)
na = a.to('cpu').numpy()
na[0][0]=10
print(na)
print(a)

Output:

tensor([[1., 1.]], device='cuda:0')
[[10.  1.]]
tensor([[1., 1.]], device='cuda:0')

Example: CUDA tensor with requires_grad=True

a = torch.ones((1,2), device='cuda', requires_grad=True)
print(a)
na = a.detach().to('cpu').numpy()
na[0][0]=10
print(na)
print(a)

Output:

tensor([[1., 1.]], device='cuda:0', requires_grad=True)
[[10.  1.]]
tensor([[1., 1.]], device='cuda:0', requires_grad=True)

Without the detach() method, the error RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead. will be raised.

Without the .to('cpu') method, the error TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first. will be raised.

You could use cpu() instead of to('cpu'), but I prefer the newer to('cpu').
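Putting the four cases together, a hedged general-purpose helper (the name to_numpy is just an illustration) could look like this:

import torch

def to_numpy(t: torch.Tensor):
    # Works for CPU/CUDA tensors with or without requires_grad:
    # detach() drops grad tracking, to('cpu') moves the data to host memory,
    # numpy() then shares storage with the resulting CPU tensor.
    return t.detach().to('cpu').numpy()

device = 'cuda' if torch.cuda.is_available() else 'cpu'  # assumption: a GPU may be absent
print(to_numpy(torch.ones((1, 2), device=device, requires_grad=True)))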

: means that the first dimension is copied as it is and converted; the same goes for the third and fourth dimensions.

::-1 means that the second axis (the channels) is reversed.
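As a minimal sketch on a small stand-in tensor (shape [2, 3, 2, 2] instead of [4, 3, 966, 1296]), the slice reverses only the channel axis, e.g. flipping RGB channel order into BGR:

import torch

imgs = torch.arange(2 * 3 * 2 * 2).reshape(2, 3, 2, 2)  # small stand-in tensor
na = imgs.numpy()[:, ::-1, :, :]                        # reverse only axis 1 (the channels)
print(imgs[0, :, 0, 0])  # tensor([0, 4, 8])  -> channels in order 0, 1, 2
print(na[0, :, 0, 0])    # [8 4 0]            -> channels in order 2, 1, 0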

Are you certain? To me it looks more like axes 0, 2, 3 are copied as-is, and axis 1 is reversed. – Nils Werner Apr 11, 2018 at 7:07
When converting to numpy you should call detach before cpu to prevent superfluous gradient copying. See discuss.pytorch.org/t/… – Charlie Parker Jul 15, 2020 at 18:45

Your question is very poorly worded. Your code (sort of) already does what you want. What exactly are you confused about? x.numpy() answers the original title of your question:

Pytorch tensor to numpy array

You need to improve your question, starting with your title.

Anyway, just in case this is useful to others: you might need to call detach for your code to work, e.g. if you get

RuntimeError: Can't call numpy() on Variable that requires grad.

So call .detach(). Sample code:

# creating data and running through a nn and saving it
import torch
import torch.nn as nn
from pathlib import Path
from collections import OrderedDict
import numpy as np
path = Path('~/data/tmp/').expanduser()
path.mkdir(parents=True, exist_ok=True)
num_samples = 3
Din, Dout = 1, 1
lb, ub = -1, 1
x = torch.distributions.Uniform(low=lb, high=ub).sample((num_samples, Din))
f = nn.Sequential(OrderedDict([
    ('f1', nn.Linear(Din, Dout)),
    ('out', nn.SELU())
]))
y = f(x)
# save data
# y.numpy()  # this raises the RuntimeError above because y requires grad
x_np, y_np = x.detach().cpu().numpy(), y.detach().cpu().numpy()
np.savez(path / 'db', x=x_np, y=y_np)
print(x_np)

cpu goes after detach. See: https://discuss.pytorch.org/t/should-it-really-be-necessary-to-do-var-detach-cpu-numpy/35489/5

Also, I won't make any comments on the slicing since that is off topic and should not be the focus of your question. See this:

Understanding slice notation
