Learning PyTorch Tensors
Tensors are data structures similar to arrays and matrices, much like NumPy's ndarray, except that tensors can also run on GPUs. In fact, tensors and NumPy arrays often share the same underlying memory, eliminating the need to copy data. Tensors are also optimized for automatic differentiation.
import torch
import numpy as np
Initializing a Tensor
Directly from data
data=[[1,2],[3,4]]
x_data=torch.tensor(data)
x_data
tensor([[1, 2],
[3, 4]])
From a NumPy array
np_array=np.array(data)
x_np=torch.tensor(np_array);x_np
tensor([[1, 2],
[3, 4]], dtype=torch.int32)
x_np=torch.from_numpy(np_array);x_np
tensor([[1, 2],
[3, 4]], dtype=torch.int32)
From another tensor
The new tensor retains the properties (shape, datatype) of the argument tensor unless explicitly overridden:
x_ones=torch.ones_like(x_data);x_ones
tensor([[1, 1],
[1, 1]])
x_rand=torch.rand_like(x_data,dtype=torch.float);x_rand
tensor([[0.1462, 0.1567],
[0.6331, 0.8472]])
With random or constant values
shape is a tuple describing the tensor's dimensions.
shape=(2,3)
rand_tensor=torch.rand(shape)
ones_tensor=torch.ones(shape)
zeros_tensor=torch.zeros(shape)
print(rand_tensor)
print(ones_tensor)
print(zeros_tensor)
tensor([[0.4811, 0.5744, 0.8909],
[0.6602, 0.9882, 0.1145]])
tensor([[1., 1., 1.],
[1., 1., 1.]])
tensor([[0., 0., 0.],
[0., 0., 0.]])
Tensor Attributes
Tensor attributes describe its shape, its datatype, and the device on which it is stored.
tensor=torch.rand(3,4)
tensor.shape
torch.Size([3, 4])
tensor.dtype
torch.float32
tensor.device
device(type='cpu')
Tensor Operations
There are over 100 tensor operations, including arithmetic, linear algebra, matrix manipulation (transposing, indexing, slicing), sampling, and more. Each of them can be run on the GPU (often faster than on the CPU). By default, tensors are created on the CPU; to run on the GPU they must be moved there explicitly with the .to method. Keep in mind that copying large tensors across devices is expensive in both time and memory.
if torch.cuda.is_available():
    tensor=tensor.to('cuda')
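As a small, hedged sketch (it only takes the GPU path when CUDA is actually available), a tensor can also be created directly on the target device, which avoids a separate host-to-device copy:
device='cuda' if torch.cuda.is_available() else 'cpu'  # fall back to CPU otherwise
t_gpu=torch.ones(2,3,device=device)  # allocated directly on the chosen device
print(t_gpu.device)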
NumPy-like indexing and slicing:
tensor=torch.ones((4,4));tensor
tensor([[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]])
tensor[0]
tensor([1., 1., 1., 1.])
tensor[:,0]
tensor([1., 1., 1., 1.])
tensor[...,-1]=100;tensor
tensor([[ 1., 1., 1., 100.],
[ 1., 1., 1., 100.],
[ 1., 1., 1., 100.],
[ 1., 1., 1., 100.]])
tensor[:,1]=10;tensor
tensor([[ 1., 10., 1., 100.],
[ 1., 10., 1., 100.],
[ 1., 10., 1., 100.],
[ 1., 10., 1., 100.]])
Besides ordinary indexing, PyTorch also provides some more advanced selection functions:
help(torch.index_select)
Help on built-in function index_select:
index_select(...)
index_select(input, dim, index, *, out=None) -> Tensor
Returns a new tensor which indexes the :attr:`input` tensor along dimension
:attr:`dim` using the entries in :attr:`index` which is a `LongTensor`.
The returned tensor has the same number of dimensions as the original tensor
(:attr:`input`). The :attr:`dim`\ th dimension has the same size as the length
of :attr:`index`; other dimensions have the same size as in the original tensor.
.. note:: The returned tensor does **not** use the same storage as the original
tensor. If :attr:`out` has a different shape than expected, we
silently change it to the correct shape, reallocating the underlying
storage if necessary.
Args:
input (Tensor): the input tensor.
dim (int): the dimension in which we index
index (IntTensor or LongTensor): the 1-D tensor containing the indices to index
Keyword args:
out (Tensor, optional): the output tensor.
Example::
>>> x = torch.randn(3, 4)
tensor([[ 0.1427, 0.0231, -0.5414, -1.0009],
[-0.4664, 0.2647, -0.1228, -1.1068],
[-1.1734, -0.6571, 0.7230, -0.6004]])
>>> indices = torch.tensor([0, 2])
>>> torch.index_select(x, 0, indices)
tensor([[ 0.1427, 0.0231, -0.5414, -1.0009],
[-1.1734, -0.6571, 0.7230, -0.6004]])
>>> torch.index_select(x, 1, indices)
tensor([[ 0.1427, -0.5414],
[-0.4664, -0.1228],
[-1.1734, 0.7230]])
help(torch.masked_select)
Help on built-in function masked_select:
masked_select(...)
masked_select(input, mask, *, out=None) -> Tensor
Returns a new 1-D tensor which indexes the :attr:`input` tensor according to
the boolean mask :attr:`mask` which is a `BoolTensor`.
The shapes of the :attr:`mask` tensor and the :attr:`input` tensor don't need
to match, but they must be :ref:`broadcastable <broadcasting-semantics>`.
.. note:: The returned tensor does **not** use the same storage
as the original tensor
Args:
input (Tensor): the input tensor.
mask (BoolTensor): the tensor containing the binary mask to index with
Keyword args:
out (Tensor, optional): the output tensor.
Example::
>>> x = torch.randn(3, 4)
tensor([[ 0.3552, -2.3825, -0.8297, 0.3477],
[-1.2035, 1.2252, 0.5002, 0.6248],
[ 0.1307, -2.0608, 0.1244, 2.0139]])
>>> mask = x.ge(0.5)
tensor([[False, False, False, False],
[False, True, True, True],
[False, False, False, True]])
>>> torch.masked_select(x, mask)
tensor([ 1.2252, 0.5002, 0.6248, 2.0139])
help(torch.gather)
Help on built-in function gather:
gather(...)
gather(input, dim, index, *, sparse_grad=False, out=None) -> Tensor
Gathers values along an axis specified by `dim`.
For a 3-D tensor the output is specified by::
out[i][j][k] = input[index[i][j][k]][j][k] # if dim == 0
out[i][j][k] = input[i][index[i][j][k]][k] # if dim == 1
out[i][j][k] = input[i][j][index[i][j][k]] # if dim == 2
:attr:`input` and :attr:`index` must have the same number of dimensions.
It is also required that ``index.size(d) <= input.size(d)`` for all
dimensions ``d != dim``. :attr:`out` will have the same shape as :attr:`index`.
Note that ``input`` and ``index`` do not broadcast against each other.
Args:
input (Tensor): the source tensor
dim (int): the axis along which to index
index (LongTensor): the indices of elements to gather
Keyword arguments:
sparse_grad (bool, optional): If ``True``, gradient w.r.t. :attr:`input` will be a sparse tensor.
out (Tensor, optional): the destination tensor
Example::
>>> t = torch.tensor([[1, 2], [3, 4]])
>>> torch.gather(t, 1, torch.tensor([[0, 0], [1, 0]]))
tensor([[ 1, 1],
[ 4, 3]])
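The help text above is dense, so here is a minimal combined sketch (the matrix and index values are chosen purely for illustration) exercising all three selection functions on one tensor:
m=torch.arange(12).reshape(3,4)
print(torch.index_select(m,0,torch.tensor([0,2])))    # rows 0 and 2
print(torch.masked_select(m,m>6))                     # 1-D tensor of elements > 6
print(torch.gather(m,1,torch.tensor([[0],[2],[3]])))  # per row, pick one column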
torch.cat can be used to concatenate tensors along a given dimension. There is also torch.stack, which behaves slightly differently from torch.cat.
t1=torch.cat([tensor,tensor,tensor],dim=1);t1
tensor([[ 1., 10., 1., 100., 1., 10., 1., 100., 1., 10., 1., 100.],
[ 1., 10., 1., 100., 1., 10., 1., 100., 1., 10., 1., 100.],
[ 1., 10., 1., 100., 1., 10., 1., 100., 1., 10., 1., 100.],
[ 1., 10., 1., 100., 1., 10., 1., 100., 1., 10., 1., 100.]])
torch.cat([tensor,tensor,tensor],dim=0)
tensor([[ 1., 10., 1., 100.],
[ 1., 10., 1., 100.],
[ 1., 10., 1., 100.],
[ 1., 10., 1., 100.],
[ 1., 10., 1., 100.],
[ 1., 10., 1., 100.],
[ 1., 10., 1., 100.],
[ 1., 10., 1., 100.],
[ 1., 10., 1., 100.],
[ 1., 10., 1., 100.],
[ 1., 10., 1., 100.],
[ 1., 10., 1., 100.]])
The difference between cat and stack is that the former grows an existing dimension (think of it as splicing tensors end to end), while the latter adds a new dimension (think of it as stacking them).
a=torch.arange(0,12).reshape(3,4);a
tensor([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
torch.cat([a,a]).shape
torch.Size([6, 4])
torch.stack([a,a]).shape
torch.Size([2, 3, 4])
torch.cat([a,a])
tensor([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
torch.stack([a,a])
tensor([[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]],
[[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]]])
tensor=torch.arange(0,9).reshape(3,3);tensor
tensor([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
The following computes the matrix product of tensors; y1 and y2 hold the same value.
y1=tensor@tensor.T;y1
tensor([[ 5, 14, 23],
[ 14, 50, 86],
[ 23, 86, 149]])
y2=tensor.matmul(tensor.T);y2
tensor([[ 5, 14, 23],
[ 14, 50, 86],
[ 23, 86, 149]])
y3=torch.empty(3,3)
torch.add(tensor,tensor.T,out=y3)
print(y3)
tensor([[ 0., 4., 8.],
[ 4., 8., 12.],
[ 8., 12., 16.]])
Single-element tensors: if you aggregate all values of a tensor into a single value, you can use item() to convert it into a Python number.
agg=tensor.sum();agg
tensor(36)
agg_item=agg.item();agg_item
36
In-place operations: operations that store their result in the operand are called in-place operations, and they are marked with a trailing _. For example, x.copy_(y) and x.t_() modify x.
tensor
tensor([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
tensor.add_(5)
tensor([[ 5, 6, 7],
[ 8, 9, 10],
[11, 12, 13]])
tensor
tensor([[ 5, 6, 7],
[ 8, 9, 10],
[11, 12, 13]])
In-place operations may save memory, but they can cause errors when derivatives are computed, so their use is discouraged.
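A minimal sketch of how this can go wrong: the backward pass of exp() reuses its saved output, so modifying that output in place makes autograd raise an error.
x=torch.ones(3,requires_grad=True)
y=x.exp()    # backward of exp() needs the saved output y
y.add_(1)    # the in-place change invalidates the saved value
try:
    y.sum().backward()
except RuntimeError as e:
    print(e)  # "... has been modified by an inplace operation"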
Converting to and from NumPy arrays
Use numpy() and from_numpy() to convert between tensors and NumPy arrays. Note that the tensor and the NumPy array produced by these two functions share the same memory (which is why the conversion is fast), so changing one changes the other!
Tensor to NumPy array
t=torch.ones(5);t
tensor([1., 1., 1., 1., 1.])
n=t.numpy();n
array([ 1., 1., 1., 1., 1.], dtype=float32)
t.add_(1)
tensor([2., 2., 2., 2., 2.])
NumPy array to Tensor
n=np.ones(5)
t=torch.from_numpy(n);t
tensor([1., 1., 1., 1., 1.], dtype=torch.float64)
np.add(n,1,out=n)
array([ 2., 2., 2., 2., 2.])
t
tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
n
array([ 2., 2., 2., 2., 2.])
Besides the methods above, another common approach is to use torch.tensor() directly to convert a NumPy array into a tensor. Note that this method always copies the data, so the returned tensor no longer shares memory with the original array.
a=np.arange(9).reshape(3,3)
c=torch.tensor(a)
a+=1  # modifying a afterwards does not affect c, since the data was copied
print(c)
print(a)
tensor([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]], dtype=torch.int32)
[[1 2 3]
[4 5 6]
[7 8 9]]
View()
Use view() to change the shape of a tensor. The new tensor returned by this method shares memory with the source tensor (it is really the same underlying data), so modifying one also changes the other. reshape() offers similar functionality, but it likewise cannot guarantee that what it returns is a copy.
x=torch.randn(5,3);x
tensor([[-0.5722, -0.4844, 1.5515],
[-0.2504, 0.2010, 0.0182],
[ 0.0400, 0.0397, 2.0167],
[ 1.8868, -0.4670, 0.5968],
[ 0.9070, 0.5825, -1.0549]])
y=x.view(15);y
tensor([-0.5722, -0.4844, 1.5515, -0.2504, 0.2010, 0.0182, 0.0400, 0.0397,
2.0167, 1.8868, -0.4670, 0.5968, 0.9070, 0.5825, -1.0549])
y[0]=100;x
tensor([[ 1.0000e+02, -4.8445e-01, 1.5515e+00],
[-2.5042e-01, 2.0102e-01, 1.8231e-02],
[ 3.9969e-02, 3.9711e-02, 2.0167e+00],
[ 1.8868e+00, -4.6697e-01, 5.9683e-01],
[ 9.0702e-01, 5.8254e-01, -1.0549e+00]])
z=x.view(-1,5);z
tensor([[ 1.0000e+02, -4.8445e-01, 1.5515e+00, -2.5042e-01, 2.0102e-01],
[ 1.8231e-02, 3.9969e-02, 3.9711e-02, 2.0167e+00, 1.8868e+00],
[-4.6697e-01, 5.9683e-01, 9.0702e-01, 5.8254e-01, -1.0549e+00]])
q=x.reshape(15);q
tensor([ 1.0000e+02, -4.8445e-01, 1.5515e+00, -2.5042e-01, 2.0102e-01,
1.8231e-02, 3.9969e-02, 3.9711e-02, 2.0167e+00, 1.8868e+00,
-4.6697e-01, 5.9683e-01, 9.0702e-01, 5.8254e-01, -1.0549e+00])
q[0]=250;x
tensor([[ 2.5000e+02, -4.8445e-01, 1.5515e+00],
[-2.5042e-01, 2.0102e-01, 1.8231e-02],
[ 3.9969e-02, 3.9711e-02, 2.0167e+00],
[ 1.8868e+00, -4.6697e-01, 5.9683e-01],
[ 9.0702e-01, 5.8254e-01, -1.0549e+00]])
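One practical difference, shown as a hedged sketch: view() only works on contiguous memory, while reshape() silently falls back to copying when it must, which is exactly why it cannot promise whether the result shares memory.
x=torch.randn(5,3)
xt=x.t()                    # the transpose is a view, but it is not contiguous
# xt.view(15) would raise a RuntimeError here because xt is non-contiguous
q=xt.reshape(15)            # works, but in this case it has to copy the data
print(xt.is_contiguous())              # False
print(q.data_ptr()==xt.data_ptr())     # False: reshape returned a copy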
If we want a genuinely new copy (i.e. one that does not share memory), we can first create a copy with clone and then call view.
x_cp=x.clone().view(15)
x-=1  # modify x after cloning; x_cp keeps the old values
print(x)
print(x_cp)
tensor([[ 2.4900e+02, -1.4844e+00, 5.5149e-01],
[-1.2504e+00, -7.9898e-01, -9.8177e-01],
[-9.6003e-01, -9.6029e-01, 1.0167e+00],
[ 8.8677e-01, -1.4670e+00, -4.0317e-01],
[-9.2979e-02, -4.1746e-01, -2.0549e+00]])
tensor([ 2.5000e+02, -4.8445e-01, 1.5515e+00, -2.5042e-01, 2.0102e-01,
1.8231e-02, 3.9969e-02, 3.9711e-02, 2.0167e+00, 1.8868e+00,
-4.6697e-01, 5.9683e-01, 9.0702e-01, 5.8254e-01, -1.0549e+00])
Another benefit of using clone is that the copy is recorded in the computation graph, so gradients flowing back to the copy also propagate to the source tensor.
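A small sketch of that point: gradients computed through the clone still reach the source tensor.
src=torch.ones(3,requires_grad=True)
cp=src.clone()        # new memory, but still part of the computation graph
cp.sum().backward()
print(src.grad)       # tensor([1., 1., 1.]): the gradient flowed back to src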
Another commonly used function is item(), which converts a scalar tensor into a Python number.
x=torch.randn(1);x
tensor([-0.9871])
x.item()
-0.9870905876159668
Trace: torch.trace
help(torch.trace)
Help on built-in function trace:
trace(...)
trace(input) -> Tensor
Returns the sum of the elements of the diagonal of the input 2-D matrix.
Example::
>>> x = torch.arange(1., 10.).view(3, 3)
tensor([[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]])
>>> torch.trace(x)
tensor(15.)
Diagonal elements: torch.diag
help(torch.diag)
Help on built-in function diag:
diag(...)
diag(input, diagonal=0, *, out=None) -> Tensor
- If :attr:`input` is a vector (1-D tensor), then returns a 2-D square tensor
with the elements of :attr:`input` as the diagonal.
- If :attr:`input` is a matrix (2-D tensor), then returns a 1-D tensor with
the diagonal elements of :attr:`input`.
The argument :attr:`diagonal` controls which diagonal to consider:
- If :attr:`diagonal` = 0, it is the main diagonal.
- If :attr:`diagonal` > 0, it is above the main diagonal.
- If :attr:`diagonal` < 0, it is below the main diagonal.
Args:
input (Tensor): the input tensor.
diagonal (int, optional): the diagonal to consider
Keyword args:
out (Tensor, optional): the output tensor.
.. seealso::
:func:`torch.diagonal` always returns the diagonal of its input.
:func:`torch.diagflat` always constructs a tensor with diagonal elements
specified by the input.
Examples:
Get the square matrix where the input vector is the diagonal::
>>> a = torch.randn(3)
tensor([ 0.5950,-0.0872, 2.3298])
>>> torch.diag(a)
tensor([[ 0.5950, 0.0000, 0.0000],
[ 0.0000,-0.0872, 0.0000],
[ 0.0000, 0.0000, 2.3298]])
>>> torch.diag(a, 1)
tensor([[ 0.0000, 0.5950, 0.0000, 0.0000],
[ 0.0000, 0.0000,-0.0872, 0.0000],
[ 0.0000, 0.0000, 0.0000, 2.3298],
[ 0.0000, 0.0000, 0.0000, 0.0000]])
Get the k-th diagonal of a given matrix::
>>> a = torch.randn(3, 3)
tensor([[-0.4264, 0.0255,-0.1064],
[ 0.8795,-0.2429, 0.1374],
[ 0.1029,-0.6482,-1.6300]])
>>> torch.diag(a, 0)
tensor([-0.4264,-0.2429,-1.6300])
>>> torch.diag(a, 1)
tensor([ 0.0255, 0.1374])
triu: upper triangular part
help(torch.triu)
Help on built-in function triu:
triu(...)
triu(input, diagonal=0, *, out=None) -> Tensor
Returns the upper triangular part of a matrix (2-D tensor) or batch of matrices
:attr:`input`, the other elements of the result tensor :attr:`out` are set to 0.
The upper triangular part of the matrix is defined as the elements on and
above the diagonal.
The argument :attr:`diagonal` controls which diagonal to consider. If
:attr:`diagonal` = 0, all elements on and above the main diagonal are
retained. A positive value excludes just as many diagonals above the main
diagonal, and similarly a negative value includes just as many diagonals below
the main diagonal. The main diagonal are the set of indices
:math:`\lbrace (i, i) \rbrace` for :math:`i \in [0, \min\{d_{1}, d_{2}\} - 1]` where
:math:`d_{1}, d_{2}` are the dimensions of the matrix.
Args:
input (Tensor): the input tensor.
diagonal (int, optional): the diagonal to consider
Keyword args:
out (Tensor, optional): the output tensor.
Example::
>>> a = torch.randn(3, 3)
tensor([[ 0.2309, 0.5207, 2.0049],
[ 0.2072, -1.0680, 0.6602],
[ 0.3480, -0.5211, -0.4573]])
>>> torch.triu(a)
tensor([[ 0.2309, 0.5207, 2.0049],
[ 0.0000, -1.0680, 0.6602],
[ 0.0000, 0.0000, -0.4573]])
>>> torch.triu(a, diagonal=1)
tensor([[ 0.0000, 0.5207, 2.0049],
[ 0.0000, 0.0000, 0.6602],
[ 0.0000, 0.0000, 0.0000]])
>>> torch.triu(a, diagonal=-1)
tensor([[ 0.2309, 0.5207, 2.0049],
[ 0.2072, -1.0680, 0.6602],
[ 0.0000, -0.5211, -0.4573]])
>>> b = torch.randn(4, 6)
tensor([[ 0.5876, -0.0794, -1.8373, 0.6654, 0.2604, 1.5235],
[-0.2447, 0.9556, -1.2919, 1.3378, -0.1768, -1.0857],
[ 0.4333, 0.3146, 0.6576, -1.0432, 0.9348, -0.4410],
[-0.9888, 1.0679, -1.3337, -1.6556, 0.4798, 0.2830]])
>>> torch.triu(b, diagonal=1)
tensor([[ 0.0000, -0.0794, -1.8373, 0.6654, 0.2604, 1.5235],
[ 0.0000, 0.0000, -1.2919, 1.3378, -0.1768, -1.0857],
[ 0.0000, 0.0000, 0.0000, -1.0432, 0.9348, -0.4410],
[ 0.0000, 0.0000, 0.0000, 0.0000, 0.4798, 0.2830]])
>>> torch.triu(b, diagonal=-1)
tensor([[ 0.5876, -0.0794, -1.8373, 0.6654, 0.2604, 1.5235],
[-0.2447, 0.9556, -1.2919, 1.3378, -0.1768, -1.0857],
[ 0.0000, 0.3146, 0.6576, -1.0432, 0.9348, -0.4410],
[ 0.0000, 0.0000, -1.3337, -1.6556, 0.4798, 0.2830]])
tril: lower triangular part
help(torch.tril)
Help on built-in function tril:
tril(...)
tril(input, diagonal=0, *, out=None) -> Tensor
Returns the lower triangular part of the matrix (2-D tensor) or batch of matrices
:attr:`input`, the other elements of the result tensor :attr:`out` are set to 0.
The lower triangular part of the matrix is defined as the elements on and
below the diagonal.
The argument :attr:`diagonal` controls which diagonal to consider. If
:attr:`diagonal` = 0, all elements on and below the main diagonal are
retained. A positive value includes just as many diagonals above the main
diagonal, and similarly a negative value excludes just as many diagonals below
the main diagonal. The main diagonal are the set of indices
:math:`\lbrace (i, i) \rbrace` for :math:`i \in [0, \min\{d_{1}, d_{2}\} - 1]` where
:math:`d_{1}, d_{2}` are the dimensions of the matrix.
Args:
input (Tensor): the input tensor.
diagonal (int, optional): the diagonal to consider
Keyword args:
out (Tensor, optional): the output tensor.
Example::
>>> a = torch.randn(3, 3)
tensor([[-1.0813, -0.8619, 0.7105],
[ 0.0935, 0.1380, 2.2112],
[-0.3409, -0.9828, 0.0289]])
>>> torch.tril(a)
tensor([[-1.0813, 0.0000, 0.0000],
[ 0.0935, 0.1380, 0.0000],
[-0.3409, -0.9828, 0.0289]])
>>> b = torch.randn(4, 6)
tensor([[ 1.2219, 0.5653, -0.2521, -0.2345, 1.2544, 0.3461],
[ 0.4785, -0.4477, 0.6049, 0.6368, 0.8775, 0.7145],
[ 1.1502, 3.2716, -1.1243, -0.5413, 0.3615, 0.6864],
[-0.0614, -0.7344, -1.3164, -0.7648, -1.4024, 0.0978]])
>>> torch.tril(b, diagonal=1)
tensor([[ 1.2219, 0.5653, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.4785, -0.4477, 0.6049, 0.0000, 0.0000, 0.0000],
[ 1.1502, 3.2716, -1.1243, -0.5413, 0.0000, 0.0000],
[-0.0614, -0.7344, -1.3164, -0.7648, -1.4024, 0.0000]])
>>> torch.tril(b, diagonal=-1)
tensor([[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.4785, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[ 1.1502, 3.2716, 0.0000, 0.0000, 0.0000, 0.0000],
[-0.0614, -0.7344, -1.3164, 0.0000, 0.0000, 0.0000]])
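To tie the four helpers above together, here is a short sketch on one concrete matrix (the values are chosen purely for illustration):
m=torch.arange(1.,10.).view(3,3)
print(torch.trace(m))             # tensor(15.)  -> 1 + 5 + 9
print(torch.diag(m))              # tensor([1., 5., 9.])
print(torch.triu(m))              # main diagonal and everything above it
print(torch.tril(m,diagonal=-1))  # only the elements strictly below the diagonal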
Broadcasting: when two tensors with different shapes are combined element-wise, broadcasting first expands them to a common shape.
x=torch.arange(1,3).view(1,2);x
tensor([[1, 2]])
y=torch.arange(1,4).view(3,1);y
tensor([[1],
[2],
[3]])
x+y
tensor([[2, 3],
[3, 4],
[4, 5]])
Memory overhead of operations
Indexing and view do not allocate new memory, whereas an operation such as y=x+y allocates new memory and then points y at it.
x=torch.tensor([1,2])
y=torch.tensor([3,4])
id_before=id(y)
y=y+x
id(y)==id_before
False
If we want the result written into y's original memory, we can use index assignment instead.
x=torch.tensor([1,2])
y=torch.tensor([3,4])
id_before=id(y)
y[:]=y+x
id_before==id(y)
True
We can also use the out parameter of the full-form operator functions, or the in-place operators (i.e. add_):
x=torch.tensor([1,2])
y=torch.tensor([3,4])
id_before=id(y)
torch.add(x,y,out=y)
id(y)==id_before
True
y.add_(x)
id(y)==id_before
True
y.requires_grad
False
Automatic Differentiation
The autograd package provided by PyTorch builds the computation graph automatically from the inputs and the forward pass, and then performs backpropagation.
If you set the .requires_grad attribute of a Tensor to True, it will track all operations performed on it (so that gradients can be propagated via the chain rule). Once the computation is finished, you can call .backward() to compute all gradients automatically. The gradient of this tensor is accumulated into its .grad attribute.
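A minimal sketch of that workflow (the shape and constant here are just for illustration):
x=torch.ones(2,2,requires_grad=True)
out=(3*x).sum()    # out is a scalar, so backward() needs no arguments
out.backward()
print(x.grad)      # tensor([[3., 3.], [3., 3.]])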
Note that when calling y.backward(): if y is a scalar, backward() needs no arguments; otherwise you must pass a tensor w of the same shape as y. In that case y.backward(w) means: first compute L=torch.sum(y*w), which is a scalar, and then take the derivative of L with respect to the independent variable x.
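A short sketch of the non-scalar case (w is an arbitrary weight tensor chosen for illustration):
x=torch.tensor([1.,2.,3.],requires_grad=True)
y=2*x                              # y is not a scalar
w=torch.tensor([1.0,0.1,0.01])
y.backward(w)                      # equivalent to torch.sum(y*w).backward()
print(x.grad)                      # tensor([2.0000, 0.2000, 0.0200])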
If you do not want a tensor to be tracked any further, you can call .detach() to detach it from the recorded history; later computations on it will not be tracked, so gradients will not flow through it. Alternatively, you can wrap the code you do not want tracked in with torch.no_grad(). This is very common when evaluating a model, because during evaluation we do not need the gradients of the trainable parameters (requires_grad=True).
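A minimal sketch of both mechanisms:
x=torch.ones(3,requires_grad=True)
with torch.no_grad():      # nothing inside this block is tracked
    y=2*x
print(y.requires_grad)     # False
z=(2*x).detach()           # detach() cuts z out of the computation graph
print(z.requires_grad)     # False: gradients will not flow through z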
Function is another important class. Tensor and Function together build a directed acyclic graph (DAG) that records the entire computation. Each tensor has a .grad_fn attribute, which refers to the Function that created the Tensor; in other words, if the tensor was produced by some operation, grad_fn is an object describing that operation, otherwise it is None.
x=torch.ones(2,2,requires_grad=True)
print(x)
print(x.grad_fn)
print(x.grad) # None until gradients have been computed
print(x.dtype)
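As a hedged continuation of the cell above (reusing the x defined there), a tensor produced by an operation carries a grad_fn, and calling backward() on a scalar result fills x.grad:
y=x+2
print(y.grad_fn)       # an AddBackward0 object: y was created by an addition
out=(y*y).mean()
out.backward()
print(x.grad)          # tensor([[1.5000, 1.5000], [1.5000, 1.5000]])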