optimizer.zero_grad() sets the gradients to zero, that is, it resets the derivative of the loss with respect to every weight to 0.
While learning PyTorch I noticed that roughly the following is done for every batch:
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
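For context, here is a minimal sketch of the surroundings this loop assumes; the model, loss, optimizer, and data below are illustrative placeholders, not taken from the original post:
import torch
import torch.nn as nn
import torch.optim as optim

# Illustrative setup (assumed): a tiny linear regression model trained with SGD.
net = nn.Linear(3, 1)                       # model with a learnable weight and bias
criterion = nn.MSELoss()                    # mean squared error loss
optimizer = optim.SGD(net.parameters(), lr=0.01)

inputs = torch.randn(8, 3)                  # one batch of 8 samples, 3 features each
labels = torch.randn(8, 1)

optimizer.zero_grad()                       # clear gradients left over from the previous step
outputs = net(inputs)                       # forward pass
loss = criterion(outputs, labels)           # scalar loss for this batch
loss.backward()                             # backward pass: fills p.grad for every parameter p
optimizer.step()                            # update every parameter using its p.grad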
I understand these operations as a form of gradient descent. For comparison, here is a simple gradient descent implementation I wrote by hand earlier:
# gradient descent
# (n = number of features, m = number of samples,
#  input/label = training data, dot = inner product -- see the sketch below)
weights = [0] * n
alpha = 0.0001
max_Iter = 50000
for i in range(max_Iter):
    loss = 0
    d_weights = [0] * n
    for k in range(m):
        # forward pass: prediction for sample k
        h = dot(input[k], weights)
        # accumulate the (negative) gradient contribution of sample k
        d_weights = [d_weights[j] + (label[k] - h) * input[k][j] for j in range(n)]
        # accumulate the squared-error loss of sample k
        loss += (label[k] - h) * (label[k] - h) / 2
    # average over the batch
    d_weights = [d_weights[k] / m for k in range(n)]
    # update step: move along the stored direction (the negative gradient)
    weights = [weights[k] + alpha * d_weights[k] for k in range(n)]
    if i % 10000 == 0:
        print("Iteration %d loss: %f" % (i, loss / m))
        print(weights)
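For reference, a minimal sketch of the definitions the loop above assumes (to be run before it); dot, input, label, m and n are not defined in the original post, so the meanings below are assumptions (inner product, training data, sample count, feature count):
# Assumed helper and data for the handwritten gradient descent above.
def dot(x, w):
    # inner product of a feature vector and the weight vector
    return sum(xi * wi for xi, wi in zip(x, w))

input = [[1.0, 2.0], [2.0, 1.0], [3.0, 4.0]]   # m samples with n features each (toy values)
label = [5.0, 4.0, 11.0]                        # one target per sample
m = len(input)                                  # number of samples
n = len(input[0])                               # number of features / weights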
You can see that the two actually correspond one-to-one:
optimizer.zero_grad() corresponds to d_weights = [0] * n,
i.e., initializing the gradients to zero (this is necessary because the gradient of a batch's loss with respect to the weights is the accumulated sum of the per-sample gradients, as the small check below illustrates).
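A small illustrative check of that accumulation behaviour, using a single made-up parameter rather than the author's model: calling backward() twice without zeroing sums the gradients, and zeroing resets them. optimizer.zero_grad() does the equivalent for every parameter the optimizer manages (recent PyTorch versions reset the grads to None instead of writing zeros, which has the same effect on the next backward()).
import torch

w = torch.tensor([1.0], requires_grad=True)

(3 * w).sum().backward()
print(w.grad)            # tensor([3.])  -- d(3*w)/dw = 3

(3 * w).sum().backward() # no zeroing in between
print(w.grad)            # tensor([6.])  -- gradients accumulate: 3 + 3

w.grad.zero_()           # what zeroing does for this one parameter
print(w.grad)            # tensor([0.])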
outputs = net(inputs) corresponds to h = dot(input[k], weights),
i.e., the forward pass that computes the predictions.
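As an illustrative check (assuming net is a single nn.Linear layer, which the original post does not state), the forward pass is exactly this dot product, plus a bias term:
import torch
import torch.nn as nn

net = nn.Linear(3, 1)
x = torch.randn(2, 3)

out = net(x)                              # forward pass
manual = x @ net.weight.t() + net.bias    # the same dot product, written out
print(torch.allclose(out, manual))        # True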
loss = criterion(outputs, labels) corresponds to loss += (label[k] - h) * (label[k] - h) / 2.
This step obviously just computes the loss. (In the handwritten version the accumulated loss value is only there so we can see how training is going; the gradient computation itself does not use it. In PyTorch, however, the loss tensor is exactly what backward() is called on, so the line cannot simply be dropped there.)
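As an illustrative sketch, assuming criterion = nn.MSELoss(): it computes the mean of the squared errors, while the handwritten line sums 0.5 * (label - h)^2 over the batch, so the two differ only by the factor 1/2 and the averaging over m:
import torch
import torch.nn as nn

criterion = nn.MSELoss()
outputs = torch.tensor([1.0, 2.0, 3.0])
labels = torch.tensor([1.5, 2.0, 2.0])

loss = criterion(outputs, labels)        # mean of (label - output)^2
manual = ((labels - outputs) ** 2).mean()
print(torch.allclose(loss, manual))      # True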
loss.backward() corresponds to d_weights = [d_weights[j] + (label[k] - h) * input[k][j] for j in range(n)],
i.e., the backward pass that computes the gradients.
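A quick illustrative check that backward() produces the analytic gradient this line computes, for a bias-free linear model and the 0.5 * (label - h)^2 loss (note that d_weights stores (label - h) * x, the negative of the gradient, which is why the update step later adds it):
import torch

x = torch.tensor([[1.0, 2.0, 3.0]])           # one sample with n = 3 features
y = torch.tensor([[2.0]])                     # its label
w = torch.zeros(3, 1, requires_grad=True)     # weights

h = x @ w                                     # forward: h = dot(x, w)
loss = 0.5 * ((y - h) ** 2).sum()             # same per-sample loss as the handwritten code
loss.backward()

# Analytic gradient of 0.5 * (y - h)^2 w.r.t. w is -(y - h) * x,
# i.e. the negative of the (label - h) * x term accumulated in d_weights.
analytic = -(y - h).detach() * x
print(torch.allclose(w.grad, analytic.t()))   # True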
optimizer.step() corresponds to weights = [weights[k] + alpha * d_weights[k] for k in range(n)],
i.e., updating all parameters.
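Finally, assuming plain SGD without momentum, optimizer.step() applies exactly this rule, up to the sign convention: PyTorch subtracts lr * grad, while the handwritten code adds alpha times the stored negative gradient:
import torch
import torch.optim as optim

w = torch.tensor([1.0, 2.0], requires_grad=True)
optimizer = optim.SGD([w], lr=0.1)

loss = (w ** 2).sum()        # gradient is 2 * w = [2., 4.]
loss.backward()

before = w.detach().clone()
optimizer.step()             # w <- w - lr * w.grad

print(w.detach())                               # tensor([0.8000, 1.6000])
print(before - 0.1 * torch.tensor([2.0, 4.0]))  # the same values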
Author: scut_salmon
Source: CSDN
Original: https://blog.csdn.net/scut_salmon/article/details/82414730
Copyright notice: This is the blogger's original article; please include a link to the original post when reposting.