I am writing reinforcement learning code with Python 3 and PyTorch 1.9.1.
I am posting this question because I don't understand an error I'm getting. It occurs on the line loss.mean().backward().
The error says the dtype should be Float but a Double came in, yet no matter how many times I print the dtype, it is torch.float32. What is going wrong?
The problematic code is as follows.
def train_net_ap(self, idx):
    s, a, r, s_prime, done_mask, prob_a = self.make_batch(idx)
    print("a is ", a)

    for i in range(K_epoch):
        td_target = r + gamma * self.v_ap(s_prime) * done_mask
        delta = td_target - self.v_ap(s)
        delta = delta.detach().numpy()

        advantage_lst = []
        advantage = 0.0
        for delta_t in delta[::-1]:
            advantage = gamma * lmbda * advantage + delta_t[0]
            advantage_lst.append([advantage])
        advantage_lst.reverse()
        advantage = torch.tensor(advantage_lst, dtype=torch.float)

        pi = self.pi_ap(s, softmax_dim=1)
        pi_a = pi.gather(1, a)
        ratio = torch.exp(torch.log(pi_a) - torch.log(prob_a))  # a/b == exp(log(a)-log(b))

        surr1 = ratio * advantage
        surr2 = torch.clamp(ratio, 1 - eps_clip, 1 + eps_clip) * advantage
        loss = -torch.min(surr1, surr2) + F.smooth_l1_loss(self.v_ap(s), td_target.detach())

        print("loss is ", loss)
        print("loss dtype is ", loss.dtype)
        print("loss.mean() is ", loss.mean(), loss.mean().dtype)

        self.optimizer.zero_grad()
        loss.mean().backward()
        self.optimizer.step()
The printed output and the error message are as follows.
loss dtype is  torch.float32
loss.mean() is  tensor(6.1353, grad_fn=<MeanBackward0>) torch.float32
Traceback (most recent call last):
    main()
    model.train_net_ap(x)
    loss.mean().backward()
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
    allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
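To narrow things down, here is a minimal sketch (separate from my actual code; the names pred and target are illustrative) of how a float64 tensor can enter the graph via NumPy, since tensors built with torch.from_numpy on a NumPy float64 array come in as torch.float64 even when the loss itself later prints as float32, and how an explicit cast keeps all dtypes consistent:

```python
import torch
import torch.nn.functional as F

# A float32 tensor that requires gradients, standing in for a network output.
pred = torch.randn(4, 1, requires_grad=True)

# Values that took a round trip through NumPy: NumPy defaults to float64,
# so the resulting tensor is torch.float64 ("Double"), not torch.float32.
target_np = pred.detach().numpy().astype("float64")
target = torch.from_numpy(target_np)

# Casting the target back to float32 keeps every tensor in the graph at the
# same dtype, so backward() has a consistent dtype to work with.
loss = F.smooth_l1_loss(pred, target.float())
loss.backward()
```

In my case the tensors s, a, r, s_prime, done_mask, and prob_a come from make_batch, so if any of them is built from a NumPy float64 array it would carry torch.float64 into the graph in the same way.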