## Preface

The `backward()` method is, at bottom, a call to `torch.autograd.backward()`. That is, during training we feed in our data, run it through a series of neural-network operations, compute the loss at the end, and then call `loss.backward()`. Under the hood, that `backward()` ultimately resolves to the function above.
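To make the relationship concrete, here is a minimal sketch (not from the original post) showing that `loss.backward()` and `torch.autograd.backward(loss)` compute the same gradients:

```python
import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)
loss = (x ** 2).sum()

# loss.backward() is shorthand for the call below
torch.autograd.backward(loss)

print(x.grad)  # tensor([2., 4.])
```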

## Fake Backward

```python
import torch.nn as nn

class ContentLoss(nn.Module):
    def __init__(self, target, weight):
        super(ContentLoss, self).__init__()
        # we 'detach' the target content from the tree used
        # to dynamically compute the gradient: this is a stated value,
        # not a variable. Otherwise the forward method of the criterion
        # will throw an error.
        self.target = target.detach() * weight
        self.weight = weight
        self.criterion = nn.MSELoss()

    def forward(self, input):
        self.loss = self.criterion(input * self.weight, self.target)
        self.output = input
        return self.output

    def backward(self, retain_graph=True):
        print('ContentLoss Backward works')
        self.loss.backward(retain_graph=retain_graph)
        return self.loss

...
# Run the backward calls; see the links below for the full code.
for sl in style_losses:
    style_score += sl.backward()
for cl in content_losses:
    content_score += cl.backward()
```
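Why is this "fake"? Defining a method named `backward` on an `nn.Module` does not hook into autograd at all; it is an ordinary Python method that only runs if you call it yourself. A minimal sketch (the `Plain` module is illustrative, not from the original post):

```python
import torch
import torch.nn as nn

class Plain(nn.Module):
    def forward(self, x):
        return 2 * x

    def backward(self):  # never invoked by autograd
        print('custom backward called')

m = Plain()
x = torch.ones(3, requires_grad=True)
y = m(x).sum()
y.backward()   # Tensor.backward runs autograd; Plain.backward is ignored
print(x.grad)  # tensor([2., 2., 2.])
```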

```python
import torch.nn as nn
import torch.nn.functional as F

class ContentLoss(nn.Module):
    def __init__(self, target):
        super(ContentLoss, self).__init__()
        # we 'detach' the target content from the tree used
        # to dynamically compute the gradient: this is a stated value,
        # not a variable. Otherwise the forward method of the criterion
        # will throw an error.
        self.target = target.detach()

    def forward(self, input):
        self.loss = F.mse_loss(input, self.target)
        return input

...
# Run the code; see the latest 0.4.0 style-transfer tutorial on the official site for details.
for sl in style_losses:
    style_score += sl.loss
for cl in content_losses:
    content_score += cl.loss
loss = style_score + content_score
loss.backward()
```
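In this 0.4.0 style, each module merely records its loss during the forward pass; the scores are ordinary tensor sums, so a single `loss.backward()` propagates through all of them at once. A runnable sketch of the accumulation pattern (the toy targets are illustrative, not from the original post):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentLoss(nn.Module):
    def __init__(self, target):
        super(ContentLoss, self).__init__()
        self.target = target.detach()

    def forward(self, input):
        self.loss = F.mse_loss(input, self.target)
        return input  # pass the input through unchanged

x = torch.zeros(3, requires_grad=True)
layers = [ContentLoss(torch.ones(3)), ContentLoss(torch.full((3,), 2.0))]

out = x
for layer in layers:
    out = layer(out)

# sum the recorded losses and backpropagate once
total = sum(layer.loss for layer in layers)
total.backward()
print(x.grad)  # roughly tensor([-2., -2., -2.])
```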

## Real Backward

```python
import torch

class MyReLU(torch.autograd.Function):
    """We can implement our own custom autograd Functions by subclassing torch.autograd.Function and implementing the forward and backward passes which operate on Tensors."""

    @staticmethod
    def forward(ctx, x):
        """In the forward pass we receive a context object and a Tensor containing the input; we must return a Tensor containing the output, and we can use the context object to cache objects for use in the backward pass."""
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        """In the backward pass we receive the context object and a Tensor containing the gradient of the loss with respect to the output produced during the forward pass. We can retrieve cached data from the context object, and must compute and return the gradient of the loss with respect to the input to the forward function."""
        x, = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[x < 0] = 0
        return grad_input
```
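A custom `Function` is invoked through its `.apply` method rather than by instantiating it. A short usage sketch (the class is repeated without docstrings so the snippet runs standalone):

```python
import torch

class MyReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[x < 0] = 0  # zero the gradient where the input was negative
        return grad_input

x = torch.tensor([-1.0, 2.0], requires_grad=True)
y = MyReLU.apply(x).sum()
y.backward()
print(x.grad)  # tensor([0., 1.])
```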

```python
# Legacy (pre-0.4) style: forward/backward as instance methods
class my_function(Function):
    def forward(self, input, parameters):
        self.saved_for_backward = [input, parameters]
        # output = [operations on the input and parameters, omitted here]
        return output

    def backward(self, grad_output):
        input, parameters = self.saved_for_backward
        # grad_input = [gradient with respect to the input, omitted here]
        return grad_input

# Then wrap it by defining a Module
class my_module(nn.Module):
    def __init__(self, ...):
        super(my_module, self).__init__()
        self.parameters = # initialize some parameters

    def forward(self, input):
        output = my_function()(input, self.parameters)  # run the function you defined earlier here!
        return output
```
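In current PyTorch the same pattern is written with static methods and `.apply`. Here is a runnable sketch using a toy elementwise scale; the names `ScaleFunction` and `ScaleModule` are illustrative, not from the original post:

```python
import torch
import torch.nn as nn

class ScaleFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, weight):
        ctx.save_for_backward(input, weight)
        return input * weight

    @staticmethod
    def backward(ctx, grad_output):
        input, weight = ctx.saved_tensors
        grad_input = grad_output * weight          # dL/dinput
        grad_weight = (grad_output * input).sum()  # dL/dweight
        return grad_input, grad_weight

class ScaleModule(nn.Module):
    def __init__(self):
        super(ScaleModule, self).__init__()
        self.weight = nn.Parameter(torch.tensor(2.0))

    def forward(self, input):
        # the Module's forward calls the Function; autograd finds backward itself
        return ScaleFunction.apply(input, self.weight)

m = ScaleModule()
x = torch.tensor([1.0, 3.0], requires_grad=True)
m(x).sum().backward()
print(x.grad)         # tensor([2., 2.])
print(m.weight.grad)  # tensor(4.)
```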


## References

https://discuss.pytorch.org/t/defining-backward-function-in-nn-module/5047

https://discuss.pytorch.org/t/whats-the-difference-between-torch-nn-functional-and-torch-nn/681

https://discuss.pytorch.org/t/difference-of-methods-between-torch-nn-and-functional/1076

https://discuss.pytorch.org/t/whats-the-difference-between-torch-nn-functional-and-torch-nn/681/4

Original author: OLDPAN
Original article: https://zhuanlan.zhihu.com/p/37213786
This article is reposted from the web to share knowledge; if there is any infringement, please contact the blog owner to have it removed.