loss function
Basic usage of a loss function:
criterion = LossCriterion() # the constructor takes its own arguments
loss = criterion(x, y)      # calling the criterion on the input and target returns the loss
The resulting loss has already been averaged over the elements of the mini-batch (the default reduction='mean').
Note:
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied; 'mean': the sum of the output will be divided by the number of elements in the output; 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
weight (Tensor, optional) – a manual rescaling weight given to each class. If given, it has to be a Tensor of size C. Otherwise, it is treated as if having all ones.
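A small sketch of how the three reduction modes differ, using L1Loss and made-up tensor values for illustration:
import torch
from torch import nn

output = torch.tensor([1.0, 2.0, 4.0])
target = torch.tensor([1.0, 1.0, 1.0])
nn.L1Loss(reduction='none')(output, target)   # tensor([0., 1., 3.]) – per-element losses
nn.L1Loss(reduction='mean')(output, target)   # tensor(1.3333)      – sum / number of elements
nn.L1Loss(reduction='sum')(output, target)    # tensor(4.)          – plain sum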
L1Loss
Creates a criterion that measures the mean absolute error (MAE) between each element of the input x and the target y.
from torch import nn
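A minimal usage sketch (the random tensors are only for illustration):
import torch
from torch import nn

criterion = nn.L1Loss()
input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5)
loss = criterion(input, target)   # mean absolute error over all 3*5 elements
loss.backward()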
MSELoss
Creates a criterion that measures the mean squared error (MSE) between each element of the input x and the target y.
torch.nn.MSELoss(size_average=None, reduce=None, reduction='mean')
from torch import nn
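A minimal usage sketch along the same lines (random tensors for illustration):
import torch
from torch import nn

criterion = nn.MSELoss()
input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5)
loss = criterion(input, target)   # mean of (input - target) ** 2 over all elements
loss.backward()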
CrossEntropyLoss
This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class.
torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')
from torch import nn
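A minimal sketch; note that the input is raw, unnormalized scores (logits) and the target holds class indices rather than one-hot vectors (values below are made up):
import torch
from torch import nn

criterion = nn.CrossEntropyLoss()
input = torch.randn(3, 5, requires_grad=True)   # 3 samples, 5 classes, raw logits
target = torch.tensor([1, 0, 4])                # class index for each sample
loss = criterion(input, target)
loss.backward()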
NLLLoss
The negative log likelihood loss, used for multi-class classification.
torch.nn.NLLLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')
from torch import nn
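A minimal sketch; unlike CrossEntropyLoss, the input here must already be log-probabilities, e.g. the output of nn.LogSoftmax (tensor values are illustrative):
import torch
from torch import nn

m = nn.LogSoftmax(dim=1)
criterion = nn.NLLLoss()
input = torch.randn(3, 5, requires_grad=True)
target = torch.tensor([1, 0, 4])
loss = criterion(m(input), target)   # same result as CrossEntropyLoss applied to the raw logits
loss.backward()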
BCELoss
Creates a criterion that measures the binary cross entropy between the target and the output.
torch.nn.BCELoss(weight=None, size_average=None, reduce=None, reduction='mean')
from torch import nn
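A minimal sketch; the input must already be probabilities in (0, 1) (e.g. passed through a sigmoid), and the targets are 0/1 floats (values are illustrative):
import torch
from torch import nn

criterion = nn.BCELoss()
input = torch.sigmoid(torch.randn(3, requires_grad=True))   # probabilities in (0, 1)
target = torch.empty(3).random_(2)                          # random 0/1 float labels
loss = criterion(input, target)
loss.backward()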
BCEWithLogitsLoss
This loss combines a Sigmoid layer and BCELoss in one single class.
This version is more numerically stable than using a plain Sigmoid followed by a BCELoss because, by combining the two operations into one layer, it takes advantage of the log-sum-exp trick for numerical stability.
torch.nn.BCEWithLogitsLoss(weight=None, size_average=None, reduce=None, reduction='mean', pos_weight=None)
Additional parameter:
- pos_weight (Tensor, optional) – a weight of positive examples; must be a vector with length equal to the number of classes.
from torch import nn
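A minimal sketch; the input is raw logits (no sigmoid needed), and pos_weight rescales the loss contribution of positive examples per output (the choice of 3 outputs and the weight values below are made up):
import torch
from torch import nn

criterion = nn.BCEWithLogitsLoss(pos_weight=torch.ones(3))   # one weight per output
input = torch.randn(2, 3, requires_grad=True)                # raw logits
target = torch.empty(2, 3).random_(2)                        # 0/1 float labels
loss = criterion(input, target)
loss.backward()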