sonnix.functional¶
Activations¶
sonnix.functional.safe_exp ¶
safe_exp(input: Tensor, max: float = 20.0) -> Tensor
Safely compute the exponential of the elements.
Input values higher than the specified maximum are set to this maximum value before exponentiation, so a moderately sized positive maximum keeps the output tensor free of Inf.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input | Tensor | The input tensor. | required |
| max | float | The maximum value. | 20.0 |

Returns:

| Type | Description |
|---|---|
| Tensor | A tensor with the exponential of the elements. |
Example
>>> import torch
>>> from sonnix.functional import safe_exp
>>> output = safe_exp(torch.tensor([1.0, 10.0, 100.0, 1000.0]))
>>> output
tensor([2.7183e+00, 2.2026e+04, 4.8517e+08, 4.8517e+08])
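The clamp-then-exponentiate behavior described above can be sketched in plain Python (an illustration of the semantics, not the library implementation):

```python
import math

def safe_exp_sketch(values, max_value=20.0):
    # Clamp each input at max_value before exponentiating,
    # so the result can never overflow to Inf.
    return [math.exp(min(v, max_value)) for v in values]

# Inputs above the maximum all map to exp(max_value).
print(safe_exp_sketch([1.0, 10.0, 100.0, 1000.0]))
```

This reproduces the example output: both 100.0 and 1000.0 are clamped to 20.0, giving exp(20) ≈ 4.8517e+08.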
sonnix.functional.safe_log ¶
safe_log(input: Tensor, min: float = 1e-08) -> Tensor
Safely compute the natural logarithm of the elements.
Input values lower than the specified minimum are set to this minimum value, so a small positive minimum keeps the output tensor free of NaN and Inf.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input | Tensor | The input tensor. | required |
| min | float | The minimum value. | 1e-08 |

Returns:

| Type | Description |
|---|---|
| Tensor | A tensor with the natural logarithm of the elements. |
Example
>>> import torch
>>> from sonnix.functional import safe_log
>>> safe_log(torch.tensor([1e-4, 1e-5, 1e-6, 1e-8, 1e-9, 1e-10]))
tensor([ -9.2103, -11.5129, -13.8155, -18.4207, -18.4207, -18.4207])
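The clamp-then-log behavior can likewise be sketched in plain Python (illustrative only):

```python
import math

def safe_log_sketch(values, min_value=1e-8):
    # Clamp each input at min_value before taking the log,
    # so zeros and negatives cannot produce NaN or -Inf.
    return [math.log(max(v, min_value)) for v in values]

# Inputs below the minimum all map to log(min_value) = -18.4207...
print(safe_log_sketch([1e-4, 1e-8, 1e-10]))
```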
Loss functions¶
sonnix.functional.asinh_mse_loss ¶
asinh_mse_loss(
prediction: Tensor,
target: Tensor,
reduction: str = "mean",
) -> Tensor
Compute the mean squared error (MSE) on the inverse hyperbolic sine (asinh) transformed predictions and targets.
It is a generalization of mean squared logarithmic error (MSLE)
that works for real values. The asinh transformation is used
instead of log1p because asinh works on negative values.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |

Returns:

| Type | Description |
|---|---|
| Tensor | The mean squared error (MSE) on the inverse hyperbolic sine (asinh) transformed predictions and targets. The shape of the tensor depends on the reduction strategy. |
Example
>>> import torch
>>> from sonnix.functional import asinh_mse_loss
>>> loss = asinh_mse_loss(torch.randn(2, 4, requires_grad=True), torch.randn(2, 4))
>>> loss
tensor(..., grad_fn=<MseLossBackward0>)
>>> loss.backward()
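The transformation described above amounts to applying asinh to both tensors and taking the MSE of the result. A pure-Python sketch with 'mean' reduction (illustrative, not the library implementation):

```python
import math

def asinh_mse_sketch(prediction, target):
    # MSE on asinh-transformed values; unlike log1p, asinh is
    # defined for negative inputs, so any real value is allowed.
    diffs = [(math.asinh(p) - math.asinh(t)) ** 2
             for p, t in zip(prediction, target)]
    return sum(diffs) / len(diffs)  # 'mean' reduction
```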
sonnix.functional.asinh_smooth_l1_loss ¶
asinh_smooth_l1_loss(
prediction: Tensor,
target: Tensor,
reduction: str = "mean",
beta: float = 1.0,
) -> Tensor
Compute the smooth L1 loss on the inverse hyperbolic sine (asinh) transformed predictions and targets.
It is a generalization of mean squared logarithmic error (MSLE)
that works for real values. The asinh transformation is used
instead of log1p because asinh works on negative values.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |
| beta | float | The threshold at which to change between L1 and L2 loss. The value must be non-negative. | 1.0 |

Returns:

| Type | Description |
|---|---|
| Tensor | The smooth L1 loss on the inverse hyperbolic sine (asinh) transformed predictions and targets. The shape of the tensor depends on the reduction strategy. |
Example
>>> import torch
>>> from sonnix.functional import asinh_smooth_l1_loss
>>> loss = asinh_smooth_l1_loss(torch.randn(2, 4, requires_grad=True), torch.randn(2, 4))
>>> loss
tensor(..., grad_fn=<SmoothL1LossBackward0>)
>>> loss.backward()
sonnix.functional.binary_focal_loss ¶
binary_focal_loss(
prediction: Tensor,
target: Tensor,
alpha: float = 0.25,
gamma: float = 2.0,
reduction: str = "mean",
) -> Tensor
Compute the binary focal loss.
Based on "Focal Loss for Dense Object Detection" (https://arxiv.org/pdf/1708.02002.pdf). The implementation follows https://pytorch.org/vision/main/_modules/torchvision/ops/focal_loss.html.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The float tensor with predictions as probabilities for each example. | required |
| target | Tensor | A float tensor with the same shape as inputs. It stores the binary classification label for each element in inputs (0 for the negative class and 1 for the positive class). | required |
| alpha | float | The weighting factor, which must be in the range [0, 1]. | 0.25 |
| gamma | float | The focusing parameter, which must be positive. | 2.0 |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |

Returns:

| Type | Description |
|---|---|
| Tensor | The computed binary focal loss. The shape of the tensor depends on the reduction strategy. |
Example
>>> import torch
>>> from sonnix.functional import binary_focal_loss
>>> loss = binary_focal_loss(
... torch.rand(2, 4, requires_grad=True),
... torch.tensor([[1.0, 0.0, 0.0, 1.0], [1.0, 0.0, 1.0, 0.0]]),
... )
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
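Following the torchvision formulation the docstring points to, the per-element focal term is alpha_t * (1 - p_t)^gamma * CE, where p_t is the probability assigned to the true class. A pure-Python sketch with 'mean' reduction (illustrative, not the library implementation):

```python
import math

def binary_focal_sketch(probs, targets, alpha=0.25, gamma=2.0):
    # p_t is the predicted probability of the true class; the
    # (1 - p_t)^gamma factor down-weights easy, confident examples.
    total = 0.0
    for p, y in zip(probs, targets):
        p_t = p if y == 1.0 else 1.0 - p
        alpha_t = alpha if y == 1.0 else 1.0 - alpha
        total += alpha_t * (1.0 - p_t) ** gamma * -math.log(p_t)
    return total / len(probs)  # 'mean' reduction
```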
sonnix.functional.binary_focal_loss_with_logits ¶
binary_focal_loss_with_logits(
prediction: Tensor,
target: Tensor,
alpha: float = 0.25,
gamma: float = 2.0,
reduction: str = "mean",
) -> Tensor
Compute the binary focal loss.
Based on "Focal Loss for Dense Object Detection" (https://arxiv.org/pdf/1708.02002.pdf). The implementation follows https://pytorch.org/vision/main/_modules/torchvision/ops/focal_loss.html.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The float tensor with predictions as unnormalized scores (often referred to as logits) for each example. | required |
| target | Tensor | A float tensor with the same shape as inputs. It stores the binary classification label for each element in inputs (0 for the negative class and 1 for the positive class). | required |
| alpha | float | The weighting factor, which must be in the range [0, 1]. | 0.25 |
| gamma | float | The focusing parameter, which must be positive. | 2.0 |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |

Returns:

| Type | Description |
|---|---|
| Tensor | The computed binary focal loss. The shape of the tensor depends on the reduction strategy. |
Example
>>> import torch
>>> from sonnix.functional import binary_focal_loss_with_logits
>>> loss = binary_focal_loss_with_logits(
... torch.randn(2, 4, requires_grad=True),
... torch.tensor([[1.0, 0.0, 0.0, 1.0], [1.0, 0.0, 1.0, 0.0]]),
... )
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
sonnix.functional.binary_poly1_loss ¶
binary_poly1_loss(
prediction: Tensor,
target: Tensor,
alpha: float = 1.0,
reduction: str = "mean",
) -> Tensor
Compute the Poly-1 loss for binary targets.
Based on "PolyLoss: A Polynomial Expansion Perspective of Classification Loss Functions" (https://arxiv.org/pdf/2204.12511)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The float tensor with predictions as probabilities for each example. | required |
| target | Tensor | A float tensor with the same shape as inputs. It stores the binary classification label for each element in inputs (0 for the negative class and 1 for the positive class). | required |
| alpha | float | The weighting factor. | 1.0 |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |

Returns:

| Type | Description |
|---|---|
| Tensor | The computed Poly-1 loss value. The shape of the tensor depends on the reduction strategy. |
Example
>>> import torch
>>> from sonnix.functional import binary_poly1_loss
>>> loss = binary_poly1_loss(
... torch.rand(2, 4, requires_grad=True),
... torch.tensor([[1.0, 0.0, 0.0, 1.0], [1.0, 0.0, 1.0, 0.0]]),
... )
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
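In the PolyLoss paper, Poly-1 adds the first polynomial term (1 - p_t) of the cross-entropy expansion, weighted by a coefficient (here alpha). A per-element sketch for binary probabilities (illustrative; the library's exact form may differ):

```python
import math

def binary_poly1_sketch(probs, targets, alpha=1.0):
    # Poly-1: cross-entropy plus alpha times the leading
    # polynomial term (1 - p_t) from the PolyLoss expansion.
    total = 0.0
    for p, y in zip(probs, targets):
        p_t = p if y == 1.0 else 1.0 - p
        total += -math.log(p_t) + alpha * (1.0 - p_t)
    return total / len(probs)  # 'mean' reduction
```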
sonnix.functional.binary_poly1_loss_with_logits ¶
binary_poly1_loss_with_logits(
prediction: Tensor,
target: Tensor,
alpha: float = 1.0,
reduction: str = "mean",
) -> Tensor
Compute the Poly-1 loss.
Based on "PolyLoss: A Polynomial Expansion Perspective of Classification Loss Functions" (https://arxiv.org/pdf/2204.12511)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The float tensor with predictions as unnormalized scores (often referred to as logits) for each example. | required |
| target | Tensor | A float tensor with the same shape as inputs. It stores the binary classification label for each element in inputs (0 for the negative class and 1 for the positive class). | required |
| alpha | float | The weighting factor. | 1.0 |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |

Returns:

| Type | Description |
|---|---|
| Tensor | The computed Poly-1 loss value. The shape of the tensor depends on the reduction strategy. |
Example
>>> import torch
>>> from sonnix.functional import binary_poly1_loss_with_logits
>>> loss = binary_poly1_loss_with_logits(
... torch.randn(2, 4, requires_grad=True),
... torch.tensor([[1.0, 0.0, 0.0, 1.0], [1.0, 0.0, 1.0, 0.0]]),
... )
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
sonnix.functional.general_robust_regression_loss ¶
general_robust_regression_loss(
prediction: Tensor,
target: Tensor,
alpha: float = 2.0,
scale: float = 1.0,
max: float | None = None,
reduction: str = "mean",
) -> Tensor
Compute the general robust regression loss a.k.a. Barron robust loss.
Based on the paper:
A General and Adaptive Robust Loss Function
Jonathan T. Barron
CVPR 2019 (https://arxiv.org/abs/1701.03077)
Note
The "adaptive" part of the loss is not implemented in this function.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |
| alpha | float | The shape parameter that controls the robustness of the loss. | 2.0 |
| scale | float | The scale parameter that controls the size of the loss's quadratic bowl near 0. | 1.0 |
| max | float \| None | The max value to clip the loss at before computing the reduction. | None |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |

Returns:

| Type | Description |
|---|---|
| Tensor | The loss. The shape of the tensor depends on the reduction strategy. |

Raises:

| Type | Description |
|---|---|
| ValueError | if the reduction is not valid. |
Example
>>> import torch
>>> from sonnix.functional import general_robust_regression_loss
>>> loss = general_robust_regression_loss(
... torch.randn(2, 4, requires_grad=True), torch.randn(2, 4)
... )
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
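Ignoring the optional max clipping and the reduction, the per-residual loss from Barron's paper can be sketched as follows; the singular values alpha = 0 and alpha = 2 are special-cased as their limits (a hedged reading of the formula, not the library implementation):

```python
import math

def barron_loss_sketch(x, alpha, scale):
    # Barron's general robust loss for a single residual x.
    z = (x / scale) ** 2
    if alpha == 2.0:
        return 0.5 * z            # quadratic (L2) limit
    if alpha == 0.0:
        return math.log(0.5 * z + 1.0)  # Cauchy/log limit
    b = abs(alpha - 2.0)
    return (b / alpha) * ((z / b + 1.0) ** (alpha / 2.0) - 1.0)
```

With alpha=2 this recovers a scaled MSE; smaller alpha values down-weight large residuals, which is the source of the robustness.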
sonnix.functional.log_cosh_loss ¶
log_cosh_loss(
prediction: Tensor,
target: Tensor,
reduction: str = "mean",
scale: float = 1.0,
) -> Tensor
Compute the logarithm of the hyperbolic cosine of the prediction error.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |
| scale | float | The scale factor. | 1.0 |

Returns:

| Type | Description |
|---|---|
| Tensor | The logarithm of the hyperbolic cosine of the prediction error. |
Example
>>> import torch
>>> from sonnix.functional import log_cosh_loss
>>> loss = log_cosh_loss(torch.randn(3, 5, requires_grad=True), torch.randn(3, 5))
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
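The core of the computation is log(cosh(error)) averaged over elements. A pure-Python sketch (illustrative; the exact placement of the scale factor here is an assumption, not confirmed by the docstring):

```python
import math

def log_cosh_sketch(prediction, target, scale=1.0):
    # log(cosh(error / scale)) behaves like L2 for small errors
    # and like L1 for large ones, making it robust to outliers.
    errs = [math.log(math.cosh((p - t) / scale))
            for p, t in zip(prediction, target)]
    return sum(errs) / len(errs)  # 'mean' reduction
```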
sonnix.functional.msle_loss ¶
msle_loss(
prediction: Tensor,
target: Tensor,
reduction: str = "mean",
) -> Tensor
Compute the mean squared error (MSE) on the logarithmic transformed predictions and targets.
This loss is best used when targets have exponential growth, such as population counts or average sales of a commodity over a span of years. Note that this loss penalizes an under-predicted estimate more than an over-predicted estimate.
Note: this loss only works with positive values (0 included).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |

Returns:

| Type | Description |
|---|---|
| Tensor | The mean squared logarithmic error. The shape of the tensor depends on the reduction strategy. |
Example
>>> import torch
>>> from sonnix.functional import msle_loss
>>> loss = msle_loss(torch.randn(2, 4, requires_grad=True), torch.randn(2, 4))
>>> loss
tensor(..., grad_fn=<MseLossBackward0>)
>>> loss.backward()
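MSLE is the MSE of log1p-transformed values. A pure-Python sketch with 'mean' reduction (illustrative, not the library implementation):

```python
import math

def msle_sketch(prediction, target):
    # MSE on log1p-transformed values; valid only for inputs >= 0,
    # matching the note above.
    diffs = [(math.log1p(p) - math.log1p(t)) ** 2
             for p, t in zip(prediction, target)]
    return sum(diffs) / len(diffs)  # 'mean' reduction
```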
sonnix.functional.poisson_regression_loss ¶
poisson_regression_loss(
prediction: Tensor,
target: Tensor,
reduction: str = "mean",
eps: float = 1e-08,
) -> Tensor
Compute the Poisson regression loss.
Based on "Loss Functions and Metrics in Deep Learning" (https://arxiv.org/pdf/2307.02694).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The count predictions. The values must be positive. | required |
| target | Tensor | The count target values. The values must be positive. | required |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |
| eps | float | An arbitrary small strictly positive number to avoid undefined results when the count is zero. | 1e-08 |

Returns:

| Type | Description |
|---|---|
| Tensor | The Poisson regression loss. The shape of the tensor depends on the reduction strategy. |
Example
>>> import torch
>>> from sonnix.functional import poisson_regression_loss
>>> loss = poisson_regression_loss(torch.rand(2, 4, requires_grad=True), torch.rand(2, 4))
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
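Up to a target-dependent constant, the Poisson negative log-likelihood is prediction - target * log(prediction). A pure-Python sketch with the eps guard (illustrative; the library's exact form may differ):

```python
import math

def poisson_sketch(prediction, target, eps=1e-8):
    # Poisson NLL up to a constant: p - t * log(p + eps);
    # eps keeps the log defined when a predicted count is zero.
    terms = [p - t * math.log(p + eps)
             for p, t in zip(prediction, target)]
    return sum(terms) / len(terms)  # 'mean' reduction
```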
sonnix.functional.quantile_regression_loss ¶
quantile_regression_loss(
prediction: Tensor,
target: Tensor,
q: float = 0.5,
reduction: str = "mean",
) -> Tensor
Compute the quantile regression loss.
Based on "Loss Functions and Metrics in Deep Learning" (https://arxiv.org/pdf/2307.02694).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |
| q | float | The quantile value. | 0.5 |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |

Returns:

| Type | Description |
|---|---|
| Tensor | The quantile regression loss. The shape of the tensor depends on the reduction strategy. |
Example
>>> import torch
>>> from sonnix.functional import quantile_regression_loss
>>> loss = quantile_regression_loss(
... torch.randn(2, 4, requires_grad=True), torch.randn(2, 4)
... )
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
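Quantile regression uses the pinball loss: under-predictions are weighted by q and over-predictions by (1 - q), so q = 0.5 recovers (half) the absolute error. A pure-Python sketch (illustrative, not the library implementation):

```python
def quantile_sketch(prediction, target, q=0.5):
    # Pinball loss: max(q * err, (q - 1) * err) for err = target - pred.
    losses = []
    for p, t in zip(prediction, target):
        err = t - p
        losses.append(max(q * err, (q - 1.0) * err))
    return sum(losses) / len(losses)  # 'mean' reduction
```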
sonnix.functional.reduce_loss ¶
reduce_loss(tensor: Tensor, reduction: str) -> Tensor
Return the reduced loss.
This function is designed to be used with loss functions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| tensor | Tensor | The input tensor to reduce. | required |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | required |

Returns:

| Type | Description |
|---|---|
| Tensor | The reduced tensor. The shape of the tensor depends on the reduction strategy. |

Raises:

| Type | Description |
|---|---|
| ValueError | if the reduction is not valid. |
Example
>>> import torch
>>> from sonnix.functional import reduce_loss
>>> tensor = torch.tensor([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]])
>>> reduce_loss(tensor, "none")
tensor([[0., 1., 2.],
[3., 4., 5.]])
>>> reduce_loss(tensor, "sum")
tensor(15.)
>>> reduce_loss(tensor, "mean")
tensor(2.5000)
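The reductions shown above can be sketched in pure Python over a nested list standing in for a 2D loss tensor. The 'batchmean' semantics (total divided by the first dimension) are an assumption here, by analogy with PyTorch's KL-divergence loss:

```python
def reduce_loss_sketch(rows, reduction):
    # rows: a 2D list of loss values.
    flat = [v for row in rows for v in row]
    if reduction == "none":
        return rows
    if reduction == "sum":
        return sum(flat)
    if reduction == "mean":
        return sum(flat) / len(flat)
    if reduction == "batchmean":
        # Assumed: total divided by the batch (first) dimension.
        return sum(flat) / len(rows)
    raise ValueError(f"Incorrect reduction: {reduction}")
```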
sonnix.functional.relative_loss ¶
relative_loss(
loss: Tensor,
indicator: Tensor,
reduction: str = "mean",
eps: float = 1e-08,
) -> Tensor
Compute the relative loss.
The indicators are designed based on https://en.wikipedia.org/wiki/Relative_change#Indicators_of_relative_change.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| loss | Tensor | The loss values. The tensor must have the same shape as the indicator. | required |
| indicator | Tensor | The indicator values. | required |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |
| eps | float | An arbitrary small strictly positive number to avoid undefined results when the indicator is zero. | 1e-08 |

Returns:

| Type | Description |
|---|---|
| Tensor | The computed relative loss. |

Raises:

| Type | Description |
|---|---|
| RuntimeError | if the loss and indicator shapes do not match. |
| ValueError | if the reduction is not valid. |
Example
>>> import torch
>>> from sonnix.functional import relative_loss
>>> from sonnix.functional.loss import classical_relative_indicator
>>> prediction = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> loss = relative_loss(
... loss=torch.nn.functional.mse_loss(prediction, target, reduction="none"),
... indicator=classical_relative_indicator(prediction, target),
... )
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
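The relative loss divides each element-wise loss value by the corresponding indicator, with eps guarding against zero indicators. A pure-Python sketch (the exact guard form, clamping the indicator at eps, is an assumption):

```python
def relative_loss_sketch(loss, indicator, eps=1e-8):
    # Divide each loss value by its indicator; eps prevents
    # division by zero when an indicator vanishes.
    values = [l / max(i, eps) for l, i in zip(loss, indicator)]
    return sum(values) / len(values)  # 'mean' reduction
```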
Relative loss indicators¶
sonnix.functional.loss.arithmetical_mean_indicator ¶
arithmetical_mean_indicator(
prediction: Tensor, target: Tensor
) -> Tensor
Return the arithmetical mean change.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |

Returns:

| Type | Description |
|---|---|
| Tensor | The indicator values. |
Example
>>> import torch
>>> from sonnix.functional.loss import arithmetical_mean_indicator
>>> prediction = torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True)
>>> target = torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]])
>>> indicator = arithmetical_mean_indicator(prediction, target)
>>> indicator
tensor([[1.0000, 1.0000, 0.5000],
[3.0000, 3.0000, 1.0000]], grad_fn=<MulBackward0>)
sonnix.functional.loss.classical_relative_indicator ¶
classical_relative_indicator(
prediction: Tensor, target: Tensor
) -> Tensor
Return the classical relative change.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |

Returns:

| Type | Description |
|---|---|
| Tensor | The indicator values. |
Example
>>> import torch
>>> from sonnix.functional.loss import classical_relative_indicator
>>> prediction = torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True)
>>> target = torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]])
>>> indicator = classical_relative_indicator(prediction, target)
>>> indicator
tensor([[2., 1., 0.],
[3., 5., 1.]])
sonnix.functional.loss.geometric_mean_indicator ¶
geometric_mean_indicator(
prediction: Tensor, target: Tensor
) -> Tensor
Return the geometric mean change.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |

Returns:

| Type | Description |
|---|---|
| Tensor | The indicator values. |
Example
>>> import torch
>>> from sonnix.functional.loss import geometric_mean_indicator
>>> prediction = torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True)
>>> target = torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]])
>>> indicator = geometric_mean_indicator(prediction, target)
>>> indicator
tensor([[0.0000, 1.0000, 0.0000],
[3.0000, 2.2361, 1.0000]], grad_fn=<SqrtBackward0>)
sonnix.functional.loss.maximum_mean_indicator ¶
maximum_mean_indicator(
prediction: Tensor, target: Tensor
) -> Tensor
Return the maximum mean change.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |

Returns:

| Type | Description |
|---|---|
| Tensor | The indicator values. |
Example
>>> import torch
>>> from sonnix.functional.loss import maximum_mean_indicator
>>> prediction = torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True)
>>> target = torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]])
>>> indicator = maximum_mean_indicator(prediction, target)
>>> indicator
tensor([[2., 1., 1.],
[3., 5., 1.]], grad_fn=<MaximumBackward0>)
sonnix.functional.loss.minimum_mean_indicator ¶
minimum_mean_indicator(
prediction: Tensor, target: Tensor
) -> Tensor
Return the minimum mean change.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |

Returns:

| Type | Description |
|---|---|
| Tensor | The indicator values. |
Example
>>> import torch
>>> from sonnix.functional.loss import minimum_mean_indicator
>>> prediction = torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True)
>>> target = torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]])
>>> indicator = minimum_mean_indicator(prediction, target)
>>> indicator
tensor([[0., 1., 0.],
[3., 1., 1.]], grad_fn=<MinimumBackward0>)
sonnix.functional.loss.moment_mean_indicator ¶
moment_mean_indicator(
prediction: Tensor, target: Tensor, k: int = 1
) -> Tensor
Return the moment mean change of order k.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |
| k | int | The order. | 1 |

Returns:

| Type | Description |
|---|---|
| Tensor | The indicator values. |
Example
>>> import torch
>>> from sonnix.functional.loss import moment_mean_indicator
>>> prediction = torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True)
>>> target = torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]])
>>> indicator = moment_mean_indicator(prediction, target)
>>> indicator
tensor([[1.0000, 1.0000, 0.5000],
[3.0000, 3.0000, 1.0000]], grad_fn=<PowBackward0>)
sonnix.functional.loss.reversed_relative_indicator ¶
reversed_relative_indicator(
prediction: Tensor, target: Tensor
) -> Tensor
Return the reversed relative change.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |

Returns:

| Type | Description |
|---|---|
| Tensor | The indicator values. |
Example
>>> import torch
>>> from sonnix.functional.loss import reversed_relative_indicator
>>> prediction = torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True)
>>> target = torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]])
>>> indicator = reversed_relative_indicator(prediction, target)
>>> indicator
tensor([[0., 1., 1.],
[3., 1., 1.]], grad_fn=<AbsBackward0>)
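All of the indicators above are simple element-wise functions of |prediction| and |target|; the formulas below are inferred from the Wikipedia page on relative-change indicators and are consistent with each example output in this section (sketches, not the library implementation):

```python
def classical_relative(p, t):
    return abs(t)

def reversed_relative(p, t):
    return abs(p)

def arithmetical_mean(p, t):
    return (abs(p) + abs(t)) / 2.0

def geometric_mean(p, t):
    return (abs(p) * abs(t)) ** 0.5

def maximum_mean(p, t):
    return max(abs(p), abs(t))

def minimum_mean(p, t):
    return min(abs(p), abs(t))

def moment_mean(p, t, k=1):
    # Power (moment) mean of order k; k=1 recovers the arithmetic mean.
    return ((abs(p) ** k + abs(t) ** k) / 2.0) ** (1.0 / k)
```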
Errors¶
sonnix.functional.absolute_error ¶
absolute_error(
prediction: Tensor, target: Tensor
) -> Tensor
Compute the element-wise absolute error between the predictions and targets.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The tensor of predictions. | required |
| target | Tensor | The target tensor, which must have the same shape and data type as prediction. | required |

Returns:

| Type | Description |
|---|---|
| Tensor | The absolute error tensor, which has the same shape and data type as the inputs. |
Example
>>> import torch
>>> from sonnix.functional import absolute_error
>>> absolute_error(torch.eye(2), torch.ones(2, 2))
tensor([[0., 1.],
[1., 0.]])
sonnix.functional.absolute_relative_error ¶
absolute_relative_error(
prediction: Tensor, target: Tensor, eps: float = 1e-08
) -> Tensor
Compute the element-wise absolute relative error between the predictions and targets.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The tensor of predictions. | required |
| target | Tensor | The target tensor, which must have the same shape and data type as prediction. | required |
| eps | float | An arbitrary small strictly positive number to avoid undefined results when the target is zero. | 1e-08 |

Returns:

| Type | Description |
|---|---|
| Tensor | The absolute relative error tensor, which has the same shape and data type as the inputs. |
Example
>>> import torch
>>> from sonnix.functional import absolute_relative_error
>>> absolute_relative_error(torch.eye(2), torch.ones(2, 2))
tensor([[0., 1.],
[1., 0.]])
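The element-wise formula is |prediction - target| / |target|, with eps guarding the division. A pure-Python sketch (the exact guard form, clamping |target| at eps, is an assumption):

```python
def abs_rel_error_sketch(prediction, target, eps=1e-8):
    # |p - t| divided by |t|; eps prevents division by zero targets.
    return [abs(p - t) / max(abs(t), eps)
            for p, t in zip(prediction, target)]
```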
sonnix.functional.symmetric_absolute_relative_error ¶
symmetric_absolute_relative_error(
prediction: Tensor, target: Tensor, eps: float = 1e-08
) -> Tensor
Compute the element-wise symmetric absolute relative error between the predictions and targets.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The tensor of predictions. | required |
| target | Tensor | The target tensor, which must have the same shape and data type as prediction. | required |
| eps | float | An arbitrary small strictly positive number to avoid undefined results when the target is zero. | 1e-08 |

Returns:

| Type | Description |
|---|---|
| Tensor | The symmetric absolute relative error tensor, which has the same shape and data type as the inputs. |
Example
>>> import torch
>>> from sonnix.functional import symmetric_absolute_relative_error
>>> symmetric_absolute_relative_error(torch.eye(2), torch.ones(2, 2))
tensor([[0., 2.],
[2., 0.]])
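The symmetric variant normalizes by the magnitudes of both inputs, 2|prediction - target| / (|prediction| + |target|), which matches the example output above. A pure-Python sketch (the exact eps guard form is an assumption):

```python
def sym_abs_rel_error_sketch(prediction, target, eps=1e-8):
    # sMAPE-style error: symmetric in prediction and target;
    # eps prevents a zero denominator when both inputs are zero.
    return [2.0 * abs(p - t) / max(abs(p) + abs(t), eps)
            for p, t in zip(prediction, target)]
```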
Utility¶
sonnix.functional.check_loss_reduction_strategy ¶
check_loss_reduction_strategy(reduction: str) -> None
Check if the provided reduction is a valid loss reduction.
The valid reduction values are 'mean', 'none', 'sum',
and 'batchmean'.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| reduction | str | The reduction strategy to check. | required |

Raises:

| Type | Description |
|---|---|
| ValueError | if the provided reduction is not valid. |
Example
>>> from sonnix.functional import check_loss_reduction_strategy
>>> check_loss_reduction_strategy("mean")
All¶
sonnix.functional ¶
Provide functional implementations of PyTorch modules and layers.
This subpackage contains pure function implementations of various neural network operations including activation functions, error calculations, and loss functions that can be used directly without instantiating module objects.
sonnix.functional.absolute_error ¶
absolute_error(
prediction: Tensor, target: Tensor
) -> Tensor
Compute the element-wise absolute error between the predictions and targets.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The tensor of predictions. | required |
| target | Tensor | The target tensor, which must have the same shape and data type as prediction. | required |

Returns:

| Type | Description |
|---|---|
| Tensor | The absolute error tensor, which has the same shape and data type as the inputs. |
Example
>>> import torch
>>> from sonnix.functional import absolute_error
>>> absolute_error(torch.eye(2), torch.ones(2, 2))
tensor([[0., 1.],
[1., 0.]])
sonnix.functional.absolute_relative_error ¶
absolute_relative_error(
prediction: Tensor, target: Tensor, eps: float = 1e-08
) -> Tensor
Compute the element-wise absolute relative error between the predictions and targets.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The tensor of predictions. | required |
| target | Tensor | The target tensor, which must have the same shape and data type as prediction. | required |
| eps | float | An arbitrary small strictly positive number to avoid undefined results when the target is zero. | 1e-08 |

Returns:

| Type | Description |
|---|---|
| Tensor | The absolute relative error tensor, which has the same shape and data type as the inputs. |
Example
>>> import torch
>>> from sonnix.functional import absolute_relative_error
>>> absolute_relative_error(torch.eye(2), torch.ones(2, 2))
tensor([[0., 1.],
[1., 0.]])
sonnix.functional.arithmetical_mean_indicator ¶
arithmetical_mean_indicator(
prediction: Tensor, target: Tensor
) -> Tensor
Return the arithmetical mean change.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |

Returns:

| Type | Description |
|---|---|
| Tensor | The indicator values. |
Example
>>> import torch
>>> from sonnix.functional.loss import arithmetical_mean_indicator
>>> prediction = torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True)
>>> target = torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]])
>>> indicator = arithmetical_mean_indicator(prediction, target)
>>> indicator
tensor([[1.0000, 1.0000, 0.5000],
[3.0000, 3.0000, 1.0000]], grad_fn=<MulBackward0>)
sonnix.functional.asinh_mse_loss ¶
asinh_mse_loss(
prediction: Tensor,
target: Tensor,
reduction: str = "mean",
) -> Tensor
Compute the mean squared error (MSE) on the inverse hyperbolic sine (asinh) transformed predictions and targets.
It is a generalization of mean squared logarithmic error (MSLE)
that works for real values. The asinh transformation is used
instead of log1p because asinh works on negative values.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |

Returns:

| Type | Description |
|---|---|
| Tensor | The mean squared error (MSE) on the inverse hyperbolic sine (asinh) transformed predictions and targets. The shape of the tensor depends on the reduction strategy. |
Example
>>> import torch
>>> from sonnix.functional import asinh_mse_loss
>>> loss = asinh_mse_loss(torch.randn(2, 4, requires_grad=True), torch.randn(2, 4))
>>> loss
tensor(..., grad_fn=<MseLossBackward0>)
>>> loss.backward()
sonnix.functional.asinh_smooth_l1_loss ¶
asinh_smooth_l1_loss(
prediction: Tensor,
target: Tensor,
reduction: str = "mean",
beta: float = 1.0,
) -> Tensor
Compute the smooth L1 loss on the inverse hyperbolic sine (asinh) transformed predictions and targets.
It is a generalization of mean squared logarithmic error (MSLE)
that works for real values. The asinh transformation is used
instead of log1p because asinh works on negative values.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |
| beta | float | The threshold at which to change between L1 and L2 loss. The value must be non-negative. | 1.0 |

Returns:

| Type | Description |
|---|---|
| Tensor | The smooth L1 loss on the inverse hyperbolic sine (asinh) transformed predictions and targets. The shape of the tensor depends on the reduction strategy. |
Example
>>> import torch
>>> from sonnix.functional import asinh_smooth_l1_loss
>>> loss = asinh_smooth_l1_loss(torch.randn(2, 4, requires_grad=True), torch.randn(2, 4))
>>> loss
tensor(..., grad_fn=<SmoothL1LossBackward0>)
>>> loss.backward()
sonnix.functional.binary_focal_loss ¶
binary_focal_loss(
prediction: Tensor,
target: Tensor,
alpha: float = 0.25,
gamma: float = 2.0,
reduction: str = "mean",
) -> Tensor
Compute the binary focal loss.
Based on "Focal Loss for Dense Object Detection" (https://arxiv.org/pdf/1708.02002.pdf). The implementation follows https://pytorch.org/vision/main/_modules/torchvision/ops/focal_loss.html.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The float tensor with predictions as probabilities for each example. | required |
| target | Tensor | A float tensor with the same shape as inputs. It stores the binary classification label for each element in inputs (0 for the negative class and 1 for the positive class). | required |
| alpha | float | The weighting factor, which must be in the range [0, 1]. | 0.25 |
| gamma | float | The focusing parameter, which must be positive. | 2.0 |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |

Returns:

| Type | Description |
|---|---|
| Tensor | The computed binary focal loss. The shape of the tensor depends on the reduction strategy. |
Example
>>> import torch
>>> from sonnix.functional import binary_focal_loss
>>> loss = binary_focal_loss(
... torch.rand(2, 4, requires_grad=True),
... torch.tensor([[1.0, 0.0, 0.0, 1.0], [1.0, 0.0, 1.0, 0.0]]),
... )
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
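The focal-loss recipe from the paper can be sketched as follows. This is a hypothetical illustration (`binary_focal_loss_sketch`), loosely mirroring the torchvision implementation the docstring cites; the `alpha < 0` disabling convention is an assumption borrowed from torchvision, not from sonnix:

```python
import torch

def binary_focal_loss_sketch(prediction, target, alpha=0.25, gamma=2.0, reduction="mean"):
    # prediction holds probabilities in [0, 1]; target holds 0/1 labels.
    ce = torch.nn.functional.binary_cross_entropy(prediction, target, reduction="none")
    # Probability assigned to the true class of each element.
    p_t = prediction * target + (1 - prediction) * (1 - target)
    # Down-weight easy examples via the modulating factor (1 - p_t) ** gamma.
    loss = ce * (1 - p_t) ** gamma
    if alpha >= 0:
        # Class-balancing weight: alpha for positives, (1 - alpha) for negatives.
        alpha_t = alpha * target + (1 - alpha) * (1 - target)
        loss = alpha_t * loss
    return loss.mean() if reduction == "mean" else loss
```

With `gamma=0` and the alpha weighting disabled, the loss reduces to plain binary cross-entropy.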
sonnix.functional.binary_focal_loss_with_logits ¶
binary_focal_loss_with_logits(
prediction: Tensor,
target: Tensor,
alpha: float = 0.25,
gamma: float = 2.0,
reduction: str = "mean",
) -> Tensor
Compute the binary focal loss.
Based on "Focal Loss for Dense Object Detection" (https://arxiv.org/pdf/1708.02002.pdf). The implementation follows https://pytorch.org/vision/main/_modules/torchvision/ops/focal_loss.html.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The float tensor with predictions as unnormalized scores (often referred to as logits) for each example. | required |
| target | Tensor | A float tensor with the same shape as the inputs. It stores the binary classification label for each element (0 for the negative class and 1 for the positive class). | required |
| alpha | float | The weighting factor, which must be in the range (0, 1). | 0.25 |
| gamma | float | The focusing parameter, which must be positive. | 2.0 |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |
Returns:
| Type | Description |
|---|---|
| Tensor | The computed binary focal loss. The shape of the tensor depends on the reduction strategy. |
Example
>>> import torch
>>> from sonnix.functional import binary_focal_loss_with_logits
>>> loss = binary_focal_loss_with_logits(
... torch.randn(2, 4, requires_grad=True),
... torch.tensor([[1.0, 0.0, 0.0, 1.0], [1.0, 0.0, 1.0, 0.0]]),
... )
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
sonnix.functional.binary_poly1_loss ¶
binary_poly1_loss(
prediction: Tensor,
target: Tensor,
alpha: float = 1.0,
reduction: str = "mean",
) -> Tensor
Compute the Poly-1 loss for binary targets.
Based on "PolyLoss: A Polynomial Expansion Perspective of Classification Loss Functions" (https://arxiv.org/pdf/2204.12511)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The float tensor with predictions as probabilities for each example. | required |
| target | Tensor | A float tensor with the same shape as the inputs. It stores the binary classification label for each element (0 for the negative class and 1 for the positive class). | required |
| alpha | float | The weighting factor. | 1.0 |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |
Returns:
| Type | Description |
|---|---|
| Tensor | The computed Poly-1 loss value. The shape of the tensor depends on the reduction strategy. |
Example
>>> import torch
>>> from sonnix.functional import binary_poly1_loss
>>> loss = binary_poly1_loss(
... torch.rand(2, 4, requires_grad=True),
... torch.tensor([[1.0, 0.0, 0.0, 1.0], [1.0, 0.0, 1.0, 0.0]]),
... )
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
sonnix.functional.binary_poly1_loss_with_logits ¶
binary_poly1_loss_with_logits(
prediction: Tensor,
target: Tensor,
alpha: float = 1.0,
reduction: str = "mean",
) -> Tensor
Compute the Poly-1 loss.
Based on "PolyLoss: A Polynomial Expansion Perspective of Classification Loss Functions" (https://arxiv.org/pdf/2204.12511)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The float tensor with predictions as unnormalized scores (often referred to as logits) for each example. | required |
| target | Tensor | A float tensor with the same shape as the inputs. It stores the binary classification label for each element (0 for the negative class and 1 for the positive class). | required |
| alpha | float | The weighting factor. | 1.0 |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |
Returns:
| Type | Description |
|---|---|
| Tensor | The computed Poly-1 loss value. The shape of the tensor depends on the reduction strategy. |
Example
>>> import torch
>>> from sonnix.functional import binary_poly1_loss_with_logits
>>> loss = binary_poly1_loss_with_logits(
... torch.randn(2, 4, requires_grad=True),
... torch.tensor([[1.0, 0.0, 0.0, 1.0], [1.0, 0.0, 1.0, 0.0]]),
... )
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
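The Poly-1 idea from the cited paper is cross-entropy plus a first-order polynomial correction term alpha * (1 - p_t). A hypothetical sketch for the logits variant (`binary_poly1_loss_with_logits_sketch` is an illustrative name, not the sonnix code):

```python
import torch
import torch.nn.functional as F

def binary_poly1_loss_with_logits_sketch(prediction, target, alpha=1.0, reduction="mean"):
    # Standard BCE-with-logits, kept element-wise.
    ce = F.binary_cross_entropy_with_logits(prediction, target, reduction="none")
    prob = torch.sigmoid(prediction)
    # Probability of the true class for each element.
    p_t = prob * target + (1 - prob) * (1 - target)
    # Poly-1: add the leading polynomial term of the CE expansion.
    loss = ce + alpha * (1 - p_t)
    return loss.mean() if reduction == "mean" else loss
```

Setting `alpha=0` recovers plain binary cross-entropy with logits.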
sonnix.functional.check_loss_reduction_strategy ¶
check_loss_reduction_strategy(reduction: str) -> None
Check if the provided reduction is a valid loss reduction strategy.
The valid reduction values are 'mean', 'none', 'sum',
and 'batchmean'.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| reduction | str | The reduction strategy to check. | required |
Raises:
| Type | Description |
|---|---|
| ValueError | if the provided reduction is not valid. |
Example
>>> from sonnix.functional import check_loss_reduction_strategy
>>> check_loss_reduction_strategy("mean")
sonnix.functional.classical_relative_indicator ¶
classical_relative_indicator(
prediction: Tensor, target: Tensor
) -> Tensor
Return the classical relative change.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |
Returns:
| Type | Description |
|---|---|
| Tensor | The indicator values. |
Example
>>> import torch
>>> from sonnix.functional.loss import classical_relative_indicator
>>> prediction = torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True)
>>> target = torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]])
>>> indicator = classical_relative_indicator(prediction, target)
>>> indicator
tensor([[2., 1., 0.],
[3., 5., 1.]])
sonnix.functional.general_robust_regression_loss ¶
general_robust_regression_loss(
prediction: Tensor,
target: Tensor,
alpha: float = 2.0,
scale: float = 1.0,
max: float | None = None,
reduction: str = "mean",
) -> Tensor
Compute the general robust regression loss a.k.a. Barron robust loss.
Based on the paper:
A General and Adaptive Robust Loss Function
Jonathan T. Barron
CVPR 2019 (https://arxiv.org/abs/1701.03077)
Note
The "adaptive" part of the loss is not implemented in this function.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |
| alpha | float | The shape parameter that controls the robustness of the loss. | 2.0 |
| scale | float | The scale parameter that controls the size of the loss's quadratic bowl near 0. | 1.0 |
| max | float \| None | The maximum value at which to clip the loss before computing the reduction. | None |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |
Returns:
| Type | Description |
|---|---|
| Tensor | The loss. The shape of the tensor depends on the reduction strategy. |
Raises:
| Type | Description |
|---|---|
| ValueError | if the reduction is not valid. |
Example
>>> import torch
>>> from sonnix.functional import general_robust_regression_loss
>>> loss = general_robust_regression_loss(
... torch.randn(2, 4, requires_grad=True), torch.randn(2, 4)
... )
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
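Barron's loss interpolates between familiar losses via the shape parameter alpha. A simplified element-wise sketch of the formula from the paper (reduction and clipping omitted; `barron_loss_sketch` is an illustrative name, not the sonnix implementation):

```python
import torch

def barron_loss_sketch(prediction, target, alpha=2.0, scale=1.0):
    x = (prediction - target) / scale
    if alpha == 2.0:
        return 0.5 * x ** 2                # quadratic (L2) limit
    if alpha == 0.0:
        return torch.log1p(0.5 * x ** 2)   # Cauchy/Lorentzian limit
    b = abs(alpha - 2.0)
    # General case: (|a-2|/a) * (((x^2 / |a-2|) + 1)^(a/2) - 1)
    return (b / alpha) * ((x ** 2 / b + 1.0) ** (alpha / 2.0) - 1.0)
```

For alpha = -2 this reduces to the Geman-McClure loss 2x²/(x² + 4), which saturates for large errors, hence the robustness to outliers.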
sonnix.functional.geometric_mean_indicator ¶
geometric_mean_indicator(
prediction: Tensor, target: Tensor
) -> Tensor
Return the geometric mean change.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |
Returns:
| Type | Description |
|---|---|
| Tensor | The indicator values. |
Example
>>> import torch
>>> from sonnix.functional.loss import geometric_mean_indicator
>>> prediction = torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True)
>>> target = torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]])
>>> indicator = geometric_mean_indicator(prediction, target)
>>> indicator
tensor([[0.0000, 1.0000, 0.0000],
[3.0000, 2.2361, 1.0000]], grad_fn=<SqrtBackward0>)
sonnix.functional.log_cosh_loss ¶
log_cosh_loss(
prediction: Tensor,
target: Tensor,
reduction: str = "mean",
scale: float = 1.0,
) -> Tensor
Compute the logarithm of the hyperbolic cosine of the prediction error.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |
| scale | float | The scale factor. | 1.0 |
Returns:
| Type | Description |
|---|---|
| Tensor | The logarithm of the hyperbolic cosine of the prediction error. |
Example
>>> import torch
>>> from sonnix.functional import log_cosh_loss
>>> loss = log_cosh_loss(torch.randn(3, 5, requires_grad=True), torch.randn(3, 5))
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
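A naive `log(cosh(x))` overflows for large errors; the identity log(cosh(x)) = x + softplus(-2x) - log(2) avoids this. The sketch below is hypothetical: in particular, how `scale` enters the actual sonnix implementation is an assumption here.

```python
import torch
import torch.nn.functional as F

def log_cosh_sketch(prediction, target, reduction="mean", scale=1.0):
    x = (prediction - target) * scale
    # Numerically stable log(cosh(x)) = x + softplus(-2x) - log(2).
    loss = x + F.softplus(-2.0 * x) - 0.6931471805599453
    return loss.mean() if reduction == "mean" else loss
```

Near zero the loss behaves like x²/2 (MSE-like); for large |x| it grows linearly (MAE-like), which makes it robust to outliers.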
sonnix.functional.maximum_mean_indicator ¶
maximum_mean_indicator(
prediction: Tensor, target: Tensor
) -> Tensor
Return the maximum mean change.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |
Returns:
| Type | Description |
|---|---|
| Tensor | The indicator values. |
Example
>>> import torch
>>> from sonnix.functional.loss import maximum_mean_indicator
>>> prediction = torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True)
>>> target = torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]])
>>> indicator = maximum_mean_indicator(prediction, target)
>>> indicator
tensor([[2., 1., 1.],
[3., 5., 1.]], grad_fn=<MaximumBackward0>)
sonnix.functional.minimum_mean_indicator ¶
minimum_mean_indicator(
prediction: Tensor, target: Tensor
) -> Tensor
Return the minimum mean change.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |
Returns:
| Type | Description |
|---|---|
| Tensor | The indicator values. |
Example
>>> import torch
>>> from sonnix.functional.loss import minimum_mean_indicator
>>> prediction = torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True)
>>> target = torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]])
>>> indicator = minimum_mean_indicator(prediction, target)
>>> indicator
tensor([[0., 1., 0.],
[3., 1., 1.]], grad_fn=<MinimumBackward0>)
sonnix.functional.moment_mean_indicator ¶
moment_mean_indicator(
prediction: Tensor, target: Tensor, k: int = 1
) -> Tensor
Return the moment mean change of order k.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |
| k | int | The order. | 1 |
Returns:
| Type | Description |
|---|---|
| Tensor | The indicator values. |
Example
>>> import torch
>>> from sonnix.functional.loss import moment_mean_indicator
>>> prediction = torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True)
>>> target = torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]])
>>> indicator = moment_mean_indicator(prediction, target)
>>> indicator
tensor([[1.0000, 1.0000, 0.5000],
[3.0000, 3.0000, 1.0000]], grad_fn=<PowBackward0>)
sonnix.functional.msle_loss ¶
msle_loss(
prediction: Tensor,
target: Tensor,
reduction: str = "mean",
) -> Tensor
Compute the mean squared error (MSE) on the logarithmic transformed predictions and targets.
This loss is best used when the targets have exponential growth, such as population counts or the average sales of a commodity over a span of years. Note that this loss penalizes an under-predicted estimate more than an over-predicted one.
Note: this loss only works with positive values (0 included).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |
Returns:
| Type | Description |
|---|---|
| Tensor | The mean squared logarithmic error. The shape of the tensor depends on the reduction strategy. |
Example
>>> import torch
>>> from sonnix.functional import msle_loss
>>> loss = msle_loss(torch.rand(2, 4, requires_grad=True), torch.rand(2, 4))
>>> loss
tensor(..., grad_fn=<MseLossBackward0>)
>>> loss.backward()
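MSLE is simply MSE computed in log space; using log1p keeps zero inputs finite. A minimal hypothetical sketch (`msle_loss_sketch` is an illustrative name, not the sonnix code):

```python
import torch
import torch.nn.functional as F

def msle_loss_sketch(prediction, target, reduction="mean"):
    # MSE on log1p-transformed values; log1p(0) = 0, so zeros are allowed.
    return F.mse_loss(torch.log1p(prediction), torch.log1p(target), reduction=reduction)
```

Because the error is measured on ratios rather than absolute differences, predicting 10 when the target is 100 is penalized like predicting 100 when the target is 1000.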
sonnix.functional.poisson_regression_loss ¶
poisson_regression_loss(
prediction: Tensor,
target: Tensor,
reduction: str = "mean",
eps: float = 1e-08,
) -> Tensor
Compute the Poisson regression loss.
Based on "Loss Functions and Metrics in Deep Learning" (https://arxiv.org/pdf/2307.02694).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The count predictions. The values must be positive. | required |
| target | Tensor | The count target values. The values must be positive. | required |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |
| eps | float | An arbitrarily small, strictly positive number to avoid undefined results when the count is zero. | 1e-08 |
Returns:
| Type | Description |
|---|---|
| Tensor | The Poisson regression loss. The shape of the tensor depends on the reduction strategy. |
Example
>>> import torch
>>> from sonnix.functional import poisson_regression_loss
>>> loss = poisson_regression_loss(torch.rand(2, 4, requires_grad=True), torch.rand(2, 4))
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
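The Poisson regression loss is the negative log-likelihood of a Poisson distribution with rate `prediction`, dropping the constant log-factorial term. A hypothetical sketch (`poisson_regression_loss_sketch` is an illustrative name):

```python
import torch
import torch.nn.functional as F

def poisson_regression_loss_sketch(prediction, target, reduction="mean", eps=1e-8):
    # NLL of Poisson(rate=prediction): lambda - k * log(lambda),
    # with eps guarding log(0) for zero-rate predictions.
    loss = prediction - target * torch.log(prediction + eps)
    return loss.mean() if reduction == "mean" else loss
```

This matches PyTorch's built-in `torch.nn.functional.poisson_nll_loss` with `log_input=False`.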
sonnix.functional.quantile_regression_loss ¶
quantile_regression_loss(
prediction: Tensor,
target: Tensor,
q: float = 0.5,
reduction: str = "mean",
) -> Tensor
Compute the quantile regression loss.
Based on "Loss Functions and Metrics in Deep Learning" (https://arxiv.org/pdf/2307.02694).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |
| q | float | The quantile value. | 0.5 |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |
Returns:
| Type | Description |
|---|---|
| Tensor | The quantile regression loss. The shape of the tensor depends on the reduction strategy. |
Example
>>> import torch
>>> from sonnix.functional import quantile_regression_loss
>>> loss = quantile_regression_loss(
... torch.randn(2, 4, requires_grad=True), torch.randn(2, 4)
... )
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
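Quantile regression uses the pinball loss, which penalizes under- and over-predictions asymmetrically according to the quantile q. A hypothetical sketch (`quantile_regression_loss_sketch` is an illustrative name):

```python
import torch

def quantile_regression_loss_sketch(prediction, target, q=0.5, reduction="mean"):
    diff = target - prediction
    # Pinball loss: q * diff when under-predicting, (q - 1) * diff when over-predicting.
    loss = torch.maximum(q * diff, (q - 1) * diff)
    return loss.mean() if reduction == "mean" else loss
```

With q = 0.5 the loss is half the absolute error, so minimizing it estimates the median; q = 0.9 pushes predictions toward the 90th percentile of the targets.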
sonnix.functional.rectifier_asinh_unit ¶
rectifier_asinh_unit(input: Tensor) -> Tensor
Compute the inverse hyperbolic sine (arcsinh) of the positive elements, and zero for the negative elements.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| input | Tensor | The input tensor. | required |
Returns:
| Type | Description |
|---|---|
| Tensor | A tensor with the inverse hyperbolic sine of the positive elements. |
Example
>>> import torch
>>> from sonnix.functional import rectifier_asinh_unit
>>> output = rectifier_asinh_unit(torch.tensor([-2.0, -1.0, 0.0, 1.0, 2.0]))
>>> output
tensor([0.0000, 0.0000, 0.0000, 0.8814, 1.4436])
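The activation is the composition of a ReLU and asinh, consistent with the example output above. A hypothetical one-line sketch (`rectifier_asinh_unit_sketch` is an illustrative name):

```python
import torch

def rectifier_asinh_unit_sketch(input):
    # Zero for negative elements, asinh for the positive part.
    return torch.asinh(torch.relu(input))
```

Unlike ReLU, the positive branch grows only logarithmically for large inputs, which bounds activation magnitudes.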
sonnix.functional.reduce_loss ¶
reduce_loss(tensor: Tensor, reduction: str) -> Tensor
Return the reduced loss.
This function is designed to be used with loss functions.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| tensor | Tensor | The input tensor to reduce. | required |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | required |
Returns:
| Type | Description |
|---|---|
| Tensor | The reduced tensor. The shape of the tensor depends on the reduction strategy. |
Raises:
| Type | Description |
|---|---|
| ValueError | if the reduction is not valid. |
Example
>>> import torch
>>> from sonnix.functional import reduce_loss
>>> tensor = torch.tensor([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]])
>>> reduce_loss(tensor, "none")
tensor([[0., 1., 2.],
[3., 4., 5.]])
>>> reduce_loss(tensor, "sum")
tensor(15.)
>>> reduce_loss(tensor, "mean")
tensor(2.5000)
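The four reduction strategies can be sketched as a simple dispatch. This is a hypothetical illustration (`reduce_loss_sketch`); in particular, 'batchmean' dividing the sum by the first-dimension size is an assumption based on the common convention:

```python
import torch

def reduce_loss_sketch(tensor, reduction):
    if reduction == "mean":
        return tensor.mean()
    if reduction == "sum":
        return tensor.sum()
    if reduction == "batchmean":
        # Sum over all elements divided by the batch size (assumed convention).
        return tensor.sum() / tensor.shape[0]
    if reduction == "none":
        return tensor
    raise ValueError(f"Incorrect reduction: {reduction}")
```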
sonnix.functional.relative_loss ¶
relative_loss(
loss: Tensor,
indicator: Tensor,
reduction: str = "mean",
eps: float = 1e-08,
) -> Tensor
Compute the relative loss.
The indicators are designed based on https://en.wikipedia.org/wiki/Relative_change#Indicators_of_relative_change.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| loss | Tensor | The loss values. The tensor must have the same shape as the target. | required |
| indicator | Tensor | The indicator values. | required |
| reduction | str | The reduction strategy. The valid values are 'mean', 'none', 'sum', and 'batchmean'. | 'mean' |
| eps | float | An arbitrarily small, strictly positive number to avoid undefined results when the indicator is zero. | 1e-08 |
Returns:
| Type | Description |
|---|---|
| Tensor | The computed relative loss. |
Raises:
| Type | Description |
|---|---|
| RuntimeError | if the loss and indicator shapes do not match. |
| ValueError | if the reduction is not valid. |
Example
>>> import torch
>>> from sonnix.functional import classical_relative_indicator, relative_loss
>>> prediction = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> loss = relative_loss(
... loss=torch.nn.functional.mse_loss(prediction, target, reduction="none"),
... indicator=classical_relative_indicator(prediction, target),
... )
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
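Conceptually, the relative loss divides an element-wise loss by an indicator of the value magnitudes. The sketch below is hypothetical; whether the real implementation guards the denominator with `clamp(min=eps)` or `indicator + eps` is an assumption:

```python
import torch

def relative_loss_sketch(loss, indicator, reduction="mean", eps=1e-8):
    if loss.shape != indicator.shape:
        raise RuntimeError(f"Shape mismatch: {loss.shape} vs {indicator.shape}")
    # Element-wise division, guarding against near-zero indicators (assumed form).
    out = loss / indicator.clamp(min=eps)
    return out.mean() if reduction == "mean" else out
```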
sonnix.functional.reversed_relative_indicator ¶
reversed_relative_indicator(
prediction: Tensor, target: Tensor
) -> Tensor
Return the reversed relative change.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The predictions. | required |
| target | Tensor | The target values. | required |
Returns:
| Type | Description |
|---|---|
| Tensor | The indicator values. |
Example
>>> import torch
>>> from sonnix.functional.loss import reversed_relative_indicator
>>> prediction = torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True)
>>> target = torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]])
>>> indicator = reversed_relative_indicator(prediction, target)
>>> indicator
tensor([[0., 1., 1.],
[3., 1., 1.]], grad_fn=<AbsBackward0>)
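The indicator family above follows a common pattern built from absolute values. The one-liners below are hypothetical reconstructions, inferred only from the documented example outputs, not from the sonnix source:

```python
import torch

def classical_relative(pred, target):
    return target.abs()                                   # |target|

def reversed_relative(pred, target):
    return pred.abs()                                     # |prediction|

def maximum_mean(pred, target):
    return torch.maximum(pred.abs(), target.abs())

def minimum_mean(pred, target):
    return torch.minimum(pred.abs(), target.abs())

def geometric_mean(pred, target):
    return (pred.abs() * target.abs()).sqrt()

def moment_mean(pred, target, k=1):
    # Power mean of order k of |prediction| and |target|.
    return ((pred.abs() ** k + target.abs() ** k) / 2) ** (1 / k)
```

All six reproduce the example tensors shown in their respective sections.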
sonnix.functional.safe_exp ¶
safe_exp(input: Tensor, max: float = 20.0) -> Tensor
Safely compute the exponential of the elements.
The values that are higher than the specified maximum value are set to this maximum value before exponentiation. Using a moderate maximum value keeps the output tensor free of Inf.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| input | Tensor | The input tensor. | required |
| max | float | The maximum value. | 20.0 |
Returns:
| Type | Description |
|---|---|
| Tensor | A tensor with the exponential of the elements. |
Example
>>> import torch
>>> from sonnix.functional import safe_exp
>>> output = safe_exp(torch.tensor([1.0, 10.0, 100.0, 1000.0]))
>>> output
tensor([2.7183e+00, 2.2026e+04, 4.8517e+08, 4.8517e+08])
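The clamp-then-exponentiate behavior can be sketched in one line, consistent with the example output above (exp(20) ≈ 4.8517e+08 for every clamped element). `safe_exp_sketch` is an illustrative name, not the sonnix code:

```python
import torch

def safe_exp_sketch(input, max=20.0):
    # Clamp the input before exponentiating so the output stays finite.
    return torch.exp(input.clamp(max=max))
```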
sonnix.functional.safe_log ¶
safe_log(input: Tensor, min: float = 1e-08) -> Tensor
Safely compute the natural logarithm of the elements.
The values that are lower than the specified minimum value are set to this minimum value. Using a small positive value leads to an output tensor without NaN or Inf.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| input | Tensor | The input tensor. | required |
| min | float | The minimum value. | 1e-08 |
Returns:
| Type | Description |
|---|---|
| Tensor | A tensor with the natural logarithm of the elements. |
Example
>>> import torch
>>> from sonnix.functional import safe_log
>>> safe_log(torch.tensor([1e-4, 1e-5, 1e-6, 1e-8, 1e-9, 1e-10]))
tensor([ -9.2103, -11.5129, -13.8155, -18.4207, -18.4207, -18.4207])
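The mirror of safe_exp: clamp from below, then take the logarithm, consistent with the example above (every value below 1e-08 maps to log(1e-08) ≈ -18.4207). `safe_log_sketch` is an illustrative name, not the sonnix code:

```python
import torch

def safe_log_sketch(input, min=1e-8):
    # Clamp the input from below so log never sees zero or negative values.
    return torch.log(input.clamp(min=min))
```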
sonnix.functional.symmetric_absolute_relative_error ¶
symmetric_absolute_relative_error(
prediction: Tensor, target: Tensor, eps: float = 1e-08
) -> Tensor
Compute the element-wise symmetric absolute relative error between the predictions and targets.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| prediction | Tensor | The tensor of predictions. | required |
| target | Tensor | The target tensor, which must have the same shape and data type as prediction. | required |
| eps | float | An arbitrarily small, strictly positive number to avoid undefined results when the target is zero. | 1e-08 |
Returns:
| Type | Description |
|---|---|
| Tensor | The symmetric absolute relative error tensor, which has the same shape and data type as the inputs. |
Example
>>> import torch
>>> from sonnix.functional import symmetric_absolute_relative_error
>>> symmetric_absolute_relative_error(torch.eye(2), torch.ones(2, 2))
tensor([[0., 2.],
[2., 0.]])
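The symmetric form normalizes the absolute error by the mean magnitude of prediction and target. The sketch below is a hypothetical reconstruction inferred from the example output (the exact denominator form in sonnix is an assumption):

```python
import torch

def sare_sketch(prediction, target, eps=1e-8):
    # Symmetric: the denominator averages |prediction| and |target|,
    # so swapping the two arguments leaves the result unchanged.
    return (prediction - target).abs() / ((prediction.abs() + target.abs()) / 2 + eps)
```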