karbonn.functional¶
Activations¶
karbonn.functional.safe_exp ¶
safe_exp(input: Tensor, max: float = 20.0) -> Tensor
Safely compute the exponential of the elements.
Values higher than the specified maximum are clamped to that maximum before the exponential is applied, so a not-too-large maximum keeps Inf out of the output tensor.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input | Tensor | The input tensor. | required |
max | float | The maximum value. | 20.0 |
Returns:
Type | Description |
---|---|
Tensor | A tensor with the exponential of the elements. |
Example usage:
>>> import torch
>>> from karbonn.functional import safe_exp
>>> output = safe_exp(torch.tensor([1.0, 10.0, 100.0, 1000.0]))
>>> output
tensor([2.7183e+00, 2.2026e+04, 4.8517e+08, 4.8517e+08])
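The output above is consistent with clamping the input at max and then exponentiating; a minimal equivalent sketch (an assumption, not the library's verified implementation):
>>> torch.tensor([1.0, 10.0, 100.0, 1000.0]).clamp(max=20.0).exp()
tensor([2.7183e+00, 2.2026e+04, 4.8517e+08, 4.8517e+08])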
karbonn.functional.safe_log ¶
safe_log(input: Tensor, min: float = 1e-08) -> Tensor
Safely compute the natural logarithm of the elements.
Values lower than the specified minimum are clamped to that minimum before the logarithm is applied, so a small positive minimum keeps NaN and Inf out of the output tensor.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input | Tensor | The input tensor. | required |
min | float | The minimum value. | 1e-08 |
Returns:
Type | Description |
---|---|
Tensor | A tensor with the natural logarithm of the elements. |
Example usage:
>>> import torch
>>> from karbonn.functional import safe_log
>>> safe_log(torch.tensor([1e-4, 1e-5, 1e-6, 1e-8, 1e-9, 1e-10]))
tensor([ -9.2103, -11.5129, -13.8155, -18.4207, -18.4207, -18.4207])
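As above, the output is consistent with clamping the input at min before taking the logarithm; a minimal equivalent sketch (an assumption):
>>> torch.tensor([1e-4, 1e-5, 1e-6, 1e-8, 1e-9, 1e-10]).clamp(min=1e-8).log()
tensor([ -9.2103, -11.5129, -13.8155, -18.4207, -18.4207, -18.4207])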
Loss functions¶
karbonn.functional.asinh_mse_loss ¶
asinh_mse_loss(
prediction: Tensor,
target: Tensor,
reduction: str = "mean",
) -> Tensor
Compute the mean squared error (MSE) on the inverse hyperbolic sine (asinh) transformed predictions and targets.
It is a generalization of the mean squared logarithmic error (MSLE) that works for real values. The asinh transformation is used instead of log1p because asinh works on negative values.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prediction | Tensor | The predictions. | required |
target | Tensor | The target values. | required |
reduction | str | The reduction strategy. The valid values are 'mean', 'none', and 'sum'. | 'mean' |
Returns:
Type | Description |
---|---|
Tensor | The mean squared error (MSE) on the inverse hyperbolic sine (asinh) transformed predictions and targets. The shape of the tensor depends on the reduction strategy. |
Example usage:
>>> import torch
>>> from karbonn.functional import asinh_mse_loss
>>> loss = asinh_mse_loss(torch.randn(2, 4, requires_grad=True), torch.randn(2, 4))
>>> loss
tensor(..., grad_fn=<MseLossBackward0>)
>>> loss.backward()
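A minimal sketch of the computation, assuming it is the standard MSE applied to asinh-transformed values (consistent with the MseLossBackward0 node in the output above):
>>> prediction = torch.randn(2, 4, requires_grad=True)
>>> target = torch.randn(2, 4)
>>> torch.nn.functional.mse_loss(prediction.asinh(), target.asinh())
tensor(..., grad_fn=<MseLossBackward0>)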
karbonn.functional.asinh_smooth_l1_loss ¶
asinh_smooth_l1_loss(
prediction: Tensor,
target: Tensor,
reduction: str = "mean",
beta: float = 1.0,
) -> Tensor
Compute the smooth L1 loss on the inverse hyperbolic sine (asinh) transformed predictions and targets.
It is a generalization of the mean squared logarithmic error (MSLE) that works for real values. The asinh transformation is used instead of log1p because asinh works on negative values.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prediction | Tensor | The predictions. | required |
target | Tensor | The target values. | required |
reduction | str | The reduction strategy. The valid values are 'mean', 'none', and 'sum'. | 'mean' |
beta | float | The threshold at which to change between L1 and L2 loss. The value must be non-negative. | 1.0 |
Returns:
Type | Description |
---|---|
Tensor | The smooth L1 loss on the inverse hyperbolic sine (asinh) transformed predictions and targets. The shape of the tensor depends on the reduction strategy. |
Example usage:
>>> import torch
>>> from karbonn.functional import asinh_smooth_l1_loss
>>> loss = asinh_smooth_l1_loss(torch.randn(2, 4, requires_grad=True), torch.randn(2, 4))
>>> loss
tensor(..., grad_fn=<SmoothL1LossBackward0>)
>>> loss.backward()
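Similarly, a minimal sketch assuming the smooth L1 loss is applied to asinh-transformed values, with beta forwarded to torch.nn.functional.smooth_l1_loss:
>>> prediction = torch.randn(2, 4, requires_grad=True)
>>> target = torch.randn(2, 4)
>>> torch.nn.functional.smooth_l1_loss(prediction.asinh(), target.asinh(), beta=1.0)
tensor(..., grad_fn=<SmoothL1LossBackward0>)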
karbonn.functional.general_robust_regression_loss ¶
general_robust_regression_loss(
prediction: Tensor,
target: Tensor,
alpha: float = 2.0,
scale: float = 1.0,
max: float | None = None,
reduction: str = "mean",
) -> Tensor
Compute the general robust regression loss, a.k.a. the Barron robust loss.
Based on the paper:
A General and Adaptive Robust Loss Function
Jonathan T. Barron
CVPR 2019 (https://arxiv.org/abs/1701.03077)
Note
The "adaptative" part of the loss is not implemented in this function.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prediction | Tensor | The predictions. | required |
target | Tensor | The target values. | required |
alpha | float | The shape parameter that controls the robustness of the loss. | 2.0 |
scale | float | The scale parameter that controls the size of the loss's quadratic bowl near 0. | 1.0 |
max | float \| None | The maximum value used to clip the loss values before computing the reduction. | None |
reduction | str | The reduction strategy. The valid values are 'mean', 'none', and 'sum'. | 'mean' |
Returns:
Type | Description |
---|---|
Tensor | The loss. The shape of the tensor depends on the reduction strategy. |
Raises:
Type | Description |
---|---|
ValueError | if the reduction is not valid. |
Example usage:
>>> import torch
>>> from karbonn.functional import general_robust_regression_loss
>>> loss = general_robust_regression_loss(
... torch.randn(2, 4, requires_grad=True), torch.randn(2, 4)
... )
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
karbonn.functional.log_cosh_loss ¶
log_cosh_loss(
prediction: Tensor,
target: Tensor,
reduction: str = "mean",
scale: float = 1.0,
) -> Tensor
Compute the logarithm of the hyperbolic cosine of the prediction error.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prediction | Tensor | The predictions. | required |
target | Tensor | The target values. | required |
reduction | str | The reduction strategy. The valid values are 'mean', 'none', and 'sum'. | 'mean' |
scale | float | The scale factor. | 1.0 |
Returns:
Type | Description |
---|---|
Tensor | The logarithm of the hyperbolic cosine of the prediction error. |
Example usage:
>>> import torch
>>> from karbonn.functional import log_cosh_loss
>>> loss = log_cosh_loss(torch.randn(3, 5, requires_grad=True), torch.randn(3, 5))
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
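A minimal sketch of the log-cosh computation with mean reduction, assuming scale divides the prediction error before the cosh (the exact role of scale is an assumption, not confirmed by this page):
>>> prediction = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> # assumed formulation: log(cosh((prediction - target) / scale)), then mean
>>> torch.log(torch.cosh((prediction - target) / 1.0)).mean()
tensor(..., grad_fn=<MeanBackward0>)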
karbonn.functional.msle_loss ¶
msle_loss(
prediction: Tensor,
target: Tensor,
reduction: str = "mean",
) -> Tensor
Compute the mean squared error (MSE) on the logarithmic transformed predictions and targets.
This loss is best used when the targets have exponential growth, such as population counts or average sales of a commodity over a span of years. Note that this loss penalizes an under-predicted estimate more heavily than an over-predicted one.
Note: this loss only works with non-negative values (0 included).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prediction | Tensor | The predictions. | required |
target | Tensor | The target values. | required |
reduction | str | The reduction strategy. The valid values are 'mean', 'none', and 'sum'. | 'mean' |
Returns:
Type | Description |
---|---|
Tensor | The mean squared logarithmic error. The shape of the tensor depends on the reduction strategy. |
Example usage:
>>> import torch
>>> from karbonn.functional import msle_loss
>>> loss = msle_loss(torch.randn(2, 4, requires_grad=True), torch.randn(2, 4))
>>> loss
tensor(..., grad_fn=<MseLossBackward0>)
>>> loss.backward()
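A minimal sketch of the MSLE computation, assuming log1p is the transform (consistent with the log1p comparison in the asinh losses above); torch.rand is used here to keep the inputs non-negative:
>>> prediction = torch.rand(2, 4, requires_grad=True)
>>> target = torch.rand(2, 4)
>>> torch.nn.functional.mse_loss(torch.log1p(prediction), torch.log1p(target))
tensor(..., grad_fn=<MseLossBackward0>)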
karbonn.functional.reduce_loss ¶
reduce_loss(tensor: Tensor, reduction: str) -> Tensor
Return the reduced loss.
This function is designed to be used with loss functions.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
tensor | Tensor | The input tensor to reduce. | required |
reduction | str | The reduction strategy. The valid values are 'mean', 'none', and 'sum'. | required |
Returns:
Type | Description |
---|---|
Tensor | The reduced tensor. The shape of the tensor depends on the reduction strategy. |
Raises:
Type | Description |
---|---|
ValueError | if the reduction is not valid. |
Example usage:
>>> import torch
>>> from karbonn.functional import reduce_loss
>>> tensor = torch.tensor([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]])
>>> reduce_loss(tensor, "none")
tensor([[0., 1., 2.],
[3., 4., 5.]])
>>> reduce_loss(tensor, "sum")
tensor(15.)
>>> reduce_loss(tensor, "mean")
tensor(2.5000)
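A minimal sketch of what this dispatch likely looks like (an assumption, not the library's exact code):
>>> def reduce_loss_sketch(tensor, reduction):
...     # map the reduction strategy to the matching tensor operation
...     if reduction == "mean":
...         return tensor.mean()
...     if reduction == "sum":
...         return tensor.sum()
...     if reduction == "none":
...         return tensor
...     raise ValueError(f"Incorrect reduction: {reduction}")
...
>>> reduce_loss_sketch(tensor, "mean")
tensor(2.5000)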
karbonn.functional.relative_loss ¶
relative_loss(
loss: Tensor,
indicator: Tensor,
reduction: str = "mean",
eps: float = 1e-08,
) -> Tensor
Compute the relative loss.
The indicators are based on https://en.wikipedia.org/wiki/Relative_change#Indicators_of_relative_change.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
loss | Tensor | The loss values. The tensor must have the same shape as the indicator. | required |
indicator | Tensor | The indicator values. | required |
reduction | str | The reduction strategy. The valid values are 'mean', 'none', and 'sum'. | 'mean' |
eps | float | An arbitrary small strictly positive number to avoid undefined results when the indicator is zero. | 1e-08 |
Returns:
Type | Description |
---|---|
Tensor | The computed relative loss. |
Raises:
Type | Description |
---|---|
RuntimeError | if the loss and indicator shapes do not match. |
ValueError | if the reduction is not valid. |
Example usage:
>>> import torch
>>> from karbonn.functional import relative_loss
>>> from karbonn.functional.loss import classical_relative_indicator
>>> prediction = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> loss = relative_loss(
... loss=torch.nn.functional.mse_loss(prediction, target, reduction="none"),
... indicator=classical_relative_indicator(prediction, target),
... )
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
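A minimal sketch of the relative-loss computation, assuming the element-wise loss is divided by the indicator clamped at eps (an assumption based on the parameter descriptions above):
>>> values = torch.nn.functional.mse_loss(prediction, target, reduction="none")
>>> (values / classical_relative_indicator(prediction, target).clamp(min=1e-08)).mean()
tensor(..., grad_fn=<MeanBackward0>)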
Relative loss indicators¶
karbonn.functional.loss.arithmetical_mean_indicator ¶
arithmetical_mean_indicator(
prediction: Tensor, target: Tensor
) -> Tensor
Return the arithmetical mean change.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prediction | Tensor | The predictions. | required |
target | Tensor | The target values. | required |
Returns:
Type | Description |
---|---|
Tensor | The indicator values. |
Example usage:
>>> import torch
>>> from karbonn.functional.loss import arithmetical_mean_indicator
>>> prediction = torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True)
>>> target = torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]])
>>> indicator = arithmetical_mean_indicator(prediction, target)
>>> indicator
tensor([[1.0000, 1.0000, 0.5000],
[3.0000, 3.0000, 1.0000]], grad_fn=<MulBackward0>)
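The output above is consistent with the element-wise arithmetic mean of absolute values; a minimal equivalent sketch (an assumption, not the library's verified code):
>>> (prediction.abs() + target.abs()) / 2
tensor([[1.0000, 1.0000, 0.5000],
        [3.0000, 3.0000, 1.0000]], grad_fn=<DivBackward0>)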
karbonn.functional.loss.classical_relative_indicator ¶
classical_relative_indicator(
prediction: Tensor, target: Tensor
) -> Tensor
Return the classical relative change.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prediction | Tensor | The predictions. | required |
target | Tensor | The target values. | required |
Returns:
Type | Description |
---|---|
Tensor | The indicator values. |
Example usage:
>>> import torch
>>> from karbonn.functional.loss import classical_relative_indicator
>>> prediction = torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True)
>>> target = torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]])
>>> indicator = classical_relative_indicator(prediction, target)
>>> indicator
tensor([[2., 1., 0.],
[3., 5., 1.]])
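The output above is consistent with using the absolute target as the indicator:
>>> target.abs()
tensor([[2., 1., 0.],
        [3., 5., 1.]])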
karbonn.functional.loss.geometric_mean_indicator ¶
geometric_mean_indicator(
prediction: Tensor, target: Tensor
) -> Tensor
Return the geometric mean change.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prediction | Tensor | The predictions. | required |
target | Tensor | The target values. | required |
Returns:
Type | Description |
---|---|
Tensor | The indicator values. |
Example usage:
>>> import torch
>>> from karbonn.functional.loss import geometric_mean_indicator
>>> prediction = torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True)
>>> target = torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]])
>>> indicator = geometric_mean_indicator(prediction, target)
>>> indicator
tensor([[0.0000, 1.0000, 0.0000],
[3.0000, 2.2361, 1.0000]], grad_fn=<SqrtBackward0>)
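The output above is consistent with the element-wise geometric mean of absolute values; a minimal equivalent sketch (an assumption):
>>> (prediction.abs() * target.abs()).sqrt()
tensor([[0.0000, 1.0000, 0.0000],
        [3.0000, 2.2361, 1.0000]], grad_fn=<SqrtBackward0>)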
karbonn.functional.loss.maximum_mean_indicator ¶
maximum_mean_indicator(
prediction: Tensor, target: Tensor
) -> Tensor
Return the maximum mean change.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prediction | Tensor | The predictions. | required |
target | Tensor | The target values. | required |
Returns:
Type | Description |
---|---|
Tensor | The indicator values. |
Example usage:
>>> import torch
>>> from karbonn.functional.loss import maximum_mean_indicator
>>> prediction = torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True)
>>> target = torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]])
>>> indicator = maximum_mean_indicator(prediction, target)
>>> indicator
tensor([[2., 1., 1.],
[3., 5., 1.]], grad_fn=<MaximumBackward0>)
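The output above is consistent with the element-wise maximum of absolute values; a minimal equivalent sketch (an assumption):
>>> torch.maximum(prediction.abs(), target.abs())
tensor([[2., 1., 1.],
        [3., 5., 1.]], grad_fn=<MaximumBackward0>)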
karbonn.functional.loss.minimum_mean_indicator ¶
minimum_mean_indicator(
prediction: Tensor, target: Tensor
) -> Tensor
Return the minimum mean change.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prediction | Tensor | The predictions. | required |
target | Tensor | The target values. | required |
Returns:
Type | Description |
---|---|
Tensor | The indicator values. |
Example usage:
>>> import torch
>>> from karbonn.functional.loss import minimum_mean_indicator
>>> prediction = torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True)
>>> target = torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]])
>>> indicator = minimum_mean_indicator(prediction, target)
>>> indicator
tensor([[0., 1., 0.],
[3., 1., 1.]], grad_fn=<MinimumBackward0>)
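The output above is consistent with the element-wise minimum of absolute values; a minimal equivalent sketch (an assumption):
>>> torch.minimum(prediction.abs(), target.abs())
tensor([[0., 1., 0.],
        [3., 1., 1.]], grad_fn=<MinimumBackward0>)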
karbonn.functional.loss.moment_mean_indicator ¶
moment_mean_indicator(
prediction: Tensor, target: Tensor, k: int = 1
) -> Tensor
Return the moment mean change of order k.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prediction | Tensor | The predictions. | required |
target | Tensor | The target values. | required |
k | int | The order of the moment. | 1 |
Returns:
Type | Description |
---|---|
Tensor | The indicator values. |
Example usage:
>>> import torch
>>> from karbonn.functional.loss import moment_mean_indicator
>>> prediction = torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True)
>>> target = torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]])
>>> indicator = moment_mean_indicator(prediction, target)
>>> indicator
tensor([[1.0000, 1.0000, 0.5000],
[3.0000, 3.0000, 1.0000]], grad_fn=<PowBackward0>)
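For order k, the output is consistent with ((|prediction|^k + |target|^k) / 2)^(1/k), which reduces to the arithmetic mean for the default k=1; a minimal equivalent sketch (an assumption):
>>> k = 1
>>> ((prediction.abs() ** k + target.abs() ** k) / 2) ** (1 / k)
tensor([[1.0000, 1.0000, 0.5000],
        [3.0000, 3.0000, 1.0000]], grad_fn=<PowBackward0>)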
karbonn.functional.loss.reversed_relative_indicator ¶
reversed_relative_indicator(
prediction: Tensor, target: Tensor
) -> Tensor
Return the reversed relative change.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prediction | Tensor | The predictions. | required |
target | Tensor | The target values. | required |
Returns:
Type | Description |
---|---|
Tensor | The indicator values. |
Example usage:
>>> import torch
>>> from karbonn.functional.loss import reversed_relative_indicator
>>> prediction = torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True)
>>> target = torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]])
>>> indicator = reversed_relative_indicator(prediction, target)
>>> indicator
tensor([[0., 1., 1.],
[3., 1., 1.]], grad_fn=<AbsBackward0>)
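The output above is consistent with using the absolute prediction as the indicator:
>>> prediction.abs()
tensor([[0., 1., 1.],
        [3., 1., 1.]], grad_fn=<AbsBackward0>)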
Errors¶
karbonn.functional.absolute_error ¶
absolute_error(
prediction: Tensor, target: Tensor
) -> Tensor
Compute the element-wise absolute error between the predictions and targets.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prediction | Tensor | The tensor of predictions. | required |
target | Tensor | The target tensor, which must have the same shape and data type as prediction. | required |
Returns:
Type | Description |
---|---|
Tensor | The absolute error tensor, which has the same shape and data type as the inputs. |
Example usage:
>>> import torch
>>> from karbonn.functional import absolute_error
>>> absolute_error(torch.eye(2), torch.ones(2, 2))
tensor([[0., 1.],
[1., 0.]])
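This matches the element-wise absolute difference:
>>> (torch.eye(2) - torch.ones(2, 2)).abs()
tensor([[0., 1.],
        [1., 0.]])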
karbonn.functional.absolute_relative_error ¶
absolute_relative_error(
prediction: Tensor, target: Tensor, eps: float = 1e-08
) -> Tensor
Compute the element-wise absolute relative error between the predictions and targets.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prediction | Tensor | The tensor of predictions. | required |
target | Tensor | The target tensor, which must have the same shape and data type as prediction. | required |
eps | float | An arbitrary small strictly positive number to avoid undefined results when the target is zero. | 1e-08 |
Returns:
Type | Description |
---|---|
Tensor | The absolute relative error tensor, which has the same shape and data type as the inputs. |
Example usage:
>>> import torch
>>> from karbonn.functional import absolute_relative_error
>>> absolute_relative_error(torch.eye(2), torch.ones(2, 2))
tensor([[0., 1.],
[1., 0.]])
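A minimal sketch of the computation, assuming the absolute error is divided by the absolute target clamped at eps:
>>> prediction, target = torch.eye(2), torch.ones(2, 2)
>>> (prediction - target).abs() / target.abs().clamp(min=1e-08)
tensor([[0., 1.],
        [1., 0.]])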
karbonn.functional.symmetric_absolute_relative_error ¶
symmetric_absolute_relative_error(
prediction: Tensor, target: Tensor, eps: float = 1e-08
) -> Tensor
Compute the element-wise symmetric absolute relative error between the predictions and targets.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prediction | Tensor | The tensor of predictions. | required |
target | Tensor | The target tensor, which must have the same shape and data type as prediction. | required |
eps | float | An arbitrary small strictly positive number to avoid undefined results when the target is zero. | 1e-08 |
Returns:
Type | Description |
---|---|
Tensor | The symmetric absolute relative error tensor, which has the same shape and data type as the inputs. |
Example usage:
>>> import torch
>>> from karbonn.functional import symmetric_absolute_relative_error
>>> symmetric_absolute_relative_error(torch.eye(2), torch.ones(2, 2))
tensor([[0., 2.],
[2., 0.]])
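A minimal sketch, assuming the symmetric variant divides the absolute error by the arithmetic mean of the absolute values, clamped at eps (consistent with the output above):
>>> prediction, target = torch.eye(2), torch.ones(2, 2)
>>> (prediction - target).abs() / ((prediction.abs() + target.abs()) / 2).clamp(min=1e-08)
tensor([[0., 2.],
        [2., 0.]])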
Utility¶
karbonn.functional.check_loss_reduction_strategy ¶
check_loss_reduction_strategy(reduction: str) -> None
Check if the provided reduction is a valid loss reduction strategy.
The valid reduction values are 'mean', 'none', and 'sum'.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
reduction | str | The reduction strategy to check. | required |
Raises:
Type | Description |
---|---|
ValueError | if the provided reduction is not valid. |
Example usage:
>>> from karbonn.functional import check_loss_reduction_strategy
>>> check_loss_reduction_strategy("mean")