sonnix.modules ¶
Provide custom PyTorch modules and layers.
This subpackage contains custom torch.nn.Module implementations
including activation functions, loss functions, fusion layers, numerical
encoders, and other building blocks for neural networks.
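All of these components are regular torch.nn.Module implementations, so they compose directly with built-in PyTorch layers. A minimal illustrative sketch using the Asinh and Clamp modules documented below:

```python
import torch
from torch import nn

from sonnix.modules import Asinh, Clamp

# Custom modules drop into standard containers like nn.Sequential.
model = nn.Sequential(
    nn.Linear(4, 8),
    Asinh(),                   # element-wise arcsinh activation
    nn.Linear(8, 1),
    Clamp(min=-1.0, max=1.0),  # keep outputs in [-1, 1]
)
out = model(torch.randn(2, 4))
```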
sonnix.modules.ArithmeticalMeanIndicator ¶
Bases: BaseRelativeIndicator
Implement the arithmetical mean change indicator function.
Example
>>> import torch
>>> from sonnix.modules.loss import ArithmeticalMeanIndicator
>>> prediction = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> indicator = ArithmeticalMeanIndicator()
>>> indicator
ArithmeticalMeanIndicator()
>>> values = indicator(
... prediction=torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True),
... target=torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]]),
... )
>>> values
tensor([[1.0000, 1.0000, 0.5000],
[3.0000, 3.0000, 1.0000]], grad_fn=<MulBackward0>)
sonnix.modules.Asinh ¶
Bases: Module
Implement a torch.nn.Module to compute the inverse hyperbolic
sine (arcsinh) of the elements.
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import Asinh
>>> m = Asinh()
>>> m
Asinh()
>>> out = m(torch.tensor([[-1.0, 0.0, 1.0], [-2.0, 2.0, 4.0]]))
>>> out
tensor([[-0.8814, 0.0000, 0.8814],
[-1.4436, 1.4436, 2.0947]])
sonnix.modules.AsinhCosSinNumericalEncoder ¶
Bases: CosSinNumericalEncoder
Extension of CosSinNumericalEncoder with an additional
feature built using the inverse hyperbolic sine (arcsinh).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `frequency` | `Tensor` | The initial frequency values. This input should be a tensor of shape `(n_features, feature_size // 2)`. | required |
| `phase_shift` | `Tensor` | The initial phase-shift values. This input should be a tensor of shape `(n_features, feature_size // 2)`. | required |
| `learnable` | `bool` | If `True`, the frequency and phase-shift parameters are learnable. | `False` |
Shape
- Input: (*, n_features), where * means any number of dimensions.
- Output: (*, n_features, feature_size + 1), where * has the same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import AsinhCosSinNumericalEncoder
>>> # Example with 1 feature
>>> m = AsinhCosSinNumericalEncoder(
... frequency=torch.tensor([[1.0, 2.0, 4.0]]),
... phase_shift=torch.zeros(1, 3),
... )
>>> m
AsinhCosSinNumericalEncoder(frequency=(1, 6), phase_shift=(1, 6), learnable=False)
>>> out = m(torch.tensor([[0.0], [1.0], [2.0], [3.0]]))
>>> out
tensor([[[ 0.0000, 0.0000, 0.0000, 1.0000, 1.0000, 1.0000, 0.0000]],
[[ 0.8415, 0.9093, -0.7568, 0.5403, -0.4161, -0.6536, 0.8814]],
[[ 0.9093, -0.7568, 0.9894, -0.4161, -0.6536, -0.1455, 1.4436]],
[[ 0.1411, -0.2794, -0.5366, -0.9900, 0.9602, 0.8439, 1.8184]]])
>>> # Example with 2 features
>>> m = AsinhCosSinNumericalEncoder(
... frequency=torch.tensor([[1.0, 2.0, 4.0], [2.0, 4.0, 6.0]]),
... phase_shift=torch.zeros(2, 3),
... )
>>> m
AsinhCosSinNumericalEncoder(frequency=(2, 6), phase_shift=(2, 6), learnable=False)
>>> out = m(torch.tensor([[0.0, 3.0], [1.0, 2.0], [2.0, 1.0], [3.0, 0.0]]))
>>> out
tensor([[[ 0.0000, 0.0000, 0.0000, 1.0000, 1.0000, 1.0000, 0.0000],
[-0.2794, -0.5366, -0.7510, 0.9602, 0.8439, 0.6603, 1.8184]],
[[ 0.8415, 0.9093, -0.7568, 0.5403, -0.4161, -0.6536, 0.8814],
[-0.7568, 0.9894, -0.5366, -0.6536, -0.1455, 0.8439, 1.4436]],
[[ 0.9093, -0.7568, 0.9894, -0.4161, -0.6536, -0.1455, 1.4436],
[ 0.9093, -0.7568, -0.2794, -0.4161, -0.6536, 0.9602, 0.8814]],
[[ 0.1411, -0.2794, -0.5366, -0.9900, 0.9602, 0.8439, 1.8184],
[ 0.0000, 0.0000, 0.0000, 1.0000, 1.0000, 1.0000, 0.0000]]])
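The doctest outputs above indicate that the appended channel is the arcsinh of the raw input: for x = 1.0 the last value is 0.8814 and for x = 3.0 it is 1.8184.

```python
import torch

# The extra (last) channel matches asinh(x) in the doctest outputs above.
torch.asinh(torch.tensor([1.0, 3.0]))  # tensor([0.8814, 1.8184])
```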
sonnix.modules.AsinhCosSinNumericalEncoder.output_size property ¶
output_size: int
Return the output feature size.
sonnix.modules.AsinhMSELoss ¶
Bases: Module
Implement a loss module that computes the mean squared error (MSE) on the inverse hyperbolic sine (asinh) transformed predictions and targets.
It is a generalization of mean squared logarithmic error (MSLE)
that works for real values. The asinh transformation is used
instead of log1p because asinh works on negative values.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `reduction` | `str` | The reduction strategy. The valid values are `'mean'`, `'none'`, and `'sum'`. | `'mean'` |
Example
>>> import torch
>>> from sonnix.modules import AsinhMSELoss
>>> criterion = AsinhMSELoss()
>>> criterion
AsinhMSELoss(reduction=mean)
>>> loss = criterion(torch.randn(2, 4, requires_grad=True), torch.randn(2, 4))
>>> loss
tensor(..., grad_fn=<MseLossBackward0>)
>>> loss.backward()
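Functionally, the loss is the MSE evaluated on asinh-transformed values. A minimal sketch of that composition (the module itself may handle the reduction differently):

```python
import torch
import torch.nn.functional as F

def asinh_mse_loss(prediction: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # MSE on asinh-transformed predictions and targets.
    return F.mse_loss(torch.asinh(prediction), torch.asinh(target))
```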
sonnix.modules.AsinhNumericalEncoder ¶
Bases: Module
Implement a numerical encoder using the inverse hyperbolic sine (asinh).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `scale` | `Tensor` | The initial scale values. This input should be a tensor of shape `(n_features, feature_size)`. | required |
| `learnable` | `bool` | If `True`, the scale parameters are learnable. | `False` |
Shape
- Input: (*, n_features), where * means any number of dimensions.
- Output: (*, n_features, feature_size), where * has the same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import AsinhNumericalEncoder
>>> # Example with 1 feature
>>> m = AsinhNumericalEncoder(scale=torch.tensor([[1.0, 2.0, 4.0]]))
>>> m
AsinhNumericalEncoder(scale=(1, 3), learnable=False)
>>> out = m(torch.tensor([[0.0], [1.0], [2.0], [3.0]]))
>>> out
tensor([[[0.0000, 0.0000, 0.0000]],
[[0.8814, 1.4436, 2.0947]],
[[1.4436, 2.0947, 2.7765]],
[[1.8184, 2.4918, 3.1798]]])
>>> # Example with 2 features
>>> m = AsinhNumericalEncoder(scale=torch.tensor([[1.0, 2.0, 4.0], [1.0, 3.0, 6.0]]))
>>> m
AsinhNumericalEncoder(scale=(2, 3), learnable=False)
>>> out = m(torch.tensor([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]))
>>> out
tensor([[[0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000]],
[[0.8814, 1.4436, 2.0947], [0.8814, 1.8184, 2.4918]],
[[1.4436, 2.0947, 2.7765], [1.4436, 2.4918, 3.1798]],
[[1.8184, 2.4918, 3.1798], [1.8184, 2.8934, 3.5843]]])
sonnix.modules.AsinhSmoothL1Loss ¶
Bases: Module
Implement a loss module that computes the smooth L1 loss on the inverse hyperbolic sine (asinh) transformed predictions and targets.
It is the smooth L1 counterpart of AsinhMSELoss, which generalizes the mean squared logarithmic error (MSLE) to real values. The asinh transformation is used instead of log1p because asinh works on negative values.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `reduction` | `str` | The reduction strategy. The valid values are `'mean'`, `'none'`, and `'sum'`. | `'mean'` |
| `beta` | `float` | The threshold at which to change between L1 and L2 loss. The value must be non-negative. | `1.0` |
Example
>>> import torch
>>> from sonnix.modules import AsinhSmoothL1Loss
>>> criterion = AsinhSmoothL1Loss()
>>> criterion
AsinhSmoothL1Loss(reduction=mean, beta=1.0)
>>> loss = criterion(torch.randn(2, 4, requires_grad=True), torch.randn(2, 4))
>>> loss
tensor(..., grad_fn=<SmoothL1LossBackward0>)
>>> loss.backward()
sonnix.modules.AverageFusion ¶
Bases: SumFusion
Implement a layer to average the inputs.
Example
>>> import torch
>>> from sonnix.modules import AverageFusion
>>> module = AverageFusion()
>>> module
AverageFusion(normalized=True)
>>> x1 = torch.tensor([[2.0, 3.0, 4.0], [5.0, 6.0, 7.0]], requires_grad=True)
>>> x2 = torch.tensor([[12.0, 13.0, 14.0], [15.0, 16.0, 17.0]], requires_grad=True)
>>> out = module(x1, x2)
>>> out
tensor([[ 7., 8., 9.],
[10., 11., 12.]], grad_fn=<DivBackward0>)
>>> out.mean().backward()
sonnix.modules.BaseAlphaActivation ¶
Bases: Module
Define a base class to implement an activation layer with a
learnable parameter alpha.
When num_parameters is 1 (the default), the activation layer uses a single
parameter alpha across all input channels. If num_parameters is set to the
number of input channels, a separate alpha is used for each channel.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `num_parameters` | `int` | The number of learnable parameters. Although it takes an integer as input, only two values are legitimate: `1`, or the number of input channels. | `1` |
| `init` | `float` | The initial value of the learnable parameter(s). | `1.0` |
| `learnable` | `bool` | If `True`, the parameter `alpha` is learnable. | `True` |
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import MultiQuadratic
>>> m = MultiQuadratic()
>>> m
MultiQuadratic(num_parameters=1, learnable=True)
>>> out = m(torch.arange(8, dtype=torch.float).view(2, 4))
>>> out
tensor([[1.0000, 0.7071, 0.4472, 0.3162],
[0.2425, 0.1961, 0.1644, 0.1414]], grad_fn=<MulBackward0>)
sonnix.modules.BaseRelativeIndicator ¶
Bases: Module
Define the base class to implement a relative indicator function.
The indicators are designed based on https://en.wikipedia.org/wiki/Relative_change#Indicators_of_relative_change.
Example
>>> import torch
>>> from sonnix.modules.loss import ClassicalRelativeIndicator
>>> prediction = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> indicator = ClassicalRelativeIndicator()
>>> indicator
ClassicalRelativeIndicator()
>>> values = indicator(
... prediction=torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True),
... target=torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]]),
... )
>>> values
tensor([[2., 1., 0.],
[3., 5., 1.]])
sonnix.modules.BaseRelativeIndicator.forward ¶
forward(prediction: Tensor, target: Tensor) -> Tensor
Return the indicator values.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `prediction` | `Tensor` | The predictions. | required |
| `target` | `Tensor` | The target values. | required |
Returns:
| Type | Description |
|---|---|
| `Tensor` | The indicator values. |
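The concrete indicators in this section follow simple closed forms that can be read off their doctest outputs (for example, ClassicalRelativeIndicator returns |target|). A sketch inferred from those examples, not taken from the library source:

```python
import torch

# Indicator formulas inferred from the doctest outputs in this section.
def classical(prediction: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    return target.abs()                                   # ClassicalRelativeIndicator

def reversed_indicator(prediction: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    return prediction.abs()                               # ReversedRelativeIndicator

def arithmetical_mean(prediction: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    return 0.5 * (prediction.abs() + target.abs())        # ArithmeticalMeanIndicator

def geometric_mean(prediction: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    return (prediction.abs() * target.abs()).sqrt()       # GeometricMeanIndicator

def maximum_mean(prediction: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    return torch.maximum(prediction.abs(), target.abs())  # MaximumMeanIndicator

def minimum_mean(prediction: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    return torch.minimum(prediction.abs(), target.abs())  # MinimumMeanIndicator
```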
sonnix.modules.BinaryFocalLoss ¶
Bases: Module
Implementation of the binary focal loss.
Based on "focal loss for Dense Object Detection" (https://arxiv.org/pdf/1708.02002.pdf)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `alpha` | `float` | The weighting factor, which must be in the range `[0, 1]`. | `0.25` |
| `gamma` | `float` | The focusing parameter, which must be positive (`>=0`). | `2.0` |
| `reduction` | `str` | The reduction to apply to the output: `'none'`, `'mean'`, or `'sum'`. | `'mean'` |
Shape
- Input: (*), where * means any number of dimensions.
- Target: (*), same shape as the input.
- Output: scalar. If reduction is 'none', then (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import BinaryFocalLoss
>>> criterion = BinaryFocalLoss()
>>> criterion
BinaryFocalLoss(alpha=0.25, gamma=2.0, reduction=mean)
>>> prediction = torch.rand(2, 4, requires_grad=True)
>>> target = torch.tensor([[1.0, 0.0, 0.0, 1.0], [1.0, 0.0, 1.0, 0.0]])
>>> loss = criterion(prediction, target)
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
sonnix.modules.BinaryFocalLoss.forward ¶
forward(prediction: Tensor, target: Tensor) -> Tensor
Compute the binary focal loss.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `prediction` | `Tensor` | The float tensor with predictions as probabilities for each example. | required |
| `target` | `Tensor` | A float tensor with the same shape as the inputs. It stores the binary classification label for each element (0 for the negative class and 1 for the positive class). | required |
Returns:
| Type | Description |
|---|---|
| `Tensor` | The loss value(s). If the reduction is `'none'`, the output has the same shape as the input; otherwise, it is a scalar. |
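For reference, the focal loss formula from the cited paper is FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t). A hedged sketch of that formula for probability inputs (not necessarily this module's exact implementation):

```python
import torch

def binary_focal_loss(prob: torch.Tensor, target: torch.Tensor,
                      alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    # FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t), from Lin et al. (2017).
    p_t = prob * target + (1.0 - prob) * (1.0 - target)
    alpha_t = alpha * target + (1.0 - alpha) * (1.0 - target)
    return (-alpha_t * (1.0 - p_t) ** gamma * torch.log(p_t)).mean()
```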
sonnix.modules.BinaryFocalLossWithLogits ¶
Bases: Module
Implementation of the binary focal loss.
Based on "focal loss for Dense Object Detection" (https://arxiv.org/pdf/1708.02002.pdf)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `alpha` | `float` | The weighting factor, which must be in the range `[0, 1]`. | `0.25` |
| `gamma` | `float` | The focusing parameter, which must be positive (`>=0`). | `2.0` |
| `reduction` | `str` | The reduction to apply to the output: `'none'`, `'mean'`, or `'sum'`. | `'mean'` |
Shape
- Input: (*), where * means any number of dimensions.
- Target: (*), same shape as the input.
- Output: scalar. If reduction is 'none', then (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import BinaryFocalLossWithLogits
>>> criterion = BinaryFocalLossWithLogits()
>>> criterion
BinaryFocalLossWithLogits(alpha=0.25, gamma=2.0, reduction=mean)
>>> prediction = torch.randn(2, 4, requires_grad=True)
>>> target = torch.tensor([[1.0, 0.0, 0.0, 1.0], [1.0, 0.0, 1.0, 0.0]])
>>> loss = criterion(prediction, target)
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
sonnix.modules.BinaryFocalLossWithLogits.forward ¶
forward(prediction: Tensor, target: Tensor) -> Tensor
Compute the binary focal loss.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `prediction` | `Tensor` | The float tensor with predictions as unnormalized scores (often referred to as logits) for each example. | required |
| `target` | `Tensor` | A float tensor with the same shape as the inputs. It stores the binary classification label for each element (0 for the negative class and 1 for the positive class). | required |
Returns:
| Type | Description |
|---|---|
| `Tensor` | The loss value(s). If the reduction is `'none'`, the output has the same shape as the input; otherwise, it is a scalar. |
sonnix.modules.BinaryPoly1Loss ¶
Bases: Module
Implementation of the Poly-1 loss for binary targets.
Based on "PolyLoss: A Polynomial Expansion Perspective of Classification Loss Functions" (https://arxiv.org/pdf/2204.12511)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `alpha` | `float` | The weighting factor for the polynomial term. | `1.0` |
| `gamma` | | The focusing parameter, which must be positive (`>=0`). | required |
| `reduction` | `str` | The reduction to apply to the output: `'none'`, `'mean'`, or `'sum'`. | `'mean'` |
Shape
- Input: (*), where * means any number of dimensions.
- Target: (*), same shape as the input.
- Output: scalar. If reduction is 'none', then (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import BinaryPoly1Loss
>>> criterion = BinaryPoly1Loss()
>>> criterion
BinaryPoly1Loss(alpha=1.0, reduction=mean)
>>> prediction = torch.rand(2, 4, requires_grad=True)
>>> target = torch.tensor([[1.0, 0.0, 0.0, 1.0], [1.0, 0.0, 1.0, 0.0]])
>>> loss = criterion(prediction, target)
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
sonnix.modules.BinaryPoly1Loss.forward ¶
forward(prediction: Tensor, target: Tensor) -> Tensor
Compute the Poly-1 loss for binary targets.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `prediction` | `Tensor` | The float tensor with predictions as probabilities for each example. | required |
| `target` | `Tensor` | A float tensor with the same shape as the inputs. It stores the binary classification label for each element (0 for the negative class and 1 for the positive class). | required |
Returns:
| Type | Description |
|---|---|
| `Tensor` | The loss value(s). The shape of the tensor depends on the reduction. If the reduction is `'none'`, the output has the same shape as the input; otherwise, it is a scalar. |
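The Poly-1 formulation from the cited paper adds a first-order polynomial term to the cross-entropy: L = -log(p_t) + alpha * (1 - p_t). A hedged sketch of that formula for probability inputs (not necessarily this module's exact implementation):

```python
import torch

def binary_poly1_loss(prob: torch.Tensor, target: torch.Tensor,
                      alpha: float = 1.0) -> torch.Tensor:
    # Poly-1: cross-entropy plus alpha * (1 - p_t), with p_t the true-class probability.
    p_t = prob * target + (1.0 - prob) * (1.0 - target)
    return (-torch.log(p_t) + alpha * (1.0 - p_t)).mean()
```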
sonnix.modules.BinaryPoly1LossWithLogits ¶
Bases: Module
Implementation of the Poly-1 loss for binary targets.
Based on "PolyLoss: A Polynomial Expansion Perspective of Classification Loss Functions" (https://arxiv.org/pdf/2204.12511)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `alpha` | `float` | The weighting factor for the polynomial term. | `1.0` |
| `gamma` | | The focusing parameter, which must be positive (`>=0`). | required |
| `reduction` | `str` | The reduction to apply to the output: `'none'`, `'mean'`, or `'sum'`. | `'mean'` |
Shape
- Input: (*), where * means any number of dimensions.
- Target: (*), same shape as the input.
- Output: scalar. If reduction is 'none', then (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import BinaryPoly1LossWithLogits
>>> criterion = BinaryPoly1LossWithLogits()
>>> criterion
BinaryPoly1LossWithLogits(alpha=1.0, reduction=mean)
>>> prediction = torch.randn(2, 4, requires_grad=True)
>>> target = torch.tensor([[1.0, 0.0, 0.0, 1.0], [1.0, 0.0, 1.0, 0.0]])
>>> loss = criterion(prediction, target)
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
sonnix.modules.BinaryPoly1LossWithLogits.forward ¶
forward(prediction: Tensor, target: Tensor) -> Tensor
Compute the Poly-1 loss for binary targets.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `prediction` | `Tensor` | The float tensor with predictions as unnormalized scores (often referred to as logits) for each example. | required |
| `target` | `Tensor` | A float tensor with the same shape as the inputs. It stores the binary classification label for each element (0 for the negative class and 1 for the positive class). | required |
Returns:
| Type | Description |
|---|---|
| `Tensor` | The loss value(s). The shape of the tensor depends on the reduction. If the reduction is `'none'`, the output has the same shape as the input; otherwise, it is a scalar. |
sonnix.modules.Clamp ¶
Bases: Module
Implement a module to clamp all elements in input into the range
[min, max].
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `min` | `float \| None` | The lower bound of the range to be clamped to. If `None`, there is no lower bound. | `-1.0` |
| `max` | `float \| None` | The upper bound of the range to be clamped to. If `None`, there is no upper bound. | `1.0` |
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import Clamp
>>> m = Clamp(min=-1, max=2)
>>> m
Clamp(min=-1, max=2)
>>> out = m(torch.tensor([[-2.0, -1.0, 0.0], [1.0, 2.0, 3.0]]))
>>> out
tensor([[-1., -1., 0.], [ 1., 2., 2.]])
sonnix.modules.ClassicalRelativeIndicator ¶
Bases: BaseRelativeIndicator
Implement the classical relative indicator function.
Example
>>> import torch
>>> from sonnix.modules.loss import ClassicalRelativeIndicator
>>> prediction = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> indicator = ClassicalRelativeIndicator()
>>> indicator
ClassicalRelativeIndicator()
>>> values = indicator(
... prediction=torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True),
... target=torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]]),
... )
>>> values
tensor([[2., 1., 0.],
[3., 5., 1.]])
sonnix.modules.ConcatFusion ¶
Bases: Module
Implement a module to concatenate inputs.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `dim` | `int` | The fusion dimension. | `-1` |
Example
>>> import torch
>>> from sonnix.modules import ConcatFusion
>>> module = ConcatFusion()
>>> module
ConcatFusion(dim=-1)
>>> x1 = torch.tensor([[2.0, 3.0, 4.0], [5.0, 6.0, 7.0]], requires_grad=True)
>>> x2 = torch.tensor([[12.0, 13.0, 14.0], [15.0, 16.0, 17.0]], requires_grad=True)
>>> out = module(x1, x2)
>>> out
tensor([[ 2., 3., 4., 12., 13., 14.],
[ 5., 6., 7., 15., 16., 17.]], grad_fn=<CatBackward0>)
>>> out.mean().backward()
sonnix.modules.CosSinNumericalEncoder ¶
Bases: Module
Implement a frequency/phase-shift numerical encoder where the periodic functions are cosine and sine.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `frequency` | `Tensor` | The initial frequency values. This input should be a tensor of shape `(n_features, feature_size // 2)`. | required |
| `phase_shift` | `Tensor` | The initial phase-shift values. This input should be a tensor of shape `(n_features, feature_size // 2)`. | required |
| `learnable` | `bool` | If `True`, the frequency and phase-shift parameters are learnable. | `False` |
Shape
- Input: (*, n_features), where * means any number of dimensions.
- Output: (*, n_features, feature_size), where * has the same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import CosSinNumericalEncoder
>>> # Example with 1 feature
>>> m = CosSinNumericalEncoder(
... frequency=torch.tensor([[1.0, 2.0, 4.0]]),
... phase_shift=torch.zeros(1, 3),
... )
>>> m
CosSinNumericalEncoder(frequency=(1, 6), phase_shift=(1, 6), learnable=False)
>>> out = m(torch.tensor([[0.0], [1.0], [2.0], [3.0]]))
>>> out
tensor([[[ 0.0000, 0.0000, 0.0000, 1.0000, 1.0000, 1.0000]],
[[ 0.8415, 0.9093, -0.7568, 0.5403, -0.4161, -0.6536]],
[[ 0.9093, -0.7568, 0.9894, -0.4161, -0.6536, -0.1455]],
[[ 0.1411, -0.2794, -0.5366, -0.9900, 0.9602, 0.8439]]])
>>> # Example with 2 features
>>> m = CosSinNumericalEncoder(
... frequency=torch.tensor([[1.0, 2.0, 4.0], [2.0, 4.0, 6.0]]),
... phase_shift=torch.zeros(2, 3),
... )
>>> m
CosSinNumericalEncoder(frequency=(2, 6), phase_shift=(2, 6), learnable=False)
>>> out = m(torch.tensor([[0.0, 3.0], [1.0, 2.0], [2.0, 1.0], [3.0, 0.0]]))
>>> out
tensor([[[ 0.0000, 0.0000, 0.0000, 1.0000, 1.0000, 1.0000],
[-0.2794, -0.5366, -0.7510, 0.9602, 0.8439, 0.6603]],
[[ 0.8415, 0.9093, -0.7568, 0.5403, -0.4161, -0.6536],
[-0.7568, 0.9894, -0.5366, -0.6536, -0.1455, 0.8439]],
[[ 0.9093, -0.7568, 0.9894, -0.4161, -0.6536, -0.1455],
[ 0.9093, -0.7568, -0.2794, -0.4161, -0.6536, 0.9602]],
[[ 0.1411, -0.2794, -0.5366, -0.9900, 0.9602, 0.8439],
[ 0.0000, 0.0000, 0.0000, 1.0000, 1.0000, 1.0000]]])
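The doctest outputs above show that the first half of the encoded features are sine components and the second half cosine components. A sketch consistent with those outputs (inferred from the examples, not the library source):

```python
import torch

def cos_sin_encode(x: torch.Tensor, frequency: torch.Tensor,
                   phase_shift: torch.Tensor) -> torch.Tensor:
    # x: (*, n_features) -> (*, n_features, 2 * num_frequencies)
    # e.g. x=1.0, frequency=[1, 2, 4] -> sin(1), sin(2), sin(4), cos(1), cos(2), cos(4)
    x = x.unsqueeze(-1)
    return torch.cat(
        [torch.sin(x * frequency + phase_shift),
         torch.cos(x * frequency + phase_shift)],
        dim=-1,
    )
```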
sonnix.modules.CosSinNumericalEncoder.input_size property ¶
input_size: int
Return the input feature size.
sonnix.modules.CosSinNumericalEncoder.output_size property ¶
output_size: int
Return the output feature size.
sonnix.modules.CosSinNumericalEncoder.create_linspace_frequency classmethod ¶
create_linspace_frequency(
num_frequencies: int,
min_frequency: float,
max_frequency: float,
learnable: bool = False,
) -> CosSinNumericalEncoder
Create a CosSinNumericalEncoder where the frequencies are evenly spaced in a frequency range.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `num_frequencies` | `int` | The number of frequencies. | required |
| `min_frequency` | `float` | The minimum frequency. | required |
| `max_frequency` | `float` | The maximum frequency. | required |
| `learnable` | `bool` | If `True`, the frequency and phase-shift parameters are learnable. | `False` |
Returns:
| Type | Description |
|---|---|
| `CosSinNumericalEncoder` | An instantiated `CosSinNumericalEncoder`. |
Example
>>> import torch
>>> from sonnix.modules import CosSinNumericalEncoder
>>> m = CosSinNumericalEncoder.create_linspace_frequency(
... num_frequencies=5, min_frequency=0.1, max_frequency=1.0
... )
>>> m
CosSinNumericalEncoder(frequency=(1, 10), phase_shift=(1, 10), learnable=False)
sonnix.modules.CosSinNumericalEncoder.create_linspace_value_range classmethod ¶
create_linspace_value_range(
num_frequencies: int,
min_abs_value: float,
max_abs_value: float,
learnable: bool = False,
) -> CosSinNumericalEncoder
Create a CosSinNumericalEncoder where the frequencies are
evenly spaced given a value range.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `num_frequencies` | `int` | The number of frequencies. | required |
| `min_abs_value` | `float` | The minimum absolute value to encode. | required |
| `max_abs_value` | `float` | The maximum absolute value to encode. | required |
| `learnable` | `bool` | If `True`, the frequency and phase-shift parameters are learnable. | `False` |
Returns:
| Type | Description |
|---|---|
| `CosSinNumericalEncoder` | An instantiated `CosSinNumericalEncoder`. |
Example
>>> import torch
>>> from sonnix.modules import CosSinNumericalEncoder
>>> m = CosSinNumericalEncoder.create_linspace_value_range(
... num_frequencies=5, min_abs_value=0.1, max_abs_value=1.0
... )
>>> m
CosSinNumericalEncoder(frequency=(1, 10), phase_shift=(1, 10), learnable=False)
sonnix.modules.CosSinNumericalEncoder.create_logspace_frequency classmethod ¶
create_logspace_frequency(
num_frequencies: int,
min_frequency: float,
max_frequency: float,
learnable: bool = False,
) -> CosSinNumericalEncoder
Create a CosSinNumericalEncoder where the frequencies are
evenly spaced in the log space in a frequency range.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `num_frequencies` | `int` | The number of frequencies. | required |
| `min_frequency` | `float` | The minimum frequency. | required |
| `max_frequency` | `float` | The maximum frequency. | required |
| `learnable` | `bool` | If `True`, the frequency and phase-shift parameters are learnable. | `False` |
Returns:
| Type | Description |
|---|---|
| `CosSinNumericalEncoder` | An instantiated `CosSinNumericalEncoder`. |
Example
>>> import torch
>>> from sonnix.modules import CosSinNumericalEncoder
>>> m = CosSinNumericalEncoder.create_logspace_frequency(
... num_frequencies=5, min_frequency=0.1, max_frequency=1.0
... )
>>> m
CosSinNumericalEncoder(frequency=(1, 10), phase_shift=(1, 10), learnable=False)
sonnix.modules.CosSinNumericalEncoder.create_logspace_value_range classmethod ¶
create_logspace_value_range(
num_frequencies: int,
min_abs_value: float,
max_abs_value: float,
learnable: bool = False,
) -> CosSinNumericalEncoder
Create a CosSinNumericalEncoder where the frequencies are
evenly spaced in the log space given a value range.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `num_frequencies` | `int` | The number of frequencies. | required |
| `min_abs_value` | `float` | The minimum absolute value to encode. | required |
| `max_abs_value` | `float` | The maximum absolute value to encode. | required |
| `learnable` | `bool` | If `True`, the frequency and phase-shift parameters are learnable. | `False` |
Returns:
| Type | Description |
|---|---|
| `CosSinNumericalEncoder` | An instantiated `CosSinNumericalEncoder`. |
Example
>>> import torch
>>> from sonnix.modules import CosSinNumericalEncoder
>>> m = CosSinNumericalEncoder.create_logspace_value_range(
... num_frequencies=5, min_abs_value=0.1, max_abs_value=1.0
... )
>>> m
CosSinNumericalEncoder(frequency=(1, 10), phase_shift=(1, 10), learnable=False)
sonnix.modules.CosSinNumericalEncoder.create_rand_frequency classmethod ¶
create_rand_frequency(
num_frequencies: int,
min_frequency: float,
max_frequency: float,
learnable: bool = False,
) -> CosSinNumericalEncoder
Create a CosSinNumericalEncoder where the frequencies are
uniformly initialized in a frequency range.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `num_frequencies` | `int` | The number of frequencies. | required |
| `min_frequency` | `float` | The minimum frequency. | required |
| `max_frequency` | `float` | The maximum frequency. | required |
| `learnable` | `bool` | If `True`, the frequency and phase-shift parameters are learnable. | `False` |
Returns:
| Type | Description |
|---|---|
| `CosSinNumericalEncoder` | An instantiated `CosSinNumericalEncoder`. |
Example
>>> import torch
>>> from sonnix.modules import CosSinNumericalEncoder
>>> m = CosSinNumericalEncoder.create_rand_frequency(
... num_frequencies=5, min_frequency=0.1, max_frequency=1.0
... )
>>> m
CosSinNumericalEncoder(frequency=(1, 10), phase_shift=(1, 10), learnable=False)
sonnix.modules.CosSinNumericalEncoder.create_rand_value_range classmethod ¶
create_rand_value_range(
num_frequencies: int,
min_abs_value: float,
max_abs_value: float,
learnable: bool = False,
) -> CosSinNumericalEncoder
Create a CosSinNumericalEncoder where the frequencies are
uniformly initialized for a given value range.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `num_frequencies` | `int` | The number of frequencies. | required |
| `min_abs_value` | `float` | The minimum absolute value to encode. | required |
| `max_abs_value` | `float` | The maximum absolute value to encode. | required |
| `learnable` | `bool` | If `True`, the frequency and phase-shift parameters are learnable. | `False` |
Returns:
| Type | Description |
|---|---|
| `CosSinNumericalEncoder` | An instantiated `CosSinNumericalEncoder`. |
Example
>>> import torch
>>> from sonnix.modules import CosSinNumericalEncoder
>>> m = CosSinNumericalEncoder.create_rand_value_range(
... num_frequencies=5, min_abs_value=0.1, max_abs_value=1.0
... )
>>> m
CosSinNumericalEncoder(frequency=(1, 10), phase_shift=(1, 10), learnable=False)
sonnix.modules.DynamicAsinh ¶
Bases: BaseDynamicNorm
Applies the Dynamic Asinh normalization over a mini-batch of inputs.
This layer implements the following operation:
y = gamma * asinh(alpha * x) + beta
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `normalized_shape` | `int \| list[int] \| tuple[int, ...]` | The input shape to normalize. If a single integer is used, it is treated as a singleton list, and this module will normalize over the last dimension, which is expected to be of that specific size. | required |
| `alpha_init_value` | `float` | The initial value for `alpha`. | `0.5` |
Shape
- Input: (N, *), where * means any number of dimensions.
- Output: (N, *), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import DynamicAsinh
>>> m = DynamicAsinh(normalized_shape=5)
>>> m
DynamicAsinh(normalized_shape=(5,))
>>> out = m(torch.tensor([[-2, -1, 0, 1, 2], [3, 2, 1, 2, 3]]))
>>> out
tensor([[-0.8814, -0.4812, 0.0000, 0.4812, 0.8814],
[ 1.1948, 0.8814, 0.4812, 0.8814, 1.1948]], grad_fn=<AddBackward0>)
sonnix.modules.DynamicTanh ¶
Bases: BaseDynamicNorm
Applies the Dynamic Tanh normalization over a mini-batch of inputs.
This layer implements the following operation:
y = gamma * tanh(alpha * x) + beta
Paper
Transformers without Normalization. CVPR 2025. https://arxiv.org/pdf/2503.10622
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `normalized_shape` | `int \| list[int] \| tuple[int, ...]` | The input shape to normalize. If a single integer is used, it is treated as a singleton list, and this module will normalize over the last dimension, which is expected to be of that specific size. | required |
| `alpha_init_value` | `float` | The initial value for `alpha`. | `0.5` |
Shape
- Input: (N, *), where * means any number of dimensions.
- Output: (N, *), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import DynamicTanh
>>> m = DynamicTanh(normalized_shape=5)
>>> m
DynamicTanh(normalized_shape=(5,))
>>> out = m(torch.tensor([[-2, -1, 0, 1, 2], [3, 2, 1, 2, 3]]))
>>> out
tensor([[-0.7616, -0.4621, 0.0000, 0.4621, 0.7616],
[ 0.9051, 0.7616, 0.4621, 0.7616, 0.9051]], grad_fn=<AddBackward0>)
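A minimal sketch of the documented operation y = gamma * tanh(alpha * x) + beta; the parameter shapes and their ones/zeros initialization are assumptions, although they match the doctest output (gamma = 1, beta = 0, alpha = 0.5):

```python
import torch
from torch import nn

class DynamicTanhSketch(nn.Module):
    # y = gamma * tanh(alpha * x) + beta (sketch; parameter shapes are assumptions).
    def __init__(self, normalized_shape: int, alpha_init_value: float = 0.5) -> None:
        super().__init__()
        self.alpha = nn.Parameter(torch.full((normalized_shape,), alpha_init_value))
        self.gamma = nn.Parameter(torch.ones(normalized_shape))
        self.beta = nn.Parameter(torch.zeros(normalized_shape))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.gamma * torch.tanh(self.alpha * x) + self.beta
```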
sonnix.modules.ExU ¶
Bases: Module
Implementation of the exp-centered (ExU) layer.
This layer was proposed in the following paper:
Neural Additive Models: Interpretable Machine Learning with
Neural Nets.
Agarwal R., Melnick L., Frosst N., Zhang X., Lengerich B.,
Caruana R., Hinton G.
NeurIPS 2021. (https://arxiv.org/pdf/2004.13912.pdf)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `in_features` | `int` | The size of each input sample. | required |
| `out_features` | `int` | The size of each output sample. | required |
| `bias` | `bool` | If set to `False`, the layer will not learn an additive bias. | `True` |
| `device` | `device \| None` | The device where to initialize the layer's parameters. | `None` |
| `dtype` | `dtype \| None` | The data type of the layer's parameters. | `None` |
Shape
- Input: (*, in_features), where * means any number of dimensions, including none.
- Output: (*, out_features), where * is the same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import ExU
>>> m = ExU(4, 6)
>>> m
ExU(in_features=4, out_features=6, bias=True)
>>> out = m(torch.rand(6, 4))
>>> out
tensor([[...]], grad_fn=<MmBackward0>)
sonnix.modules.ExU.reset_parameters ¶
reset_parameters() -> None
Reset the parameters.
As indicated on page 4 of the paper, the weights are initialized using a
normal distribution N(4.0, 0.5). The biases are initialized to 0.
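The cited paper defines the ExU unit as h(x) = f((x - b) e^w), where f is the activation function. A hedged sketch of the linear part (the module's exact forward may differ):

```python
import torch

def exu_forward(x: torch.Tensor, weight: torch.Tensor, bias: torch.Tensor) -> torch.Tensor:
    # ExU (Agarwal et al., 2021): (x - b) @ exp(W); the activation is applied outside.
    # weight: (in_features, out_features), bias: (in_features,)
    return (x - bias) @ torch.exp(weight)
```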
sonnix.modules.Exp ¶
Bases: Module
Implement a torch.nn.Module to compute the exponential of the
input.
This module is equivalent to exp(input)
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import Exp
>>> m = Exp()
>>> m
Exp()
>>> out = m(torch.tensor([[-1.0, 0.0, 1.0], [-2.0, 2.0, 3.0]]))
>>> out
tensor([[ 0.3679, 1.0000, 2.7183],
[ 0.1353, 7.3891, 20.0855]])
sonnix.modules.ExpSin ¶
Bases: BaseAlphaActivation
Implement the ExpSin activation layer.
Formula: exp(-sin(alpha * x))
This activation layer was proposed in the following paper:
Beyond Periodicity: Towards a Unifying Framework for Activations in Coordinate-MLPs.
Ramasinghe S., Lucey S.
ECCV 2022. (http://arxiv.org/pdf/2111.15135)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `num_parameters` | `int` | The number of learnable parameters. Although it takes an integer as input, only two values are legitimate: `1`, or the number of input channels. | `1` |
| `init` | `float` | The initial value of the learnable parameter(s). | `1.0` |
| `learnable` | `bool` | If `True`, the parameter `alpha` is learnable. | `True` |
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import ExpSin
>>> m = ExpSin()
>>> m
ExpSin(num_parameters=1, learnable=True)
>>> out = m(torch.arange(8, dtype=torch.float).view(2, 4))
>>> out
tensor([[1.0000, 2.3198, 2.4826, 1.1516],
[0.4692, 0.3833, 0.7562, 1.9290]], grad_fn=<ExpBackward0>)
sonnix.modules.Expm1 ¶
Bases: Module
Implement a torch.nn.Module to compute the exponential of the
elements minus 1 of input.
This module is equivalent to exp(input) - 1
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import Expm1
>>> m = Expm1()
>>> m
Expm1()
>>> out = m(torch.tensor([[-1.0, 0.0, 1.0], [-2.0, 2.0, 4.0]]))
>>> out
tensor([[-0.6321, 0.0000, 1.7183],
[-0.8647, 6.3891, 53.5981]])
sonnix.modules.Gaussian ¶
Bases: BaseAlphaActivation
Implement the Gaussian activation layer.
Formula: exp(-0.5 * x^2 / alpha^2)
This activation layer was proposed in the following paper:
Beyond Periodicity: Towards a Unifying Framework for Activations in Coordinate-MLPs.
Ramasinghe S., Lucey S.
ECCV 2022. (http://arxiv.org/pdf/2111.15135)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `num_parameters` | `int` | The number of learnable parameters. Although it takes an integer as input, only two values are legitimate: `1`, or the number of input channels. | `1` |
| `init` | `float` | The initial value of the learnable parameter(s). | `1.0` |
| `learnable` | `bool` | If `True`, the parameter `alpha` is learnable. | `True` |
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import Gaussian
>>> m = Gaussian()
>>> m
Gaussian(num_parameters=1, learnable=True)
>>> out = m(torch.arange(8, dtype=torch.float).view(2, 4))
>>> out
tensor([[1.0000e+00, 6.0653e-01, 1.3534e-01, 1.1109e-02],
[3.3546e-04, 3.7267e-06, 1.5230e-08, 2.2897e-11]], grad_fn=<ExpBackward0>)
sonnix.modules.GeneralRobustRegressionLoss ¶
Bases: Module
Implement the general robust regression loss a.k.a. Barron robust loss.
Based on the paper:
A General and Adaptive Robust Loss Function
Jonathan T. Barron
CVPR 2019 (https://arxiv.org/abs/1701.03077)
Note
The "adaptative" part of the loss is not implemented in this function.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `alpha` | `float` | The shape parameter that controls the robustness of the loss. | `2.0` |
| `scale` | `float` | The scale parameter that controls the size of the loss's quadratic bowl near 0. | `1.0` |
| `max` | `float \| None` | The maximum value used to clip the loss before computing the reduction. If `None`, the loss is not clipped. | `None` |
| `reduction` | `str` | The reduction strategy. The valid values are `'mean'`, `'none'`, and `'sum'`. | `'mean'` |
Shape
- Input: (*), where * means any number of dimensions.
- Target: (*), same shape as the input.
- Output: scalar. If reduction is 'none', then (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import GeneralRobustRegressionLoss
>>> criterion = GeneralRobustRegressionLoss()
>>> criterion
GeneralRobustRegressionLoss(alpha=2.0, scale=1.0, max=None, reduction=mean)
>>> input = torch.randn(3, 2, requires_grad=True)
>>> target = torch.rand(3, 2, requires_grad=False)
>>> loss = criterion(input, target)
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
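For reference, the general loss from Barron (2019) has shape parameter alpha and scale c; alpha = 2 recovers a scaled L2 loss and alpha = 0 the Cauchy/log loss. A sketch of the per-element formula (clipping and reduction omitted; not necessarily this module's exact code):

```python
import torch

def barron_loss(x: torch.Tensor, alpha: float = 2.0, scale: float = 1.0) -> torch.Tensor:
    # rho(x, alpha, c) = |alpha-2|/alpha * (((x/c)^2 / |alpha-2| + 1)^(alpha/2) - 1)
    sq = (x / scale) ** 2
    if alpha == 2.0:             # special case: quadratic bowl
        return 0.5 * sq
    if alpha == 0.0:             # special case: Cauchy / log loss
        return torch.log1p(0.5 * sq)
    b = abs(alpha - 2.0)
    return b / alpha * ((sq / b + 1.0) ** (alpha / 2.0) - 1.0)
```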
sonnix.modules.GeometricMeanIndicator ¶
Bases: BaseRelativeIndicator
Implement the geometric mean indicator function.
Example
>>> import torch
>>> from sonnix.modules.loss import GeometricMeanIndicator
>>> prediction = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> indicator = GeometricMeanIndicator()
>>> indicator
GeometricMeanIndicator()
>>> values = indicator(
... prediction=torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True),
... target=torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]]),
... )
>>> values
tensor([[0.0000, 1.0000, 0.0000],
[3.0000, 2.2361, 1.0000]], grad_fn=<SqrtBackward0>)
sonnix.modules.Laplacian ¶
Bases: BaseAlphaActivation
Implement the Laplacian activation layer.
Formula: exp(-|x| / alpha)
This activation layer was proposed in the following paper:
Beyond Periodicity: Towards a Unifying Framework for Activations in Coordinate-MLPs.
Ramasinghe S., Lucey S.
ECCV 2022. (http://arxiv.org/pdf/2111.15135)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `num_parameters` | `int` | The number of learnable parameters. Although it takes an integer as input, only two values are legitimate: `1`, or the number of input channels. | `1` |
| `init` | `float` | The initial value of the learnable parameter(s). | `1.0` |
| `learnable` | `bool` | If `True`, the parameter `alpha` is learnable. | `True` |
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import Laplacian
>>> m = Laplacian()
>>> m
Laplacian(num_parameters=1, learnable=True)
>>> out = m(torch.arange(8, dtype=torch.float).view(2, 4))
>>> out
tensor([[1.0000e+00, 3.6788e-01, 1.3534e-01, 4.9787e-02],
[1.8316e-02, 6.7379e-03, 2.4788e-03, 9.1188e-04]], grad_fn=<ExpBackward0>)
sonnix.modules.Log ¶
Bases: Module
Implement a torch.nn.Module to compute the natural logarithm
of the input.
This module is equivalent to log(input)
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import Log
>>> m = Log()
>>> m
Log()
>>> out = m(torch.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]))
>>> out
tensor([[0.0000, 0.6931, 1.0986],
[1.3863, 1.6094, 1.7918]])
sonnix.modules.Log1p ¶
Bases: Module
Implement a torch.nn.Module to compute the natural logarithm
of (1 + input).
This module is equivalent to log(1 + input)
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import Log1p
>>> m = Log1p()
>>> m
Log1p()
>>> out = m(torch.tensor([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]]))
>>> out
tensor([[0.0000, 0.6931, 1.0986],
[1.3863, 1.6094, 1.7918]])
sonnix.modules.MaximumMeanIndicator ¶
Bases: BaseRelativeIndicator
Implement the maximum mean change indicator function.
Example
>>> import torch
>>> from sonnix.modules.loss import MaximumMeanIndicator
>>> prediction = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> indicator = MaximumMeanIndicator()
>>> indicator
MaximumMeanIndicator()
>>> values = indicator(
... prediction=torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True),
... target=torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]]),
... )
>>> values
tensor([[2., 1., 1.],
[3., 5., 1.]], grad_fn=<MaximumBackward0>)
sonnix.modules.MinimumMeanIndicator ¶
Bases: BaseRelativeIndicator
Implement the minimum mean change indicator function.
Example
>>> import torch
>>> from sonnix.modules.loss import MinimumMeanIndicator
>>> prediction = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> indicator = MinimumMeanIndicator()
>>> indicator
MinimumMeanIndicator()
>>> values = indicator(
... prediction=torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True),
... target=torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]]),
... )
>>> values
tensor([[0., 1., 0.],
[3., 1., 1.]], grad_fn=<MinimumBackward0>)
sonnix.modules.MomentMeanIndicator ¶
Bases: BaseRelativeIndicator
Implement the moment mean change of order k indicator function.
Example
>>> import torch
>>> from sonnix.modules.loss import MomentMeanIndicator
>>> prediction = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> indicator = MomentMeanIndicator()
>>> indicator
MomentMeanIndicator(k=1)
>>> values = indicator(
... prediction=torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True),
... target=torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]]),
... )
>>> values
tensor([[1.0000, 1.0000, 0.5000],
[3.0000, 3.0000, 1.0000]], grad_fn=<PowBackward0>)
sonnix.modules.MultiQuadratic ¶
Bases: BaseAlphaActivation
Implement the Multi Quadratic activation layer.
Formula: 1 / sqrt(1 + (alpha * x)^2)
This activation layer was proposed in the following paper:
Beyond Periodicity: Towards a Unifying Framework for Activations in Coordinate-MLPs.
Ramasinghe S., Lucey S.
ECCV 2022. (http://arxiv.org/pdf/2111.15135)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `num_parameters` | `int` | The number of learnable parameters. Although it takes an integer as input, only two values are legitimate: `1`, or the number of input channels. | `1` |
| `init` | `float` | The initial value of the learnable parameter(s). | `1.0` |
| `learnable` | `bool` | If `True`, the parameter `alpha` is learnable. | `True` |
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import MultiQuadratic
>>> m = MultiQuadratic()
>>> m
MultiQuadratic(num_parameters=1, learnable=True)
>>> out = m(torch.arange(8, dtype=torch.float).view(2, 4))
>>> out
tensor([[1.0000, 0.7071, 0.4472, 0.3162],
[0.2425, 0.1961, 0.1644, 0.1414]], grad_fn=<MulBackward0>)
sonnix.modules.MulticlassFlatten ¶
Bases: Module
Implement a wrapper to flatten the multiclass inputs of a
torch.nn.Module.
The input prediction tensor shape is (d1, d2, ..., dn, C)
and is reshaped to (d1 * d2 * ... * dn, C).
The input target tensor shape is (d1, d2, ..., dn)
and is reshaped to (d1 * d2 * ... * dn,).
Example
>>> import torch
>>> from sonnix.modules import MulticlassFlatten
>>> m = MulticlassFlatten(torch.nn.CrossEntropyLoss())
>>> m
MulticlassFlatten(
(module): CrossEntropyLoss()
)
>>> out = m(torch.ones(6, 2, 4, requires_grad=True), torch.zeros(6, 2, dtype=torch.long))
>>> out
tensor(1.3863, grad_fn=<NllLossBackward0>)
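The wrapper's effect can be reproduced with plain reshapes; the doctest value log(4) ≈ 1.3863 follows from uniform logits over 4 classes:

```python
import torch

prediction = torch.ones(6, 2, 4)                   # (d1, d2, C)
target = torch.zeros(6, 2, dtype=torch.long)       # (d1, d2)
loss = torch.nn.CrossEntropyLoss()(
    prediction.reshape(-1, prediction.shape[-1]),  # (d1 * d2, C)
    target.reshape(-1),                            # (d1 * d2,)
)
print(loss)  # tensor(1.3863)
```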
sonnix.modules.MultiplicationFusion ¶
Bases: Module
Implement a fusion layer that multiplies the inputs.
Example
>>> import torch
>>> from sonnix.modules import MultiplicationFusion
>>> module = MultiplicationFusion()
>>> module
MultiplicationFusion()
>>> x1 = torch.tensor([[2.0, 3.0, 4.0], [5.0, 6.0, 7.0]], requires_grad=True)
>>> x2 = torch.tensor([[12.0, 13.0, 14.0], [15.0, 16.0, 17.0]], requires_grad=True)
>>> out = module(x1, x2)
>>> out
tensor([[ 24., 39., 56.],
[ 75., 96., 119.]], grad_fn=<MulBackward0>)
>>> out.mean().backward()
sonnix.modules.NLinear ¶
Bases: Module
Implement N separate linear layers.
Technically, NLinear(n, in, out) is just a stack of n independent
torch.nn.Linear(in, out) layers applied in parallel.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `n` | `int` | The number of separate linear layers. | required |
| `in_features` | `int` | The size of each input sample. | required |
| `out_features` | `int` | The size of each output sample. | required |
| `bias` | `bool` | If set to `False`, the layers will not learn an additive bias. | `True` |
Shape
- Input: (*, n, in_features), where * means any number of dimensions.
- Output: (*, n, out_features), where * has the same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import NLinear
>>> # Example with 1 feature
>>> m = NLinear(n=3, in_features=4, out_features=6)
>>> m
NLinear(n=3, in_features=4, out_features=6, bias=True)
>>> out = m(torch.randn(2, 3, 4))
>>> out.shape
torch.Size([2, 3, 6])
>>> out = m(torch.randn(2, 5, 3, 4))
>>> out.shape
torch.Size([2, 5, 3, 6])
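One way to realize n parallel linear layers with a single batched contraction (a sketch with hypothetical parameter names, not the module's internals):

```python
import torch

n, in_features, out_features = 3, 4, 6
weight = torch.randn(n, in_features, out_features)  # one weight matrix per layer
bias = torch.zeros(n, out_features)

x = torch.randn(2, n, in_features)                  # (*, n, in_features)
out = torch.einsum("...ni,nio->...no", x, weight) + bias
print(out.shape)  # torch.Size([2, 3, 6])
```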
sonnix.modules.PiecewiseLinearNumericalEncoder ¶
Bases: Module
Implement a numerical encoder using piecewise linear functions.
This layer was proposed in the following paper:
On Embeddings for Numerical Features in Tabular Deep Learning.
Gorishniy Y., Rubachev I., Babenko A.
NeurIPS 2022. (https://arxiv.org/pdf/2203.05556)
https://github.com/yandex-research/rtdl-num-embeddings
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `bins` | `Tensor` | The bins used to compute the piecewise linear representations. This input should be a tensor of shape `(n_features, n_bins)`. | required |
Shape
- Input: (*, n_features), where * means any number of dimensions.
- Output: (*, n_features, n_bins - 1), where * has the same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import PiecewiseLinearNumericalEncoder
>>> # Example with 1 feature
>>> m = PiecewiseLinearNumericalEncoder(bins=torch.tensor([[1.0, 2.0, 4.0, 8.0]]))
>>> m
PiecewiseLinearNumericalEncoder(n_features=1, feature_size=3)
>>> out = m(torch.tensor([[0.0], [1.0], [2.0], [3.0]]))
>>> out
tensor([[[-1.0000, 0.0000, 0.0000]],
[[ 0.0000, 0.0000, 0.0000]],
[[ 1.0000, 0.0000, 0.0000]],
[[ 1.0000, 0.5000, 0.0000]]])
>>> # Example with 2 features
>>> m = PiecewiseLinearNumericalEncoder(
... bins=torch.tensor([[1.0, 2.0, 4.0, 8.0], [0.0, 2.0, 4.0, 6.0]])
... )
>>> m
PiecewiseLinearNumericalEncoder(n_features=2, feature_size=3)
>>> out = m(torch.tensor([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]))
>>> out
tensor([[[-1.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000]],
[[ 0.0000, 0.0000, 0.0000],
[ 0.5000, 0.0000, 0.0000]],
[[ 1.0000, 0.0000, 0.0000],
[ 1.0000, 0.0000, 0.0000]],
[[ 1.0000, 0.5000, 0.0000],
[ 1.0000, 0.5000, 0.0000]]])
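The doctest outputs are consistent with the piecewise linear encoding (PLE) of the cited paper: each component is the position of x within a bin, clipped to [0, 1], with the first and last components left open-ended. A sketch inferred from those outputs:

```python
import torch

def piecewise_linear_encode(x: torch.Tensor, bins: torch.Tensor) -> torch.Tensor:
    # x: (*, n_features), bins: (n_features, n_bins) -> (*, n_features, n_bins - 1)
    left, right = bins[..., :-1], bins[..., 1:]
    ratio = (x.unsqueeze(-1) - left) / (right - left)
    out = ratio.clamp(0.0, 1.0)
    out[..., 0] = ratio[..., 0].clamp(max=1.0)    # no lower clip on the first bin
    out[..., -1] = ratio[..., -1].clamp(min=0.0)  # no upper clip on the last bin
    return out
```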
sonnix.modules.PiecewiseLinearNumericalEncoder.input_size property ¶
input_size: int
Return the input feature size i.e. the number of scalar values.
sonnix.modules.PiecewiseLinearNumericalEncoder.output_size property ¶
output_size: int
Return the output feature size i.e. the number of bins minus one.
sonnix.modules.PoissonRegressionLoss ¶
Bases: Module
Implement a loss module that computes the Poisson regression loss.
Loss Functions and Metrics in Deep Learning (https://arxiv.org/pdf/2307.02694)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `reduction` | `str` | The reduction strategy. The valid values are `'mean'`, `'none'`, and `'sum'`. | `'mean'` |
| `eps` | `float` | An arbitrary small strictly positive number to avoid undefined results when the count is zero. | `1e-08` |
Example
>>> import torch
>>> from sonnix.modules import PoissonRegressionLoss
>>> criterion = PoissonRegressionLoss()
>>> criterion
PoissonRegressionLoss(reduction=mean, eps=1e-08)
>>> loss = criterion(torch.randn(2, 4, requires_grad=True), torch.randn(2, 4))
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
sonnix.modules.Pow ¶
Bases: Module
Implement a torch.nn.Module to raise each element of the input
to the given exponent.
This module is equivalent to pow(input, exponent)
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import Pow
>>> m = Pow(exponent=2)
>>> m
Pow(exponent=2.0)
>>> out = m(torch.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]))
>>> out
tensor([[ 1., 4., 9.],
[16., 25., 36.]])
sonnix.modules.Quadratic ¶
Bases: BaseAlphaActivation
Implement the Quadratic activation layer.
Formula: 1 / (1 + (alpha * x)^2)
This activation layer was proposed in the following paper:
Beyond Periodicity: Towards a Unifying Framework for Activations in Coordinate-MLPs.
Ramasinghe S., Lucey S.
ECCV 2022. (http://arxiv.org/pdf/2111.15135)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `num_parameters` | `int` | The number of learnable parameters. Although it takes an integer as input, only two values are legitimate: `1`, or the number of input channels. | `1` |
| `init` | `float` | The initial value of the learnable parameter(s). | `1.0` |
| `learnable` | `bool` | If `True`, the parameter `alpha` is learnable. | `True` |
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import Quadratic
>>> m = Quadratic()
>>> m
Quadratic(num_parameters=1, learnable=True)
>>> out = m(torch.arange(8, dtype=torch.float).view(2, 4))
>>> out
tensor([[1.0000, 0.5000, 0.2000, 0.1000],
[0.0588, 0.0385, 0.0270, 0.0200]], grad_fn=<MulBackward0>)
sonnix.modules.QuantileRegressionLoss ¶
Bases: Module
Implement a loss module that computes the quantile regression loss.
Loss Functions and Metrics in Deep Learning (https://arxiv.org/pdf/2307.02694)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `reduction` | `str` | The reduction strategy. The valid values are `'mean'`, `'none'`, and `'sum'`. | `'mean'` |
| `q` | `float` | The quantile value. | `0.5` |
Example
>>> import torch
>>> from sonnix.modules import QuantileRegressionLoss
>>> criterion = QuantileRegressionLoss()
>>> criterion
QuantileRegressionLoss(reduction=mean, q=0.5)
>>> loss = criterion(torch.randn(2, 4, requires_grad=True), torch.randn(2, 4))
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
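The quantile (pinball) loss penalizes under- and over-prediction asymmetrically. A sketch of the standard per-element formula (the module's reduction handling may differ):

```python
import torch

def quantile_loss(prediction: torch.Tensor, target: torch.Tensor,
                  q: float = 0.5) -> torch.Tensor:
    # q * max(e, 0) + (1 - q) * max(-e, 0), with e = target - prediction.
    e = target - prediction
    return torch.maximum(q * e, (q - 1.0) * e).mean()
```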
sonnix.modules.ReLUn ¶
Bases: Module
Implement the ReLU-n module.
The ReLU-n equation is: ReLUn(x, n) = min(max(0, x), n)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `max` | `float` | The maximum value, a.k.a. `n`. | `1.0` |
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import ReLUn
>>> m = ReLUn(max=5)
>>> m
ReLUn(max=5.0)
>>> out = m(torch.arange(8, dtype=torch.float).view(2, 4))
>>> out
tensor([[0., 1., 2., 3.],
[4., 5., 5., 5.]])
sonnix.modules.RectifierAsinhUnit ¶
Bases: Module
Implement a torch.nn.Module to compute the inverse hyperbolic
sine (arcsinh) of the positive elements, and zero for the negative
elements.
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import RectifierAsinhUnit
>>> m = RectifierAsinhUnit()
>>> m
RectifierAsinhUnit()
>>> out = m(torch.tensor([[-1.0, 0.0, 1.0], [-2.0, 2.0, 4.0]]))
>>> out
tensor([[0.0000, 0.0000, 0.8814],
[0.0000, 1.4436, 2.0947]])
sonnix.modules.RelativeLoss ¶
Bases: Module
Implement a "generic" relative loss that takes as input a criterion.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `criterion` | `Module \| dict[Any, Any]` | The criterion or its configuration. This criterion should not reduce its output (i.e. `reduction='none'`) so that it is compatible with the shapes of the predictions and targets. | required |
| `indicator` | `BaseRelativeIndicator \| dict[Any, Any] \| None` | The name of the indicator function to use or its implementation. If `None`, a default indicator is used. | `None` |
| `reduction` | `str` | The reduction strategy. The valid values are `'mean'`, `'none'`, and `'sum'`. | `'mean'` |
| `eps` | `float` | An arbitrary small strictly positive number to avoid undefined results when the indicator is zero. | `1e-08` |
Example
>>> import torch
>>> from sonnix.modules import RelativeLoss
>>> from sonnix.modules.loss import ClassicalRelativeIndicator
>>> criterion = RelativeLoss(
... criterion=torch.nn.MSELoss(reduction="none"),
... indicator=ClassicalRelativeIndicator(),
... )
>>> criterion
RelativeLoss(
eps=1e-08, reduction=mean
(criterion): MSELoss()
(indicator): ClassicalRelativeIndicator()
)
>>> prediction = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> loss = criterion(prediction, target)
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
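A sketch of the assumed composition: the unreduced criterion values are divided elementwise by the indicator (clamped at eps), then reduced. This mirrors the configuration in the example above but is an inference, not the library source:

```python
import torch

def relative_loss(prediction: torch.Tensor, target: torch.Tensor,
                  eps: float = 1e-8) -> torch.Tensor:
    # Unreduced MSE divided by the classical relative indicator |target|.
    mse = torch.nn.functional.mse_loss(prediction, target, reduction="none")
    return (mse / torch.abs(target).clamp(min=eps)).mean()
```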
sonnix.modules.RelativeMSELoss ¶
Bases: RelativeLoss
Implement the relative MSE loss.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `indicator` | `BaseRelativeIndicator \| dict[Any, Any] \| None` | The name of the indicator function to use or its implementation. If `None`, a default indicator is used. | `None` |
| `reduction` | `str` | The reduction strategy. The valid values are `'mean'`, `'none'`, and `'sum'`. | `'mean'` |
| `eps` | `float` | An arbitrary small strictly positive number to avoid undefined results when the indicator is zero. | `1e-08` |
Example
>>> import torch
>>> from sonnix.modules import RelativeMSELoss
>>> from sonnix.modules.loss import ClassicalRelativeIndicator
>>> criterion = RelativeMSELoss(indicator=ClassicalRelativeIndicator())
>>> criterion
RelativeMSELoss(
eps=1e-08, reduction=mean
(criterion): MSELoss()
(indicator): ClassicalRelativeIndicator()
)
>>> prediction = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> loss = criterion(prediction, target)
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
sonnix.modules.RelativeSmoothL1Loss ¶
Bases: RelativeLoss
Implement the relative smooth L1 loss.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `indicator` | `BaseRelativeIndicator \| dict[Any, Any] \| None` | The name of the indicator function to use or its implementation. If `None`, a default indicator is used. | `None` |
| `reduction` | `str` | The reduction strategy. The valid values are `'mean'`, `'none'`, and `'sum'`. | `'mean'` |
| `eps` | `float` | An arbitrary small strictly positive number to avoid undefined results when the indicator is zero. | `1e-08` |
Example
>>> import torch
>>> from sonnix.modules import RelativeSmoothL1Loss
>>> from sonnix.modules.loss import ClassicalRelativeIndicator
>>> criterion = RelativeSmoothL1Loss(indicator=ClassicalRelativeIndicator())
>>> criterion
RelativeSmoothL1Loss(
eps=1e-08, reduction=mean
(criterion): SmoothL1Loss()
(indicator): ClassicalRelativeIndicator()
)
>>> prediction = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> loss = criterion(prediction, target)
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
sonnix.modules.ResidualBlock ¶
Bases: Module
Implementation of a residual block.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `residual` | `Module \| dict[Any, Any]` | The residual mapping module or its configuration (dictionary). | required |
| `skip` | `Module \| dict[Any, Any] \| None` | The skip mapping module or its configuration (dictionary). If `None`, the identity mapping is used. | `None` |
Example
>>> import torch
>>> from torch import nn
>>> from sonnix.modules import ResidualBlock
>>> m = ResidualBlock(residual=nn.Sequential(nn.Linear(4, 6), nn.ReLU(), nn.Linear(6, 4)))
>>> m
ResidualBlock(
(residual): Sequential(
(0): Linear(in_features=4, out_features=6, bias=True)
(1): ReLU()
(2): Linear(in_features=6, out_features=4, bias=True)
)
(skip): Identity()
)
>>> out = m(torch.rand(6, 4))
>>> out
tensor([[...]], grad_fn=<AddBackward0>)
sonnix.modules.ReversedRelativeIndicator ¶
Bases: BaseRelativeIndicator
Implement the reversed relative indicator function.
Example
>>> import torch
>>> from sonnix.modules.loss import ReversedRelativeIndicator
>>> prediction = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> indicator = ReversedRelativeIndicator()
>>> indicator
ReversedRelativeIndicator()
>>> values = indicator(
... prediction=torch.tensor([[0.0, 1.0, -1.0], [3.0, 1.0, -1.0]], requires_grad=True),
... target=torch.tensor([[-2.0, 1.0, 0.0], [-3.0, 5.0, -1.0]]),
... )
>>> values
tensor([[0., 1., 1.],
[3., 1., 1.]], grad_fn=<AbsBackward0>)
sonnix.modules.SafeExp ¶
Bases: Module
Implement a torch.nn.Module to compute the exponential of the
elements.
The values that are higher than the specified maximum value are set to this maximum value before the exponential is computed. Using a moderate maximum value keeps the output tensor free of Inf.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `max` | `float` | The maximum input value before computing the exponential. | `20.0` |
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import SafeExp
>>> m = SafeExp()
>>> m
SafeExp(max=20.0)
>>> out = m(torch.tensor([[0.01, 0.1, 1.0], [10.0, 100.0, 1000.0]]))
>>> out
tensor([[1.0101e+00, 1.1052e+00, 2.7183e+00],
[2.2026e+04, 4.8517e+08, 4.8517e+08]])
sonnix.modules.SafeLog ¶
Bases: Module
Implement a torch.nn.Module to compute the natural logarithm
of the elements.
The values that are lower than the specified minimum value are set to this minimum value. Using a small positive value leads to an output tensor without NaN or Inf.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `min` | `float` | The minimum input value before computing the natural logarithm. | `1e-08` |
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import SafeLog
>>> m = SafeLog()
>>> m
SafeLog(min=1e-08)
>>> out = m(torch.tensor([[1e-4, 1e-5, 1e-6], [1e-8, 1e-9, 1e-10]]))
>>> out
tensor([[ -9.2103, -11.5129, -13.8155],
[-18.4207, -18.4207, -18.4207]])
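The doctest above is consistent with clamping the input from below before the log, since log(1e-8) ≈ -18.4207. A minimal sketch, assuming that composition:

```python
import torch

x = torch.tensor([[1e-4, 1e-5, 1e-6], [1e-8, 1e-9, 1e-10]])
# Clamp from below, then log: values below the floor all map to log(1e-8).
out = torch.log(x.clamp(min=1e-8))
```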
sonnix.modules.ScaleAndShift ¶
Bases: Module
Applies a scale and shift transformation over a mini-batch of inputs.
This layer implements the following operation:
y = gamma * x + beta
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `normalized_shape` | `int \| list[int] \| tuple[int, ...]` | The input shape to normalize. If a single integer is used, it is treated as a singleton list, and this module will normalize over the last dimension, which is expected to be of that specific size. | required |
Shape
- Input: (N, *), where * means any number of dimensions.
- Output: (N, *), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import ScaleAndShift
>>> m = ScaleAndShift(normalized_shape=5)
>>> m
ScaleAndShift(normalized_shape=(5,))
>>> out = m(torch.tensor([[-2, -1, 0, 1, 2], [3, 2, 1, 2, 3]]))
>>> out
tensor([[-2., -1., 0., 1., 2.],
[ 3., 2., 1., 2., 3.]], grad_fn=<AddBackward0>)
sonnix.modules.Sin ¶
Bases: Module
Implement the sine activation layer.
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import Sin
>>> m = Sin()
>>> m
Sin()
>>> out = m(torch.arange(8, dtype=torch.float).view(2, 4))
>>> out
tensor([[ 0.0000, 0.8415, 0.9093, 0.1411],
[-0.7568, -0.9589, -0.2794, 0.6570]])
sonnix.modules.Sinh ¶
Bases: Module
Implement a torch.nn.Module to compute the hyperbolic sine
(sinh) of the elements.
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import Sinh
>>> m = Sinh()
>>> m
Sinh()
>>> out = m(torch.tensor([[-1.0, 0.0, 1.0], [-2.0, 2.0, 4.0]]))
>>> out
tensor([[-1.1752, 0.0000, 1.1752],
[-3.6269, 3.6269, 27.2899]])
sonnix.modules.Snake ¶
Bases: Module
Implement the Snake activation layer.
Snake was proposed in the following paper:
Neural Networks Fail to Learn Periodic Functions and How to Fix It.
Ziyin L., Hartwig T., Ueda M.
NeurIPS, 2020. (http://arxiv.org/pdf/2006.08195)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
frequency
|
float
|
The frequency. |
1.0
|
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import Snake
>>> m = Snake()
>>> m
Snake(frequency=1.0)
>>> out = m(torch.arange(8, dtype=torch.float).view(2, 4))
>>> out
tensor([[0.0000, 1.7081, 2.8268, 3.0199],
[4.5728, 5.9195, 6.0781, 7.4316]])
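The doctest values match the Snake formula from the cited paper, x + sin^2(frequency * x) / frequency (e.g. 1 + sin(1)^2 = 1.7081):

```python
import torch

x = torch.arange(8, dtype=torch.float).view(2, 4)
frequency = 1.0
out = x + torch.sin(frequency * x) ** 2 / frequency  # Snake activation
```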
sonnix.modules.Square ¶
Bases: Module
Implement the square activation.
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import Square
>>> m = Square()
>>> m
Square()
>>> out = m(torch.arange(8, dtype=torch.float).view(2, 4))
>>> out
tensor([[ 0., 1., 4., 9.],
[16., 25., 36., 49.]])
sonnix.modules.SquaredReLU ¶
Bases: Module
Implement the Squared ReLU.
Squared ReLU is defined in the following paper:
Primer: Searching for Efficient Transformers for Language Modeling.
So DR., Mańke W., Liu H., Dai Z., Shazeer N., Le QV.
NeurIPS, 2021. (https://arxiv.org/pdf/2109.08668.pdf)
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import SquaredReLU
>>> m = SquaredReLU()
>>> m
SquaredReLU()
>>> out = m(torch.arange(8, dtype=torch.float).view(2, 4))
>>> out
tensor([[ 0., 1., 4., 9.],
[16., 25., 36., 49.]])
sonnix.modules.Squeeze ¶
Bases: Module
Implement a torch.nn.Module to squeeze the input tensor.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `dim` | `int \| None` | The dimension to squeeze in the input tensor. If `None`, all dimensions of size 1 are squeezed. | `None` |
Example
>>> import torch
>>> from sonnix.modules import Squeeze
>>> m = Squeeze()
>>> m
Squeeze(dim=None)
>>> out = m(torch.ones(2, 1, 3, 1))
>>> out.shape
torch.Size([2, 3])
sonnix.modules.SumFusion ¶
Bases: Module
Implement a layer to sum the inputs.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `normalized` | `bool` | If `True`, the output is divided by the number of inputs. | `False` |
Example
>>> import torch
>>> from sonnix.modules import SumFusion
>>> module = SumFusion()
>>> module
SumFusion(normalized=False)
>>> x1 = torch.tensor([[2.0, 3.0, 4.0], [5.0, 6.0, 7.0]], requires_grad=True)
>>> x2 = torch.tensor([[12.0, 13.0, 14.0], [15.0, 16.0, 17.0]], requires_grad=True)
>>> out = module(x1, x2)
>>> out
tensor([[14., 16., 18.],
[20., 22., 24.]], grad_fn=<AddBackward0>)
>>> out.mean().backward()
sonnix.modules.ToBinaryLabel ¶
Bases: Module
Implement a torch.nn.Module to compute binary labels from
scores by thresholding.
The output label is 1 if the value is greater than the
threshold, and 0 otherwise.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `threshold` | `float` | The threshold value used to compute the binary labels. | `0.0` |
Example
>>> import torch
>>> from sonnix.modules import ToBinaryLabel
>>> transform = ToBinaryLabel()
>>> transform
ToBinaryLabel(threshold=0.0)
>>> out = transform(torch.tensor([-1.0, 1.0, -2.0, 1.0]))
>>> out
tensor([0, 1, 0, 1])
sonnix.modules.ToBinaryLabel.threshold property ¶
threshold: float
The threshold used to compute the binary label.
sonnix.modules.ToBinaryLabel.forward ¶
forward(scores: Tensor) -> Tensor
Compute binary labels from scores.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `scores` | `Tensor` | The scores used to compute the binary labels. This input must be a float tensor. | required |
Returns:
| Type | Description |
|---|---|
| `Tensor` | The computed binary labels, where the values are `0` or `1`. |
Example
>>> import torch
>>> from sonnix.modules import ToBinaryLabel
>>> transform = ToBinaryLabel()
>>> out = transform(torch.tensor([-1.0, 1.0, -2.0, 1.0]))
>>> out
tensor([0, 1, 0, 1])
sonnix.modules.ToCategoricalLabel ¶
Bases: Module
Implement a torch.nn.Module to compute categorical labels
from scores.
Example
>>> import torch
>>> from sonnix.modules import ToCategoricalLabel
>>> transform = ToCategoricalLabel()
>>> transform
ToCategoricalLabel()
>>> out = transform(torch.tensor([[1.0, 2.0, 3.0, 4.0], [5.0, 3.0, 2.0, 2.0]]))
>>> out
tensor([3, 0])
sonnix.modules.ToCategoricalLabel.forward ¶
forward(scores: Tensor) -> Tensor
Compute categorical labels from scores.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `scores` | `Tensor` | The scores used to compute the categorical labels. This input must be a tensor of shape `(*, num_classes)`. | required |
Returns:
| Type | Description |
|---|---|
| `Tensor` | The computed categorical labels, where the values are in `{0, 1, ..., num_classes - 1}`. |
Example
>>> import torch
>>> from sonnix.modules import ToCategoricalLabel
>>> transform = ToCategoricalLabel()
>>> out = transform(torch.tensor([[1.0, 2.0, 3.0, 4.0], [5.0, 3.0, 2.0, 2.0]]))
>>> out
tensor([3, 0])
sonnix.modules.ToFloat ¶
Bases: Module
Implement a torch.nn.Module to convert a tensor to a float
tensor.
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import ToFloat
>>> m = ToFloat()
>>> m
ToFloat()
>>> out = m(torch.tensor([[2, -1, 0], [1, 2, 3]]))
>>> out
tensor([[ 2., -1., 0.],
[ 1., 2., 3.]])
sonnix.modules.ToLong ¶
Bases: Module
Implement a torch.nn.Module to convert a tensor to a long
tensor.
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example
>>> import torch
>>> from sonnix.modules import ToLong
>>> m = ToLong()
>>> m
ToLong()
>>> out = m(torch.tensor([[2.0, -1.0, 0.0], [1.0, 2.0, 3.0]]))
>>> out
tensor([[ 2, -1, 0],
[ 1, 2, 3]])
sonnix.modules.TransformedLoss ¶
Bases: Module
Implement a loss function where the predictions and targets are transformed before being fed to the loss function.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `criterion` | `Module \| dict[Any, Any]` | The criterion or its configuration. The loss has two inputs: predictions and targets. | required |
| `prediction` | `Module \| dict[Any, Any] \| None` | The transformation for the predictions or its configuration. If `None`, the identity transformation is used. | `None` |
| `target` | `Module \| dict[Any, Any] \| None` | The transformation for the targets or its configuration. If `None`, the identity transformation is used. | `None` |
Example
>>> import torch
>>> from sonnix.modules import TransformedLoss, Asinh
>>> criterion = TransformedLoss(
... criterion=torch.nn.SmoothL1Loss(),
... prediction=Asinh(),
... target=Asinh(),
... )
>>> loss = criterion(torch.randn(2, 4, requires_grad=True), torch.randn(2, 4))
>>> loss
tensor(..., grad_fn=<SmoothL1LossBackward0>)
>>> loss.backward()
sonnix.modules.View ¶
Bases: Module
Implement a torch.nn.Module to return a new tensor with the
same data as the input tensor but of a different shape.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `shape` | `tuple[int, ...] \| list[int]` | The desired shape. | required |
Example
>>> import torch
>>> from sonnix.modules import View
>>> m = View(shape=(-1, 2, 3))
>>> m
View(shape=(-1, 2, 3))
>>> out = m(torch.ones(4, 5, 2, 3))
>>> out.shape
torch.Size([20, 2, 3])