karbonn.modules ¶
Contains some modules.
karbonn.modules.Asinh ¶
Bases: Module
Implement a torch.nn.Module to compute the inverse hyperbolic sine (arcsinh) of the elements.
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import Asinh
>>> m = Asinh()
>>> m
Asinh()
>>> out = m(torch.tensor([[-1.0, 0.0, 1.0], [-2.0, 2.0, 4.0]]))
>>> out
tensor([[-0.8814, 0.0000, 0.8814],
[-1.4436, 1.4436, 2.0947]])
karbonn.modules.AsinhCosSinNumericalEncoder ¶
Bases: CosSinNumericalEncoder
Extension of CosSinNumericalEncoder with an additional feature built using the inverse hyperbolic sine (arcsinh).
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| frequency | Tensor | The initial frequency values. This input should be a tensor of shape | required |
| phase_shift | Tensor | The initial phase-shift values. This input should be a tensor of shape | required |
| learnable | bool | If | False |

Shape
- Input: (*, n_features), where * means any number of dimensions.
- Output: (*, n_features, feature_size + 1), where * has the same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import AsinhCosSinNumericalEncoder
>>> # Example with 1 feature
>>> m = AsinhCosSinNumericalEncoder(
... frequency=torch.tensor([[1.0, 2.0, 4.0]]),
... phase_shift=torch.zeros(1, 3),
... )
>>> m
AsinhCosSinNumericalEncoder(frequency=(1, 6), phase_shift=(1, 6), learnable=False)
>>> out = m(torch.tensor([[0.0], [1.0], [2.0], [3.0]]))
>>> out
tensor([[[ 0.0000, 0.0000, 0.0000, 1.0000, 1.0000, 1.0000, 0.0000]],
[[ 0.8415, 0.9093, -0.7568, 0.5403, -0.4161, -0.6536, 0.8814]],
[[ 0.9093, -0.7568, 0.9894, -0.4161, -0.6536, -0.1455, 1.4436]],
[[ 0.1411, -0.2794, -0.5366, -0.9900, 0.9602, 0.8439, 1.8184]]])
>>> # Example with 2 features
>>> m = AsinhCosSinNumericalEncoder(
... frequency=torch.tensor([[1.0, 2.0, 4.0], [2.0, 4.0, 6.0]]),
... phase_shift=torch.zeros(2, 3),
... )
>>> m
AsinhCosSinNumericalEncoder(frequency=(2, 6), phase_shift=(2, 6), learnable=False)
>>> out = m(torch.tensor([[0.0, 3.0], [1.0, 2.0], [2.0, 1.0], [3.0, 0.0]]))
>>> out
tensor([[[ 0.0000, 0.0000, 0.0000, 1.0000, 1.0000, 1.0000, 0.0000],
[-0.2794, -0.5366, -0.7510, 0.9602, 0.8439, 0.6603, 1.8184]],
[[ 0.8415, 0.9093, -0.7568, 0.5403, -0.4161, -0.6536, 0.8814],
[-0.7568, 0.9894, -0.5366, -0.6536, -0.1455, 0.8439, 1.4436]],
[[ 0.9093, -0.7568, 0.9894, -0.4161, -0.6536, -0.1455, 1.4436],
[ 0.9093, -0.7568, -0.2794, -0.4161, -0.6536, 0.9602, 0.8814]],
[[ 0.1411, -0.2794, -0.5366, -0.9900, 0.9602, 0.8439, 1.8184],
[ 0.0000, 0.0000, 0.0000, 1.0000, 1.0000, 1.0000, 0.0000]]])
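The outputs above suggest that the extra arcsinh feature is appended as the last output channel. A small check of that reading, reusing the two-feature module m and input from the example (the output layout is an assumption inferred from the example, not an official guarantee):
>>> x = torch.tensor([[0.0, 3.0], [1.0, 2.0], [2.0, 1.0], [3.0, 0.0]])
>>> torch.allclose(m(x)[..., -1], torch.asinh(x))
True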
karbonn.modules.AsinhCosSinNumericalEncoder.output_size property ¶
output_size: int
Return the output feature size.
karbonn.modules.AsinhMSELoss ¶
Bases: Module
Implement a loss module that computes the mean squared error (MSE) on the inverse hyperbolic sine (asinh) transformed predictions and targets.
It is a generalization of the mean squared logarithmic error (MSLE) that works for real values. The asinh transformation is used instead of log1p because asinh works on negative values.
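Conceptually, the loss is the MSE computed on asinh-transformed inputs. A minimal sketch of that definition (an illustration, not necessarily the exact implementation):
>>> import torch
>>> def asinh_mse(prediction, target):
...     # MSE on the asinh-transformed predictions and targets
...     return torch.nn.functional.mse_loss(torch.asinh(prediction), torch.asinh(target))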
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| reduction | str | The reduction strategy. The valid values are | 'mean' |
Example usage:
>>> import torch
>>> from karbonn.modules import AsinhMSELoss
>>> criterion = AsinhMSELoss()
>>> criterion
AsinhMSELoss(reduction=mean)
>>> loss = criterion(torch.randn(2, 4, requires_grad=True), torch.randn(2, 4))
>>> loss
tensor(..., grad_fn=<MseLossBackward0>)
>>> loss.backward()
karbonn.modules.AsinhNumericalEncoder ¶
Bases: Module
Implement a numerical encoder using the inverse hyperbolic sine (asinh).
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| scale | Tensor | The initial scale values. This input should be a tensor of shape | required |
| learnable | bool | If | False |

Shape
- Input: (*, n_features), where * means any number of dimensions.
- Output: (*, n_features, feature_size), where * has the same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import AsinhNumericalEncoder
>>> # Example with 1 feature
>>> m = AsinhNumericalEncoder(scale=torch.tensor([[1.0, 2.0, 4.0]]))
>>> m
AsinhNumericalEncoder(scale=(1, 3), learnable=False)
>>> out = m(torch.tensor([[0.0], [1.0], [2.0], [3.0]]))
>>> out
tensor([[[0.0000, 0.0000, 0.0000]],
[[0.8814, 1.4436, 2.0947]],
[[1.4436, 2.0947, 2.7765]],
[[1.8184, 2.4918, 3.1798]]])
>>> # Example with 2 features
>>> m = AsinhNumericalEncoder(scale=torch.tensor([[1.0, 2.0, 4.0], [1.0, 3.0, 6.0]]))
>>> m
AsinhNumericalEncoder(scale=(2, 3), learnable=False)
>>> out = m(torch.tensor([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]))
>>> out
tensor([[[0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000]],
[[0.8814, 1.4436, 2.0947], [0.8814, 1.8184, 2.4918]],
[[1.4436, 2.0947, 2.7765], [1.4436, 2.4918, 3.1798]],
[[1.8184, 2.4918, 3.1798], [1.8184, 2.8934, 3.5843]]])
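The outputs above match asinh applied to the scaled values. A small check of that reading, reusing the two-feature module m and input from the example (the layout out[..., i, j] = asinh(scale[i, j] * x[..., i]) is an assumption inferred from the example):
>>> x = torch.tensor([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
>>> scale = torch.tensor([[1.0, 2.0, 4.0], [1.0, 3.0, 6.0]])
>>> torch.allclose(m(x), torch.asinh(x.unsqueeze(-1) * scale))
True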
karbonn.modules.AsinhSmoothL1Loss ¶
Bases: Module
Implement a loss module that computes the smooth L1 loss on the inverse hyperbolic sine (asinh) transformed predictions and targets.
It is a generalization of the mean squared logarithmic error (MSLE) that works for real values. The asinh transformation is used instead of log1p because asinh works on negative values.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| reduction | str | The reduction strategy. The valid values are | 'mean' |
| beta | float | The threshold at which to change between L1 and L2 loss. The value must be non-negative. | 1.0 |
Example usage:
>>> import torch
>>> from karbonn.modules import AsinhSmoothL1Loss
>>> criterion = AsinhSmoothL1Loss()
>>> criterion
AsinhSmoothL1Loss(reduction=mean, beta=1.0)
>>> loss = criterion(torch.randn(2, 4, requires_grad=True), torch.randn(2, 4))
>>> loss
tensor(..., grad_fn=<SmoothL1LossBackward0>)
>>> loss.backward()
karbonn.modules.AverageFusion ¶
Bases: SumFusion
Implement a layer to average the inputs.
Example usage:
>>> import torch
>>> from karbonn.modules import AverageFusion
>>> module = AverageFusion()
>>> module
AverageFusion(normalized=True)
>>> x1 = torch.tensor([[2.0, 3.0, 4.0], [5.0, 6.0, 7.0]], requires_grad=True)
>>> x2 = torch.tensor([[12.0, 13.0, 14.0], [15.0, 16.0, 17.0]], requires_grad=True)
>>> out = module(x1, x2)
>>> out
tensor([[ 7., 8., 9.],
[10., 11., 12.]], grad_fn=<DivBackward0>)
>>> out.mean().backward()
karbonn.modules.BinaryFocalLoss ¶
Bases: Module
Implementation of the binary Focal Loss.
Based on "Focal Loss for Dense Object Detection" (https://arxiv.org/pdf/1708.02002.pdf)
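For reference, the focal term from the paper is FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t). A rough sketch of how the focal weighting combines with an unreduced binary cross entropy, assuming probabilities as input; this illustrates the formula rather than the module's exact implementation:
>>> import torch
>>> def binary_focal_term(prob, target, alpha=0.5, gamma=2.0):
...     # bce = -log(p_t) for probabilities in [0, 1]
...     bce = torch.nn.functional.binary_cross_entropy(prob, target, reduction="none")
...     p_t = prob * target + (1 - prob) * (1 - target)
...     alpha_t = alpha * target + (1 - alpha) * (1 - target)
...     # (1 - p_t)^gamma down-weights easy, well-classified examples
...     return alpha_t * (1 - p_t) ** gamma * bce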
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| loss | Module \| dict | The binary cross entropy layer or another equivalent layer. To be used as in the original paper, this loss should not use reduction as the reduction is done in this class. | required |
| alpha | float | The weighting factor, which must be in the range | 0.5 |
| gamma | float | The focusing parameter, which must be positive | 2.0 |
| reduction | str | The reduction to apply to the output: | 'mean' |

Shape
- Input: (*), where * means any number of dimensions.
- Target: (*), same shape as the input.
- Output: scalar. If reduction is 'none', then (*), same shape as the input.
Example usage:
>>> import torch
>>> from torch import nn
>>> from karbonn.modules import BinaryFocalLoss
>>> criterion = BinaryFocalLoss(nn.BCEWithLogitsLoss(reduction="none"))
>>> criterion
BinaryFocalLoss(
alpha=0.5, gamma=2.0, reduction=mean
(loss): BCEWithLogitsLoss()
)
>>> input = torch.randn(3, 2, requires_grad=True)
>>> target = torch.rand(3, 2)
>>> loss = criterion(input, target)
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
karbonn.modules.BinaryFocalLoss.forward ¶
forward(prediction: Tensor, target: Tensor) -> Tensor
Compute the binary Focal Loss.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| prediction | Tensor | The predicted probabilities or the un-normalized scores. | required |
| target | Tensor | The targets where | required |

Returns:

| Type | Description |
| --- | --- |
| Tensor | |
karbonn.modules.Clamp ¶
Bases: Module
Implement a module to clamp all elements in input into the range [min, max].
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| min | float \| None | The lower-bound of the range to be clamped to. | -1.0 |
| max | float \| None | The upper-bound of the range to be clamped to. | 1.0 |

Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import Clamp
>>> m = Clamp(min=-1, max=2)
>>> m
Clamp(min=-1, max=2)
>>> out = m(torch.tensor([[-2.0, -1.0, 0.0], [1.0, 2.0, 3.0]]))
>>> out
tensor([[-1., -1., 0.], [ 1., 2., 2.]])
karbonn.modules.ConcatFusion ¶
Bases: Module
Implement a module to concatenate inputs.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| dim | int | The fusion dimension. | -1 |
Example usage:
>>> import torch
>>> from karbonn.modules import ConcatFusion
>>> module = ConcatFusion()
>>> module
ConcatFusion(dim=-1)
>>> x1 = torch.tensor([[2.0, 3.0, 4.0], [5.0, 6.0, 7.0]], requires_grad=True)
>>> x2 = torch.tensor([[12.0, 13.0, 14.0], [15.0, 16.0, 17.0]], requires_grad=True)
>>> out = module(x1, x2)
>>> out
tensor([[ 2., 3., 4., 12., 13., 14.],
[ 5., 6., 7., 15., 16., 17.]], grad_fn=<CatBackward0>)
>>> out.mean().backward()
karbonn.modules.CosSinNumericalEncoder ¶
Bases: Module
Implement a frequency/phase-shift numerical encoder where the periodic functions are cosine and sine.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| frequency | Tensor | The initial frequency values. This input should be a tensor of shape | required |
| phase_shift | Tensor | The initial phase-shift values. This input should be a tensor of shape | required |
| learnable | bool | If | False |

Shape
- Input: (*, n_features), where * means any number of dimensions.
- Output: (*, n_features, feature_size), where * has the same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import CosSinNumericalEncoder
>>> # Example with 1 feature
>>> m = CosSinNumericalEncoder(
... frequency=torch.tensor([[1.0, 2.0, 4.0]]),
... phase_shift=torch.zeros(1, 3),
... )
>>> m
CosSinNumericalEncoder(frequency=(1, 6), phase_shift=(1, 6), learnable=False)
>>> out = m(torch.tensor([[0.0], [1.0], [2.0], [3.0]]))
>>> out
tensor([[[ 0.0000, 0.0000, 0.0000, 1.0000, 1.0000, 1.0000]],
[[ 0.8415, 0.9093, -0.7568, 0.5403, -0.4161, -0.6536]],
[[ 0.9093, -0.7568, 0.9894, -0.4161, -0.6536, -0.1455]],
[[ 0.1411, -0.2794, -0.5366, -0.9900, 0.9602, 0.8439]]])
>>> # Example with 2 features
>>> m = CosSinNumericalEncoder(
... frequency=torch.tensor([[1.0, 2.0, 4.0], [2.0, 4.0, 6.0]]),
... phase_shift=torch.zeros(2, 3),
... )
>>> m
CosSinNumericalEncoder(frequency=(2, 6), phase_shift=(2, 6), learnable=False)
>>> out = m(torch.tensor([[0.0, 3.0], [1.0, 2.0], [2.0, 1.0], [3.0, 0.0]]))
>>> out
tensor([[[ 0.0000, 0.0000, 0.0000, 1.0000, 1.0000, 1.0000],
[-0.2794, -0.5366, -0.7510, 0.9602, 0.8439, 0.6603]],
[[ 0.8415, 0.9093, -0.7568, 0.5403, -0.4161, -0.6536],
[-0.7568, 0.9894, -0.5366, -0.6536, -0.1455, 0.8439]],
[[ 0.9093, -0.7568, 0.9894, -0.4161, -0.6536, -0.1455],
[ 0.9093, -0.7568, -0.2794, -0.4161, -0.6536, 0.9602]],
[[ 0.1411, -0.2794, -0.5366, -0.9900, 0.9602, 0.8439],
[ 0.0000, 0.0000, 0.0000, 1.0000, 1.0000, 1.0000]]])
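The outputs above are consistent with sine features followed by cosine features along the last dimension. A small check of that reading, reusing the two-feature module m and input from the example (the internal parameter layout is an assumption inferred from the (2, 6) shapes in the repr):
>>> x = torch.tensor([[0.0, 3.0], [1.0, 2.0], [2.0, 1.0], [3.0, 0.0]])
>>> frequency = torch.tensor([[1.0, 2.0, 4.0], [2.0, 4.0, 6.0]])
>>> scaled = x.unsqueeze(-1) * frequency
>>> torch.allclose(m(x), torch.cat([torch.sin(scaled), torch.cos(scaled)], dim=-1))
True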
karbonn.modules.CosSinNumericalEncoder.input_size property ¶
input_size: int
Return the input feature size.
karbonn.modules.CosSinNumericalEncoder.output_size property ¶
output_size: int
Return the output feature size.
karbonn.modules.CosSinNumericalEncoder.create_linspace_frequency classmethod ¶
create_linspace_frequency(
num_frequencies: int,
min_frequency: float,
max_frequency: float,
learnable: bool = False,
) -> CosSinNumericalEncoder
Create a CosSinNumericalEncoder where the frequencies are evenly spaced in a frequency range.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| num_frequencies | int | The number of frequencies. | required |
| min_frequency | float | The minimum frequency. | required |
| max_frequency | float | The maximum frequency. | required |
| learnable | bool | If | False |

Returns:

| Type | Description |
| --- | --- |
| CosSinNumericalEncoder | An instantiated |
Example usage:
>>> import torch
>>> from karbonn.modules import CosSinNumericalEncoder
>>> m = CosSinNumericalEncoder.create_linspace_frequency(
... num_frequencies=5, min_frequency=0.1, max_frequency=1.0
... )
>>> m
CosSinNumericalEncoder(frequency=(1, 10), phase_shift=(1, 10), learnable=False)
karbonn.modules.CosSinNumericalEncoder.create_linspace_value_range classmethod ¶
create_linspace_value_range(
num_frequencies: int,
min_abs_value: float,
max_abs_value: float,
learnable: bool = False,
) -> CosSinNumericalEncoder
Create a CosSinNumericalEncoder where the frequencies are evenly spaced given a value range.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| num_frequencies | int | The number of frequencies. | required |
| min_abs_value | float | The minimum absolute value to encode. | required |
| max_abs_value | float | The maximum absolute value to encode. | required |
| learnable | bool | If | False |

Returns:

| Type | Description |
| --- | --- |
| CosSinNumericalEncoder | An instantiated |
Example usage:
>>> import torch
>>> from karbonn.modules import CosSinNumericalEncoder
>>> m = CosSinNumericalEncoder.create_linspace_value_range(
... num_frequencies=5, min_abs_value=0.1, max_abs_value=1.0
... )
>>> m
CosSinNumericalEncoder(frequency=(1, 10), phase_shift=(1, 10), learnable=False)
karbonn.modules.CosSinNumericalEncoder.create_logspace_frequency classmethod ¶
create_logspace_frequency(
num_frequencies: int,
min_frequency: float,
max_frequency: float,
learnable: bool = False,
) -> CosSinNumericalEncoder
Create a CosSinNumericalEncoder where the frequencies are evenly spaced in the log space in a frequency range.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| num_frequencies | int | The number of frequencies. | required |
| min_frequency | float | The minimum frequency. | required |
| max_frequency | float | The maximum frequency. | required |
| learnable | bool | If | False |

Returns:

| Type | Description |
| --- | --- |
| CosSinNumericalEncoder | An instantiated |
Example usage:
>>> import torch
>>> from karbonn.modules import CosSinNumericalEncoder
>>> m = CosSinNumericalEncoder.create_logspace_frequency(
... num_frequencies=5, min_frequency=0.1, max_frequency=1.0
... )
>>> m
CosSinNumericalEncoder(frequency=(1, 10), phase_shift=(1, 10), learnable=False)
karbonn.modules.CosSinNumericalEncoder.create_logspace_value_range classmethod ¶
create_logspace_value_range(
num_frequencies: int,
min_abs_value: float,
max_abs_value: float,
learnable: bool = False,
) -> CosSinNumericalEncoder
Create a CosSinNumericalEncoder where the frequencies are evenly spaced in the log space given a value range.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| num_frequencies | int | The number of frequencies. | required |
| min_abs_value | float | The minimum absolute value to encode. | required |
| max_abs_value | float | The maximum absolute value to encode. | required |
| learnable | bool | If | False |

Returns:

| Type | Description |
| --- | --- |
| CosSinNumericalEncoder | An instantiated |
Example usage:
>>> import torch
>>> from karbonn.modules import CosSinNumericalEncoder
>>> m = CosSinNumericalEncoder.create_logspace_value_range(
... num_frequencies=5, min_abs_value=0.1, max_abs_value=1.0
... )
>>> m
CosSinNumericalEncoder(frequency=(1, 10), phase_shift=(1, 10), learnable=False)
karbonn.modules.CosSinNumericalEncoder.create_rand_frequency classmethod ¶
create_rand_frequency(
num_frequencies: int,
min_frequency: float,
max_frequency: float,
learnable: bool = False,
) -> CosSinNumericalEncoder
Create a CosSinNumericalEncoder where the frequencies are uniformly initialized in a frequency range.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| num_frequencies | int | The number of frequencies. | required |
| min_frequency | float | The minimum frequency. | required |
| max_frequency | float | The maximum frequency. | required |
| learnable | bool | If | False |

Returns:

| Type | Description |
| --- | --- |
| CosSinNumericalEncoder | An instantiated |
Example usage:
>>> import torch
>>> from karbonn.modules import CosSinNumericalEncoder
>>> m = CosSinNumericalEncoder.create_rand_frequency(
... num_frequencies=5, min_frequency=0.1, max_frequency=1.0
... )
>>> m
CosSinNumericalEncoder(frequency=(1, 10), phase_shift=(1, 10), learnable=False)
karbonn.modules.CosSinNumericalEncoder.create_rand_value_range classmethod ¶
create_rand_value_range(
num_frequencies: int,
min_abs_value: float,
max_abs_value: float,
learnable: bool = False,
) -> CosSinNumericalEncoder
Create a CosSinNumericalEncoder where the frequencies are uniformly initialized for a given value range.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| num_frequencies | int | The number of frequencies. | required |
| min_abs_value | float | The minimum absolute value to encode. | required |
| max_abs_value | float | The maximum absolute value to encode. | required |
| learnable | bool | If | False |

Returns:

| Type | Description |
| --- | --- |
| CosSinNumericalEncoder | An instantiated |
Example usage:
>>> import torch
>>> from karbonn.modules import CosSinNumericalEncoder
>>> m = CosSinNumericalEncoder.create_rand_value_range(
... num_frequencies=5, min_abs_value=0.1, max_abs_value=1.0
... )
>>> m
CosSinNumericalEncoder(frequency=(1, 10), phase_shift=(1, 10), learnable=False)
karbonn.modules.ExU ¶
Bases: Module
Implementation of the exp-centered (ExU) layer.
This layer was proposed in the following paper:
Neural Additive Models: Interpretable Machine Learning with
Neural Nets.
Agarwal R., Melnick L., Frosst N., Zhang X., Lengerich B.,
Caruana R., Hinton G.
NeurIPS 2021. (https://arxiv.org/pdf/2004.13912.pdf)
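The paper defines the exp-centered unit as h(x) = f((x - b) exp(w)), i.e. the weights are exponentiated before the matrix product. A minimal sketch of that linear part (the weight shape and the module's internals are assumptions here; the paper additionally applies an activation such as ReLU-n on top):
>>> import torch
>>> def exu_linear(x, weight, bias):
...     # (x - bias) @ exp(weight), with weight of shape (in_features, out_features)
...     return (x - bias) @ torch.exp(weight)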
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| in_features | int | The size of each input sample. | required |
| out_features | int | The size of each output sample. | required |
| bias | bool | If set to | True |
| device | device \| None | The device where to initialize the layer's parameters. | None |
| dtype | dtype \| None | The data type of the layer's parameters. | None |

Shape
- Input: (*, in_features), where * means any number of dimensions, including none.
- Output: (*, out_features), where * is the same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import ExU
>>> m = ExU(4, 6)
>>> m
ExU(in_features=4, out_features=6, bias=True)
>>> out = m(torch.rand(6, 4))
>>> out
tensor([[...]], grad_fn=<MmBackward0>)
karbonn.modules.ExU.reset_parameters ¶
reset_parameters() -> None
Reset the parameters.
As indicated on page 4 of the paper, the weights are initialized using a normal distribution N(4.0; 0.5). The biases are initialized to 0.
karbonn.modules.Exp ¶
Bases: Module
Implement a torch.nn.Module to compute the exponential of the input.
This module is equivalent to exp(input).
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import Exp
>>> m = Exp()
>>> m
Exp()
>>> out = m(torch.tensor([[-1.0, 0.0, 1.0], [-2.0, 2.0, 3.0]]))
>>> out
tensor([[ 0.3679, 1.0000, 2.7183],
[ 0.1353, 7.3891, 20.0855]])
karbonn.modules.ExpSin ¶
Bases: BaseAlphaActivation
Implement the ExpSin activation layer.
Formula: exp(sin(alpha * x))
This activation layer was proposed in the following paper:
Beyond Periodicity: Towards a Unifying Framework for Activations in Coordinate-MLPs.
Ramasinghe S., Lucey S.
ECCV 2022. (http://arxiv.org/pdf/2111.15135)
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| num_parameters | int | The number of learnable parameters. Although it takes an integer as input, only two values are legitimate: | 1 |
| init | float | The initial value of the learnable parameter(s). | 1.0 |
| learnable | bool | If | True |

Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import ExpSin
>>> m = ExpSin()
>>> m
ExpSin(num_parameters=1, learnable=True)
>>> out = m(torch.arange(8, dtype=torch.float).view(2, 4))
>>> out
tensor([[1.0000, 2.3198, 2.4826, 1.1516],
[0.4692, 0.3833, 0.7562, 1.9290]], grad_fn=<ExpBackward0>)
karbonn.modules.Expm1 ¶
Bases: Module
Implement a torch.nn.Module to compute the exponential of the elements of the input, minus 1.
This module is equivalent to exp(input) - 1.
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import Expm1
>>> m = Expm1()
>>> m
Expm1()
>>> out = m(torch.tensor([[-1.0, 0.0, 1.0], [-2.0, 2.0, 4.0]]))
>>> out
tensor([[-0.6321, 0.0000, 1.7183],
[-0.8647, 6.3891, 53.5981]])
karbonn.modules.Gaussian ¶
Bases: BaseAlphaActivation
Implement the Gaussian activation layer.
Formula: exp(-0.5 * x^2 / alpha^2)
This activation layer was proposed in the following paper:
Beyond Periodicity: Towards a Unifying Framework for Activations in Coordinate-MLPs.
Ramasinghe S., Lucey S.
ECCV 2022. (http://arxiv.org/pdf/2111.15135)
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| num_parameters | int | The number of learnable parameters. Although it takes an integer as input, only two values are legitimate: | 1 |
| init | float | The initial value of the learnable parameter(s). | 1.0 |
| learnable | bool | If | True |

Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import Gaussian
>>> m = Gaussian()
>>> m
Gaussian(num_parameters=1, learnable=True)
>>> out = m(torch.arange(8, dtype=torch.float).view(2, 4))
>>> out
tensor([[1.0000e+00, 6.0653e-01, 1.3534e-01, 1.1109e-02],
[3.3546e-04, 3.7267e-06, 1.5230e-08, 2.2897e-11]], grad_fn=<ExpBackward0>)
karbonn.modules.GeneralRobustRegressionLoss ¶
Bases: Module
Implement the general robust regression loss a.k.a. Barron robust loss.
Based on the paper:
A General and Adaptive Robust Loss Function
Jonathan T. Barron
CVPR 2019 (https://arxiv.org/abs/1701.03077)
Note
The "adaptative" part of the loss is not implemented in this function.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| alpha | float | The shape parameter that controls the robustness of the loss. | 2.0 |
| scale | float | The scale parameter that controls the size of the loss's quadratic bowl near 0. | 1.0 |
| max | float \| None | The maximum value used to clip the loss before computing the reduction. | None |
| reduction | str | The reduction strategy. The valid values are | 'mean' |

Shape
- Input: (*), where * means any number of dimensions.
- Target: (*), same shape as the input.
- Output: scalar. If reduction is 'none', then (*), same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import GeneralRobustRegressionLoss
>>> criterion = GeneralRobustRegressionLoss()
>>> criterion
GeneralRobustRegressionLoss(alpha=2.0, scale=1.0, max=None, reduction=mean)
>>> input = torch.randn(3, 2, requires_grad=True)
>>> target = torch.rand(3, 2, requires_grad=False)
>>> loss = criterion(input, target)
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
karbonn.modules.Laplacian ¶
Bases: BaseAlphaActivation
Implement the Laplacian activation layer.
Formula: exp(-|x| / alpha)
This activation layer was proposed in the following paper:
Beyond Periodicity: Towards a Unifying Framework for Activations in Coordinate-MLPs.
Ramasinghe S., Lucey S.
ECCV 2022. (http://arxiv.org/pdf/2111.15135)
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| num_parameters | int | The number of learnable parameters. Although it takes an integer as input, only two values are legitimate: | 1 |
| init | float | The initial value of the learnable parameter(s). | 1.0 |
| learnable | bool | If | True |

Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import Laplacian
>>> m = Laplacian()
>>> m
Laplacian(num_parameters=1, learnable=True)
>>> out = m(torch.arange(8, dtype=torch.float).view(2, 4))
>>> out
tensor([[1.0000e+00, 3.6788e-01, 1.3534e-01, 4.9787e-02],
[1.8316e-02, 6.7379e-03, 2.4788e-03, 9.1188e-04]], grad_fn=<ExpBackward0>)
karbonn.modules.Log ¶
Bases: Module
Implement a torch.nn.Module to compute the natural logarithm of the input.
This module is equivalent to log(input).
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import Log
>>> m = Log()
>>> m
Log()
>>> out = m(torch.tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]))
>>> out
tensor([[0.0000, 0.6931, 1.0986],
[1.3863, 1.6094, 1.7918]])
karbonn.modules.Log1p ¶
Bases: Module
Implement a torch.nn.Module to compute the natural logarithm of (1 + input).
This module is equivalent to log(1 + input).
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import Log1p
>>> m = Log1p()
>>> m
Log1p()
>>> out = m(torch.tensor([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]]))
>>> out
tensor([[0.0000, 0.6931, 1.0986],
[1.3863, 1.6094, 1.7918]])
karbonn.modules.MultiQuadratic ¶
Bases: BaseAlphaActivation
Implement the Multi Quadratic activation layer.
Formula: 1 / sqrt(1 + (alpha * x)^2)
This activation layer was proposed in the following paper:
Beyond Periodicity: Towards a Unifying Framework for Activations in Coordinate-MLPs.
Ramasinghe S., Lucey S.
ECCV 2022. (http://arxiv.org/pdf/2111.15135)
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| num_parameters | int | The number of learnable parameters. Although it takes an integer as input, only two values are legitimate: | 1 |
| init | float | The initial value of the learnable parameter(s). | 1.0 |
| learnable | bool | If | True |

Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import MultiQuadratic
>>> m = MultiQuadratic()
>>> m
MultiQuadratic(num_parameters=1, learnable=True)
>>> out = m(torch.arange(8, dtype=torch.float).view(2, 4))
>>> out
tensor([[1.0000, 0.7071, 0.4472, 0.3162],
[0.2425, 0.1961, 0.1644, 0.1414]], grad_fn=<MulBackward0>)
karbonn.modules.MulticlassFlatten ¶
Bases: Module
Implement a wrapper to flatten the multiclass inputs of a torch.nn.Module.
The input prediction tensor of shape (d1, d2, ..., dn, C) is reshaped to (d1 * d2 * ... * dn, C). The input target tensor of shape (d1, d2, ..., dn) is reshaped to (d1 * d2 * ... * dn,).
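A minimal sketch of the flattening logic (an illustration of the reshapes described above, not necessarily the exact implementation):
>>> import torch
>>> def flatten_multiclass(prediction, target):
...     # (d1, ..., dn, C) -> (d1 * ... * dn, C) and (d1, ..., dn) -> (d1 * ... * dn,)
...     return prediction.reshape(-1, prediction.shape[-1]), target.reshape(-1)
>>> prediction, target = flatten_multiclass(torch.ones(6, 2, 4), torch.zeros(6, 2, dtype=torch.long))
>>> torch.nn.CrossEntropyLoss()(prediction, target)
tensor(1.3863)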
Example usage:
>>> import torch
>>> from karbonn.modules import MulticlassFlatten
>>> m = MulticlassFlatten(torch.nn.CrossEntropyLoss())
>>> m
MulticlassFlatten(
(module): CrossEntropyLoss()
)
>>> out = m(torch.ones(6, 2, 4, requires_grad=True), torch.zeros(6, 2, dtype=torch.long))
>>> out
tensor(1.3863, grad_fn=<NllLossBackward0>)
karbonn.modules.MultiplicationFusion ¶
Bases: Module
Implement a fusion layer that multiplies the inputs.
Example usage:
>>> import torch
>>> from karbonn.modules import MultiplicationFusion
>>> module = MultiplicationFusion()
>>> module
MultiplicationFusion()
>>> x1 = torch.tensor([[2.0, 3.0, 4.0], [5.0, 6.0, 7.0]], requires_grad=True)
>>> x2 = torch.tensor([[12.0, 13.0, 14.0], [15.0, 16.0, 17.0]], requires_grad=True)
>>> out = module(x1, x2)
>>> out
tensor([[ 24., 39., 56.],
[ 75., 96., 119.]], grad_fn=<MulBackward0>)
>>> out.mean().backward()
karbonn.modules.NLinear ¶
Bases: Module
Implement N separate linear layers.
Technically, NLinear(n, in, out) is just a layout of n linear layers torch.nn.Linear(in, out).
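A sketch of a reference computation under that reading: each of the n slices along the next-to-last dimension goes through its own torch.nn.Linear(in_features, out_features) (an illustration, not the module's actual, likely more efficient implementation):
>>> import torch
>>> from torch import nn
>>> def nlinear_reference(x, layers):
...     # x: (*, n, in_features); apply layers[i] to slice i along dim -2
...     return torch.stack([layer(x[..., i, :]) for i, layer in enumerate(layers)], dim=-2)
>>> layers = nn.ModuleList(nn.Linear(4, 6) for _ in range(3))
>>> nlinear_reference(torch.randn(2, 3, 4), layers).shape
torch.Size([2, 3, 6])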
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| n | int | The number of separate linear layers. | required |
| in_features | int | The size of each input sample. | required |
| out_features | int | The size of each output sample. | required |
| bias | bool | If set to | True |

Shape
- Input: (*, n, in_features), where * means any number of dimensions.
- Output: (*, n, out_features), where * has the same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import NLinear
>>> m = NLinear(n=3, in_features=4, out_features=6)
>>> m
NLinear(n=3, in_features=4, out_features=6, bias=True)
>>> out = m(torch.randn(2, 3, 4))
>>> out.shape
torch.Size([2, 3, 6])
>>> out = m(torch.randn(2, 5, 3, 4))
>>> out.shape
torch.Size([2, 5, 3, 6])
karbonn.modules.PiecewiseLinearNumericalEncoder ¶
Bases: Module
Implement a numerical encoder using piecewise linear functions.
This layer was proposed in the following paper:
On Embeddings for Numerical Features in Tabular Deep Learning.
Gorishniy Y., Rubachev I., Babenko A.
NeurIPS 2022. (https://arxiv.org/pdf/2203.05556, https://github.com/yandex-research/rtdl-num-embeddings)
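The example outputs below are consistent with the encoding used in the reference rtdl implementation: each value is mapped to its relative position inside every bin, middle components are clamped to [0, 1], while the first component is left unclamped below and the last unclamped above. A sketch under that assumption:
>>> import torch
>>> from karbonn.modules import PiecewiseLinearNumericalEncoder
>>> def piecewise_linear_encode(x, bins):
...     # bins: (n_features, n_bins); x: (*, n_features)
...     left, right = bins[:, :-1], bins[:, 1:]
...     ratio = (x.unsqueeze(-1) - left) / (right - left)
...     out = ratio.clamp(0.0, 1.0)
...     # first component: only clamped from above; last one: only from below
...     out[..., 0] = ratio[..., 0].clamp(max=1.0)
...     out[..., -1] = ratio[..., -1].clamp(min=0.0)
...     return out
>>> bins = torch.tensor([[1.0, 2.0, 4.0, 8.0]])
>>> x = torch.tensor([[0.0], [1.0], [2.0], [3.0]])
>>> torch.allclose(PiecewiseLinearNumericalEncoder(bins=bins)(x), piecewise_linear_encode(x, bins))
True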
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| bins | Tensor | The bins used to compute the piecewise linear representations. This input should be a tensor of shape | required |

Shape
- Input: (*, n_features), where * means any number of dimensions.
- Output: (*, n_features, n_bins - 1), where * has the same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import PiecewiseLinearNumericalEncoder
>>> # Example with 1 feature
>>> m = PiecewiseLinearNumericalEncoder(bins=torch.tensor([[1.0, 2.0, 4.0, 8.0]]))
>>> m
PiecewiseLinearNumericalEncoder(n_features=1, feature_size=3)
>>> out = m(torch.tensor([[0.0], [1.0], [2.0], [3.0]]))
>>> out
tensor([[[-1.0000, 0.0000, 0.0000]],
[[ 0.0000, 0.0000, 0.0000]],
[[ 1.0000, 0.0000, 0.0000]],
[[ 1.0000, 0.5000, 0.0000]]])
>>> # Example with 2 features
>>> m = PiecewiseLinearNumericalEncoder(
... bins=torch.tensor([[1.0, 2.0, 4.0, 8.0], [0.0, 2.0, 4.0, 6.0]])
... )
>>> m
PiecewiseLinearNumericalEncoder(n_features=2, feature_size=3)
>>> out = m(torch.tensor([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]))
>>> out
tensor([[[-1.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000]],
[[ 0.0000, 0.0000, 0.0000],
[ 0.5000, 0.0000, 0.0000]],
[[ 1.0000, 0.0000, 0.0000],
[ 1.0000, 0.0000, 0.0000]],
[[ 1.0000, 0.5000, 0.0000],
[ 1.0000, 0.5000, 0.0000]]])
karbonn.modules.PiecewiseLinearNumericalEncoder.input_size property ¶
input_size: int
Return the input feature size, i.e., the number of scalar values.
karbonn.modules.PiecewiseLinearNumericalEncoder.output_size property ¶
output_size: int
Return the output feature size, i.e., the number of bins minus one.
karbonn.modules.Quadratic ¶
Bases: BaseAlphaActivation
Implement the Quadratic activation layer.
Formula: 1 / (1 + (alpha * x)^2)
This activation layer was proposed in the following paper:
Beyond Periodicity: Towards a Unifying Framework for Activations in Coordinate-MLPs.
Ramasinghe S., Lucey S.
ECCV 2022. (http://arxiv.org/pdf/2111.15135)
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| num_parameters | int | The number of learnable parameters. Although it takes an integer as input, only two values are legitimate: | 1 |
| init | float | The initial value of the learnable parameter(s). | 1.0 |
| learnable | bool | If | True |

Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import Quadratic
>>> m = Quadratic()
>>> m
Quadratic(num_parameters=1, learnable=True)
>>> out = m(torch.arange(8, dtype=torch.float).view(2, 4))
>>> out
tensor([[1.0000, 0.5000, 0.2000, 0.1000],
[0.0588, 0.0385, 0.0270, 0.0200]], grad_fn=<MulBackward0>)
karbonn.modules.ReLUn ¶
Bases: Module
Implement the ReLU-n module.
The ReLU-n equation is: ReLUn(x, n) = min(max(0, x), n).
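Equivalently, in terms of torch.clamp (a one-line sketch of the equation above):
>>> import torch
>>> def relu_n(x, n=1.0):
...     # min(max(0, x), n)
...     return x.clamp(min=0.0, max=n)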
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| max | float | The maximum value a.k.a. | 1.0 |

Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import ReLUn
>>> m = ReLUn(max=5)
>>> m
ReLUn(max=5.0)
>>> out = m(torch.arange(8, dtype=torch.float).view(2, 4))
>>> out
tensor([[0., 1., 2., 3.],
[4., 5., 5., 5.]])
karbonn.modules.RelativeLoss ¶
Bases: Module
Implement a "generic" relative loss that takes as input a criterion.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| criterion | Module \| dict | The criterion or its configuration. This criterion should not have reduction to be compatible with the shapes of the prediction and targets. | required |
| indicator | BaseRelativeIndicator \| dict \| None | The name of the indicator function to use or its implementation. If | None |
| reduction | str | The reduction strategy. The valid values are | 'mean' |
| eps | float | An arbitrary small strictly positive number to avoid undefined results when the indicator is zero. | 1e-08 |
Example usage:
>>> import torch
>>> from karbonn.modules import RelativeLoss
>>> from karbonn.modules.loss import ClassicalRelativeIndicator
>>> criterion = RelativeLoss(
... criterion=torch.nn.MSELoss(reduction="none"),
... indicator=ClassicalRelativeIndicator(),
... )
>>> criterion
RelativeLoss(
eps=1e-08, reduction=mean
(criterion): MSELoss()
(indicator): ClassicalRelativeIndicator()
)
>>> prediction = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> loss = criterion(prediction, target)
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
karbonn.modules.RelativeMSELoss ¶
Bases: RelativeLoss
Implement the relative MSE loss.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| indicator | BaseRelativeIndicator \| dict \| None | The name of the indicator function to use or its implementation. If | None |
| reduction | str | The reduction strategy. The valid values are | 'mean' |
| eps | float | An arbitrary small strictly positive number to avoid undefined results when the indicator is zero. | 1e-08 |
Example usage:
>>> import torch
>>> from karbonn.modules import RelativeMSELoss
>>> from karbonn.modules.loss import ClassicalRelativeIndicator
>>> criterion = RelativeMSELoss(indicator=ClassicalRelativeIndicator())
>>> criterion
RelativeMSELoss(
eps=1e-08, reduction=mean
(criterion): MSELoss()
(indicator): ClassicalRelativeIndicator()
)
>>> prediction = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> loss = criterion(prediction, target)
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
karbonn.modules.RelativeSmoothL1Loss ¶
Bases: RelativeLoss
Implement the relative smooth L1 loss.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| indicator | BaseRelativeIndicator \| dict \| None | The name of the indicator function to use or its implementation. If | None |
| reduction | str | The reduction strategy. The valid values are | 'mean' |
| eps | float | An arbitrary small strictly positive number to avoid undefined results when the indicator is zero. | 1e-08 |
Example usage:
>>> import torch
>>> from karbonn.modules import RelativeSmoothL1Loss
>>> from karbonn.modules.loss import ClassicalRelativeIndicator
>>> criterion = RelativeSmoothL1Loss(indicator=ClassicalRelativeIndicator())
>>> criterion
RelativeSmoothL1Loss(
eps=1e-08, reduction=mean
(criterion): SmoothL1Loss()
(indicator): ClassicalRelativeIndicator()
)
>>> prediction = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> loss = criterion(prediction, target)
>>> loss
tensor(..., grad_fn=<MeanBackward0>)
>>> loss.backward()
karbonn.modules.ResidualBlock ¶
Bases: Module
Implementation of a residual block.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| residual | Module \| dict | The residual mapping module or its configuration (dictionary). | required |
| skip | Module \| dict \| None | The skip mapping module or its configuration (dictionary). If | None |
Example usage:
>>> import torch
>>> from torch import nn
>>> from karbonn.modules import ResidualBlock
>>> m = ResidualBlock(residual=nn.Sequential(nn.Linear(4, 6), nn.ReLU(), nn.Linear(6, 4)))
>>> m
ResidualBlock(
(residual): Sequential(
(0): Linear(in_features=4, out_features=6, bias=True)
(1): ReLU()
(2): Linear(in_features=6, out_features=4, bias=True)
)
(skip): Identity()
)
>>> out = m(torch.rand(6, 4))
>>> out
tensor([[...]], grad_fn=<AddBackward0>)
karbonn.modules.SafeExp ¶
Bases: Module
Implement a torch.nn.Module to compute the exponential of the elements.
The values that are higher than the specified maximum value are set to this maximum value before the exponential is computed. Using a not too large positive value leads to an output tensor without Inf.
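A minimal sketch of this behavior, i.e. clamping from above before exponentiating; it is consistent with the example below, where the two largest inputs both map to exp(20) ≈ 4.8517e+08:
>>> import torch
>>> def safe_exp(x, max_value=20.0):
...     # clamp the input from above so the output never exceeds exp(max_value)
...     return torch.exp(x.clamp(max=max_value))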
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| max | float | The maximum value used before computing the exponential. | 20.0 |

Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import SafeExp
>>> m = SafeExp()
>>> m
SafeExp(max=20.0)
>>> out = m(torch.tensor([[0.01, 0.1, 1.0], [10.0, 100.0, 1000.0]]))
>>> out
tensor([[1.0101e+00, 1.1052e+00, 2.7183e+00],
[2.2026e+04, 4.8517e+08, 4.8517e+08]])
karbonn.modules.SafeLog ¶
Bases: Module
Implement a torch.nn.Module to compute the natural logarithm of the elements.
The values that are lower than the specified minimum value are set to this minimum value. Using a small positive value leads to an output tensor without NaN or Inf.
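A minimal sketch of this behavior, i.e. clamping from below before taking the logarithm; it is consistent with the example below, where the smallest inputs all map to log(1e-8) ≈ -18.4207:
>>> import torch
>>> def safe_log(x, min_value=1e-8):
...     # clamp the input from below so the output never goes under log(min_value)
...     return torch.log(x.clamp(min=min_value))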
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| min | float | The minimum value used before computing the natural logarithm. | 1e-08 |

Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import SafeLog
>>> m = SafeLog()
>>> m
SafeLog(min=1e-08)
>>> out = m(torch.tensor([[1e-4, 1e-5, 1e-6], [1e-8, 1e-9, 1e-10]]))
>>> out
tensor([[ -9.2103, -11.5129, -13.8155],
[-18.4207, -18.4207, -18.4207]])
karbonn.modules.Sin ¶
Bases: Module
Implement the sine activation layer.
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import Sin
>>> m = Sin()
>>> m
Sin()
>>> out = m(torch.arange(8, dtype=torch.float).view(2, 4))
>>> out
tensor([[ 0.0000, 0.8415, 0.9093, 0.1411],
[-0.7568, -0.9589, -0.2794, 0.6570]])
karbonn.modules.Sinh ¶
Bases: Module
Implement a torch.nn.Module to compute the hyperbolic sine (sinh) of the elements.
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import Sinh
>>> m = Sinh()
>>> m
Sinh()
>>> out = m(torch.tensor([[-1.0, 0.0, 1.0], [-2.0, 2.0, 4.0]]))
>>> out
tensor([[-1.1752, 0.0000, 1.1752],
[-3.6269, 3.6269, 27.2899]])
karbonn.modules.Snake ¶
Bases: Module
Implement the Snake activation layer.
Snake was proposed in the following paper:
Neural Networks Fail to Learn Periodic Functions and How to Fix It.
Ziyin L., Hartwig T., Ueda M.
NeurIPS, 2020. (http://arxiv.org/pdf/2006.08195)
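The paper defines Snake as x + sin^2(a * x) / a for frequency a, which matches the example below (e.g. 1 + sin^2(1) ≈ 1.7081). A sketch of that formula:
>>> import torch
>>> def snake(x, frequency=1.0):
...     # x + sin^2(frequency * x) / frequency
...     return x + torch.sin(frequency * x) ** 2 / frequency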
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| frequency | float | The frequency. | 1.0 |

Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import Snake
>>> m = Snake()
>>> m
Snake(frequency=1.0)
>>> out = m(torch.arange(8, dtype=torch.float).view(2, 4))
>>> out
tensor([[0.0000, 1.7081, 2.8268, 3.0199],
[4.5728, 5.9195, 6.0781, 7.4316]])
karbonn.modules.SquaredReLU ¶
Bases: Module
Implement the Squared ReLU.
Squared ReLU is defined in the following paper:
Primer: Searching for Efficient Transformers for Language Modeling.
So DR., Mańke W., Liu H., Dai Z., Shazeer N., Le QV.
NeurIPS, 2021. (https://arxiv.org/pdf/2109.08668.pdf)
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import SquaredReLU
>>> m = SquaredReLU()
>>> m
SquaredReLU()
>>> out = m(torch.arange(8, dtype=torch.float).view(2, 4))
>>> out
tensor([[ 0., 1., 4., 9.],
[16., 25., 36., 49.]])
karbonn.modules.Squeeze ¶
Bases: Module
Implement a torch.nn.Module to squeeze the input tensor.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| dim | int \| None | The dimension to squeeze the input tensor. If | None |
Example usage:
>>> import torch
>>> from karbonn.modules import Squeeze
>>> m = Squeeze()
>>> m
Squeeze(dim=None)
>>> out = m(torch.ones(2, 1, 3, 1))
>>> out.shape
torch.Size([2, 3])
karbonn.modules.SumFusion ¶
Bases: Module
Implement a layer to sum the inputs.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| normalized | bool | The output is normalized by the number of inputs. | False |
Example usage:
>>> import torch
>>> from karbonn.modules import SumFusion
>>> module = SumFusion()
>>> module
SumFusion(normalized=False)
>>> x1 = torch.tensor([[2.0, 3.0, 4.0], [5.0, 6.0, 7.0]], requires_grad=True)
>>> x2 = torch.tensor([[12.0, 13.0, 14.0], [15.0, 16.0, 17.0]], requires_grad=True)
>>> out = module(x1, x2)
>>> out
tensor([[14., 16., 18.],
[20., 22., 24.]], grad_fn=<AddBackward0>)
>>> out.mean().backward()
karbonn.modules.ToBinaryLabel ¶
Bases: Module
Implement a torch.nn.Module to compute binary labels from scores by thresholding.
The output label is 1 if the value is greater than the threshold, and 0 otherwise.
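A one-line sketch of the thresholding, assuming long labels as in the example below:
>>> import torch
>>> def to_binary_label(scores, threshold=0.0):
...     # 1 where the score is strictly greater than the threshold, 0 otherwise
...     return (scores > threshold).long()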
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| threshold | float | The threshold value used to compute the binary labels. | 0.0 |
Example usage:
>>> import torch
>>> from karbonn.modules import ToBinaryLabel
>>> transform = ToBinaryLabel()
>>> transform
ToBinaryLabel(threshold=0.0)
>>> out = transform(torch.tensor([-1.0, 1.0, -2.0, 1.0]))
>>> out
tensor([0, 1, 0, 1])
karbonn.modules.ToBinaryLabel.threshold property ¶
threshold: float
The threshold used to compute the binary label.
karbonn.modules.ToBinaryLabel.forward ¶
forward(scores: Tensor) -> Tensor
Compute binary labels from scores.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| scores | Tensor | The scores used to compute the binary labels. This input must be a | required |

Returns:

| Type | Description |
| --- | --- |
| Tensor | The computed binary labels where the values are |
Example usage:
>>> import torch
>>> from karbonn.modules import ToBinaryLabel
>>> transform = ToBinaryLabel()
>>> out = transform(torch.tensor([-1.0, 1.0, -2.0, 1.0]))
>>> out
tensor([0, 1, 0, 1])
karbonn.modules.ToCategoricalLabel ¶
Bases: Module
Implement a torch.nn.Module to compute categorical labels from scores.
Example usage:
>>> import torch
>>> from karbonn.modules import ToCategoricalLabel
>>> transform = ToCategoricalLabel()
>>> transform
ToCategoricalLabel()
>>> out = transform(torch.tensor([[1.0, 2.0, 3.0, 4.0], [5.0, 3.0, 2.0, 2.0]]))
>>> out
tensor([3, 0])
karbonn.modules.ToCategoricalLabel.forward ¶
forward(scores: Tensor) -> Tensor
Compute categorical labels from scores.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| scores | Tensor | The scores used to compute the categorical labels. This input must be a | required |

Returns:

| Type | Description |
| --- | --- |
| Tensor | The computed categorical labels where the values are in |
Example usage:
>>> import torch
>>> from karbonn.modules import ToCategoricalLabel
>>> transform = ToCategoricalLabel()
>>> out = transform(torch.tensor([[1.0, 2.0, 3.0, 4.0], [5.0, 3.0, 2.0, 2.0]]))
>>> out
tensor([3, 0])
karbonn.modules.ToFloat ¶
Bases: Module
Implement a torch.nn.Module to convert a tensor to a float tensor.
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import ToFloat
>>> m = ToFloat()
>>> m
ToFloat()
>>> out = m(torch.tensor([[2, -1, 0], [1, 2, 3]]))
>>> out
tensor([[ 2., -1., 0.],
[ 1., 2., 3.]])
karbonn.modules.ToLong ¶
Bases: Module
Implement a torch.nn.Module to convert a tensor to a long tensor.
Shape
- Input: (*), where * means any number of dimensions.
- Output: (*), same shape as the input.
Example usage:
>>> import torch
>>> from karbonn.modules import ToLong
>>> m = ToLong()
>>> m
ToLong()
>>> out = m(torch.tensor([[2.0, -1.0, 0.0], [1.0, 2.0, 3.0]]))
>>> out
tensor([[ 2, -1, 0],
[ 1, 2, 3]])
karbonn.modules.TransformedLoss ¶
Bases: Module
Implement a loss function where the predictions and targets are transformed before being fed to the loss function.
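A sketch of the forward logic described above (an illustration; applying identity transformations when none are given is an assumption):
>>> def transformed_loss(criterion, prediction_transform, target_transform, prediction, target):
...     # transform both tensors, then feed them to the wrapped criterion
...     return criterion(prediction_transform(prediction), target_transform(target))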
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| criterion | Module \| dict | The criterion or its configuration. The loss has two inputs: predictions and targets. | required |
| prediction | Module \| dict \| None | The transformation for the predictions or its configuration. If | None |
| target | Module \| dict \| None | The transformation for the targets or its configuration. If | None |
Example usage:
>>> import torch
>>> from karbonn.modules import TransformedLoss, Asinh
>>> criterion = TransformedLoss(
... criterion=torch.nn.SmoothL1Loss(),
... prediction=Asinh(),
... target=Asinh(),
... )
>>> loss = criterion(torch.randn(2, 4, requires_grad=True), torch.randn(2, 4))
>>> loss
tensor(..., grad_fn=<SmoothL1LossBackward0>)
>>> loss.backward()
karbonn.modules.View ¶
Bases: Module
Implement a torch.nn.Module to return a new tensor with the same data as the input tensor but of a different shape.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| shape | tuple[int, ...] \| list[int] | The desired shape. | required |
Example usage:
>>> import torch
>>> from karbonn.modules import View
>>> m = View(shape=(-1, 2, 3))
>>> m
View(shape=(-1, 2, 3))
>>> out = m(torch.ones(4, 5, 2, 3))
>>> out.shape
torch.Size([20, 2, 3])
karbonn.modules.binary_focal_loss ¶
binary_focal_loss(
alpha: float = 0.5,
gamma: float = 2.0,
reduction: str = "mean",
logits: bool = False,
) -> BinaryFocalLoss
Return an instantiated binary focal loss with a binary cross entropy loss.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| alpha | float | The weighting factor, which must be in the range | 0.5 |
| gamma | float | The focusing parameter, which must be positive | 2.0 |
| reduction | str | The reduction to apply to the output: | 'mean' |
| logits | bool | If | False |

Returns:

| Type | Description |
| --- | --- |
| BinaryFocalLoss | The instantiated binary focal loss. |
Example usage:
>>> from karbonn.modules import binary_focal_loss
>>> criterion = binary_focal_loss()
>>> criterion
BinaryFocalLoss(
alpha=0.5, gamma=2.0, reduction=mean
(loss): BCELoss()
)
>>> criterion = binary_focal_loss(logits=True)
>>> criterion
BinaryFocalLoss(
alpha=0.5, gamma=2.0, reduction=mean
(loss): BCEWithLogitsLoss()
)