# Utils

## karbonn.utils

Contains utility functions.
### karbonn.utils.freeze_module

```python
freeze_module(module: Module) -> None
```

Freeze the parameters of the given module.

**Parameters:**

| Name | Type | Description | Default |
|---|---|---|---|
| `module` | `Module` | The module to freeze. | required |

Example usage:

```python
>>> import torch
>>> from karbonn.utils import freeze_module
>>> module = torch.nn.Linear(4, 6)
>>> freeze_module(module)
>>> for name, param in module.named_parameters():
...     print(name, param.requires_grad)
...
weight False
bias False
```
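A common use is transfer learning: freeze a pretrained feature extractor while a new head stays trainable. A minimal sketch (the two-layer model below is illustrative, not part of karbonn):

```python
>>> import torch
>>> from karbonn.utils import freeze_module
>>> model = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.Linear(4, 2))
>>> freeze_module(model[0])  # freeze the first layer only
>>> [param.requires_grad for param in model[0].parameters()]
[False, False]
>>> [param.requires_grad for param in model[1].parameters()]
[True, True]
```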
### karbonn.utils.get_module_device

```python
get_module_device(module: Module) -> device
```

Get the device used by this module.

This function assumes the module uses a single device. If the module uses several devices, you should use `get_module_devices` instead. It returns `torch.device('cpu')` if the module does not have parameters.

**Parameters:**

| Name | Type | Description | Default |
|---|---|---|---|
| `module` | `Module` | The module. | required |

**Returns:**

| Type | Description |
|---|---|
| `device` | The device used by the module. |

Example usage:

```python
>>> import torch
>>> from karbonn.utils import get_module_device
>>> get_module_device(torch.nn.Linear(4, 6))
device(type='cpu')
```
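As noted above, a module without parameters falls back to the CPU device. A quick check of that documented behavior:

```python
>>> import torch
>>> from karbonn.utils import get_module_device
>>> get_module_device(torch.nn.Identity())  # no parameters -> CPU fallback
device(type='cpu')
```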
### karbonn.utils.get_module_devices

```python
get_module_devices(module: Module) -> tuple[device, ...]
```

Get the devices used in a module.

**Parameters:**

| Name | Type | Description | Default |
|---|---|---|---|
| `module` | `Module` | The module. | required |

**Returns:**

| Type | Description |
|---|---|
| `tuple[device, ...]` | The tuple of devices used in the module. |

Example usage:

```python
>>> import torch
>>> from karbonn.utils import get_module_devices
>>> get_module_devices(torch.nn.Linear(4, 6))
(device(type='cpu'),)
```
### karbonn.utils.has_learnable_parameters

```python
has_learnable_parameters(module: Module) -> bool
```

Indicate if the module has learnable parameters.

**Parameters:**

| Name | Type | Description | Default |
|---|---|---|---|
| `module` | `Module` | The module to test. | required |

**Returns:**

| Type | Description |
|---|---|
| `bool` | `True` if the module has at least one learnable parameter, otherwise `False`. |

Example usage:

```python
>>> import torch
>>> from karbonn.utils import has_learnable_parameters, freeze_module
>>> has_learnable_parameters(torch.nn.Linear(4, 6))
True
>>> module = torch.nn.Linear(4, 6)
>>> freeze_module(module)
>>> has_learnable_parameters(module)
False
>>> has_learnable_parameters(torch.nn.Identity())
False
```
### karbonn.utils.has_parameters

```python
has_parameters(module: Module) -> bool
```

Indicate if the module has parameters.

**Parameters:**

| Name | Type | Description | Default |
|---|---|---|---|
| `module` | `Module` | The module to test. | required |

**Returns:**

| Type | Description |
|---|---|
| `bool` | `True` if the module has at least one parameter, otherwise `False`. |

Example usage:

```python
>>> import torch
>>> from karbonn.utils import has_parameters
>>> has_parameters(torch.nn.Linear(4, 6))
True
>>> has_parameters(torch.nn.Identity())
False
```
### karbonn.utils.is_loss_decreasing

```python
is_loss_decreasing(
    module: Module,
    criterion: Module | Callable[[Tensor, Tensor], Tensor],
    optimizer: Optimizer,
    feature: Tensor,
    target: Tensor,
    num_iterations: int = 1,
    random_seed: int = 10772155803920552556,
) -> bool
```

Check if the loss decreased after some iterations.

**Parameters:**

| Name | Type | Description | Default |
|---|---|---|---|
| `module` | `Module` | The module to test. The module must have a single input tensor and a single output tensor. | required |
| `criterion` | `Module \| Callable[[Tensor, Tensor], Tensor]` | The criterion to test. | required |
| `optimizer` | `Optimizer` | The optimizer to update the weights of the model. | required |
| `feature` | `Tensor` | The input of the module. | required |
| `target` | `Tensor` | The target used to compute the loss. | required |
| `num_iterations` | `int` | The number of optimization steps. | `1` |
| `random_seed` | `int` | The random seed to make the function deterministic if the module contains randomness. | `10772155803920552556` |

**Returns:**

| Type | Description |
|---|---|
| `bool` | `True` if the loss decreased after `num_iterations` optimization steps, otherwise `False`. |

Example usage:

```python
>>> import torch
>>> from torch import nn
>>> from torch.optim import SGD
>>> from karbonn.utils import is_loss_decreasing
>>> module = nn.Linear(4, 2)
>>> is_loss_decreasing(
...     module=module,
...     criterion=nn.MSELoss(),
...     optimizer=SGD(module.parameters(), lr=0.01),
...     feature=torch.rand(4, 4),
...     target=torch.rand(4, 2),
... )
True
```
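This check is handy as a smoke test for custom modules, for example in a pytest-style test (the test function and model below are illustrative, not part of karbonn):

```python
import torch
from torch import nn
from torch.optim import SGD

from karbonn.utils import is_loss_decreasing


def test_my_module_learns() -> None:
    # Sanity check: a few SGD steps on random data should reduce the loss.
    module = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
    assert is_loss_decreasing(
        module=module,
        criterion=nn.MSELoss(),
        optimizer=SGD(module.parameters(), lr=0.01),
        feature=torch.rand(16, 4),
        target=torch.rand(16, 2),
        num_iterations=3,
    )
```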
### karbonn.utils.is_loss_decreasing_with_adam

```python
is_loss_decreasing_with_adam(
    module: Module,
    criterion: Module | Callable[[Tensor, Tensor], Tensor],
    feature: Tensor,
    target: Tensor,
    lr: float = 0.0003,
    num_iterations: int = 1,
    random_seed: int = 10772155803920552556,
) -> bool
```

Check if the loss decreased after some iterations.

The module is trained with the `torch.optim.Adam` optimizer.

**Parameters:**

| Name | Type | Description | Default |
|---|---|---|---|
| `module` | `Module` | The module to test. The module must have a single input tensor and a single output tensor. | required |
| `criterion` | `Module \| Callable[[Tensor, Tensor], Tensor]` | The criterion to test. | required |
| `feature` | `Tensor` | The input of the module. | required |
| `target` | `Tensor` | The target used to compute the loss. | required |
| `lr` | `float` | The learning rate. | `0.0003` |
| `num_iterations` | `int` | The number of optimization steps. | `1` |
| `random_seed` | `int` | The random seed to make the function deterministic if the module contains randomness. | `10772155803920552556` |

**Returns:**

| Type | Description |
|---|---|
| `bool` | `True` if the loss decreased after `num_iterations` optimization steps, otherwise `False`. |

Example usage:

```python
>>> import torch
>>> from torch import nn
>>> from karbonn.utils import is_loss_decreasing_with_adam
>>> is_loss_decreasing_with_adam(
...     module=nn.Linear(4, 2),
...     criterion=nn.MSELoss(),
...     feature=torch.rand(4, 4),
...     target=torch.rand(4, 2),
...     lr=0.0003,
... )
True
```
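The `random_seed` parameter matters for stochastic modules such as dropout, since seeding makes the check reproducible. A sketch (the model below is illustrative, and the printed result reflects one seeded run rather than a guarantee):

```python
>>> import torch
>>> from torch import nn
>>> from karbonn.utils import is_loss_decreasing_with_adam
>>> module = nn.Sequential(nn.Linear(4, 4), nn.Dropout(0.5), nn.Linear(4, 2))
>>> is_loss_decreasing_with_adam(
...     module=module,
...     criterion=nn.MSELoss(),
...     feature=torch.rand(8, 4),
...     target=torch.rand(8, 2),
...     num_iterations=5,
...     random_seed=42,
... )
True
```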
### karbonn.utils.is_loss_decreasing_with_sgd

```python
is_loss_decreasing_with_sgd(
    module: Module,
    criterion: Module | Callable[[Tensor, Tensor], Tensor],
    feature: Tensor,
    target: Tensor,
    lr: float = 0.01,
    num_iterations: int = 1,
    random_seed: int = 10772155803920552556,
) -> bool
```

Check if the loss decreased after some iterations.

The module is trained with the `torch.optim.SGD` optimizer.

**Parameters:**

| Name | Type | Description | Default |
|---|---|---|---|
| `module` | `Module` | The module to test. The module must have a single input tensor and a single output tensor. | required |
| `criterion` | `Module \| Callable[[Tensor, Tensor], Tensor]` | The criterion to test. | required |
| `feature` | `Tensor` | The input of the module. | required |
| `target` | `Tensor` | The target used to compute the loss. | required |
| `lr` | `float` | The learning rate. | `0.01` |
| `num_iterations` | `int` | The number of optimization steps. | `1` |
| `random_seed` | `int` | The random seed to make the function deterministic if the module contains randomness. | `10772155803920552556` |

**Returns:**

| Type | Description |
|---|---|
| `bool` | `True` if the loss decreased after `num_iterations` optimization steps, otherwise `False`. |

Example usage:

```python
>>> import torch
>>> from torch import nn
>>> from karbonn.utils import is_loss_decreasing_with_sgd
>>> is_loss_decreasing_with_sgd(
...     module=nn.Linear(4, 2),
...     criterion=nn.MSELoss(),
...     feature=torch.rand(4, 4),
...     target=torch.rand(4, 2),
...     lr=0.01,
... )
True
```
### karbonn.utils.is_module_config

```python
is_module_config(config: dict) -> bool
```

Indicate if the input configuration is a configuration for a `torch.nn.Module`.

This function only checks if the value of the key `_target_` is valid. It does not check the other values. If `_target_` indicates a function, its return type hint is used to check the class.

**Parameters:**

| Name | Type | Description | Default |
|---|---|---|---|
| `config` | `dict` | The configuration to check. | required |

**Returns:**

| Type | Description |
|---|---|
| `bool` | `True` if the input configuration is a configuration for a `torch.nn.Module` object, otherwise `False`. |

Example usage:

```python
>>> from karbonn.utils import is_module_config
>>> is_module_config({"_target_": "torch.nn.Identity"})
True
```
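For contrast, a configuration whose `_target_` does not resolve to a `torch.nn.Module` subclass should be rejected; the `collections.OrderedDict` target below is an arbitrary non-module example (behavior assumed from the description above):

```python
>>> from karbonn.utils import is_module_config
>>> is_module_config({"_target_": "collections.OrderedDict"})  # not a torch.nn.Module
False
```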
### karbonn.utils.is_module_on_device

```python
is_module_on_device(module: Module, device: device) -> bool
```

Indicate if all the parameters of a module are on the specified device.

**Parameters:**

| Name | Type | Description | Default |
|---|---|---|---|
| `module` | `Module` | The module. | required |
| `device` | `device` | The device. | required |

**Returns:**

| Type | Description |
|---|---|
| `bool` | `True` if all the parameters of the module are on the specified device, otherwise `False`. |

Example usage:

```python
>>> import torch
>>> from karbonn.utils import is_module_on_device
>>> is_module_on_device(torch.nn.Linear(4, 6), torch.device("cpu"))
True
```
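Assuming the check compares the devices recorded on the parameters, it also works when testing against a device the machine does not have (the `cuda:0` target below is illustrative):

```python
>>> import torch
>>> from karbonn.utils import is_module_on_device
>>> is_module_on_device(torch.nn.Linear(4, 6), torch.device("cuda:0"))
False
```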
### karbonn.utils.module_mode

```python
module_mode(module: Module) -> Generator[None, None, None]
```

Implement a context manager that restores the mode (train or eval) of every submodule individually.

**Parameters:**

| Name | Type | Description | Default |
|---|---|---|---|
| `module` | `Module` | The module to restore the mode. | required |

Example usage:

```python
>>> import torch
>>> from karbonn.utils import module_mode
>>> module = torch.nn.ModuleDict(
...     {"module1": torch.nn.Linear(4, 6), "module2": torch.nn.Linear(2, 4).eval()}
... )
>>> print(module["module1"].training, module["module2"].training)
True False
>>> with module_mode(module):
...     module.eval()
...     print(module["module1"].training, module["module2"].training)
...
ModuleDict(
  (module1): Linear(in_features=4, out_features=6, bias=True)
  (module2): Linear(in_features=2, out_features=4, bias=True)
)
False False
>>> print(module["module1"].training, module["module2"].training)
True False
```
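A typical use is switching a partially frozen model to eval mode for validation without losing per-submodule modes, e.g. a dropout layer that was deliberately kept in eval mode (the model below is illustrative):

```python
>>> import torch
>>> from karbonn.utils import module_mode
>>> model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.Dropout(0.5).eval())
>>> with module_mode(model):
...     _ = model.eval()  # put everything in eval mode for validation
...     # ... run the validation loop here ...
...
>>> model[0].training, model[1].training  # per-submodule modes are restored
(True, False)
```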
### karbonn.utils.num_learnable_parameters

```python
num_learnable_parameters(module: Module) -> int
```

Return the number of learnable parameters in the module.

**Parameters:**

| Name | Type | Description | Default |
|---|---|---|---|
| `module` | `Module` | The module to compute the number of learnable parameters. | required |

**Returns:**

| Name | Type | Description |
|---|---|---|
| `int` | `int` | The number of learnable parameters. |

Example usage:

```python
>>> import torch
>>> from karbonn.utils import num_learnable_parameters, freeze_module
>>> num_learnable_parameters(torch.nn.Linear(4, 6))
30
>>> module = torch.nn.Linear(4, 6)
>>> freeze_module(module)
>>> num_learnable_parameters(module)
0
>>> num_learnable_parameters(torch.nn.Identity())
0
```
### karbonn.utils.num_parameters

```python
num_parameters(module: Module) -> int
```

Return the number of parameters in the module.

**Parameters:**

| Name | Type | Description | Default |
|---|---|---|---|
| `module` | `Module` | The module to compute the number of parameters. | required |

**Returns:**

| Type | Description |
|---|---|
| `int` | The number of parameters. |

Example usage:

```python
>>> import torch
>>> from karbonn.utils import num_parameters
>>> num_parameters(torch.nn.Linear(4, 6))
30
>>> num_parameters(torch.nn.Identity())
0
```
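The two counters compose naturally, for example to report how much of a model is frozen (the model and arithmetic below are illustrative):

```python
>>> import torch
>>> from karbonn.utils import freeze_module, num_learnable_parameters, num_parameters
>>> model = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.Linear(4, 2))
>>> freeze_module(model[0])  # freeze the first layer: 8 * 4 + 4 = 36 parameters
>>> num_learnable_parameters(model), num_parameters(model)
(10, 46)
```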
### karbonn.utils.setup_module

```python
setup_module(module: Module | dict) -> Module
```

Set up a `torch.nn.Module` object.

**Parameters:**

| Name | Type | Description | Default |
|---|---|---|---|
| `module` | `Module \| dict` | The module or its configuration. | required |

**Returns:**

| Type | Description |
|---|---|
| `Module` | The instantiated module. |

Example usage:

```python
>>> from karbonn.utils import setup_module
>>> linear = setup_module(
...     {"_target_": "torch.nn.Linear", "in_features": 4, "out_features": 6}
... )
>>> linear
Linear(in_features=4, out_features=6, bias=True)
```
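Given the `Module | dict` signature, an already-instantiated module can also be passed in; presumably it is returned as-is (behavior inferred from the signature, not stated above), which makes the function convenient for APIs that accept either form:

```python
>>> import torch
>>> from karbonn.utils import setup_module
>>> module = torch.nn.Linear(4, 6)
>>> setup_module(module) is module  # an existing module is assumed to pass through
True
```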
### karbonn.utils.top_module_mode

```python
top_module_mode(module: Module) -> Generator[None, None, None]
```

Implement a context manager that restores the mode (train or eval) of a given module.

This context manager only restores the mode at the top level.

**Parameters:**

| Name | Type | Description | Default |
|---|---|---|---|
| `module` | `Module` | The module to restore the mode. | required |

Example usage:

```python
>>> import torch
>>> from karbonn.utils import top_module_mode
>>> module = torch.nn.Linear(4, 6)
>>> print(module.training)
True
>>> with top_module_mode(module):
...     module.eval()
...     print(module.training)
...
Linear(in_features=4, out_features=6, bias=True)
False
>>> print(module.training)
True
```
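The contrast with `module_mode`: because `torch.nn.Module.train()` applies recursively, restoring only the top-level flag can clobber per-submodule modes that were set before entering the context. A sketch of this caveat (assuming the restore simply calls `train()`/`eval()` on the top module):

```python
>>> import torch
>>> from karbonn.utils import top_module_mode
>>> model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.Dropout(0.5).eval())
>>> with top_module_mode(model):
...     _ = model.eval()
...
>>> model[0].training, model[1].training  # the recursive restore put both in train mode
(True, True)
```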
### karbonn.utils.unfreeze_module

```python
unfreeze_module(module: Module) -> None
```

Unfreeze the parameters of the given module.

**Parameters:**

| Name | Type | Description | Default |
|---|---|---|---|
| `module` | `Module` | The module to unfreeze. | required |

Example usage:

```python
>>> import torch
>>> from karbonn.utils import unfreeze_module
>>> module = torch.nn.Linear(4, 6)
>>> unfreeze_module(module)
>>> for name, param in module.named_parameters():
...     print(name, param.requires_grad)
...
weight True
bias True
```