torchdistill.optim


torchdistill.optim.registry

torchdistill.optim.registry.register_optimizer(arg=None, **kwargs)[source]

Registers an optimizer class or function to instantiate it.

Parameters:

arg (class or Callable or None) – class or function to be registered as an optimizer.

Returns:

registered optimizer class or function to instantiate it.

Return type:

class or Callable

Note

The optimizer will be registered as an option. You can choose the registered class/function by specifying its name or the key you used for the registration in a training configuration used for torchdistill.core.distillation.DistillationBox or torchdistill.core.training.TrainingBox.

If you want to register the class/function with a key of your choice, add key to the decorator as below:

>>> from torch.optim import Optimizer
>>> from torchdistill.optim.registry import register_optimizer
>>>
>>> @register_optimizer(key='my_custom_optimizer')
>>> class CustomOptimizer(Optimizer):
>>>     def __init__(self, **kwargs):
>>>         print('This is my custom optimizer class')

In this example, the CustomOptimizer class is registered with the key “my_custom_optimizer”. When you configure torchdistill.core.distillation.DistillationBox or torchdistill.core.training.TrainingBox, you can select the CustomOptimizer class by specifying “my_custom_optimizer”.
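
If no key is given, the class/function is registered under its own name. A minimal sketch of that default form (the class name AnotherOptimizer is just an illustration):

>>> from torch.optim import Optimizer
>>> from torchdistill.optim.registry import register_optimizer
>>>
>>> @register_optimizer
>>> class AnotherOptimizer(Optimizer):
>>>     def __init__(self, **kwargs):
>>>         print('Chosen by the name "AnotherOptimizer" in a training configuration')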

torchdistill.optim.registry.register_scheduler(arg=None, **kwargs)[source]

Registers a scheduler class or function to instantiate it.

Parameters:

arg (class or Callable or None) – class or function to be registered as a scheduler.

Returns:

registered scheduler class or function to instantiate it.

Return type:

class or Callable

Note

The scheduler will be registered as an option. You can choose the registered class/function by specifying its name or the key you used for the registration in a training configuration used for torchdistill.core.distillation.DistillationBox or torchdistill.core.training.TrainingBox.

If you want to register the class/function with a key of your choice, add key to the decorator as below:

>>> from torch.optim.lr_scheduler import LRScheduler
>>> from torchdistill.optim.registry import register_scheduler
>>>
>>> @register_scheduler(key='my_custom_scheduler')
>>> class CustomScheduler(LRScheduler):
>>>     def __init__(self, **kwargs):
>>>         print('This is my custom scheduler class')

In this example, the CustomScheduler class is registered with the key “my_custom_scheduler”. When you configure torchdistill.core.distillation.DistillationBox or torchdistill.core.training.TrainingBox, you can select the CustomScheduler class by specifying “my_custom_scheduler”.
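
register_scheduler also accepts a plain function that builds a scheduler. A minimal sketch, assuming a hypothetical warmup factory (warmup_scheduler and num_warmup_steps are illustrative names, not part of torchdistill):

>>> from torch.optim.lr_scheduler import LambdaLR
>>> from torchdistill.optim.registry import register_scheduler
>>>
>>> @register_scheduler(key='my_warmup_scheduler')
>>> def warmup_scheduler(optimizer, num_warmup_steps, **kwargs):
>>>     # linearly increase the learning rate over the first num_warmup_steps steps
>>>     return LambdaLR(optimizer, lambda step: min(1.0, (step + 1) / num_warmup_steps))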

torchdistill.optim.registry.get_optimizer(module, key, filters_params=True, *args, **kwargs)[source]

Gets an optimizer from the optimizer registry.

Parameters:
  • module (nn.Module) – module whose parameters will be added to the optimizer.

  • key (str) – optimizer key.

  • filters_params (bool) – if True, filters out parameters whose requires_grad is False.

Returns:

optimizer.

Return type:

Optimizer
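
A minimal usage sketch, assuming that PyTorch's SGD is registered under the key “SGD” and that extra keyword arguments are forwarded to the optimizer's constructor:

>>> from torch import nn
>>> from torchdistill.optim.registry import get_optimizer
>>>
>>> model = nn.Linear(10, 2)
>>> # with filters_params=True (default), only parameters with requires_grad=True are passed
>>> optimizer = get_optimizer(model, 'SGD', lr=0.1, momentum=0.9)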

torchdistill.optim.registry.get_scheduler(optimizer, key, *args, **kwargs)[source]

Gets a scheduler from the scheduler registry.

Parameters:
  • optimizer (Optimizer) – optimizer whose learning rate will be adjusted by the scheduler.

  • key (str) – scheduler key.

Returns:

scheduler.

Return type:

LRScheduler
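
A minimal usage sketch, assuming that PyTorch's StepLR is registered under the key “StepLR” and that extra keyword arguments are forwarded to the scheduler's constructor:

>>> import torch
>>> from torchdistill.optim.registry import get_scheduler
>>>
>>> model = torch.nn.Linear(10, 2)
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
>>> scheduler = get_scheduler(optimizer, 'StepLR', step_size=30, gamma=0.1)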


torchdistill.optim.scheduler

torchdistill.optim.scheduler.poly_lr_scheduler(optimizer, num_iterations, num_epochs, power=0.9)[source]

A “poly” learning rate policy used in “Rethinking Atrous Convolution for Semantic Image Segmentation”.

\[lr = init\_lr \times \left(1 - \frac{iter}{num\_iterations \times num\_epochs}\right)^{power}\]
Parameters:
  • optimizer (Optimizer) – optimizer.

  • num_iterations (int) – number of iterations per epoch.

  • num_epochs (int) – number of epochs for the training with this scheduler.

  • power (float) – exponent.

Returns:

lambda lr scheduler.

Return type:

LambdaLR
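
A minimal usage sketch. Since the decay factor is computed from the iteration count over num_iterations × num_epochs, the returned LambdaLR is presumably stepped once per iteration rather than once per epoch:

>>> import torch
>>> from torchdistill.optim.scheduler import poly_lr_scheduler
>>>
>>> model = torch.nn.Linear(10, 2)
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
>>> scheduler = poly_lr_scheduler(optimizer, num_iterations=500, num_epochs=50)
>>> for epoch in range(50):
>>>     for _ in range(500):
>>>         optimizer.step()
>>>         scheduler.step()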