dival.reconstructors.fbpunet_reconstructor module

class dival.reconstructors.fbpunet_reconstructor.FBPUNetReconstructor(ray_trafo, allow_multiple_workers_without_random_access=False, **kwargs)[source]

Bases: StandardLearnedReconstructor

CT reconstructor applying filtered back-projection followed by a postprocessing U-Net (e.g. [1]).

References

HYPER_PARAMS = {'batch_size': {'default': 64, 'retrain': True}, 'channels': {'default': (32, 32, 64, 64, 128, 128), 'retrain': True}, 'epochs': {'default': 20, 'retrain': True}, 'filter_type': {'default': 'Hann', 'retrain': True}, 'frequency_scaling': {'default': 1.0, 'retrain': True}, 'init_bias_zero': {'default': True, 'retrain': True}, 'lr': {'default': 0.001, 'retrain': True}, 'lr_min': {'default': 0.0001, 'retrain': True}, 'normalize_by_opnorm': {'default': False, 'retrain': True}, 'scales': {'default': 5, 'retrain': True}, 'scheduler': {'choices': ['base', 'cosine'], 'default': 'cosine', 'retrain': True}, 'skip_channels': {'default': 4, 'retrain': True}, 'use_sigmoid': {'default': False, 'retrain': True}}

Specification of hyper parameters.

This class attribute is a dict that lists the hyper parameters of the reconstructor. It should not be hidden by an instance attribute of the same name (i.e. by assigning a value to self.HYPER_PARAMS in an instance of a subtype).

Note: in order to inherit HYPER_PARAMS from a super class, the subclass should create a deep copy of it, i.e. execute HYPER_PARAMS = copy.deepcopy(SuperReconstructorClass.HYPER_PARAMS) in the class body.
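The deep-copy pattern from the note above can be illustrated with plain dicts (a minimal sketch; SuperReconstructor and the 'dropout' parameter are made up for this example and stand in for an actual reconstructor class):

```python
import copy

class SuperReconstructor:
    HYPER_PARAMS = {
        'lr': {'default': 0.001, 'retrain': True},
        'epochs': {'default': 20, 'retrain': True},
    }

class MyReconstructor(SuperReconstructor):
    # deep copy in the class body, so that modifying the subclass spec
    # leaves the super class spec untouched
    HYPER_PARAMS = copy.deepcopy(SuperReconstructor.HYPER_PARAMS)
    HYPER_PARAMS['epochs']['default'] = 50
    HYPER_PARAMS['dropout'] = {'default': 0.0, 'retrain': True}
```

Without the deepcopy, the nested per-parameter dicts would be shared between the two classes, and mutating `HYPER_PARAMS['epochs']` in the subclass would silently change the super class as well.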

The keys of this dict are the names of the hyper parameters, and each value is a dict with the following fields.

Standard fields:

'default'

Default value.

'retrain' : bool, optional

Whether training depends on the parameter. Default: False. Any custom subclass of LearnedReconstructor must set this field to True if training depends on the parameter value.

Hyper parameter search fields:

'range' : (float, float), optional

Interval of valid values. If this field is set, the parameter is taken to be real-valued. Either 'range' or 'choices' has to be set.

'choices' : sequence, optional

Sequence of valid values of any type. If this field is set, 'range' is ignored. Can be used to perform manual grid search. Either 'range' or 'choices' has to be set.

'method' : {'grid_search', 'hyperopt'}, optional

Optimization method for the parameter. Default: 'grid_search'. Options are:

'grid_search'

Grid search over a sequence of fixed values. Can be configured by the dict 'grid_search_options'.

'hyperopt'

Random search using the hyperopt package. Can be configured by the dict 'hyperopt_options'.

'grid_search_options' : dict

Option dict for grid search.

The following fields determine how 'range' is sampled (in case it is specified and no 'choices' are specified):

'num_samples' : int, optional

Number of values. Default: 10.

'type' : {'linear', 'logarithmic'}, optional

Type of grid, i.e. distribution of the values. Default: 'linear'. Options are:

'linear'

Equidistant values in the 'range'.

'logarithmic'

Values in the 'range' that are equidistant in the log scale.

'log_base' : int, optional

Log-base that is used if 'type' is 'logarithmic'. Default: 10.
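The sampling of 'range' described by these fields can be sketched as follows (a minimal illustration of the documented behavior, not dival's internal code; the function name sample_range is made up for this example):

```python
import math

def sample_range(lo, hi, num_samples=10, type='linear', log_base=10):
    """Sample a hyper parameter 'range' as a grid of values (sketch)."""
    if type == 'linear':
        # 'linear': equidistant values in [lo, hi]
        step = (hi - lo) / (num_samples - 1)
        return [lo + i * step for i in range(num_samples)]
    elif type == 'logarithmic':
        # 'logarithmic': values in [lo, hi] that are equidistant in log scale
        log_lo = math.log(lo, log_base)
        log_hi = math.log(hi, log_base)
        step = (log_hi - log_lo) / (num_samples - 1)
        return [log_base ** (log_lo + i * step) for i in range(num_samples)]
    raise ValueError("type must be 'linear' or 'logarithmic'")

linear_grid = sample_range(0.0, 1.0, num_samples=5)
log_grid = sample_range(1e-4, 1e-1, num_samples=4, type='logarithmic')
```

Here linear_grid is [0.0, 0.25, 0.5, 0.75, 1.0], while log_grid spaces its four values one decade apart, which is the usual choice for a learning-rate search.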

'hyperopt_options' : dict

Option dict for 'hyperopt' method with the fields:

'space' : hyperopt space, optional

Custom hyperopt search space. If this field is set, 'range' and 'type' are ignored.

'type' : {'uniform'}, optional

Type of the space for sampling. Default: 'uniform'. Options are:

'uniform'

Uniform distribution over the 'range'.
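Putting the fields together, a search-ready HYPER_PARAMS entry might look like this (an illustrative spec assembled from the fields documented above, not the actual spec of FBPUNetReconstructor):

```python
HYPER_PARAMS = {
    # real-valued parameter: searched over 'range' by grid search
    'lr': {
        'default': 0.001,
        'retrain': True,
        'range': (1e-5, 1e-1),
        'method': 'grid_search',
        'grid_search_options': {
            'num_samples': 5,
            'type': 'logarithmic',   # equidistant in log scale
            'log_base': 10,
        },
    },
    # discrete parameter: 'choices' takes precedence over 'range'
    'scheduler': {
        'default': 'cosine',
        'retrain': True,
        'choices': ['base', 'cosine'],
        'method': 'grid_search',
    },
}
```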

__init__(ray_trafo, allow_multiple_workers_without_random_access=False, **kwargs)[source]
Parameters:
  • ray_trafo (odl.tomo.RayTransform) – Ray transform (the forward operator).

  • allow_multiple_workers_without_random_access (bool, optional) – Whether a specification of num_data_loader_workers > 1 is honored for datasets that do not support random access. If False (the default), the value is overridden by 1 for generator-only datasets.

  • Further keyword arguments are passed to super().__init__().
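A typical construction and training flow might look as follows (a hedged sketch, not a verbatim dival example: the 'lodopab' dataset name, the dataset's ray_trafo attribute, and the chosen hyper parameter values are assumptions about a standard dival setup; the dival call sequence is guarded so the sketch also runs where dival and its torch/odl dependencies are not installed):

```python
# hyper parameter overrides, keyed by the names in HYPER_PARAMS
hyper_params = {
    'scales': 5,
    'epochs': 20,
    'lr': 1e-3,
    'scheduler': 'cosine',   # one of HYPER_PARAMS['scheduler']['choices']
}

try:
    from dival import get_standard_dataset
    from dival.reconstructors.fbpunet_reconstructor import (
        FBPUNetReconstructor)

    dataset = get_standard_dataset('lodopab')
    reconstructor = FBPUNetReconstructor(
        dataset.ray_trafo, hyper_params=hyper_params)
    reconstructor.train(dataset)          # uses train and validation parts
    obs, gt = dataset.get_sample(0, part='test')
    reco = reconstructor.reconstruct(obs)
except ImportError:
    # dival not available; the sketch above documents the intended
    # call sequence only
    pass
```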

train(dataset)[source]

Train the reconstructor with a dataset by adapting its parameters.

Should only use the training and validation data from dataset.

Parameters:

dataset (Dataset) – The dataset from which the training data should be used.

init_model()[source]

Initialize model. Called in train() after calling init_transform(), but before calling init_optimizer() and init_scheduler().

init_scheduler(dataset_train)[source]

Initialize the learning rate scheduler. Called in train(), after calling init_transform(), init_model() and init_optimizer().

Parameters:

dataset_train (torch.utils.data.Dataset) – The training (torch) dataset constructed in train().

property batch_size
property channels
property epochs
property filter_type
property frequency_scaling
property init_bias_zero
property lr
property lr_min
property normalize_by_opnorm
property scales
property scheduler

torch learning rate scheduler: The scheduler, usually set by init_scheduler(), which is called in train().

property skip_channels
property use_sigmoid
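The properties above expose the entries of HYPER_PARAMS as attributes. The general mechanism can be sketched roughly like this (a simplified illustration, not dival's exact implementation; ReconstructorSketch is made up for this example):

```python
class ReconstructorSketch:
    HYPER_PARAMS = {'lr': {'default': 0.001, 'retrain': True}}

    def __init__(self, hyper_params=None):
        # start from the defaults in HYPER_PARAMS, then apply overrides
        self.hyper_params = {k: v['default']
                             for k, v in self.HYPER_PARAMS.items()}
        self.hyper_params.update(hyper_params or {})

    @property
    def lr(self):
        # reads forward to the hyper_params dict
        return self.hyper_params['lr']

    @lr.setter
    def lr(self, value):
        # writes forward to the hyper_params dict as well
        self.hyper_params['lr'] = value

r = ReconstructorSketch()
r.lr = 0.0005
```

Reading or assigning such a property and indexing the hyper_params dict are thus two views of the same state.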