dival.reconstructors.learnedgd_reconstructor module

class dival.reconstructors.learnedgd_reconstructor.LearnedGDReconstructor(ray_trafo, **kwargs)[source]

Bases: StandardLearnedReconstructor

CT reconstructor applying a learned gradient descent iterative scheme.

Note that, unlike in the original paper [1], the weights are not shared across the blocks. This implementation instead follows https://github.com/adler-j/learned_primal_dual/blob/master/ellipses/learned_primal.py.

References

[1] J. Adler and O. Öktem, "Solving ill-posed inverse problems using iterative deep neural networks", Inverse Problems, 33(12):124007, 2017. https://doi.org/10.1088/1361-6420/aa9581
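Example (a hedged usage sketch assuming the dival standard dataset API; names such as get_standard_dataset, dataset.ray_trafo and get_sample are taken from dival, and exact signatures or keyword arguments may differ between versions):

    from dival import get_standard_dataset
    from dival.reconstructors.learnedgd_reconstructor import LearnedGDReconstructor

    dataset = get_standard_dataset('ellipses', impl='astra_cpu')
    ray_trafo = dataset.ray_trafo  # odl.tomo.RayTransform (forward operator)

    reconstructor = LearnedGDReconstructor(
        ray_trafo,
        hyper_params={'epochs': 5, 'lr': 0.001})  # override selected defaults

    reconstructor.train(dataset)                  # supervised training
    obs, gt = dataset.get_sample(0, part='test')  # (observation, ground truth)
    reco = reconstructor.reconstruct(obs)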
HYPER_PARAMS = {
    'batch_norm': {'default': False, 'retrain': True},
    'batch_size': {'default': 32, 'retrain': True},
    'epochs': {'default': 20, 'retrain': True},
    'init_fbp': {'default': True, 'retrain': True},
    'init_filter_type': {'default': 'Hann', 'retrain': True},
    'init_frequency_scaling': {'default': 0.4, 'retrain': True},
    'init_weight_gain': {'default': 1.0, 'retrain': True},
    'init_weight_xavier_normal': {'default': False, 'retrain': True},
    'internal_ch': {'default': 32, 'retrain': True},
    'kernel_size': {'default': 3, 'retrain': True},
    'lr': {'default': 0.01, 'retrain': True},
    'lr_time_decay_rate': {'default': 3.2, 'retrain': True},
    'lrelu_coeff': {'default': 0.2, 'retrain': True},
    'niter': {'default': 5, 'retrain': True},
    'nlayer': {'default': 3, 'retrain': True},
    'normalize_by_opnorm': {'default': True, 'retrain': True},
    'prelu': {'default': False, 'retrain': True},
    'use_sigmoid': {'default': False, 'retrain': True},
}

Specification of hyper parameters.

This class attribute is a dict that lists the hyper parameters of the reconstructor. It should not be hidden by an instance attribute of the same name (i.e. by assigning a value to self.HYPER_PARAMS in an instance of a subtype).

Note: in order to inherit HYPER_PARAMS from a super class, the subclass should create a deep copy of it, i.e. execute HYPER_PARAMS = copy.deepcopy(SuperReconstructorClass.HYPER_PARAMS) in the class body.
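For example, a subclass could inherit and adjust the specification like this (a sketch; MyLearnedGDReconstructor and the changed default are purely illustrative):

    import copy

    from dival.reconstructors.learnedgd_reconstructor import LearnedGDReconstructor

    class MyLearnedGDReconstructor(LearnedGDReconstructor):
        # deep copy, so the parent class' HYPER_PARAMS dict is not mutated
        HYPER_PARAMS = copy.deepcopy(LearnedGDReconstructor.HYPER_PARAMS)
        HYPER_PARAMS['lr']['default'] = 0.001  # illustrative override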

The keys of this dict are the names of the hyper parameters, and each value is a dict with the following fields (an illustrative entry combining them is given after this specification).

Standard fields:

'default'

Default value.

'retrain' : bool, optional

Whether training depends on the parameter. Default: False. Any custom subclass of LearnedReconstructor must set this field to True if training depends on the parameter value.

Hyper parameter search fields:

'range' : (float, float), optional

Interval of valid values. If this field is set, the parameter is taken to be real-valued. Either 'range' or 'choices' has to be set.

'choices' : sequence, optional

Sequence of valid values of any type. If this field is set, 'range' is ignored. Can be used to perform manual grid search. Either 'range' or 'choices' has to be set.

'method' : {'grid_search', 'hyperopt'}, optional

Optimization method for the parameter. Default: 'grid_search'. Options are:

'grid_search'

Grid search over a sequence of fixed values. Can be configured by the dict 'grid_search_options'.

'hyperopt'

Random search using the hyperopt package. Can be configured by the dict 'hyperopt_options'.

'grid_search_options' : dict

Option dict for grid search.

The following fields determine how 'range' is sampled (if 'range' is specified and 'choices' is not):

'num_samples' : int, optional

Number of values. Default: 10.

'type' : {'linear', 'logarithmic'}, optional

Type of grid, i.e. distribution of the values. Default: 'linear'. Options are:

'linear'

Equidistant values in the 'range'.

'logarithmic'

Values in the 'range' that are equidistant in the log scale.

'log_base' : int, optional

Log-base that is used if 'type' is 'logarithmic'. Default: 10.

'hyperopt_options' : dict

Option dict for the 'hyperopt' method, with the following fields:

'space' : hyperopt space, optional

Custom hyperopt search space. If this field is set, 'range' and 'type' are ignored.

'type' : {'uniform'}, optional

Type of the space for sampling. Default: 'uniform'. Options are:

'uniform'

Uniform distribution over the 'range'.
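Putting the fields together, a single hypothetical entry might look as follows (the values are illustrative, not the actual defaults used by this class):

    # hypothetical HYPER_PARAMS entry illustrating standard and search fields
    HYPER_PARAMS = {
        'lr': {
            'default': 0.01,
            'retrain': True,                # training depends on this value
            'range': (1e-4, 1e-1),          # real-valued search interval
            'method': 'grid_search',
            'grid_search_options': {
                'num_samples': 4,
                'type': 'logarithmic',      # equidistant on a log scale
                'log_base': 10,
            },
        },
    }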

__init__(ray_trafo, **kwargs)[source]
Parameters:
  • ray_trafo (odl.tomo.RayTransform) – Ray transform (the forward operator).

Further keyword arguments are passed to super().__init__().

init_model()[source]

Initialize model. Called in train() after calling init_transform(), but before calling init_optimizer() and init_scheduler().
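A subclass would typically override this hook to build the network and assign it to self.model (a sketch; the self.model and self.device attributes are assumed to follow the StandardLearnedReconstructor convention, and the tiny network is purely illustrative):

    import torch.nn as nn

    from dival.reconstructors.learnedgd_reconstructor import LearnedGDReconstructor

    class MyReconstructor(LearnedGDReconstructor):
        def init_model(self):
            # assign the (illustrative) network before training starts
            self.model = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1),
                nn.LeakyReLU(0.2),
                nn.Conv2d(32, 1, 3, padding=1),
            ).to(self.device)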

property batch_norm
property batch_size
property epochs
property init_fbp
property init_filter_type
property init_frequency_scaling
property init_weight_gain
property init_weight_xavier_normal
property internal_ch
property kernel_size
property lr
property lr_time_decay_rate
property lrelu_coeff
property niter
property nlayer
property normalize_by_opnorm
property prelu
property use_sigmoid
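Each of these properties is assumed to read and write the entry of the same name in hyper_params (the usual dival convention), so the following two assignments would be equivalent:

    reconstructor.lr = 0.001                   # via the property
    reconstructor.hyper_params['lr'] = 0.001   # via the dict directly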