dival package
Subpackages
- dival.datasets package
- Submodules
- Module contents
get_standard_dataset()
Dataset
Dataset.space
Dataset.shape
Dataset.train_len
Dataset.validation_len
Dataset.test_len
Dataset.random_access
Dataset.num_elements_per_sample
Dataset.standard_dataset_name
Dataset.__init__()
Dataset.generator()
Dataset.get_train_generator()
Dataset.get_validation_generator()
Dataset.get_test_generator()
Dataset.get_len()
Dataset.get_train_len()
Dataset.get_validation_len()
Dataset.get_test_len()
Dataset.get_shape()
Dataset.get_num_elements_per_sample()
Dataset.get_data_pairs()
Dataset.get_data_pairs_per_index()
Dataset.create_torch_dataset()
Dataset.create_keras_generator()
Dataset.get_sample()
Dataset.get_samples()
Dataset.supports_random_access()
GroundTruthDataset
ObservationGroundTruthPairDataset
EllipsesDataset
LoDoPaBDataset
LoDoPaBDataset.space
LoDoPaBDataset.shape
LoDoPaBDataset.train_len
LoDoPaBDataset.validation_len
LoDoPaBDataset.test_len
LoDoPaBDataset.random_access
LoDoPaBDataset.num_elements_per_sample
LoDoPaBDataset.ray_trafo
LoDoPaBDataset.sorted_by_patient
LoDoPaBDataset.rel_patient_ids
LoDoPaBDataset.__init__()
LoDoPaBDataset.generator()
LoDoPaBDataset.get_ray_trafo()
LoDoPaBDataset.get_sample()
LoDoPaBDataset.get_samples()
LoDoPaBDataset.get_indices_for_patient()
LoDoPaBDataset.check_for_lodopab()
LoDoPaBDataset.get_num_patients()
LoDoPaBDataset.get_patient_ids()
LoDoPaBDataset.get_idx_sorted_by_patient()
AngleSubsetDataset
CachedDataset
generate_cache_files()
FBPDataset
ReorderedDataset
- dival.reconstructors package
- Subpackages
- Submodules
- dival.reconstructors.dip_ct_reconstructor module
- dival.reconstructors.fbpunet_reconstructor module
- dival.reconstructors.iradonmap_reconstructor module
- dival.reconstructors.learnedgd_reconstructor module
- dival.reconstructors.learnedpd_reconstructor module
- dival.reconstructors.odl_reconstructors module
- dival.reconstructors.reconstructor module
- dival.reconstructors.regression_reconstructors module
- dival.reconstructors.standard_learned_reconstructor module
- dival.reconstructors.tvadam_ct_reconstructor module
- Module contents
Reconstructor
Reconstructor.reco_space
Reconstructor.observation_space
Reconstructor.name
Reconstructor.hyper_params
Reconstructor.HYPER_PARAMS
Reconstructor.__init__()
Reconstructor.reconstruct()
Reconstructor.save_hyper_params()
Reconstructor.load_hyper_params()
Reconstructor.save_params()
Reconstructor.load_params()
IterativeReconstructor
StandardIterativeReconstructor
LearnedReconstructor
FunctionReconstructor
- dival.util package
- Submodules
- dival.util.constants module
- dival.util.download module
- dival.util.input module
- dival.util.odl_noise_random_state module
- dival.util.odl_utility module
- dival.util.plot module
- dival.util.std_out_err_redirect_tqdm module
- dival.util.torch_losses module
- dival.util.torch_utility module
- dival.util.zenodo_download module
- Module contents
- Submodules
Submodules
Module contents
- dival.get_config(key_path='/')[source]
Return (sub-)configuration stored in the config file. Note that values may differ from the current CONFIG variable if it was manipulated directly.
- Parameters:
key_path (str, optional) – '/'-separated path to sub-configuration. Default is '/', which returns the full configuration dict.
- Returns:
(sub-)configuration, either a dict or a value
- Return type:
sub_config
- dival.set_config(key_path, value, verbose=True)[source]
Updates (sub-)configuration both in the CONFIG variable and in the config file.
- Parameters:
key_path (str, optional) – '/'-separated path to sub-configuration. Pass '/' to replace the full configuration dict.
value (object) – (sub-)configuration value. Either a dict, which is copied, or a value.
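A brief usage sketch of the two configuration helpers; the key path 'lodopab_dataset/data_path' below is only an assumed example of an entry in the config file:

```python
from dival import get_config, set_config

# Read the full configuration dict (default key_path '/').
config = get_config()

# Read a sub-configuration; this key path is a hypothetical example.
data_path = get_config('lodopab_dataset/data_path')

# Update a single entry, both in the CONFIG variable and in the config file.
set_config('lodopab_dataset/data_path', '/path/to/lodopab', verbose=True)
```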
- class dival.DataPairs(observations, ground_truth=None, name='', description='')[source]
Bases:
object
Bundles observations with ground_truth. Implements __getitem__() and __len__().
- observations
The observations, possibly distorted or low-dimensional.
- Type:
list of observation space elements
- ground_truth
The ground truth data (may be replaced with good quality references). If not known, it may be omitted (None).
- Type:
list of reconstruction space elements or None
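A minimal sketch of constructing a DataPairs object from plain arrays (in practice the elements would come from a Dataset); the last line assumes __getitem__() yields an (observation, ground_truth) pair:

```python
import numpy as np
from dival import DataPairs

# Two handmade observation/ground-truth pairs with arbitrary shapes.
observations = [np.random.rand(30, 183), np.random.rand(30, 183)]
ground_truth = [np.random.rand(128, 128), np.random.rand(128, 128)]

test_data = DataPairs(observations, ground_truth,
                      name='demo_pairs',
                      description='two random demo pairs')
print(len(test_data))      # 2, via __len__()
obs0, gt0 = test_data[0]   # via __getitem__()
```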
- class dival.Dataset(space=None)[source]
Bases:
object
Dataset base class.
Subclasses must either implement generator() or provide random access by implementing get_sample() and get_samples() (which then should be indicated by setting the attribute random_access = True).
- space
The spaces of the elements of samples as a tuple. If only one element per sample is provided, this attribute is the space of the element (i.e., no tuple). It is strongly recommended to set this attribute in subclasses, as some functionality may depend on it.
- Type:
[tuple of ]
odl.space.base_tensors.TensorSpace
or None
- shape
The shapes of the elements of samples as a tuple of tuple of int. If only one element per sample is provided, this attribute is the shape of the element (i.e., not a tuple of tuple of int, but a tuple of int).
- Type:
[tuple of ] tuple of int, optional
- train_len
Number of training samples.
- Type:
int, optional
- validation_len
Number of validation samples.
- Type:
int, optional
- test_len
Number of test samples.
- Type:
int, optional
- random_access
Whether the dataset supports random access via self.get_sample and self.get_samples. Setting this attribute is the preferred way for subclasses to indicate whether they support random access.
- Type:
bool, optional
- num_elements_per_sample
Number of elements per sample. E.g. 1 for a ground truth dataset or 2 for a dataset of pairs of observation and ground truth.
- Type:
int, optional
- standard_dataset_name
Datasets returned by get_standard_dataset have this attribute giving its name.
- Type:
str, optional
- __init__(space=None)[source]
The attributes that potentially should be set by the subclass are: space (can also be set by argument), shape, train_len, validation_len, test_len, random_access and num_elements_per_sample.
- Parameters:
space ([tuple of ] odl.space.base_tensors.TensorSpace, optional) – The spaces of the elements of samples as a tuple. If only one element per sample is provided, this attribute is the space of the element (i.e., no tuple). It is strongly recommended to set space in subclasses, as some functionality may depend on it.
- generator(part='train')[source]
Yield data.
The default implementation calls get_sample() if the dataset implements it (i.e., supports random access).
- Parameters:
part ({'train', 'validation', 'test'}, optional) – Whether to yield train, validation or test data. Default is 'train'.
- Yields:
data (odl element or tuple of odl elements) – Sample of the dataset.
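A sketch of iterating over the generator of a standard dataset; 'ellipses' and impl='astra_cpu' are assumed choices:

```python
from itertools import islice
from dival import get_standard_dataset

dataset = get_standard_dataset('ellipses', impl='astra_cpu')

# Yield the first three (observation, ground_truth) pairs of the validation part.
for observation, ground_truth in islice(dataset.generator(part='validation'), 3):
    print(observation.shape, ground_truth.shape)
```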
- get_len(part='train')[source]
Return the number of elements the generator will yield.
- Parameters:
part ({'train', 'validation', 'test'}, optional) – Whether to return the number of train, validation or test elements. Default is 'train'.
- get_shape()[source]
Return the shape of each element.
Returns shape if it is set. Otherwise, it is inferred from space (which is strongly recommended to be set in every subclass). If space is also not set, a NotImplementedError is raised.
- Returns:
shape
- Return type:
[tuple of ] tuple
- get_num_elements_per_sample()[source]
Return number of elements per sample.
Returns num_elements_per_sample if it is set. Otherwise, it is inferred from space (which is strongly recommended to be set in every subclass). If space is also not set, a NotImplementedError is raised.
- Returns:
num_elements_per_sample
- Return type:
int
- get_data_pairs(part='train', n=None)[source]
Return the first samples from a data part as a DataPairs object.
Only supports datasets with two elements per sample.
- Parameters:
part ({'train', 'validation', 'test'}, optional) – The data part. Default is 'train'.
n (int, optional) – Number of pairs (from the beginning). If None, all available data is used (the default).
- get_data_pairs_per_index(part='train', index=None)[source]
Return specific samples from a data part as a DataPairs object.
Only supports datasets with two elements per sample.
For datasets not supporting random access, samples are extracted from generator(), which can be computationally expensive.
- Parameters:
part ({'train', 'validation', 'test'}, optional) – The data part. Default is 'train'.
index (int or list of int, optional) – Indices of the samples in the data part. Default is [0].
- create_torch_dataset(part='train', reshape=None, transform=None)[source]
Create a torch dataset wrapper for one part of this dataset.
If supports_random_access() returns False, a subclass of torch.utils.data.IterableDataset is returned that fetches samples via generator(). Note: when using torch's DataLoader with multiple workers, you might want to configure the datasets individually for each worker; see the PyTorch docs on IterableDataset. For this purpose it can be useful to modify the wrapped dival dataset in worker_init_fn(), where it can be accessed via torch.utils.data.get_worker_info().dataset.dataset.
If supports_random_access() returns True, a subclass of torch.utils.data.Dataset is returned that retrieves samples using get_sample().
- Parameters:
part ({'train', 'validation', 'test'}, optional) – The data part. Default is 'train'.
reshape (tuple of (tuple or None), optional) – Shapes to which the elements of each sample will be reshaped. If None is passed for an element, no reshape is applied.
transform (callable, optional) – Transform to be applied on each sample, useful for augmentation. Default: None, i.e. no transform.
- Returns:
dataset – The torch dataset wrapping this dataset. The wrapped dival dataset is assigned to the attribute dataset.dataset.
- Return type:
torch.utils.data.Dataset or torch.utils.data.IterableDataset
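A sketch of wrapping the training part for use with torch's DataLoader; the reshape adds a channel dimension, and 'ellipses' with impl='astra_cpu' is an assumed dataset choice:

```python
from torch.utils.data import DataLoader
from dival import get_standard_dataset

dataset = get_standard_dataset('ellipses', impl='astra_cpu')
torch_dataset = dataset.create_torch_dataset(
    part='train',
    reshape=((1,) + dataset.space[0].shape,   # observation with channel dim
             (1,) + dataset.space[1].shape))  # ground truth with channel dim

# shuffle is only allowed for map-style datasets (random access supported)
loader = DataLoader(torch_dataset, batch_size=4,
                    shuffle=dataset.supports_random_access())
obs_batch, gt_batch = next(iter(loader))
print(obs_batch.shape, gt_batch.shape)
```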
- create_keras_generator(part='train', batch_size=1, shuffle=True, reshape=None)[source]
Create a keras data generator wrapper for one part of this dataset.
If supports_random_access() returns False, a generator wrapping generator() is returned. In this case no shuffling is performed, regardless of the passed shuffle parameter. Also, parallel data loading (with multiple workers) is not applicable.
If supports_random_access() returns True, a tf.keras.utils.Sequence is returned, which is implemented using get_sample(). For datasets that support parallel calls to get_sample(), the returned data generator (sequence) can be used by multiple workers.
- Parameters:
part ({'train', 'validation', 'test'}, optional) – The data part. Default is 'train'.
batch_size (int, optional) – Batch size. Default is 1.
shuffle (bool, optional) – Whether to shuffle samples each epoch. This option has no effect if supports_random_access() returns False, since in that case samples are fetched directly from generator(). The default is True.
reshape (tuple of (tuple or None), optional) – Shapes to which the elements of each sample will be reshaped. If None is passed for an element, no reshape is applied.
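A sketch of creating a keras data generator for the validation part; model is a placeholder for a compiled tf.keras model, and the channels-last reshape is an assumed convention:

```python
from dival import get_standard_dataset

dataset = get_standard_dataset('ellipses', impl='astra_cpu')
keras_gen = dataset.create_keras_generator(
    part='validation', batch_size=8, shuffle=True,
    reshape=(dataset.space[0].shape + (1,),   # channels-last observation
             dataset.space[1].shape + (1,)))  # channels-last ground truth

# model.fit(keras_gen, epochs=1)  # `model` must be provided by the user
```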
- get_sample(index, part='train', out=None)[source]
Get single sample by index.
- Parameters:
index (int) – Index of the sample.
part ({'train', 'validation', 'test'}, optional) – The data part. Default is 'train'.
out (array-like or tuple of (array-like or bool) or None) – Array(s) (or e.g. odl element(s)) to which the sample is written. A tuple should be passed if the dataset returns two or more arrays per sample (i.e. pairs, …). If a tuple element is a bool, it has the following meaning:
True
Create a new array and return it.
False
Do not return this array, i.e. None is returned.
- Returns:
sample – E.g. for a pair dataset: (array, None) if out=(True, False).
- Return type:
[tuple of ] (array-like or None)
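A sketch of random access with the out option; fixed_seeds=True is assumed here so that the synthetic 'ellipses' dataset is reproducible, and the random-access check guards the get_sample() calls:

```python
from dival import get_standard_dataset

dataset = get_standard_dataset('ellipses', impl='astra_cpu', fixed_seeds=True)

if dataset.supports_random_access():
    # Fetch both arrays of the pair with index 2.
    obs, gt = dataset.get_sample(2, part='train')
    # Only the observation is needed: out=(True, False) returns (array, None).
    obs_only, _ = dataset.get_sample(2, part='train', out=(True, False))
```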
- get_samples(key, part='train', out=None)[source]
Get samples by slice or range.
The default implementation calls get_sample() if the dataset implements it.
- Parameters:
key (slice or range) – Indices of the samples.
part ({'train', 'validation', 'test'}, optional) – The data part. Default is 'train'.
out (array-like or tuple of (array-like or bool) or None) – Array(s) (or e.g. odl element(s)) to which the samples are written. The first dimension must match the number of samples requested. A tuple should be passed if the dataset returns two or more arrays per sample (i.e. pairs, …). If a tuple element is a bool, it has the following meaning:
True
Create a new array and return it.
False
Do not return this array, i.e. None is returned.
- Returns:
samples – If the dataset has multiple arrays per sample, a tuple holding arrays is returned. E.g. for a pair dataset: (array, None) if out=(True, False). The samples are stacked in the first (additional) dimension of each array.
- Return type:
[tuple of ] (array-like or None)
- supports_random_access()[source]
Whether random access seems to be supported.
If the object has the attribute self.random_access, its value is returned (this is the preferred way for subclasses to indicate whether they support random access). Otherwise, a simple duck-type check is performed which tries to get the first sample by random access.
- Returns:
supports – True if the dataset supports random access, otherwise False.
- Return type:
bool
- dival.get_standard_dataset(name, **kwargs)[source]
Return a standard dataset by name.
The standard datasets are (currently):
'ellipses'
A typical synthetic CT dataset with ellipse phantoms.
EllipsesDataset is used as the ground truth dataset, a ray transform with parallel beam geometry using 30 angles is applied, and white Gaussian noise with a standard deviation of 2.5% (i.e. 0.025 * mean(abs(observation))) is added.
In order to avoid the inverse crime, the ground truth images of shape (128, 128) are upscaled by bilinear interpolation to a resolution of (400, 400) before the ray transform is applied (whose discretization is different from the one of ray_trafo).
- Attributes of the returned dataset:
- ray_trafo (odl.tomo.RayTransform)
Ray transform corresponding to the noiseless forward operator.
- get_ray_trafo(**kwargs) (function)
Function that returns a ray transform corresponding to the noiseless forward operator. Keyword arguments (e.g. impl) are forwarded to the RayTransform constructor.
'lodopab'
The LoDoPaB-CT dataset, which is documented in the Data Descriptor article https://www.nature.com/articles/s41597-021-00893-z and hosted on https://zenodo.org/record/3384092. It is a simulated low dose CT dataset based on real reconstructions from the LIDC-IDRI dataset.
The dataset contains 42895 pairs of images and projection data. For simulation, a ray transform with parallel beam geometry using 1000 angles and 513 detector pixels is used. Poisson noise corresponding to 4096 incident photons per pixel before attenuation is applied to the projection data.
- Attributes of the returned dataset:
- ray_trafo (odl.tomo.RayTransform)
Ray transform corresponding to the noiseless forward operator.
- Methods of the returned dataset:
- get_ray_trafo(**kwargs)
Function that returns a ray transform corresponding to the noiseless forward operator. Keyword arguments (e.g. impl) are forwarded to the RayTransform constructor.
- Parameters:
name (str) – Name of the dataset.
kwargs (dict) –
Keyword arguments. Supported parameters for the datasets are:
'ellipses'
- impl ({'skimage', 'astra_cpu', 'astra_cuda'}, optional)
Implementation passed to odl.tomo.RayTransform. Default: 'astra_cuda'.
- fixed_seeds (dict or bool, optional)
Seeds to use for random ellipse generation, passed to EllipsesDataset.__init__(). Default: False.
- fixed_noise_seeds (dict or bool, optional)
Seeds to use for noise generation, passed as noise_seeds to GroundTruthDataset.create_pair_dataset(). If True is passed (the default), the seeds {'train': 1, 'validation': 2, 'test': 3} are used.
'lodopab'
- num_angles (int, optional)
Number of angles to use from the full 1000 angles. Must be a divisor of 1000.
- observation_model ({'post-log', 'pre-log'}, optional)
The observation model to use. Default is 'post-log'.
- min_photon_count (float, optional)
Replacement value for a simulated photon count of zero. If observation_model == 'post-log', a value greater than zero is required in order to avoid undefined values. The default is 0.1, both for the 'post-log' and the 'pre-log' model.
- sorted_by_patient (bool, optional)
Whether to sort the samples by patient id. Useful to resplit the dataset. Default: False.
- impl ({'skimage', 'astra_cpu', 'astra_cuda'}, optional)
Implementation passed to odl.tomo.RayTransform. Default: 'astra_cuda'.
- Returns:
dataset – The standard dataset. It has an attribute standard_dataset_name that stores its name.
- Return type:
Dataset
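A minimal sketch of loading a standard dataset; impl='astra_cpu' is an assumed choice (the default 'astra_cuda' requires a CUDA-enabled astra installation):

```python
from dival import get_standard_dataset

dataset = get_standard_dataset('ellipses', impl='astra_cpu')
print(dataset.standard_dataset_name)   # 'ellipses'
print(dataset.space)                   # (observation space, reconstruction space)
print(dataset.get_train_len())         # number of training pairs

ray_trafo = dataset.ray_trafo          # noiseless forward operator
```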
- dival.get_reference_reconstructor(reconstructor_key_name_or_type, dataset_name, pretrained=True, **kwargs)[source]
Return a reference reconstructor.
- Parameters:
reconstructor_key_name_or_type (str or type) – Key name of configuration or reconstructor type.
dataset_name (str) – Standard dataset name.
pretrained (bool, optional) – Whether learned parameters should be loaded (if any). Default: True.
kwargs (dict) – Keyword arguments (passed to construct_reconstructor()). For CT configurations this includes the 'impl' used by odl.tomo.RayTransform.
- Raises:
RuntimeError – If parameter files are missing and the user chooses not to download.
- Returns:
reconstructor – The reference reconstructor.
- Return type:
Reconstructor
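A sketch of obtaining a pretrained reference reconstructor; 'learnedpd' is assumed to be an available configuration key name, missing parameter files would be downloaded after confirmation, and indexing the DataPairs object is assumed to yield an (observation, ground_truth) pair:

```python
from dival import get_reference_reconstructor, get_standard_dataset

dataset = get_standard_dataset('ellipses', impl='astra_cpu')
reconstructor = get_reference_reconstructor('learnedpd', 'ellipses',
                                            pretrained=True, impl='astra_cpu')

test_data = dataset.get_data_pairs('validation', 1)
observation, ground_truth = test_data[0]
reconstruction = reconstructor.reconstruct(observation)
```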
- class dival.Reconstructor(reco_space=None, observation_space=None, name='', hyper_params=None)[source]
Bases:
object
Abstract reconstructor base class.
There are two ways of implementing a Reconstructor subclass:
- Implement reconstruct(). It has to support optional in-place and out-of-place evaluation.
- Implement _reconstruct(). It must have one of the following signatures:
- _reconstruct(self, observation, out) (in-place)
- _reconstruct(self, observation) (out-of-place)
- _reconstruct(self, observation, out=None) (optional in-place)
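A minimal sketch of the second approach, using the out-of-place signature; AdjointReconstructor is a made-up example that simply applies the adjoint of a given odl operator:

```python
from dival import Reconstructor

class AdjointReconstructor(Reconstructor):
    """Hypothetical reconstructor that applies the adjoint operator."""

    def __init__(self, op, **kwargs):
        self.op = op
        super().__init__(reco_space=op.domain,
                         observation_space=op.range, **kwargs)

    def _reconstruct(self, observation):
        # out-of-place signature: a new reconstruction element is returned
        return self.op.adjoint(observation)
```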
The class attribute HYPER_PARAMS defines the hyper parameters of the reconstructor class. The current values for a reconstructor instance are given by the attribute hyper_params. Properties wrapping hyper_params are automatically created by the metaclass (for hyper parameter names that are valid identifiers), such that the hyper parameters can be written and read like instance attributes.
- reco_space
Reconstruction space.
- Type:
odl.discr.DiscretizedSpace
, optional
- observation_space
Observation space.
- Type:
odl.discr.DiscretizedSpace
, optional
- name
Name of the reconstructor.
- Type:
str
- hyper_params
Current hyper parameter values. Initialized automatically using the default values from HYPER_PARAMS (but may be overridden by hyper_params passed to __init__()). It is expected to have the same keys as HYPER_PARAMS. The values for these keys in this dict are wrapped by properties with the key as identifier (if possible), so an assignment to the property changes the value in this dict and vice versa.
- Type:
dict
- HYPER_PARAMS = {}
Specification of hyper parameters.
This class attribute is a dict that lists the hyper parameters of the reconstructor. It should not be hidden by an instance attribute of the same name (i.e. by assigning a value to self.HYPER_PARAMS in an instance of a subtype).
Note: in order to inherit HYPER_PARAMS from a super class, the subclass should create a deep copy of it, i.e. execute HYPER_PARAMS = copy.deepcopy(SuperReconstructorClass.HYPER_PARAMS) in the class body.
The keys of this dict are the names of the hyper parameters, and each value is a dict with the following fields.
Standard fields:
'default'
Default value.
'retrain' (bool, optional)
Whether training depends on the parameter. Default: False. Any custom subclass of LearnedReconstructor must set this field to True if training depends on the parameter value.
Hyper parameter search fields:
'range' ((float, float), optional)
Interval of valid values. If this field is set, the parameter is taken to be real-valued. Either 'range' or 'choices' has to be set.
'choices' (sequence, optional)
Sequence of valid values of any type. If this field is set, 'range' is ignored. Can be used to perform manual grid search. Either 'range' or 'choices' has to be set.
'method' ({'grid_search', 'hyperopt'}, optional)
Optimization method for the parameter. Default: 'grid_search'. Options are:
'grid_search'
Grid search over a sequence of fixed values. Can be configured by the dict 'grid_search_options'.
'hyperopt'
Random search using the hyperopt package. Can be configured by the dict 'hyperopt_options'.
'grid_search_options' (dict)
Option dict for grid search. The following fields determine how 'range' is sampled (in case it is specified and no 'choices' are specified):
'num_samples' (int, optional)
Number of values. Default: 10.
'type' ({'linear', 'logarithmic'}, optional)
Type of grid, i.e. distribution of the values. Default: 'linear'. Options are:
'linear'
Equidistant values in the 'range'.
'logarithmic'
Values in the 'range' that are equidistant in the log scale.
'log_base' (int, optional)
Log base that is used if 'type' is 'logarithmic'. Default: 10.
'hyperopt_options' (dict)
Option dict for the 'hyperopt' method with the fields:
'space' (hyperopt space, optional)
Custom hyperopt search space. If this field is set, 'range' and 'type' are ignored.
'type' ({'uniform'}, optional)
Type of the space for sampling. Default: 'uniform'. Options are:
'uniform'
Uniform distribution over the 'range'.
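A hedged illustration of such a specification for a hypothetical reconstructor with two hyper parameters; the field names follow the description above, while the parameter names and values are made up:

```python
HYPER_PARAMS = {
    'gamma': {
        'default': 1e-4,
        'range': (1e-7, 1e-1),
        'method': 'grid_search',
        'grid_search_options': {
            'num_samples': 10,
            'type': 'logarithmic',
            'log_base': 10,
        },
    },
    'iterations': {
        'default': 100,
        'retrain': False,
        'choices': [50, 100, 200],
    },
}
```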
- reconstruct(observation, out=None)[source]
Reconstruct input data from observation data.
The default implementation calls _reconstruct, automatically choosing in-place or out-of-place evaluation.
- Parameters:
observation (observation_space element-like) – The observation data.
out (reco_space element-like, optional) – Array to which the result is written (in-place evaluation). If None, a new array is created (out-of-place evaluation); in this case, the new array is initialized with zero before calling _reconstruct().
- Returns:
reconstruction – The reconstruction.
- Return type:
reco_space
element or out
- save_hyper_params(path)[source]
Save hyper parameters to a JSON file. See also load_hyper_params().
- Parameters:
path (str) – Path of the file in which the hyper parameters should be saved. The ending '.json' is automatically appended if not included.
- load_hyper_params(path)[source]
Load hyper parameters from a JSON file. See also save_hyper_params().
- Parameters:
path (str) – Path of the file in which the hyper parameters are stored. The ending '.json' is automatically appended if not included.
- save_params(path=None, hyper_params_path=None)[source]
Save all parameters to file. E.g. for learned reconstructors, both hyper parameters and learned parameters should be included. The purpose of this method, together with load_params(), is to define a unified way of saving and loading any kind of reconstructor. The default implementation calls save_hyper_params(). Subclasses must reimplement this method in order to include non-hyper parameters.
Implementations should derive a sensible default for hyper_params_path from path, such that all parameters can be saved and loaded by specifying only path. Recommended patterns are:
- if non-hyper parameters are stored in a single file and path specifies it without file ending: hyper_params_path=path + '_hyper_params.json'
- if non-hyper parameters are stored in a directory: hyper_params_path=os.path.join(path, 'hyper_params.json')
- if there are no non-hyper parameters, this default implementation can be used: hyper_params_path=path + '_hyper_params.json'
- Parameters:
path (str[, optional]) – Path at which all (non-hyper) parameters should be saved. This argument is required if the reconstructor has non-hyper parameters or hyper_params_path is omitted. If the reconstructor has non-hyper parameters, the implementation may interpret it as a file path or as a directory path for multiple files (the dir should be created by this method if it does not exist). If the implementation expects a file path, it should accept it without file ending.
hyper_params_path (str, optional) – Path of the file in which the hyper parameters should be saved. The ending '.json' is automatically appended if not included. If not specified, it should be determined from path (see method description above). The default implementation saves to the file path + '_hyper_params.json'.
- load_params(path=None, hyper_params_path=None)[source]
Load all parameters from file. E.g. for learned reconstructors, both hyper parameters and learned parameters should be included. The purpose of this method, together with save_params(), is to define a unified way of saving and loading any kind of reconstructor. The default implementation calls load_hyper_params(). Subclasses must reimplement this method in order to include non-hyper parameters.
See save_params() for recommended patterns to derive a default hyper_params_path from path.
- Parameters:
path (str[, optional]) – Path at which all (non-hyper) parameters are stored. This argument is required if the reconstructor has non-hyper parameters or hyper_params_path is omitted. If the reconstructor has non-hyper parameters, the implementation may interpret it as a file path or as a directory path for multiple files. If the implementation expects a file path, it should accept it without file ending.
hyper_params_path (str, optional) – Path of the file in which the hyper parameters are stored. The ending '.json' is automatically appended if not included. If not specified, it should be determined from path (see the description of save_params()). The default implementation reads from the file path + '_hyper_params.json'.
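A self-contained sketch of the default save/load behaviour; ZeroReconstructor is a made-up toy subclass, and the path prefix is arbitrary:

```python
import odl
from dival import Reconstructor

class ZeroReconstructor(Reconstructor):
    """Hypothetical toy reconstructor with a single hyper parameter."""
    HYPER_PARAMS = {'scaling': {'default': 1.0}}

    def _reconstruct(self, observation):
        return self.reco_space.zero()

space = odl.uniform_discr([-1, -1], [1, 1], (128, 128))
r = ZeroReconstructor(reco_space=space, observation_space=space, name='zero')
r.scaling = 2.0                       # hyper parameter via wrapped property
r.save_params('zero_reconstructor')   # writes 'zero_reconstructor_hyper_params.json'
r.load_params('zero_reconstructor')
```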
- class dival.IterativeReconstructor(callback=None, **kwargs)[source]
Bases:
Reconstructor
Iterative reconstructor base class. It is recommended to use StandardIterativeReconstructor as a base class for iterative reconstructors if suitable, as it provides some default implementations.
Subclasses must call callback after each iteration in self.reconstruct. This is e.g. required by the evaluation module.
- callback
Callback to be called after each iteration.
- Type:
odl.solvers.util.callback.Callback
or None
- HYPER_PARAMS = {'iterations': {'default': 100, 'retrain': False}}
Specification of hyper parameters.
This class attribute is a dict that lists the hyper parameters of the reconstructor. It should not be hidden by an instance attribute of the same name (i.e. by assigning a value to self.HYPER_PARAMS in an instance of a subtype).
Note: in order to inherit HYPER_PARAMS from a super class, the subclass should create a deep copy of it, i.e. execute HYPER_PARAMS = copy.deepcopy(SuperReconstructorClass.HYPER_PARAMS) in the class body.
The keys of this dict are the names of the hyper parameters, and each value is a dict with the following fields.
Standard fields:
'default'
Default value.
'retrain' (bool, optional)
Whether training depends on the parameter. Default: False. Any custom subclass of LearnedReconstructor must set this field to True if training depends on the parameter value.
Hyper parameter search fields:
'range' ((float, float), optional)
Interval of valid values. If this field is set, the parameter is taken to be real-valued. Either 'range' or 'choices' has to be set.
'choices' (sequence, optional)
Sequence of valid values of any type. If this field is set, 'range' is ignored. Can be used to perform manual grid search. Either 'range' or 'choices' has to be set.
'method' ({'grid_search', 'hyperopt'}, optional)
Optimization method for the parameter. Default: 'grid_search'. Options are:
'grid_search'
Grid search over a sequence of fixed values. Can be configured by the dict 'grid_search_options'.
'hyperopt'
Random search using the hyperopt package. Can be configured by the dict 'hyperopt_options'.
'grid_search_options' (dict)
Option dict for grid search. The following fields determine how 'range' is sampled (in case it is specified and no 'choices' are specified):
'num_samples' (int, optional)
Number of values. Default: 10.
'type' ({'linear', 'logarithmic'}, optional)
Type of grid, i.e. distribution of the values. Default: 'linear'. Options are:
'linear'
Equidistant values in the 'range'.
'logarithmic'
Values in the 'range' that are equidistant in the log scale.
'log_base' (int, optional)
Log base that is used if 'type' is 'logarithmic'. Default: 10.
'hyperopt_options' (dict)
Option dict for the 'hyperopt' method with the fields:
'space' (hyperopt space, optional)
Custom hyperopt search space. If this field is set, 'range' and 'type' are ignored.
'type' ({'uniform'}, optional)
Type of the space for sampling. Default: 'uniform'. Options are:
'uniform'
Uniform distribution over the 'range'.
- __init__(callback=None, **kwargs)[source]
- Parameters:
callback (
odl.solvers.util.callback.Callback
, optional) – Callback to be called after each iteration.
- reconstruct(observation, out=None, callback=None)[source]
Reconstruct input data from observation data.
Same as Reconstructor.reconstruct(), but with additional optional callback parameter.
- Parameters:
observation (observation_space element-like) – The observation data.
out (reco_space element-like, optional) – Array to which the result is written (in-place evaluation). If None, a new array is created (out-of-place evaluation).
callback (odl.solvers.util.callback.Callback, optional) – Additional callback for this reconstruction that is temporarily composed with callback, i.e. also called after each iteration. If None, just callback is called.
- Returns:
reconstruction – The reconstruction.
- Return type:
reco_space
element or out
- property iterations
- class dival.StandardIterativeReconstructor(x0=None, callback=None, **kwargs)[source]
Bases:
IterativeReconstructor
Standard iterative reconstructor base class.
Provides a default implementation that only requires subclasses to implement _compute_iterate() and optionally _setup().
- x0
Default initial value for the iterative reconstruction. Can be overridden by passing a different x0 to reconstruct().
- Type:
reco_space element-like or None
- callback
Callback that is called after each iteration.
- Type:
odl.solvers.util.callback.Callback
or None
- __init__(x0=None, callback=None, **kwargs)[source]
- Parameters:
x0 (reco_space element-like, optional) – Default initial value for the iterative reconstruction. Can be overridden by passing a different x0 to reconstruct().
callback (odl.solvers.util.callback.Callback, optional) – Callback that is called after each iteration.
- reconstruct(observation, out=None, x0=None, last_iter=0, callback=None)[source]
Reconstruct input data from observation data.
Same as Reconstructor.reconstruct(), but with additional options for iterative reconstructors.
- Parameters:
observation (observation_space element-like) – The observation data.
out (reco_space element-like, optional) – Array to which the result is written (in-place evaluation). If None, a new array is created (out-of-place evaluation).
x0 (reco_space element-like, optional) – Initial value for the iterative reconstruction. Overrides the attribute x0, which can be set when calling __init__(). If both x0 and this argument are None, the default implementation uses the value of out if called in-place, or zero if called out-of-place.
last_iter (int, optional) – If x0 is the result of an iteration by this method, this can be used to specify the number of iterations so far. The number of iterations for the current call is self.hyper_params['iterations'] - last_iter.
callback (odl.solvers.util.callback.Callback, optional) – Additional callback for this reconstruction that is temporarily composed with callback, i.e. also called after each iteration. If None, just callback is called.
- Returns:
reconstruction – The reconstruction.
- Return type:
reco_space
element or out
- property iterations
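A sketch of the extended reconstruct() interface; reconstructor and observation are placeholders for an instance of a StandardIterativeReconstructor subclass and a matching observation element:

```python
import odl

# 100 iterations, printing the iteration number after each one.
reconstructor.iterations = 100
reco = reconstructor.reconstruct(
    observation, callback=odl.solvers.CallbackPrintIteration())

# Continue with 50 more iterations, starting from the previous result.
reconstructor.iterations = 150
reco = reconstructor.reconstruct(observation, x0=reco, last_iter=100)
```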
- class dival.LearnedReconstructor(reco_space=None, observation_space=None, name='', hyper_params=None)[source]
Bases:
Reconstructor
- train(dataset)[source]
Train the reconstructor with a dataset by adapting its parameters.
Should only use the training and validation data from dataset.
- Parameters:
dataset (
Dataset
) – The dataset from which the training data should be used.
- save_params(path, hyper_params_path=None)[source]
Save all parameters to file.
Calls save_hyper_params() and save_learned_params(), where save_learned_params() should be implemented by the subclass.
This implementation assumes that path is interpreted as a single file name, preferably specified without file ending. If path is a directory, the subclass needs to reimplement this method in order to follow the recommended default value pattern: hyper_params_path=os.path.join(path, 'hyper_params.json').
- Parameters:
path (str) – Path at which the learned parameters should be saved. Passed to save_learned_params(). If the implementation interprets it as a file path, it is preferred to exclude the file ending (otherwise the default value of hyper_params_path is suboptimal).
hyper_params_path (str, optional) – Path of the file in which the hyper parameters should be saved. The ending '.json' is automatically appended if not included. If not specified, this implementation saves to the file path + '_hyper_params.json'.
- load_params(path, hyper_params_path=None)[source]
Load all parameters from file.
Calls load_hyper_params() and load_learned_params(), where load_learned_params() should be implemented by the subclass.
This implementation assumes that path is interpreted as a single file name, preferably specified without file ending. If path is a directory, the subclass needs to reimplement this method in order to follow the recommended default value pattern: hyper_params_path=os.path.join(path, 'hyper_params.json').
- Parameters:
path (str) – Path at which the parameters are stored. Passed to load_learned_params(). If the implementation interprets it as a file path, it is preferred to exclude the file ending (otherwise the default value of hyper_params_path is suboptimal).
hyper_params_path (str, optional) – Path of the file in which the hyper parameters are stored. The ending '.json' is automatically appended if not included. If not specified, this implementation reads from the file path + '_hyper_params.json'.
- save_learned_params(path)[source]
Save learned parameters to file.
- Parameters:
path (str) – Path at which the learned parameters should be saved. Implementations may interpret this as a file path or as a directory path for multiple files (which then should be created if it does not exist). If the implementation expects a file path, it should accept it without file ending.
- load_learned_params(path)[source]
Load learned parameters from file.
- Parameters:
path (str) – Path at which the learned parameters are stored. Implementations may interpret this as a file path or as a directory path for multiple files. If the implementation expects a file path, it should accept it without file ending.
- class dival.Measure(short_name=None)[source]
Bases:
ABC
Abstract base class for measures used for evaluation.
In subclasses, either __init__() should be inherited or it should call super().__init__() in order to register the short_name and to ensure it is unique.
- measure_type
The measure type. Measures with type 'distance' should attain small values if the reconstruction is good. Measures with type 'quality' should attain large values if the reconstruction is good.
- Type:
{'distance', 'quality'}
- short_name
Short name of the measure, used as identifier (key in
measure_dict
).- Type:
str
- name
Name of the measure.
- Type:
str
- description
Description of the measure.
- Type:
str
- measure_type = None
Class attribute, default value for
measure_type
.
- description = ''
Class attribute, default value for
description
.
- measure_dict = {'l2': <dival.measure.L2Measure object>, 'mse': <dival.measure.MSEMeasure object>, 'psnr': <dival.measure.PSNRMeasure object>, 'ssim': <dival.measure.SSIMMeasure object>}
Class attribute, registry of all measures with their
short_name
as key.
- __init__(short_name=None)[source]
- Parameters:
short_name (str, optional) – The short name of this measure, used as identifier in measure_dict. If None is passed and short_name was not set by the subclass, the __name__ of the subclass is used. If short_name is already taken by another instance, a unique short name is generated by appending a suffix of the format '_%d'.
- short_name = ''
Class attribute, default value for
short_name
.
- abstract apply(reconstruction, ground_truth)[source]
Calculate the value of this measure.
- Parameters:
reconstruction (odl element) – The reconstruction.
ground_truth (odl element) – The ground truth to compare with.
- Returns:
value – The value of this measure for the given reconstruction and ground_truth.
- Return type:
float
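A hedged sketch of a custom measure; MAEMeasure and its short name 'mae' are made up, and the inherited __init__() is relied on for registration in measure_dict:

```python
import numpy as np
from dival import Measure

class MAEMeasure(Measure):
    """Hypothetical measure: mean absolute error."""
    measure_type = 'distance'
    short_name = 'mae'
    description = 'Mean absolute difference between reconstruction and ground truth.'

    def apply(self, reconstruction, ground_truth):
        return float(np.mean(np.abs(np.asarray(reconstruction) -
                                    np.asarray(ground_truth))))

mae = MAEMeasure()   # registration happens in the inherited __init__()
print(Measure.get_by_short_name('mae') is mae)   # True
```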
- classmethod get_by_short_name(short_name)[source]
Return the Measure instance with the given short name by registry lookup.
- Parameters:
short_name (str) – Short name, identifier in measure_dict.
- Returns:
measure – The instance.
- Return type:
Measure
- class dival.TaskTable(name='')[source]
Bases:
object
Task table containing reconstruction tasks to evaluate.
- name
Name of the task table.
- Type:
str
- tasks
Tasks that shall be run. The fields of each dict are set from the parameters to append() (or append_all_combinations()). Cf. the documentation of append() for details.
- Type:
list of dict
- run(save_reconstructions=True, reuse_iterates=True, show_progress='text')[source]
Run all tasks and return the results.
The returned ResultTable object is also stored as results.
- Parameters:
save_reconstructions (bool, optional) – Whether the reconstructions should be saved in the results. The default is True.
If measures shall be applied after this method returns, it must be True.
If False, no iterates (intermediate reconstructions) will be saved, even if task['options']['save_iterates']==True.
reuse_iterates (bool, optional) – Whether to reuse iterates from other sub-tasks if possible. The default is True.
If there are sub-tasks whose hyper parameter choices differ only in the number of iterations of an IterativeReconstructor, only the sub-task with the maximum number of iterations is run, and the results for the other ones are determined by storing iterates if this option is True.
Note 1: If enabled, the callbacks assigned to the reconstructor will only be run for the sub-tasks with the maximum number of iterations mentioned above.
Note 2: If the reconstructor is non-deterministic, this option can affect the results, as the same realization is used for multiple sub-tasks.
show_progress (str, optional) – Whether and how to show progress. Options are:
- 'text' (default)
print a line before running each task
- 'tqdm'
show a progress bar with tqdm
- None
do not show progress
- Returns:
results – The results.
- Return type:
ResultTable
- append(reconstructor, test_data, measures=None, dataset=None, hyper_param_choices=None, options=None)[source]
Append a task.
- Parameters:
reconstructor (Reconstructor) – The reconstructor.
test_data (DataPairs) – The test data.
measures (sequence of (Measure or str), optional) – Measures that will be applied. Either Measure objects or their short names can be passed.
dataset (Dataset, optional) – The dataset that will be passed to reconstructor.train if it is a LearnedReconstructor.
hyper_param_choices (dict of list or list of dict, optional) – Choices of hyper parameter combinations to try as sub-tasks.
If a dict of lists is specified, all combinations of the list elements (cartesian product space) are tried.
If a list of dicts is specified, each dict is taken as a parameter combination to try.
The current parameter values are read from Reconstructor.hyper_params in the beginning and used as default values for all parameters not specified in the passed dicts. Afterwards, the original values are restored.
options (dict) – Options that will be used. Options are:
'skip_training' (bool, optional)
Whether to skip training. Can be used for manual training of reconstructors (or loading of a stored state). Default: False.
'save_best_reconstructor' (dict, optional)
If specified, save the best reconstructor from the sub-tasks (cf. hyper_param_choices) by calling Reconstructor.save_params(). For hyper_param_choices=None, the reconstructor from the single sub-task is saved. This option requires measures to be non-empty if there are multiple sub-tasks. The fields are:
'path' (str)
The path to save the best reconstructor at (argument to save_params()). Note that this path is used during execution of the task to store the best reconstructor params so far, so the file(s) are most likely updated multiple times.
'measure' (Measure or str, optional)
The measure used to define the "best" reconstructor (in terms of mean performance). Must be one of the measures. By default measures[0] is used. This field is ignored if there is only one sub-task.
'save_iterates' (bool, optional)
Whether to save the intermediate reconstructions of iterative reconstructors. Default: False. Will be ignored if save_reconstructions=False is passed to run. If reuse_iterates=True is passed to run and there are sub-tasks for which iterates are reused, these iterates are the same objects for all of those sub-tasks (i.e. no copies).
'save_iterates_measure_values' (bool, optional)
Whether to compute and save the measure values for each intermediate reconstruction of iterative reconstructors (the default is False).
'save_iterates_step' (int, optional)
Step size for 'save_iterates' and 'save_iterates_measure_values' (the default is 1).
- append_all_combinations(reconstructors, test_data, measures=None, datasets=None, hyper_param_choices=None, options=None)[source]
Append tasks of all combinations of test data, reconstructors and optionally datasets. The order is taken from the lists, with test data changing slowest and reconstructor changing fastest.
- Parameters:
reconstructors (list of Reconstructor) – Reconstructor list.
test_data (list of DataPairs) – Test data list.
measures (sequence of (Measure or str)) – Measures that will be applied. The same measures are used for all combinations of test data and reconstructors. Either Measure objects or their short names can be passed.
datasets (list of Dataset, optional) – Dataset list. Required if reconstructors contains at least one LearnedReconstructor.
hyper_param_choices (list of (dict of list or list of dict), optional) – Choices of hyper parameter combinations for each reconstructor, which are tried as sub-tasks. The i-th element of this list is used for the i-th reconstructor. See append for documentation of how the choices are passed.
options (dict) – Options that will be used. The same options are used for all combinations of test data and reconstructors. See append for documentation of the options.
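A hedged end-to-end sketch tying the pieces together; 'fbp' is assumed to be an available reference configuration, and 'filter_type'/'frequency_scaling' are assumed hyper parameter names of that reconstructor:

```python
from dival import get_standard_dataset, get_reference_reconstructor, TaskTable

dataset = get_standard_dataset('ellipses', impl='astra_cpu')
test_data = dataset.get_data_pairs('test', 5)
reconstructor = get_reference_reconstructor('fbp', 'ellipses', impl='astra_cpu')

task_table = TaskTable()
task_table.append(reconstructor=reconstructor, test_data=test_data,
                  measures=['psnr', 'ssim'],
                  hyper_param_choices={'filter_type': ['Ram-Lak', 'Hann'],
                                       'frequency_scaling': [0.8, 1.0]})
results = task_table.run(show_progress='text')
print(results)
```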