ACR Phantoms#

Overview#

New in version 3.2.

Warning

These algorithms have only a limited amount of testing data and results should be scrutinized. Further, the algorithm is more likely to change in the future when a more robust test suite is built up. If you’d like to submit data, enter it here.

The ACR module provides routines for automatically analyzing DICOM images of the ACR CT 464 phantom and Large MR phantom. It can load a folder or zip file of images, correcting for translational and rotational offsets.

Phantom reference information is drawn from the ACR CT solution article and the analysis is drawn from the ACR CT testing article. MR analysis is drawn from the ACR Guidance document.

Warning

Due to the rectangular ROIs used in the MRI phantom analysis, rotational error should be <= 1 degree. Translational errors of any reasonable amount are still accounted for.

Typical Use#

The ACR CT and MR analyses follow the same load/analyze/output pattern as the rest of the library. Unlike the CatPhan analysis, customization is not a goal, because the phantoms and analyses are much better defined; i.e. there is less of a use case for custom phantoms in this scenario. CT is used in most examples below but is interchangeable with the MRI class.

To use the ACR analysis, import the class:

from pylinac import ACRCT, ACRMRILarge

And then load, analyze, and view the results:

  • Load images – Loading can be done with a directory or zip file:

    acr_ct_folder = r"C:/CT/ACR/Sept 2021"
    ct = ACRCT(acr_ct_folder)
    acr_mri_folder = r"C:/MRI/ACR/Sept 2021"
    mri = ACRMRILarge(acr_mri_folder)
    

    or load from zip:

    acr_ct_zip = r"C:/CT/ACR/Sept 2021.zip"
    ct = ACRCT.from_zip(acr_ct_zip)
    
  • Analyze – Analyze the dataset:

    ct.analyze()
    
  • View the results – Reviewing the results can be done in text or dict format as well as images:

    # print text to the console
    print(ct.results())
    # view analyzed image summary
    ct.plot_analyzed_image()
    # view images independently
    ct.plot_images()
    # save the summary image (filename of your choosing)
    ct.save_analyzed_image("acr_ct.png")
    # or save the individual images
    ct.save_images()
    # finally, save a PDF (filename of your choosing)
    ct.publish_pdf("acr_ct.pdf")
    

Choosing an MR Echo#

With MRI, a dual-echo scan can be obtained. The echoes may arrive as a combined DICOM dataset, but they are distinct acquisitions. To select between multiple echoes, use the echo_number parameter:

from pylinac import ACRMRILarge

mri = ACRMRILarge(...)  # load zip or dir with dual echo image set
mri.analyze(echo_number=2)
mri.results()

If no echo number is passed, the lowest echo number found is selected and analyzed.
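
If you are unsure which echoes are present, one option is to inspect the DICOM headers directly. The snippet below is a minimal sketch using pydicom; it is not part of pylinac and assumes the files end in .dcm and carry the EchoNumbers attribute:

from pathlib import Path

import pydicom

echoes = set()
for path in Path(r"C:/MRI/ACR/Sept 2021").glob("*.dcm"):
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    echo = getattr(ds, "EchoNumbers", None)  # absent on some sequences
    if echo is not None:
        echoes.add(int(echo))
print(sorted(echoes))  # e.g. [1, 2] for a dual-echo set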

Customizing MR/CT Modules#

To customize aspects of the MR analysis modules, subclass the relevant module and set the attribute in the analysis class. E.g. to customize the “Slice1” MR module:

from pylinac.acr import ACRMRILarge, MRSlice1Module


class Slice1Modified(MRSlice1Module):
    """Custom location for the slice thickness ROIs"""

    thickness_roi_settings = {
        "Top": {"width": 100, "height": 4, "distance": -3},
        "Bottom": {"width": 100, "height": 4, "distance": 2.5},
    }


# now pass to the MR analysis class
class MyMRI(ACRMRILarge):
    slice1 = Slice1Modified


# use as normal
mri = MyMRI(...)
mri.analyze(...)

There are 4 overridable modules in the ACR MRI Large analysis and 4 in the ACR CT analysis. The attribute name must stay the same, but the subclass itself can be named anything as long as it subclasses the original module:

class ACRMRILarge:
    # overload these as you wish. The attribute name cannot change.
    slice1 = MRSlice1Module
    geometric_distortion = GeometricDistortionModule
    uniformity_module = MRUniformityModule
    slice11 = MRSlice11PositionModule


class ACRCT:
    ct_calibration_module = CTModule
    low_contrast_module = LowContrastModule
    spatial_resolution_module = SpatialResolutionModule
    uniformity_module = UniformityModule
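
The pattern is identical for the CT class. Below is a minimal sketch; it assumes UniformityModule can be imported from pylinac.acr just like the MR modules above, and the subclass body is left as a placeholder:

from pylinac.acr import ACRCT, UniformityModule


class MyUniformityModule(UniformityModule):
    """Hypothetical subclass; override ROI settings here as needed."""


class MyCT(ACRCT):
    uniformity_module = MyUniformityModule


ct = MyCT(...)  # use as normal
ct.analyze()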

Customizing module offsets#

Customizing the module offsets of the ACR classes is easier than for the CatPhan classes. To do so, simply override the relevant constant like so:

import pylinac

pylinac.acr.MR_SLICE11_MODULE_OFFSET_MM = 95

mri = pylinac.ACRMRILarge(...)  # will use offset above

The available module offset constants and their default values are:

# CT
CT_UNIFORMITY_MODULE_OFFSET_MM = 70
CT_SPATIAL_RESOLUTION_MODULE_OFFSET_MM = 100
CT_LOW_CONTRAST_MODULE_OFFSET_MM = 30

# MR
MR_SLICE11_MODULE_OFFSET_MM = 100
MR_GEOMETRIC_DISTORTION_MODULE_OFFSET_MM = 40
MR_UNIFORMITY_MODULE_OFFSET_MM = 60

Advanced Use#

Using results_data#

Using the ACR module in your own scripts? While the analysis results can be printed out, if you intend to use them elsewhere (e.g. in an API), the easiest way to access them is the results_data() method, which returns an ACRCTResult instance. For MRI, results_data() returns an ACRMRIResult instance.

Continuing from above:

data = ct.results_data()
data.ct_module.roi_radius_mm
# and more

# return as a dict
data_dict = ct.results_data(as_dict=True)
data_dict["ct_module"]["roi_radius_mm"]
...
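
If the results need to be handed to another system (e.g. a web API), the dict form can be serialized. This is a minimal sketch, assuming every value in the dict is JSON-serializable:

import json

data_dict = ct.results_data(as_dict=True)
payload = json.dumps(data_dict)  # send or store as needed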

MRI Algorithm#

The ACR MR analysis is based on the official guidance document. Because the guidance document is extremely specific (nice job ACR!), only a few highlights are given here. The guidance is followed as closely as reasonably possible.

Allowances#

  • Multiple MR sequences can be present in the dataset.

  • The phantom can have significant Cartesian shifts.

Restrictions#

  • There should be 11 slices per scan (although multiple echo scans are allowed) per the guidance document (section 0.3).

  • The phantom should have very little pitch, yaw, or roll (<1 degree).

Analysis#

Section 0.4 specifies the 7 tests to perform. Pylinac can perform 6 of these 7. It cannot yet perform the low-contrast visibility test.

  • Geometric Accuracy - The geometric accuracy is measured using profiles of slice 5. The only difference from the guidance document is that pylinac uses the 60th percentile pixel value of the image as a high-pass filter so that minor background fluctuations are removed, and then takes the FWHM of several profiles of this filtered image. The width between the two pixels defining the FWHM is the diameter.

  • High Contrast - High contrast is hard to measure for the ACR MRI phantom because it uses offset dots rather than line pairs, and the guidance document describes scoring them only qualitatively. Pylinac measures high contrast by sampling a circular ROI on the left ROI (phantom right) set. This is the baseline to which all other measurements are normalized. The actual dot-ROIs are sampled by taking a circular ROI over the row-based set and the column-based set. Each row-based ROI is evaluated against the other row-based ROIs, and the same is done for column-based ROIs. The ROIs use the maximum and minimum pixel values inside the sample ROI. No dot-counting is performed.

    Tip

    It is suggested to perform the contrast measurement visually and compare to pylinac values to establish a cross-comparison ratio. After a ratio has been established, the pylinac MTF can be used as the baseline value moving forward.

  • Slice thickness - Slice thickness is measured using the FWHM of two rectangular ROIs. This is very similar to the guidance document explanation.

    Slice thickness is defined the same as in the guidance document (a numeric sketch of this formula appears after this list):

    \[Thickness = 0.2 * \frac{Top * Bottom}{Top + Bottom}\]
  • Slice Position - Slice position accuracy is measured very similarly to the manual method described in the document: “The display level setting … should be set to a level roughly half that of the signal in the bright, all-water portions of the phantom.” For each vertical bar, the pixel nearest to the mid-value between min and max of the rectangular ROI is used as the bar position:

    \[position_{bar} = \frac{ROI_{max} - ROI_{min}}{2} + ROI_{min}\]

    The difference in positions between the bars is the value reported.

  • Uniformity - Uniformity is measured using a circular ROI at the center of the phantom and ROIs to the top, bottom, left, and right of the phantom, very similar to the guidance document. (A numeric sketch of the PIU and ghosting formulas appears after this list.)

    The percent integral uniformity (PIU) is defined as:

    \[PIU = 100 * (1 - \frac{high-low}{high+low})\]

    Instead of using the WL/WW to find the low and high 1 cm² ROIs, pylinac uses the 1st and 99th percentiles of the pixel values inside the central ROI.

    The ghosting ratio is defined the same as the ACR guidance document:

    \[ghosting_{ratio} = |\frac{(top + bottom) - (left + right)}{2*ROI_{large}}|\]

    where all values are the median pixel values of their respective ROI. The percent-signal ghosting (PSG) is:

    \[PSG = ghosting_{ratio} * 100\]
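
For reference, below is a numeric sketch of the slice thickness and uniformity formulas above. It is not pylinac's implementation; the ROI statistics (FWHMs and median/percentile pixel values) are assumed to have been measured already:

def slice_thickness_mm(top_fwhm_mm: float, bottom_fwhm_mm: float) -> float:
    """Thickness = 0.2 * (Top * Bottom) / (Top + Bottom)."""
    return 0.2 * (top_fwhm_mm * bottom_fwhm_mm) / (top_fwhm_mm + bottom_fwhm_mm)


def percent_integral_uniformity(high: float, low: float) -> float:
    """PIU = 100 * (1 - (high - low) / (high + low))."""
    return 100 * (1 - (high - low) / (high + low))


def percent_signal_ghosting(
    top: float, bottom: float, left: float, right: float, large_roi: float
) -> float:
    """PSG = 100 * |((top + bottom) - (left + right)) / (2 * ROI_large)|."""
    return 100 * abs(((top + bottom) - (left + right)) / (2 * large_roi))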

API Documentation#

class pylinac.acr.ACRCT(folderpath: str | Sequence[str] | Path | Sequence[Path] | Sequence[BytesIO], check_uid: bool = True, memory_efficient_mode: bool = False)[source]#

Bases: CatPhanBase

Parameters#

folderpathstr, list of strings, or Path to folder

String that points to the CBCT image folder location.

check_uidbool

Whether to enforce raising an error if more than one UID is found in the dataset.

memory_efficient_modebool

Whether to use a memory efficient mode. If True, the DICOM stack will be loaded on demand rather than all at once. This will reduce the memory footprint but will be slower by ~25%. Default is False.

Raises#

NotADirectoryError

If folder str passed is not a valid directory.

FileNotFoundError

If no CT images are found in the folder

ct_calibration_module#

alias of CTModule

low_contrast_module#

alias of LowContrastModule

spatial_resolution_module#

alias of SpatialResolutionModule

uniformity_module#

alias of UniformityModule

plot_analyzed_subimage(*args, **kwargs)[source]#

Plot a specific component of the CBCT analysis.

Parameters#

subimage{‘hu’, ‘un’, ‘sp’, ‘lc’, ‘mtf’, ‘lin’, ‘prof’, ‘side’}

The subcomponent to plot. Values must contain one of the following letter combinations. E.g. linearity, linear, and lin will all draw the HU linearity values.

  • hu draws the HU linearity image.

  • un draws the HU uniformity image.

  • sp draws the Spatial Resolution image.

  • lc draws the Low Contrast image (if applicable).

  • mtf draws the RMTF plot.

  • lin draws the HU linearity values. Used with delta.

  • prof draws the HU uniformity profiles.

  • side draws the side view of the phantom with lines of the module locations.

deltabool

Only for use with lin. Whether to plot the HU delta or actual values.

showbool

Whether to actually show the plot.

save_analyzed_subimage(*args, **kwargs)[source]#

Save a component image to file.

Parameters#

filenamestr, file object

The file to write the image to.

subimagestr

See plot_analyzed_subimage() for parameter info.

deltabool

Only for use with lin. Whether to plot the HU delta or actual values.

analyze() None[source]#

Analyze the ACR CT phantom

plot_analyzed_image(show: bool = True, **plt_kwargs) Figure[source]#

Plot the analyzed image

Parameters#

show

Whether to show the image.

plt_kwargs

Keywords to pass to matplotlib for figure customization.

save_analyzed_image(filename: str | Path | BytesIO, **plt_kwargs) None[source]#

Save the analyzed image to disk or stream

Parameters#

filename

Where to save the image to

plt_kwargs

Keywords to pass to matplotlib for figure customization.

plot_images(show: bool = True, **plt_kwargs) dict[str, Figure][source]#

Plot all the individual images separately

Parameters#

show

Whether to show the images.

plt_kwargs

Keywords to pass to matplotlib for figure customization.

save_images(directory: Path | str | None = None, to_stream: bool = False, **plt_kwargs) list[Path | BytesIO][source]#

Save separate images to disk or stream.

Parameters#

directory

The directory to write the images to. If None, will use current working directory

to_stream

Whether to write to stream or disk. If True, will return streams. Directory is ignored in that scenario.

plt_kwargs

Keywords to pass to matplotlib for figure customization.

find_phantom_roll(func=<function ACRCT.<lambda>>) float[source]#

Determine the “roll” of the phantom.

The only difference from the base method is that we sort the ROIs by size rather than by centrality, since the two we're looking for are both right-sided.

results() str[source]#

Return the results of the analysis as a string. Use with print().

results_data(as_dict=False) ACRCTResult | dict[source]#

Present the results data and metadata as a dataclass or dict. The default return type is a dataclass.

publish_pdf(filename: str | Path, notes: str | None = None, open_file: bool = False, metadata: dict | None = None, logo: Path | str | None = None) None[source]#

Publish (print) a PDF containing the analysis and quantitative results.

Parameters#

filenamestr, file-like object

The file to write the results to.

notesstr, list of strings

Text; if str, prints single line. If list of strings, each list item is printed on its own line.

open_filebool

Whether to open the file using the default program after creation.

metadatadict

Extra data to be passed and shown in the PDF. The key and value will be shown with a colon. E.g. passing {'Author': 'James', 'Unit': 'TrueBeam'} would result in text in the PDF like "Author: James / Unit: TrueBeam".

logo: Path, str

A custom logo to use in the PDF report. If nothing is passed, the default pylinac logo is used.

property catphan_size: float#

The expected size of the phantom in pixels, based on a 20cm wide phantom.

find_origin_slice() int#

Using a brute force search of the images, find the median HU linearity slice.

This method walks through all the images and takes a collapsed circle profile where the HU linearity ROIs are. If the profile contains both low (<800) and high (>800) HU values and most values are the same (i.e. it’s not an artifact), then it can be assumed it is an HU linearity slice. The median of all applicable slices is the center of the HU slice.

Returns#

int

The middle slice of the HU linearity module.

find_phantom_axis()#

We fit all the center locations of the phantom across all slices to a 1D poly function instead of finding them individually for robustness.

Normally, each slice would be evaluated individually, but the RadMachine jig gets in the way of detecting the HU module (🤦‍♂️). To work around that in a backwards-compatible way we instead look at all the slices and if the phantom was detected, capture the phantom center. ALL the centers are then fitted to a 1D poly function and passed to the individual slices. This way, even if one slice is messed up (such as because of the phantom jig), the poly function is robust to give the real center based on all the other properly-located positions on the other slices.

classmethod from_demo_images()#

Construct a CBCT object from the demo images.

classmethod from_url(url: str, check_uid: bool = True)#

Instantiate a CBCT object from a URL pointing to a .zip object.

Parameters#

urlstr

URL pointing to a zip archive of CBCT images.

check_uidbool

Whether to enforce raising an error if more than one UID is found in the dataset.

classmethod from_zip(zip_file: str | zipfile.ZipFile | BinaryIO, check_uid: bool = True, memory_efficient_mode: bool = False)#

Construct a CBCT object and pass the zip file.

Parameters#

zip_filestr, ZipFile

Path to the zip file or a ZipFile object.

check_uidbool

Whether to enforce raising an error if more than one UID is found in the dataset.

memory_efficient_modebool

Whether to use a memory efficient mode. If True, the DICOM stack will be loaded on demand rather than all at once. This will reduce the memory footprint but will be slower by ~25%. Default is False.

Raises#

FileExistsError

If zip_file passed was not a legitimate zip file.

FileNotFoundError

If no CT images are found in the folder

localize() None#

Find the slice number of the catphan’s HU linearity module and roll angle

property mm_per_pixel: float#

The millimeters per pixel of the DICOM images.

property num_images: int#

The number of images loaded.

plot_side_view(axis: Axes) None#

Plot a view of the scan from the side with lines showing detected module positions

class pylinac.acr.ACRCTResult(phantom_model: str, phantom_roll_deg: float, origin_slice: int, num_images: int, ct_module: CTModuleOutput, uniformity_module: UniformityModuleOutput, low_contrast_module: LowContrastModuleOutput, spatial_resolution_module: SpatialResolutionModuleOutput)[source]#

Bases: ResultBase

This class should not be called directly. It is returned by the results_data() method. It is a dataclass under the hood and thus comes with all the dunder magic.

Use the following attributes as normal class attributes.

phantom_model: str#
phantom_roll_deg: float#
origin_slice: int#
num_images: int#
ct_module: CTModuleOutput#
uniformity_module: UniformityModuleOutput#
low_contrast_module: LowContrastModuleOutput#
spatial_resolution_module: SpatialResolutionModuleOutput#
class pylinac.acr.CTModuleOutput(offset: int, roi_distance_from_center_mm: int, roi_radius_mm: int, roi_settings: dict, rois: dict)[source]#

Bases: object

This class should not be called directly. It is returned by the results_data() method. It is a dataclass under the hood and thus comes with all the dunder magic.

Use the following attributes as normal class attributes.

class pylinac.acr.UniformityModuleOutput(offset: int, roi_distance_from_center_mm: int, roi_radius_mm: int, roi_settings: dict, rois: dict, center_roi_stdev: float)[source]#

Bases: CTModuleOutput

This class should not be called directly. It is returned by the results_data() method. It is a dataclass under the hood and thus comes with all the dunder magic.

Use the following attributes as normal class attributes.

center_roi_stdev: float#
class pylinac.acr.SpatialResolutionModuleOutput(offset: int, roi_distance_from_center_mm: int, roi_radius_mm: int, roi_settings: dict, rois: dict, lpmm_to_rmtf: dict)[source]#

Bases: CTModuleOutput

This class should not be called directly. It is returned by the results_data() method. It is a dataclass under the hood and thus comes with all the dunder magic.

Use the following attributes as normal class attributes.

lpmm_to_rmtf: dict#
class pylinac.acr.LowContrastModuleOutput(offset: int, roi_distance_from_center_mm: int, roi_radius_mm: int, roi_settings: dict, rois: dict, cnr: float)[source]#

Bases: CTModuleOutput

This class should not be called directly. It is returned by the results_data() method. It is a dataclass under the hood and thus comes with all the dunder magic.

Use the following attributes as normal class attributes.

cnr: float#
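
Continuing the Typical Use example, the documented output fields above can be read directly off the returned dataclass:

data = ct.results_data()
print(data.phantom_roll_deg)
print(data.low_contrast_module.cnr)
print(data.uniformity_module.center_roi_stdev)
print(data.spatial_resolution_module.lpmm_to_rmtf)
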
class pylinac.acr.ACRMRILarge(folderpath: str | Sequence[str] | Path | Sequence[Path] | Sequence[BytesIO], check_uid: bool = True, memory_efficient_mode: bool = False)[source]#

Bases: CatPhanBase

Parameters#

folderpathstr, list of strings, or Path to folder

String that points to the CBCT image folder location.

check_uidbool

Whether to enforce raising an error if more than one UID is found in the dataset.

memory_efficient_modebool

Whether to use a memory efficient mode. If True, the DICOM stack will be loaded on demand rather than all at once. This will reduce the memory footprint but will be slower by ~25%. Default is False.

Raises#

NotADirectoryError

If folder str passed is not a valid directory.

FileNotFoundError

If no CT images are found in the folder

slice1#

alias of MRSlice1Module

geometric_distortion#

alias of GeometricDistortionModule

uniformity_module#

alias of MRUniformityModule

slice11#

alias of MRSlice11PositionModule

plot_analyzed_subimage(*args, **kwargs)[source]#

Plot a specific component of the CBCT analysis.

Parameters#

subimage{‘hu’, ‘un’, ‘sp’, ‘lc’, ‘mtf’, ‘lin’, ‘prof’, ‘side’}

The subcomponent to plot. Values must contain one of the following letter combinations. E.g. linearity, linear, and lin will all draw the HU linearity values.

  • hu draws the HU linearity image.

  • un draws the HU uniformity image.

  • sp draws the Spatial Resolution image.

  • lc draws the Low Contrast image (if applicable).

  • mtf draws the RMTF plot.

  • lin draws the HU linearity values. Used with delta.

  • prof draws the HU uniformity profiles.

  • side draws the side view of the phantom with lines of the module locations.

deltabool

Only for use with lin. Whether to plot the HU delta or actual values.

showbool

Whether to actually show the plot.

save_analyzed_subimage(*args, **kwargs)[source]#

Save a component image to file.

Parameters#

filenamestr, file object

The file to write the image to.

subimagestr

See plot_analyzed_subimage() for parameter info.

deltabool

Only for use with lin. Whether to plot the HU delta or actual values.

localize() None[source]#

Find the slice number of the catphan’s HU linearity module and roll angle

find_phantom_roll() float[source]#

Determine the “roll” of the phantom. This algorithm uses the circular left-upper hole on slice 1 as the reference

Returns#

float : the angle of the phantom in degrees.

analyze(echo_number: int | None = None) None[source]#

Analyze the ACR MRI phantom

Parameters#

echo_number:

The echo to analyze. If not passed, uses the minimum echo number found.

plot_analyzed_image(show: bool = True, **plt_kwargs) Figure[source]#

Plot the analyzed image

Parameters#

show

Whether to show the image.

plt_kwargs

Keywords to pass to matplotlib for figure customization.

plot_images(show: bool = True, **plt_kwargs) dict[str, Figure][source]#

Plot all the individual images separately

Parameters#

show

Whether to show the images.

plt_kwargs

Keywords to pass to matplotlib for figure customization.

save_images(directory: Path | str | None = None, to_stream: bool = False, **plt_kwargs) list[Path | BytesIO][source]#

Save separate images to disk or stream.

Parameters#

directory

The directory to write the images to. If None, will use current working directory

to_stream

Whether to write to stream or disk. If True, will return streams. Directory is ignored in that scenario.

plt_kwargs

Keywords to pass to matplotlib for figure customization.

publish_pdf(filename: str | Path, notes: str | None = None, open_file: bool = False, metadata: dict | None = None, logo: Path | str | None = None) None[source]#

Publish (print) a PDF containing the analysis and quantitative results.

Parameters#

filenamestr, file-like object

The file to write the results to.

notesstr, list of strings

Text; if str, prints single line. If list of strings, each list item is printed on its own line.

open_filebool

Whether to open the file using the default program after creation.

metadatadict

Extra data to be passed and shown in the PDF. The key and value will be shown with a colon. E.g. passing {'Author': 'James', 'Unit': 'TrueBeam'} would result in text in the PDF like "Author: James / Unit: TrueBeam".

logo: Path, str

A custom logo to use in the PDF report. If nothing is passed, the default pylinac logo is used.

results(as_str: bool = True) str | tuple[source]#

Return the results of the analysis as a string. Use with print().

results_data(as_dict: bool = False) ACRMRIResult | dict[source]#

Present the results data and metadata as a dataclass or dict. The default return type is a dataclass.

property catphan_size: float#

The expected size of the phantom in pixels, based on a 20cm wide phantom.

find_origin_slice() int#

Using a brute force search of the images, find the median HU linearity slice.

This method walks through all the images and takes a collapsed circle profile where the HU linearity ROIs are. If the profile contains both low (<800) and high (>800) HU values and most values are the same (i.e. it’s not an artifact), then it can be assumed it is an HU linearity slice. The median of all applicable slices is the center of the HU slice.

Returns#

int

The middle slice of the HU linearity module.

find_phantom_axis()#

We fit all the center locations of the phantom across all slices to a 1D poly function instead of finding them individually for robustness.

Normally, each slice would be evaluated individually, but the RadMachine jig gets in the way of detecting the HU module (🤦‍♂️). To work around that in a backwards-compatible way we instead look at all the slices and if the phantom was detected, capture the phantom center. ALL the centers are then fitted to a 1D poly function and passed to the individual slices. This way, even if one slice is messed up (such as because of the phantom jig), the poly function is robust to give the real center based on all the other properly-located positions on the other slices.

classmethod from_demo_images()#

Construct a CBCT object from the demo images.

classmethod from_url(url: str, check_uid: bool = True)#

Instantiate a CBCT object from a URL pointing to a .zip object.

Parameters#

urlstr

URL pointing to a zip archive of CBCT images.

check_uidbool

Whether to enforce raising an error if more than one UID is found in the dataset.

classmethod from_zip(zip_file: str | zipfile.ZipFile | BinaryIO, check_uid: bool = True, memory_efficient_mode: bool = False)#

Construct a CBCT object and pass the zip file.

Parameters#

zip_filestr, ZipFile

Path to the zip file or a ZipFile object.

check_uidbool

Whether to enforce raising an error if more than one UID is found in the dataset.

memory_efficient_modebool

Whether to use a memory efficient mode. If True, the DICOM stack will be loaded on demand rather than all at once. This will reduce the memory footprint but will be slower by ~25%. Default is False.

Raises#

FileExistsError

If zip_file passed was not a legitimate zip file.

FileNotFoundError

If no CT images are found in the folder

property mm_per_pixel: float#

The millimeters per pixel of the DICOM images.

property num_images: int#

The number of images loaded.

plot_side_view(axis: Axes) None#

Plot a view of the scan from the side with lines showing detected module positions

save_analyzed_image(filename: str | Path | BinaryIO, **kwargs) None#

Save the analyzed summary plot.

Parameters#

filenamestr, file object

The name of the file to save the image to.

kwargs :

Any valid matplotlib kwargs.

class pylinac.acr.ACRMRIResult(phantom_model: str, phantom_roll_deg: float, origin_slice: int, num_images: int, slice1: MRSlice1ModuleOutput, slice11: MRSlice11ModuleOutput, uniformity_module: MRUniformityModuleOutput, geometric_distortion_module: MRGeometricDistortionModuleOutput)[source]#

Bases: ResultBase

This class should not be called directly. It is returned by the results_data() method. It is a dataclass under the hood and thus comes with all the dunder magic.

Use the following attributes as normal class attributes.

phantom_model: str#
phantom_roll_deg: float#
origin_slice: int#
num_images: int#
slice1: MRSlice1ModuleOutput#
slice11: MRSlice11ModuleOutput#
uniformity_module: MRUniformityModuleOutput#
geometric_distortion_module: MRGeometricDistortionModuleOutput#
class pylinac.acr.MRSlice11ModuleOutput(offset: int, roi_settings: dict, rois: dict, bar_difference_mm: float, slice_shift_mm: float)[source]#

Bases: object

This class should not be called directly. It is returned by the results_data() method. It is a dataclass under the hood and thus comes with all the dunder magic.

Use the following attributes as normal class attributes.

offset: int#
roi_settings: dict#
rois: dict#
bar_difference_mm: float#
slice_shift_mm: float#
class pylinac.acr.MRSlice1ModuleOutput(offset: int, roi_settings: dict, rois: dict, bar_difference_mm: float, slice_shift_mm: float, measured_slice_thickness_mm: float, row_mtf_50: float, col_mtf_50: float)[source]#

Bases: object

This class should not be called directly. It is returned by the results_data() method. It is a dataclass under the hood and thus comes with all the dunder magic.

Use the following attributes as normal class attributes.

offset: int#
roi_settings: dict#
rois: dict#
bar_difference_mm: float#
slice_shift_mm: float#
measured_slice_thickness_mm: float#
row_mtf_50: float#
col_mtf_50: float#
class pylinac.acr.MRUniformityModuleOutput(offset: int, roi_settings: dict, rois: dict, ghost_roi_settings: dict, ghost_rois: dict, psg: float, ghosting_ratio: float, piu_passed: bool, piu: float)[source]#

Bases: object

This class should not be called directly. It is returned by the results_data() method. It is a dataclass under the hood and thus comes with all the dunder magic.

Use the following attributes as normal class attributes.

offset: int#
roi_settings: dict#
rois: dict#
ghost_roi_settings: dict#
ghost_rois: dict#
psg: float#
ghosting_ratio: float#
piu_passed: bool#
piu: float#
class pylinac.acr.MRGeometricDistortionModuleOutput(offset: int, profiles: dict, distances: dict)[source]#

Bases: object

This class should not be called directly. It is returned by the results_data() method. It is a dataclass under the hood and thus comes with all the dunder magic.

Use the following attributes as normal class attributes.

offset: int#
profiles: dict#
distances: dict#
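
Similarly for MRI, the documented fields are available from the dataclass returned by results_data() (continuing the Choosing an MR Echo example above):

data = mri.results_data()
print(data.phantom_roll_deg)
print(data.uniformity_module.piu)
print(data.slice1.measured_slice_thickness_mm)
print(data.geometric_distortion_module.distances)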