Open Access
Impact of lossy compression of X‐ray projections onto reconstructed tomographic slices
Author(s) -
Marone Federica,
Vogel Jakob,
Stampanoni Marco
Publication year - 2020
Publication title -
Journal of Synchrotron Radiation
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.172
H-Index - 99
ISSN - 1600-5775
DOI - 10.1107/S1600577520007353
Subject(s) - lossy compression, computer science, data compression, tomographic reconstruction, lossless compression, compression ratio, image compression, compression (physics), tomography, data compression ratio, detector, iterative reconstruction, artificial intelligence, optics, physics, image processing, image (mathematics), telecommunications, thermodynamics, internal combustion engine
Modern detectors used at synchrotron tomographic microscopy beamlines typically have sensors with more than 4–5 mega‐pixels and are capable of acquiring 100–1000 frames per second at full frame. As a consequence, a data rate of a few TB per day can easily be exceeded, reaching peaks of a few tens of TB per day for time‐resolved tomographic experiments. These data need to be post‐processed, analysed, stored and possibly transferred, imposing a significant burden on the IT infrastructure. Compression of tomographic data, as routinely done for diffraction experiments, is therefore highly desirable. This study considers a set of representative datasets and investigates the effect of lossy compression of the original X‐ray projections on the final tomographic reconstructions. It demonstrates that a compression factor of at least three to four times does not generally impact the reconstruction quality. Compression with this factor could therefore potentially be applied in a way that is transparent to the user community, for instance prior to data archiving. Higher factors (six to eight times) can be achieved for tomographic volumes with a high signal‐to‐noise ratio, as is the case for phase‐retrieved datasets. Although a relationship exists between a dataset's signal‐to‐noise ratio and a safe compression factor, it is not a simple one; even when additional dataset characteristics such as image entropy and high‐frequency content variation are considered, automatically optimizing the compression factor for each individual dataset beyond the conservative factor of three to four is not straightforward.
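The data-volume reasoning in the abstract can be sketched with a back-of-envelope calculation. The snippet below is illustrative only and is not taken from the paper: the detector parameters (5 mega-pixels, 16-bit pixels, 2000 projections per scan) are assumed values chosen to match the order of magnitude described above, and the compression factors are those discussed in the abstract.

```python
# Back-of-envelope estimate (assumed parameters, not from the paper):
# size of one tomographic scan's raw projection stack, and the storage
# saved by the lossy compression factors discussed in the abstract.

def dataset_size_gb(megapixels: float, bytes_per_pixel: int,
                    n_projections: int) -> float:
    """Return the size of one projection stack in GB (10**9 bytes)."""
    return megapixels * 1e6 * bytes_per_pixel * n_projections / 1e9

# Assumed: 5 MP sensor, 16-bit (2-byte) pixels, 2000 projections per scan.
raw_gb = dataset_size_gb(megapixels=5, bytes_per_pixel=2, n_projections=2000)
print(f"raw scan: {raw_gb:.1f} GB")  # -> 20.0 GB per scan

for factor in (3, 4, 6, 8):  # compression factors from the abstract
    print(f"factor {factor}: {raw_gb / factor:.1f} GB stored")
```

At a few hundred such scans per day, the uncompressed volume reaches the few-TB-per-day regime quoted above, which is why even a conservative factor of three to four is a substantial saving at archive scale.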
