Researchers will continue their work to develop novel algorithms for reconstructing x-ray images of thick, real-life materials. Their approach aims to advance the full range of future nanoscale imaging activities, including cell and brain imaging research, at Argonne's Advanced Photon Source and other DOE light sources.

X-ray microscopy is unique in combining the potential for nanoscale spatial resolution with the ability to image samples up to centimeters thick. The $800M upgrade of Argonne National Laboratory’s Advanced Photon Source (APS) will increase beam brightness more than 100x, allowing this potential to be fully realized. However, a fundamental challenge remains: as the spatial resolution r improves (decreases), the depth of focus (DOF) of imaging shrinks as 5r²/λ. As a result, the beam profile may vary significantly as it traverses a thick sample, violating the standard approximation that a 2D image is a pure projection through a 3D object. This challenge is not yet widely appreciated in x-ray microscopy, but if unmet, it will severely limit the ability to fully exploit the capabilities of the APS Upgrade.
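To get a numerical feel for this scaling, the short sketch below evaluates the DOF expression from the text, 5r²/λ, at a few resolutions. The 10 keV photon energy (λ ≈ 0.124 nm) and the specific resolution values are illustrative assumptions, not figures from the project.

```python
# Depth of focus DOF = 5 r^2 / lambda (from the text): improving the
# resolution r by 2x shrinks the usable depth by 4x, so thick samples
# quickly exceed the DOF at nanoscale resolution.
WAVELENGTH = 1.24e-10  # meters; ~10 keV x-rays (assumed example energy)

def depth_of_focus(r, wavelength=WAVELENGTH):
    """Depth of focus in meters for spatial resolution r (meters)."""
    return 5 * r**2 / wavelength

for r_nm in (100, 50, 10):
    dof = depth_of_focus(r_nm * 1e-9)
    print(f"r = {r_nm:3d} nm -> DOF = {dof * 1e6:8.1f} um")
```

At 10 nm resolution the DOF is only a few micrometers, orders of magnitude below a centimeter-thick sample, which is why the pure-projection approximation breaks down.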

With this ALCC project, a team of researchers will continue their work to develop novel algorithms in response to this challenge. In their approach, diffraction of x-rays through a guessed object is modeled using the multislice method, which predicts the detected wavefield intensity downstream of the sample. The object function is then iteratively updated using a gradient-based approach so that the mismatch between the prediction and the experimental measurement (i.e., the loss function) is minimized. The team demonstrated that this method works on simulated data in a previous implementation with manually derived gradients. In an earlier project supported by the ALCF Data Science Program (ADSP), the researchers took a different approach, basing their implementation on automatic differentiation (AD). In this way, the gradients can be derived automatically using available AD tools such as TensorFlow and Autograd. This not only reduces the development cost of the algorithm, but also makes it easily adaptable to different imaging techniques.
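A minimal sketch of the multislice forward model and the intensity-mismatch loss described above is given below, using NumPy and a Fresnel angular-spectrum propagator under one common sign convention. The slice spacing, pixel size, and wavelength are illustrative assumptions; in an AD-based implementation, a tool such as TensorFlow or Autograd would differentiate a loss like this automatically.

```python
import numpy as np

def multislice_forward(slices, wavelength, dz, dx):
    """Propagate a unit plane wave through a stack of 2D transmission
    functions (multislice method): at each slice, multiply by the
    transmission function, then free-space propagate by dz using a
    Fresnel kernel applied in Fourier space."""
    n = slices.shape[-1]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    kernel = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
    wave = np.ones((n, n), dtype=complex)  # incident plane wave
    for t in slices:
        wave = wave * t                                   # transmit
        wave = np.fft.ifft2(np.fft.fft2(wave) * kernel)   # propagate dz
    return wave

def intensity_loss(slices, measured_intensity, **kw):
    """Data mismatch between predicted detector intensity and a
    measurement -- the quantity minimized during reconstruction."""
    wave = multislice_forward(slices, **kw)
    return np.sum((np.abs(wave)**2 - measured_intensity)**2)
```

A useful sanity check of the propagator: an all-ones (empty) object leaves a plane wave unchanged, since only the zero spatial frequency is present and the kernel is unity there.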

While the team’s initial work solved small data arrays of roughly 256³ voxels, this is clearly inadequate for their ultimate goal of nanometer resolution in centimeter-sized samples. As part of this ALCC project, the researchers will compare parallelized AD reconstruction approaches that are more efficient for large samples. This involves comparing sample-wide wavefield propagation against combining localized subregions. Another approach uses a finite difference algorithm implemented with PETSc, a distributed PDE solver; preliminary results indicate that this yields a faster and more accurate solution of the forward problem with gigapixel wavefields. In addition, since PETSc provides the sensitivity of the loss function with respect to the parameters (in this case, the object) through its discrete adjoint routine, computing the gradients of the loss function becomes trivial.
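The discrete adjoint idea can be illustrated on a toy problem: for a linear forward model y = Ax and loss L = ||Ax − d||², the adjoint of A applied to the residual gives the exact gradient dL/dx = 2Aᵀ(Ax − d), which can be verified against finite differences. The operator A, data d, and sizes below are arbitrary stand-ins, not the project’s actual forward model; PETSc supplies this same mechanism at scale for distributed PDE-based models.

```python
import numpy as np

# Toy discrete-adjoint gradient check. Forward model: y = A @ x,
# loss: L(x) = ||A x - d||^2. The adjoint gradient is
# dL/dx = 2 A^T (A x - d); we confirm it against central differences.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))   # stand-in forward operator
x = rng.standard_normal(5)
d = rng.standard_normal(8)

def loss(x):
    r = A @ x - d
    return r @ r

grad_adjoint = 2 * A.T @ (A @ x - d)

eps = 1e-6
grad_fd = np.array([
    (loss(x + eps * e) - loss(x - eps * e)) / (2 * eps)
    for e in np.eye(5)
])
```

The adjoint gradient costs one extra application of Aᵀ, independent of the number of parameters, whereas finite differences require two forward solves per parameter, which is hopeless for 10⁹-voxel objects.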

The researchers have been developing the algorithms on Theta through their ADSP allocation. For the AD-based reconstruction algorithm, their efforts have led to better vectorization of the implementation. To better exploit Theta’s multi-node architecture, the team also optimized their job-allocation scheme so that a larger number of nodes can be used simultaneously, with each node handling a smaller problem size. This finer partitioning of the problem also let them reduce per-rank memory usage by assigning only a small fraction of the object to each process, making the algorithm feasible for realistic samples that contain 10⁹ or more voxels. With experimental data planned to be collected in 2020, this ALCC project will allow the team to further test and optimize the algorithm on the real data that will be routinely encountered when the software is put into production mode.
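A back-of-envelope calculation shows why distributing the object across processes is necessary at this scale. The 1024³ object size, complex64 precision, and rank counts below are illustrative assumptions, not the project’s actual configuration.

```python
# Rough per-rank memory for one copy of a ~10^9-voxel complex object.
voxels = 1024 ** 3          # ~1.07e9 voxels
bytes_per_voxel = 8         # complex64: 4-byte real + 4-byte imaginary
total_gib = voxels * bytes_per_voxel / 2**30   # 8 GiB for a single copy

for ranks in (64, 256, 1024):
    per_rank_mib = voxels * bytes_per_voxel / ranks / 2**20
    print(f"{ranks:4d} ranks -> {per_rank_mib:7.1f} MiB per rank")
```

In practice a reconstruction holds several object-sized arrays (gradients, optimizer state, wavefields), so the true per-rank footprint is a multiple of this single-copy estimate.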