GPU computing and geo-HPC. Researcher at ETH Zurich, Switzerland.
13:00 UTC
Why wait hours for computations to complete when they could take only a few seconds? Tired of prototyping code in an interactive, high-level language, only to rewrite it in a lower-level language to get high performance? Unsure about the applicability of differentiable programming? Or simply curious about how parallel and GPU computing and automatic differentiation are game changers in physics-based and data-driven modelling?
14:30 UTC
The Julia for HPC minisymposium gathers current and prospective Julia practitioners from various disciplines in the field of high-performance computing (HPC). Each year, we invite participation from science, industry, and government institutions interested in Julia’s capabilities for supercomputing. Our goal is to provide a venue for showcasing the state of the art, sharing best practices, discussing current limitations, and identifying future developments in the Julia HPC community.
15:30 UTC
We present an efficient and scalable approach to inverse PDE-based modelling with the adjoint method. We use automatic differentiation (AD) with Enzyme to automatically generate the building blocks for the inverse solver. We utilize the efficient pseudo-transient iterative method to achieve performance close to the hardware limit for both the forward and adjoint problems. We demonstrate close-to-optimal parallel efficiency on GPUs in a series of benchmarks.
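The central building block Enzyme provides for an adjoint solver is the transposed-Jacobian (vector–Jacobian) product of the residual function, generated automatically from the forward code. The sketch below is a minimal illustration of this pattern, not the talk's implementation; the residual, grid size, and seed values are made up, and the Enzyme.jl package is assumed to be available.

```julia
using Enzyme  # assumes the Enzyme.jl package is installed

# Residual of a 1-D diffusion stencil, r = A(u). In the adjoint method the key
# building block is the product ū = Jᵀ r̄, where J = ∂r/∂u; Enzyme generates
# it in reverse mode directly from this forward code.
function residual!(r, u, dx)
    for i in 2:length(u)-1
        r[i] = (u[i-1] - 2u[i] + u[i+1]) / dx^2
    end
    return nothing
end

n, dx = 8, 0.1
u = rand(n);  ū = zeros(n)   # ū accumulates Jᵀ r̄
r = zeros(n); r̄ = ones(n)    # seed with sensitivities ∂J/∂r (all ones here)
Enzyme.autodiff(Reverse, residual!, Const,
                Duplicated(r, r̄), Duplicated(u, ū), Const(dx))
# ū now holds the vector–Jacobian product needed by the adjoint iterations
```

Because the stencil is linear, the result is easy to check by hand: interior entries of ū cancel to zero, while the entries touching the boundaries pick up the uncancelled stencil coefficients ±1/dx² and ∓2/dx².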
18:00 UTC
We present an efficient approach for the development of 3-D partial differential equation solvers that approach the hardware limit of modern GPUs and scale to the world's largest supercomputers. The approach relies on the accelerated pseudo-transient method and on the automatic generation of computation kernels with optimized on-chip memory usage. We report performance and scaling results on LUMI and Piz Daint, an AMD-GPU and an NVIDIA-GPU supercomputer.
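The accelerated pseudo-transient method underlying both abstracts can be sketched on a minimal 1-D steady diffusion problem: a damped pseudo-time update drives the residual to zero. The snippet below is only an illustration under assumed parameters (grid size, damping factor, tolerance); the talks target 3-D GPU kernels.

```julia
# Minimal accelerated pseudo-transient solver for the steady 1-D diffusion
# problem d²u/dx² = 0 with u(0)=0, u(1)=1. Keeping a fraction `damp` of the
# previous pseudo-time rate "accelerates" the relaxation compared to plain
# explicit iteration. All parameters here are illustrative.
function pseudo_transient_1d(; n=128, tol=1e-6, maxiter=100_000, damp=0.93)
    dx   = 1.0 / (n - 1)
    u    = zeros(n); u[end] = 1.0   # initial guess satisfying the BCs
    dudt = zeros(n - 2)             # pseudo-time rate at interior points
    dt   = dx^2 / 2.1               # stability-limited pseudo-time step
    iter = 0
    while iter < maxiter
        iter += 1
        # residual of the steady equation at interior points
        r = (u[1:end-2] .- 2 .* u[2:end-1] .+ u[3:end]) ./ dx^2
        # damped rate update, then pseudo-time step
        dudt .= r .+ damp .* dudt
        u[2:end-1] .+= dt .* dudt
        maximum(abs, r) < tol && break
    end
    return u, iter
end

u, iters = pseudo_transient_1d()
# u converges towards the linear profile u(x) = x
```

Each iteration touches only nearest-neighbour values, which is what makes the method map well onto GPU stencil kernels and distributed-memory scaling.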