An Interview with Maria Girone, CTO, CERN openlab
Intel recently talked to CERN openlab CTO Maria Girone to discuss how CERN and Intel work together to deliver improvements in processing speed, sometimes by large factors, and how that impacts CERN’s research on the basic constituents of matter. Part 1 of this interview describes how CERN and Intel work together through CERN openlab to promote the advancement of Modern Code, and highlights why that’s central to CERN’s strategy to maximize use of its hardware to move scientific research forward.
Good morning, Maria. Describe your role as CTO of CERN openlab.
CERN openlab is a public-private partnership that accelerates the development of cutting-edge ICT solutions for the research community. As the CTO, I'm responsible for managing and guiding projects with our industry collaborators. A large part of my role is interacting with industry members to identify the initiatives that could benefit the research community. In particular, I explore potential initiatives that can help address the challenges facing our laboratory and our scientific research programs in the next few years.
How does CERN participate with Intel in promoting the advancement of Modern Code? That is, code that has been rearchitected and optimized for parallelism to run on supercomputers and increase application performance?
Intel has been a CERN openlab partner for 15 years. Together we have a strong commitment to code modernization. Scientific progress is often limited by our ability to process, reconstruct, and analyze the data produced by the experiments on the Large Hadron Collider (LHC), so we’re working to create the most powerful software so that we can get the most out of the hardware.
Together, we’ve launched a number of initiatives, including hosting the Intel® Modern Code Developer Challenge, which has engaged thousands of young people internationally.
We’ve worked together to demonstrate that, by moving to highly parallel and vectorized code, we can achieve huge efficiency gains. These gains will make it possible for us to face the challenges coming with the High-Luminosity LHC.
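As an illustration of the kind of gain Maria describes, here is a minimal sketch of scalar versus vectorized code. It uses NumPy and a toy transverse-momentum calculation as stand-ins; this is not CERN's actual code, just an example of expressing the same computation over whole arrays so it can map onto SIMD units and optimized inner loops.

```python
import numpy as np

def pt_scalar(px, py):
    # One particle at a time: each iteration is an independent
    # scalar computation that cannot be batched.
    return [(x * x + y * y) ** 0.5 for x, y in zip(px, py)]

def pt_vectorized(px, py):
    # The same computation expressed over entire arrays, so the
    # library can use vectorized (SIMD) loops internally.
    return np.sqrt(px * px + py * py)

rng = np.random.default_rng(0)
px = rng.normal(size=1000)
py = rng.normal(size=1000)

# Both forms produce the same physics result; only the
# execution style (and therefore the speed) differs.
assert np.allclose(pt_scalar(px, py), pt_vectorized(px, py))
```

The same idea applies in compiled languages, where restructuring loops over particles into batch operations lets the compiler emit vector instructions.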
What is the combined vision that CERN and Intel have regarding code modernization? What projects have you been working on?
We’ve launched several Modern Code initiatives with Intel’s support. It’s important for us to have a Modern Code strategy, both when we design new code and as we rethink the current legacy code, to ensure that we can fully take advantage of the capabilities of today’s architectures and the new generations of hardware that are coming. Doing so supports CERN’s ability to advance scientific exploration. And by improving code performance, sometimes by large factors, we can save on processing resources. In fact, one of the single-largest sources of performance improvement has been code modernization efforts.
For example, through a CERN openlab project, Intel is supporting our efforts to modernize the simulation code that we use in high-energy physics, GeantV. Depending on the experiment, nearly half of the computing time might be spent on simulation, so making gains within the simulation code will help us spare processing cycles. We are working with vectorization and multiple levels of parallelism and are already seeing good results. These simulations track tens of thousands of particles as they traverse detectors, with complicated geometries and many changes of materials to simulate the detector response.
Another initiative in this area is the ALFA project: a collaboration between the ALICE experiment at CERN and the FAIR experiment at the GSI Helmholtz Centre for Heavy Ion Research. This project is part of the Intel code modernization effort, which is working to achieve maximum performance from the Intel® Xeon Phi™ coprocessor.
In addition to the work with these high-energy physics projects, we’re also working with Intel, Newcastle University in the U.K., and both Innopolis University and Kazan Federal University in Russia on the BioDynaMo (Biology Dynamic Modeller) project, to create a new simulation of biological cell growth, with a particular focus on brain development.
What does that mean for breakthroughs and research at CERN?
By modernizing our code, we will be able to process and analyze more data with greater efficiency. This will help us reach the next scientific breakthroughs.
How does a code modernization effort inform your missions of fundamental research and bringing nations and cultures together?
Software development is one of the best examples of an area where many cultures can work together here at CERN in pursuit of common goals. Software development offers the right environment for us to build upon the contributions from many institutions worldwide. We see this at the level of experiments and at the infrastructure level, as well as within the Worldwide LHC Computing Grid.
How can other organizations and developers relate that vision to what they’re trying to accomplish?
With the CERN openlab collaborative projects, we are leading activities with the potential for big breakthroughs in terms of the way we compute. At the start of the LHC’s lifecycle, rather than focusing on parallelizing code, we were primarily making gains by parallelizing events and files. Code parallelization will continue to help us to address future challenges at CERN.
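The distinction Maria draws can be sketched in code: event-level parallelism distributes whole, independent collision events across workers without touching the per-event algorithms. This is an illustrative toy (the event data, the `reconstruct` function, and the thread pool are all hypothetical stand-ins; production systems distribute events across processes and nodes, not threads):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical "events": each is an independent list of energy deposits.
events = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0]]

def reconstruct(event):
    # Stand-in for a full per-event reconstruction pass.
    return sum(event)

# Event-level parallelism: whole events are farmed out to workers,
# and the serial per-event code runs unchanged on each one.
with ThreadPoolExecutor(max_workers=4) as pool:
    totals = list(pool.map(reconstruct, events))

assert totals == [6.0, 9.0, 6.0]
```

Code parallelization, by contrast, means restructuring `reconstruct` itself (vectorizing its loops, threading its internal steps) so that a single event is processed faster, which is where the modernization effort described above comes in.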
Our work to modernize code creates the potential for many other fields to make fundamental changes based on increased processing capabilities, and to build a foundation for future breakthroughs. We hope our vision will inspire and inform other developers.
What’s the role of code modernization as CERN moves through the upgrade phases toward realization of the High-Luminosity LHC (which is meant to deliver 10 times as many collisions as the current LHC)? How are you working with Intel to ensure that you have the compute power to address the kinds of computational challenges that will be faced?
The LHC works in cycles: about three years running, then around two years of shutdown, so we can properly maintain and upgrade the accelerators. The next two “run” phases (after the current one) will be 2020 to 2023 and 2026 to 2029. In these later runs, the LHC increases the possible luminosity, and the corresponding data collection rate of the experiments will also significantly increase. This will result in us needing to handle many more collision events, with much greater complexity. These high rates and complicated events represent a challenging new era in computing for high-energy physics.
For one of the experiments, we estimate we’ll need 60 to 200 times more computing capability to process and analyze the data than we are currently using in run two. However, based on budget constraints, we currently project only an eight to 10 times increase in the speed of our applications.
It’s therefore incredibly important for us to work with Intel to modernize our code so that we can make the most efficient use of our hardware, increase our processing capacity, and realize the large-factor gains we need for these experiments to run successfully.
Intel is very engaged in this program, and our collaborative work helps ensure that we can meet the challenges ahead of us. We’re also working to test Intel’s hardware and demonstrate improvements achievable using the newest Intel® Xeon Phi™ processors and other technology. These efforts help ensure that we can bring leading-edge technology that contributes to progress inside and outside of our research field.
Read more about CERN and Intel in our next blog post, where we’ll talk to Maria about the programs available to enable developers and students to develop their skills, advance their careers, and bring large-factor improvements to the applications they work with—using Modern Code.
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.