Neuroscience Research on Aurora Exascale
Argonne National Laboratory Senior Computer Scientist Nicola Ferrier explains how neuroscience research will process exabytes of data and build complex algorithms on the Aurora Exascale Supercomputer to address problems that nobody else can address.
Having access to the first exascale system means that we're really going to be able to address problems that nobody else can address. Neuroscientists would like to understand the structure of the brain and its connections: what types of cells and neurons are present, which neurons are connected and how, so that they can then go on to do later studies of things like aging, development, learning, and disease states.
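The "who's connected and how" question is naturally a graph problem: neurons are nodes annotated with a cell type, and synaptic connections are directed edges. As a minimal sketch (the neuron IDs and cell types here are purely illustrative, not real data):

```python
# Minimal connectome sketch: a directed graph of neurons and connections.
# Neuron IDs and cell types are hypothetical, for illustration only.

connectome = {}  # neuron id -> {"type": cell type, "targets": set of neuron ids}

def add_neuron(nid, cell_type):
    """Register a neuron with its (annotated) cell type."""
    connectome[nid] = {"type": cell_type, "targets": set()}

def connect(pre, post):
    """Record a synaptic connection from neuron `pre` to neuron `post`."""
    connectome[pre]["targets"].add(post)

add_neuron("n1", "pyramidal")
add_neuron("n2", "interneuron")
add_neuron("n3", "pyramidal")
connect("n1", "n2")
connect("n2", "n3")

# "Who's connected and how": list each neuron's type and outgoing connections.
for nid, info in connectome.items():
    print(nid, info["type"], "->", sorted(info["targets"]))
```

At research scale the same structure holds, only with billions of nodes and trillions of edges, which is what pushes the problem toward exascale machines.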
In order to do that, you have to extract this information not just from one brain, but from lots of brains. And so when you try to build what's called the connectome of a brain, the amount of data is massive. If you want to do a cubic centimeter of a brain, which is a very tiny portion, that is petabytes of data. As we go to a larger part of the brain, we're going to be at exabytes of data.
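The petabyte figure follows from simple arithmetic: data volume scales with the cube of the resolution. A back-of-envelope sketch, where the voxel size and bytes-per-voxel are illustrative assumptions rather than the parameters of any specific imaging pipeline:

```python
# Back-of-envelope data volume for imaging a cubic brain sample.
# Voxel size and bytes-per-voxel are illustrative assumptions only.

def imaging_volume_bytes(side_cm, voxel_nm, bytes_per_voxel=1):
    """Raw data volume for a cubic sample imaged at a given voxel size."""
    side_nm = side_cm * 1e7            # 1 cm = 10^7 nm
    voxels_per_side = side_nm / voxel_nm
    return voxels_per_side ** 3 * bytes_per_voxel

PB = 1e15  # 1 petabyte

# 1 cm^3 at a coarse 100 nm isotropic voxel, 1 byte per voxel: ~1 PB.
print(imaging_volume_bytes(1, 100) / PB)

# Because volume scales cubically with resolution, 10x finer voxels
# (10 nm) multiply the total by 1000, already pushing toward exabytes.
print(imaging_volume_bytes(1, 10) / PB)
```

This cubic scaling is why moving from a cubic centimeter to a larger fraction of a brain, or to finer resolution, moves the problem from petabytes to exabytes.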
The computer scientist in me says the possibility to handle the really large data sets that are being produced will be a very useful tool for all domains of science beyond just neuroscience. I'm really driven by the algorithms and the algorithm development. We're not going to be compute limited if we have an exascale machine, and so the sky's the limit in terms of development. And often, as a computer scientist, you do worry about whether you'll be able to run an algorithm at least in your lifetime. I'm thrilled to have the resources to address those problems.
Argonne Lab Director Rick Stevens talks about conducting 3D modeling and simulation on Exascale Supercomputers.
Trish Damkroger, Intel VP and GM for HPC, talks about delivering supercomputing performance and driving the convergence of AI and HPC.
Aerospace Professor Ken Jansen explains how engineers will create faster and more complex models and simulations on Exascale.
Research Scientist Jimmy Proudfoot talks about the impact of Exascale Supercomputing on his work researching our universe.
Describes how Pittsburgh Supercomputing Center and Intel® Omni-Path Architecture make HPC available to researchers.
A video introducing the newest 2nd Gen Intel® Xeon® Scalable processors.
From AI and analytics to simulation and modeling, Intel’s high performance computing (HPC) platform integrates powerful memory, storage, fabric, and acceleration to tackle your biggest challenges.
The Distributed Asynchronous Object Storage (DAOS) is an open-source software-defined object store designed from the ground up for massively distributed Non-Volatile Memory (NVM). DAOS takes advantage of next-generation NVM technology such as Storage Class Memory (SCM) and NVM Express (NVMe).
Tackle complex workloads and tomorrow's challenges with one data-centric platform designed for HPC, AI and Analytics.