
CENIC Recognizes IceCube Team for Accelerating Data Processing to Advance Scientific Discovery

CENIC Announces Recipient of 2020 Innovations in Networking Award for Experimental Applications

A cost-effective ExaFLOP hour in the Clouds for IceCube.

La Mirada, CA & Berkeley, CA — March 9, 2020 — In recognition of work to accelerate data processing capabilities and advance research conducted across the California Research and Education Network and the Pacific Research Platform (PRP), the IceCube Cloud Compute Takeover Team has been selected to receive the 2020 Innovations in Networking Award for Experimental Applications. The CENIC Innovations in Networking Awards recognize exemplary people, projects, and organizations that leverage high-bandwidth networking.

In November 2019, the IceCube Cloud Compute Takeover Team completed the largest cloud simulation in history, using 51,500 cloud graphics processing units (GPUs) to analyze extremely large datasets and ultimately to help scientists at the Antarctic IceCube Neutrino Observatory better understand messages from the universe. Funded by the National Science Foundation to prepare for the exascale computing era, the project demonstrated that IceCube can effectively utilize a large number of GPUs in a single pool.

IceCube is one of the major users of the Pacific Research Platform's Nautilus cluster, which is among the resource providers that participate in the Open Science Grid (OSG). The PRP builds on the optical backbone of Pacific Wave, a joint project of CENIC and the Pacific Northwest GigaPoP (PNWGP), to create a seamless research platform that enables collaboration in a broad range of data-intensive fields and projects.

As a result of the IceCube Cloud Compute Takeover Team’s efforts, researchers will be ready to enter a new era in the use of state-of-the-art cyberinfrastructure to enable scientific discoveries at any scale. The lessons learned will be widely applicable to any science program trying to achieve large-scale cloudburst simulations. Scientific discovery in data-intensive fields such as astrophysics, cancer genomics, biomedical science, and climate modeling will benefit from the process.

IceCube Cloud Compute Takeover Team members being recognized are: Benedikt Riedel, computing manager for the IceCube Neutrino Observatory and global computing coordinator at Wisconsin IceCube Particle Astrophysics Center (WIPAC); David Schultz, filtering programmer at WIPAC; Igor Sfiligoi, lead scientific software developer and researcher at the San Diego Supercomputer Center and Calit2; and Frank Würthwein, physics professor at the University of California, San Diego and executive director of the Open Science Grid.

"South Pole GAW Global station" by World Meteorological Organization is licensed under CC BY-NC-ND 2.0

The IceCube Neutrino Observatory searches for ghost-like, nearly massless particles called neutrinos using a cubic-kilometer telescope of 5,160 optical sensors buried deep within the ice at the South Pole. Exploding stars, black holes, and gamma-ray bursts send messengers in the form of neutrinos, providing insights into the nature of the universe. Analyzing these messengers requires exascale-level distributed computing, which cloud resources can provide.

The IceCube Cloud Compute Takeover Team successfully marshaled all GPUs available for sale worldwide from the three major cloud providers (Amazon Web Services, Microsoft Azure, and Google Cloud Platform), across most of their regions in the US, Europe, and Asia-Pacific. The central manager for the demonstration run was hosted at UCSD and thus relied on the CENIC network.
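
The article does not name the workload-management software, but "central manager" is HTCondor terminology, and HTCondor is widely used across the Open Science Grid. As a minimal sketch, assuming the HTCondor Python bindings, queueing a batch of single-GPU simulation jobs into such a pool might look like the following (the wrapper script name and resource requests are hypothetical placeholders, not details from the actual run):

    import htcondor  # HTCondor Python bindings

    # Describe one GPU simulation job; the executable and the resource
    # requests below are illustrative placeholders.
    job = htcondor.Submit({
        "executable": "run_photon_sim.sh",   # hypothetical wrapper script
        "request_gpus": "1",
        "request_cpus": "1",
        "request_memory": "4GB",
        "output": "sim_$(ProcId).out",
        "error": "sim_$(ProcId).err",
        "log": "sim.log",
    })

    # The scheduler hands jobs to the pool's central manager, which matches
    # them against whatever GPU slots have joined the pool from cloud regions.
    schedd = htcondor.Schedd()
    result = schedd.submit(job, count=1000)  # queue 1,000 identical jobs
    print("Submitted cluster", result.cluster())

The appeal of this pattern for a cloudburst is that the job description never changes: the same submission simply fans out across however many GPU slots the pool happens to contain, whether hundreds or tens of thousands.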

The demonstration reached a peak of about 380 petaflops of 32-bit floating-point (FP32) performance, roughly 90% of the capacity of the world's largest supercomputer, and completed about a week's worth of IceCube simulation work in a single hour.
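
As a rough sanity check, the quoted figures are mutually consistent. The short sketch below derives the implied per-GPU throughput and the implied steady-state baseline using only numbers stated in this article; the arithmetic is illustrative, not measured data:

    # Figures quoted in the article.
    peak_flops = 380e15      # ~380 PFLOPS (FP32) at peak
    gpus = 51_500            # cloud GPUs marshaled for the run

    # Average FP32 throughput per GPU implied by the peak: ~7.4 TFLOPS,
    # plausible for a mixed fleet of data-center GPUs.
    print(peak_flops / gpus / 1e12)

    # "A week's worth of simulation in a single hour" implies a ~168x
    # speedup (168 hours per week), i.e., a steady-state capacity of
    # roughly 380 / 168, or about 2.3 PFLOPS.
    print(peak_flops / 168 / 1e15)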

“The results of this experiment tell us that we can elastically burst to very large scales of GPUs using the cloud, given that exascale computers don’t exist now but may soon be used in the coming years,” Würthwein said. “The experiment also shows such bursting of massive data computation is suitable for a wide range of challenges across astronomy and other sciences. To the extent that the elasticity is there, we believe that this can be applied across much of scientific research to get results quickly.”

Video: Using 50k GPUs across multiple Clouds for IceCube Science

The experiment was conducted on a Saturday, when competing demand for cloud GPU resources was expected to be lower, just prior to the opening of the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC19) in Denver, CO. In February, the team conducted a second cloudburst experiment with a smaller pool of cloud GPUs. The second experiment took place on a regular workday, demonstrating the kind of cloudburst that researchers could perform routinely.

This work is supported by National Science Foundation grants OAC-1941481, MPS-1148698, OAC-1841530, OPP-1600823, OAC-190444, and OAC-1826967.
