September 27-30, 2021
Seattle, Washington, USA + Virtual

The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for Open Source Summit + Embedded Linux Conference + OSPOCon 2021 to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.

This schedule is automatically displayed in Pacific Daylight Time (UTC -7). To see the schedule in your preferred timezone, please select from the drop-down menu to the right, above "Filter by Date." The schedule is subject to change.

IMPORTANT NOTE: Timing of sessions and room locations are subject to change.

Monday, September 27 • 1:30pm - 2:20pm
(VIRTUAL) Designing High-Performance Scalable Middleware for HPC, AI, and Data Science in Exascale Systems and Clouds - Dhabaleswar K Panda & Hari Subramoni, The Ohio State University


We will discuss many exciting challenges and opportunities for HPC, AI, and Data Science researchers by highlighting recent advances in AI, Data Science, and HPC technologies to improve the performance of deep neural network (DNN) training and data science/HPC workflows and applications on modern HPC systems. Traditionally, deep learning (DL) and data science frameworks have utilized a single GPU to accelerate their performance; however, approaches to parallelize them are being actively explored. We will provide an overview of emerging trends in DL frameworks from an architectural and performance standpoint, and evaluate new high-level distributed frameworks such as DeepSpeed, Horovod, Dask, and cuML. We will highlight new challenges for message-passing interface (MPI) runtimes in efficiently supporting DNN training and communication backends for Dask and cuML. We will present performance evaluation results from multiple HPC clusters and demonstrate the efficiency of MPI-based backends using micro-benchmark results and applications such as cuPy array sum with transpose, cuDF merge, K-Means, Nearest Neighbors, Random Forest, and tSVD. Finally, we scale DNN training for very large pathology images using model parallelism to 1,024 NVIDIA V100 GPUs.
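For context on the kind of MPI-backed, data-parallel DNN training the abstract refers to, below is a minimal, hypothetical sketch using Horovod with PyTorch, launched over an MPI runtime with one GPU per rank. It is not code from the speakers; the model, batch shapes, and learning-rate scaling are placeholder assumptions.

```python
# Hypothetical sketch: data-parallel DNN training with Horovod over MPI,
# one GPU per rank. Model and data are placeholders, not from the talk.
import torch
import torch.nn as nn
import torch.optim as optim
import horovod.torch as hvd

hvd.init()                               # initialize Horovod (MPI under the hood)
torch.cuda.set_device(hvd.local_rank())  # pin each rank to its own GPU

model = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 10)).cuda()
optimizer = optim.SGD(model.parameters(), lr=0.01 * hvd.size())  # scale LR with world size

# Wrap the optimizer so gradients are allreduced across ranks every step.
optimizer = hvd.DistributedOptimizer(optimizer,
                                     named_parameters=model.named_parameters())

# Start every rank from identical weights.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)

loss_fn = nn.CrossEntropyLoss()
for step in range(100):
    # Placeholder batch; a real job would shard a dataset across ranks.
    x = torch.randn(64, 1024, device="cuda")
    y = torch.randint(0, 10, (64,), device="cuda")
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()
```

A job like this would typically be launched with `horovodrun -np 4 python train.py` or an equivalent `mpirun` command, which is where the MPI runtime behavior discussed in the session comes into play.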

Speakers

Hari Subramoni

Research Scientist, The Ohio State University
Dr. Hari Subramoni received his Ph.D. in Computer Science from The Ohio State University, Columbus, OH, in 2013. He has been a research scientist in the Department of Computer Science and Engineering at The Ohio State University since September 2015. His current research interests...

Dhabaleswar K Panda

Professor, The Ohio State University
Dhabaleswar K. (DK) Panda is a Professor of Computer Science and Engineering at The Ohio State University. His research interests include parallel computer architecture, high-performance networking, InfiniBand, Exascale computing, Big Data, programming models, GPUs, and accelerators...


Monday September 27, 2021 1:30pm - 2:20pm PDT
MeetingPlay Platform + Virtual Learning Lab