The 6th Deep Learning on Supercomputers Workshop
Program (July 2nd, 14:00–18:00, CET)
| Time | Title | Speaker |
|---|---|---|
| 14:00–14:10 | Opening | Workshop Chairs |
| 14:10–14:35 | Deep-Learning Approaches to Learn Interaction Patterns from Protein-Protein Interfaces | Alexandre Bonvin & Manon Réau, Utrecht University |
| 14:35–15:00 | Reconstructing MRIs with Deep Learning | Jonas Teuwen, Netherlands Cancer Institute (NKI) |
| 15:00–15:25 | JUWELS Booster: A Supercomputer for Large-Scale AI Research | Stefan Kesselheim, Jülich Supercomputing Centre |
| 15:25–15:40 | Coffee Break | |
| 15:40–16:05 | AI-Enabled COVID-19 Drug Discovery | Arvind Ramanathan, Argonne National Laboratory |
| 16:05–16:30 | Dataflow Optimized Systems for ML Accelerated HPC | Chen Liu, SambaNova Systems |
| 16:30–17:00 | ISC’21 Break | |
| 17:00–17:55 | Keynote: High-Performance Scalable Deep Learning | Torsten Hoefler, ETH Zürich |
| 17:55–18:00 | Closing Remarks | Workshop Chairs |
The Deep Learning (DL) on Supercomputers workshop provides a forum for practitioners working on any and all aspects of DL for scientific research in the High Performance Computing (HPC) context to present their latest research results and their development, deployment, and application experiences. The general theme of this workshop series is the intersection of DL and HPC; this particular workshop centers on the applications of deep learning methods in scientific research: novel uses of DL methods, e.g., convolutional neural networks (CNNs), recurrent neural networks (RNNs), generative adversarial networks (GANs), and reinforcement learning (RL), for both natural and social science research, and innovative applications of deep learning in traditional numerical simulation.

Its scope encompasses application development in scientific scenarios using HPC platforms; DL methods applied to numerical simulation; fundamental algorithms, enhanced procedures, and software development methods to enable scalable training and inference; hardware changes with impact on future supercomputer design; and machine deployment, performance evaluation, and reproducibility practices for DL applications, with an emphasis on scientific usage.

This workshop is centered around published papers. Submissions will be peer-reviewed, and accepted papers will be published as part of the joint workshop proceedings by Springer.
Topics include but are not limited to:
- DL as a novel approach to scientific computing
- Emerging scientific applications driven by DL methods
- Novel interactions between DL and traditional numerical simulation
- Effectiveness and limitations of DL methods in scientific research
- Algorithms and procedures to enhance reproducibility of scientific DL applications
- DL for science workflows
- Data management through the life cycle of scientific DL applications
- General algorithms and procedures for efficient and scalable DL training (see the sketch after this list)
- Scalable DL methods to address the challenges of demanding scientific applications
- General algorithms and systems for large-scale model serving for scientific use cases
- New software, and enhancements to existing software, for scalable DL
- DL communication optimization at scale
- I/O optimization for DL at scale
- DL performance evaluation and analysis on deployed systems
- Performance modeling and tuning of DL on supercomputers
- DL benchmarks on supercomputers
- Novel hardware designs for more efficient DL
- Processors, accelerators, memory hierarchy, and interconnect changes with impact on deep learning in the HPC context
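To give a flavor of the scalable-training topics above, here is a minimal sketch of multi-GPU data-parallel training with PyTorch DistributedDataParallel (DDP); the toy model, dataset, and hyperparameters are illustrative placeholders, not drawn from any workshop paper:

```python
# Minimal DDP sketch: one process per GPU, launched with torchrun.
# Model, data, and hyperparameters are placeholders for a real workload.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy dataset; DistributedSampler shards it across ranks.
    data = TensorDataset(torch.randn(1024, 64), torch.randn(1024, 1))
    sampler = DistributedSampler(data)
    loader = DataLoader(data, batch_size=32, sampler=sampler)

    model = DDP(torch.nn.Linear(64, 1).cuda(local_rank),
                device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.MSELoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            opt.zero_grad()
            loss_fn(model(x), y).backward()  # DDP all-reduces gradients
            opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with, for example, `torchrun --nproc_per_node=4 train.py`, each process trains on its own data shard while DDP averages gradients across ranks during the backward pass.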
As part of the reproducibility initiative, the workshop requires authors to provide information such as the algorithms, software releases, datasets, and hardware configurations used. For performance evaluation studies, we encourage authors to use well-known benchmarks or applications with openly accessible datasets: for example, MLPerf and ResNet-50 with the ImageNet-1K dataset.
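For instance, a submission might capture its software and hardware configuration in a machine-readable file alongside its results. The sketch below shows one hypothetical way to do this with PyTorch; the field names and output path are illustrative, not a workshop requirement:

```python
# Hypothetical helper that records the software/hardware configuration of a
# run for reproducibility; fields and filename are illustrative only.
import json
import platform
import torch

def capture_environment(path="environment.json"):
    info = {
        "python": platform.python_version(),
        "os": platform.platform(),
        "torch": torch.__version__,
        "cuda": torch.version.cuda,              # None on CPU-only builds
        "cudnn": torch.backends.cudnn.version(), # None if cuDNN is absent
        "gpus": [torch.cuda.get_device_name(i)
                 for i in range(torch.cuda.device_count())],
    }
    with open(path, "w") as f:
        json.dump(info, f, indent=2)
    return info

if __name__ == "__main__":
    print(json.dumps(capture_environment(), indent=2))
```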
Important Dates
- Technical papers due: April 30th, 2021 (AoE)
- Acceptance notification: May 26th, 2021
- Camera ready: June 17th, 2021
- Workshop date: July 2nd, 2021
Paper Submission
Authors are invited to submit unpublished, original work of 6 to 12 pages in single-column LNCS style. All submissions must follow the LNCS format and will be submitted through EasyChair (tentative).
Organizing Committee
- Valeriu Codreanu (co-chair), SURF, Netherlands
- Ian Foster (co-chair), UChicago & ANL, USA
- Zhao Zhang (co-chair), TACC, USA
- Weijia Xu (proceedings chair), TACC, USA
- Ahmed Al-Jarro, Fujitsu Laboratories of Europe, UK
- Takuya Akiba, Preferred Networks, Japan
- Thomas S. Brettin, ANL, USA
- Maxwell Cai, SURF, Netherlands
- Erich Elsen, DeepMind, USA
- Steve Farrell, LBNL, USA
- Song Feng, IBM Research, USA
- Boris Ginsburg, Nvidia, USA
- Torsten Hoefler, ETH, Switzerland
- Jessy Li, UT Austin, USA
- Zhengchun Liu, ANL, USA
- Peter Messmer, Nvidia, USA
- Damian Podareanu, SURF, Netherlands
- Simon Portegies Zwart, Leiden Observatory, Netherlands
- Qifan Pu, Google, USA
- Arvind Ramanathan, ANL, USA
- Vikram Saletore, Intel, USA
- Mikhail E. Smorkalov, Huawei, Russia
- Rob Schreiber, Cerebras, USA
- Dan Stanzione, TACC, USA
- Rick Stevens, UChicago & ANL, USA
- Wei Tan, Citadel, USA
- Jordi Torres, Barcelona Supercomputing Center, Spain
- Daniela Ushizima, LBNL, USA
- Sofia Vallecorsa, CERN, Switzerland
- David Walling, TACC, USA
- Markus Weimer, Microsoft, USA
- Kathy Yelick, UC Berkeley & LBNL, USA
- Huazhe Zhang, Facebook, USA