2017 ModSim Workshop

Jason Liu attended the 2017 Workshop on Modeling & Simulation of Systems and Applications, held August 9-11, 2017, at the University of Washington, Seattle, WA, USA. The three-day workshop was organized by Adolfy Hoisie (PNNL) and provided a DOE-centric perspective on HPC modeling and simulation. In particular, the workshop explored the impact of modeling and simulation on traditional HPC and its potential for new computing paradigms in the era of exascale computing.

Detailed notes (private) from the workshop are available.

WSC’17 Paper: HPC Job Scheduling Simulation

Simulation of HPC Job Scheduling and Large-Scale Parallel Workloads, Mohammad Abu Obaida and Jason Liu. In Proceedings of the 2017 Winter Simulation Conference (WSC 2017), W. K. V. Chan, A. D’Ambrogio, G. Zacharewicz, N. Mustafee, G. Wainer, and E. Page, eds., December 2017. [paper]

Abstract

The paper presents a simulator designed specifically for evaluating job scheduling algorithms on large-scale HPC systems. The simulator was developed based on the Performance Prediction Toolkit (PPT), which is a parallel discrete-event simulator written in Python for rapid assessment and performance prediction of large-scale scientific applications on supercomputers. The proposed job scheduler simulator incorporates PPT's application models and, when coupled with sufficiently detailed architecture models, can represent more realistic job runtime behaviors. Consequently, the simulator can evaluate different job scheduling and task mapping algorithms more accurately on specific target HPC platforms.
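The core of such a job scheduler simulator is a discrete-event loop that processes job arrivals and completions against a pool of compute nodes. The sketch below is a minimal illustration of that idea, not PPT's actual API; the job representation and the strict-FCFS policy are assumptions for the example:

```python
import heapq

def simulate_fcfs(jobs, num_nodes):
    """Simulate FCFS scheduling of (arrival, nodes, runtime) jobs
    on a cluster with num_nodes nodes; return each job's finish time."""
    free = num_nodes
    pending = []            # FIFO queue of jobs waiting for nodes
    events = []             # min-heap of (time, kind, job)
    for job in sorted(jobs):
        heapq.heappush(events, (job[0], 'arrive', job))
    finish = {}
    while events:
        now, kind, job = heapq.heappop(events)
        if kind == 'arrive':
            pending.append(job)
        else:               # a job completed; release its nodes
            free += job[1]
            finish[job] = now
        # start queued jobs that fit (strict FCFS: stop at first misfit)
        while pending and pending[0][1] <= free:
            j = pending.pop(0)
            free -= j[1]
            heapq.heappush(events, (now + j[2], 'finish', j))
    return finish
```

Swapping the `pending` queue's discipline (e.g., backfilling instead of strict FCFS) is where alternative scheduling algorithms would plug in.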

Bibtex

@inproceedings{wsc17-jobsched,
  title = {Simulation of HPC Job Scheduling and Large-Scale Parallel Workloads},
  author = {Mohammad Abu Obaida and Jason Liu},
  booktitle = {Proceedings of the 2017 Winter Simulation Conference (WSC 2017)},
  editor = {W. K. V. Chan and A. D'Ambrogio and G. Zacharewicz and N. Mustafee and G. Wainer and E. Page},
  month = {December},
  year = {2017}
}

WSC’17 Paper: HPC Simulation History

A Brief History of HPC Simulation and Future Challenges, Kishwar Ahmed, Jason Liu, Abdel-Hameed Badawy, and Stephan Eidenbenz. In Proceedings of the 2017 Winter Simulation Conference (WSC 2017), W. K. V. Chan, A. D’Ambrogio, G. Zacharewicz, N. Mustafee, G. Wainer, and E. Page, eds., December 2017. [paper]

Abstract

High-performance Computing (HPC) systems have gone through many changes during the past two decades in their architectural design to satisfy the increasingly large-scale scientific computing demand. Accurate, fast, and scalable performance models and simulation tools are essential for evaluating alternative architecture design decisions for the massive-scale computing systems. This paper recounts some of the influential work in modeling and simulation for HPC systems and applications, identifies some of the major challenges, and outlines future research directions which we believe are critical to the HPC modeling and simulation community.

Bibtex

@inproceedings{wsc17-history,
  title = {A Brief History of HPC Simulation and Future Challenges},
  author = {Kishwar Ahmed and Jason Liu and Abdel-Hameed Badawy and Stephan Eidenbenz},
  booktitle = {Proceedings of the 2017 Winter Simulation Conference (WSC 2017)},
  editor = {W. K. V. Chan and A. D'Ambrogio and G. Zacharewicz and N. Mustafee and G. Wainer and E. Page},
  month = {December},
  year = {2017}
}

BigData’17 Paper: Light Curve Anomaly Detection

Real-Time Anomaly Detection of Short Time-Scale GWAC Survey Light Curves, Tianzhi Feng, Zhihui Du, Yankui Sun, Jianyan Wei, Jing Bi, and Jason Liu. In Proceedings of the 6th IEEE International Congress on Big Data, June 2017. [paper]

Abstract

Ground-based Wide-Angle Camera array (GWAC) is a short time-scale survey telescope that can take images covering a field of view of over 5,000 square degrees every 15 seconds or less. One of the scientific missions of GWAC is to accurately and quickly detect anomalous astronomical events. For that, a huge amount of data must be handled in real time. In this paper, we propose a new time series analysis model, called DARIMA (Dynamic Auto-Regressive Integrated Moving Average), to identify the anomalous events that occur in light curves obtained from GWAC as early as possible and with a high degree of confidence. A major advantage of DARIMA is that it can dynamically adjust its model parameters during the real-time processing of the time series data. We identify the anomaly points based on the weighted prediction results of different time windows to improve accuracy. Experimental results using real survey data show that the DARIMA model can identify the first anomaly point for all light curves. We also evaluate our model with simulated anomaly events of various types embedded in the real time series data. The DARIMA model is able to generate the early warning triggers for all of them. The results from the experiments demonstrate that the proposed DARIMA model is a promising method for real-time anomaly detection of short time-scale GWAC light curves.
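The key idea of weighting predictions from multiple trailing time windows can be illustrated with a deliberately simplified stand-in for DARIMA: each window contributes a moving-average forecast instead of a full ARIMA fit, and a point is flagged when it deviates too far from the blended forecast. The window lengths, weighting scheme, and threshold below are all assumptions for the sketch:

```python
def detect_anomalies(series, windows=(5, 10, 20), threshold=3.0):
    """Flag indices whose value deviates from a weighted multi-window
    moving-average forecast by more than `threshold` standard deviations.

    Shorter windows get larger weights so recent behavior dominates,
    echoing (in a toy form) DARIMA's multi-window weighted prediction."""
    weights = [1.0 / w for w in windows]
    wsum = sum(weights)
    anomalies = []
    for t in range(max(windows), len(series)):
        # blend one moving-average forecast per trailing window
        pred = sum(wt * sum(series[t - w:t]) / w
                   for wt, w in zip(weights, windows)) / wsum
        hist = series[t - max(windows):t]
        mean = sum(hist) / len(hist)
        std = (sum((x - mean) ** 2 for x in hist) / len(hist)) ** 0.5
        if abs(series[t] - pred) > threshold * max(std, 1e-9):
            anomalies.append(t)
    return anomalies
```

The real model would refit ARIMA parameters on the fly; the structure of the decision (per-window forecast, weighted blend, deviation test) is the part this sketch tries to convey.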

Bibtex

@inproceedings{bd17-lightcurve,
  title = {Real-Time Anomaly Detection of Short-Time-Scale GWAC Survey Light Curves},
  author = {Tianzhi Feng and Zhihui Du and Yankui Sun and Jianyan Wei and Jing Bi and Jason Liu},
  booktitle = {2017 IEEE International Congress on Big Data (BigData Congress)},
  pages = {224--231},
  month = {June},
  year = {2017}
}

SIMUTOOLS’17 Paper: Improving Real-Time SDN Simulation

On Improving Parallel Real-Time Network Simulation for Hybrid Experimentation of Software Defined Networks, Mohammad Abu Obaida and Jason Liu. In Proceedings of the 10th EAI International Conference on Simulation Tools and Techniques (SIMUTOOLS 2017), September 2017. [paper]

Abstract

Real-time network simulation enables simulation to operate in real time, and in doing so allows experiments with simulated, emulated, and real network components acting in concert to test novel network applications or protocols. Real-time simulation can also run in parallel for large-scale network scenarios, in which case network traffic is represented as simulation events passed as messages to remote simulation instances running on different machines. We note that substantial overhead exists in parallel real-time simulation to support synchronization and communication among distributed instances, which can significantly limit the performance and scalability of the hybrid approach. To overcome these challenges, we propose several techniques for improving the performance of parallel real-time simulation, by eliminating parallel synchronization and reducing communication overhead. Our experiments show that the proposed techniques can indeed improve the overall performance. In a use case, we demonstrate that our hybrid technique can be readily integrated for studies of software-defined networks.
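The defining constraint of real-time simulation, that virtual time must not run ahead of wall-clock time, can be shown in a few lines. This is a generic pacing loop, not the paper's simulator; the event representation and `speedup` knob are assumptions for illustration:

```python
import heapq
import time

def run_real_time(events, speedup=1.0):
    """Fire (virtual_time, action) events paced against the wall clock.

    Virtual time maps to wall-clock time via `speedup`; the loop sleeps
    until each event's real-time deadline, so emulated or real components
    observing the simulation see consistent timing."""
    heap = [(vt, i, act) for i, (vt, act) in enumerate(events)]
    heapq.heapify(heap)
    start = time.monotonic()
    results = []
    while heap:
        vt, _, act = heapq.heappop(heap)
        delay = start + vt / speedup - time.monotonic()
        if delay > 0:
            time.sleep(delay)   # let real time catch up to virtual time
        results.append(act())
    return results
```

In the parallel setting the paper targets, each such loop would additionally synchronize with remote instances; the overhead of that synchronization is exactly what the proposed techniques aim to reduce.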

Bibtex

@inproceedings{st17-realtime,
  title = {On Improving Parallel Real-Time Network Simulation for Hybrid Experimentation of Software Defined Networks},
  author = {Mohammad Abu Obaida and Jason Liu},
  booktitle = {Proceedings of the 10th EAI International Conference on Simulation Tools and Techniques (SIMUTOOLS 2017)},
  month = {September},
  year = {2017}
}

MASCOTS’17 Paper: Energy Demand Response Scheduling

An Energy Efficient Demand-Response Model for High Performance Computing Systems, Kishwar Ahmed, Jason Liu, and Xingfu Wu. In Proceedings of the 25th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS 2017), September 2017. [paper]

Abstract

Demand response refers to reducing energy consumption of participating systems in response to a transient surge in power demand or other emergency events. Demand response is particularly important for maintaining power grid transmission stability, as well as achieving overall energy saving. High Performance Computing (HPC) systems can be considered ideal participants for demand-response programs, due to their massive energy demand. However, the potential loss of performance must be weighed against the possible gain in power system stability and energy reduction. In this paper, we explore the opportunity of demand response on HPC systems by proposing a new HPC job scheduling and resource provisioning model. More specifically, the proposed model applies power-bound energy-conservation job scheduling during the critical demand-response events, while maintaining the traditional performance-optimized job scheduling during the normal period. We expect such a model can attract willing participation of HPC systems in demand-response programs, as it can improve both power stability and energy saving without significantly compromising application performance. We implement the proposed method in a simulator and compare it with the traditional scheduling approach. Using trace-driven simulation, we demonstrate that HPC demand response is a viable approach toward power stability and energy savings with only a marginal increase in the jobs' execution time.
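The two-mode scheduling idea, performance-optimized normally and power-bound during demand-response events, can be sketched as a job-selection policy with an optional power cap. The job representation and the skip-on-cap rule are assumptions for the example, not the paper's actual model:

```python
def select_jobs(queue, free_nodes, power_cap=None):
    """Pick queued jobs to launch, FCFS, optionally under a power cap.

    Each job is (name, nodes, power_kw). In normal operation
    (power_cap=None) only node availability limits scheduling; during
    a demand-response event a cap on total launched power applies too."""
    launched, used_power = [], 0.0
    for name, nodes, power in queue:
        if nodes > free_nodes:
            break                       # strict FCFS: stop at first misfit
        if power_cap is not None and used_power + power > power_cap:
            continue                    # defer power-hungry jobs during DR
        launched.append(name)
        free_nodes -= nodes
        used_power += power
    return launched
```

The trade-off the paper quantifies shows up directly here: tightening `power_cap` defers jobs (longer execution times) in exchange for a lower power draw during the event.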

Bibtex

@inproceedings{mascots17-energy,
  title = {An Energy Efficient Demand-Response Model for High Performance Computing Systems},
  author = {Kishwar Ahmed and Jason Liu and Xingfu Wu},
  booktitle = {Proceedings of the 25th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS 2017)},
  pages = {175--186},
  month = {September},
  year = {2017}
}

Slides