WSC’18 Paper: IMCSim: Parameterized Performance Prediction for Implicit Monte Carlo Codes

IMCSim: Parameterized Performance Prediction for Implicit Monte Carlo Codes, Gopinath Chennupathi, Stephan Eidenbenz, Alex Long, Olena Tkachenko, Joseph Zerr, and Jason Liu. In Proceedings of the 2018 Winter Simulation Conference (WSC 2018), December 2018. (To appear).

Abstract

Monte Carlo techniques for radiation transport play a significant role in modeling complex astrophysical phenomena. In this paper, we design an application model (IMCSim) of an Implicit Monte Carlo (IMC) particle code using the Performance Prediction Toolkit (PPT), a discrete-event simulation-based modeling framework for predicting code performance on a large range of parallel platforms. We present validation results for IMCSim. We then use the fast parameter scanning that such a high-level loop-structure model of a complex code enables to predict optimal IMC parameter settings for interconnect latency hiding. We find that variations in interconnect bandwidth have a significant effect on optimal parameter values. Our results suggest potential value in using IMCSim as a pre-step to substantial IMC runs, quickly identifying optimal parameter values for the specific hardware platform on which IMC runs.
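
For a rough sense of the kind of fast parameter scan such a loop-structure model enables, the Python sketch below sweeps a hypothetical batch-size parameter against a toy latency-hiding model; the model, the parameter names, and all numbers are invented placeholders, not IMCSim itself.

def simulated_runtime(particles_per_rank, batch_size, link_latency_us, bandwidth_gbps=10.0):
    """Toy stand-in for a loop-structure model: per-batch compute can hide part of
    the per-batch communication; tiny batches pay message latency many times,
    huge batches leave a large final exchange exposed. Purely illustrative."""
    num_batches = max(1, particles_per_rank // batch_size)
    compute_per_batch = batch_size * 1e-7                 # assume 0.1 us of work per particle
    bytes_per_batch = batch_size * 64                     # assume 64 bytes of state per particle
    comm_per_batch = link_latency_us * 1e-6 + bytes_per_batch / (bandwidth_gbps * 1e9 / 8)
    # communication of one batch overlaps with computation of the next, except the last
    exposed = max(0.0, comm_per_batch - compute_per_batch) * (num_batches - 1) + comm_per_batch
    return num_batches * compute_per_batch + exposed

def scan_batch_sizes(particles_per_rank, link_latency_us, candidates):
    """Fast parameter scan: evaluate every candidate with the model and pick the best."""
    results = {b: simulated_runtime(particles_per_rank, b, link_latency_us) for b in candidates}
    return min(results, key=results.get), results

if __name__ == "__main__":
    best, results = scan_batch_sizes(particles_per_rank=200_000, link_latency_us=1.5,
                                     candidates=[10, 100, 1_000, 10_000, 100_000])
    for b in sorted(results):
        print(f"batch size {b:>7}: predicted runtime {results[b]*1e3:.3f} ms")
    print("predicted optimal batch size:", best)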

Bibtex

@inproceedings{imcsim,
title = {IMCSim: Parameterized Performance Prediction for Implicit Monte Carlo Codes},
author = {Chennupathi, Gopinath and Eidenbenz, Stephan and Long, Alex and Tkachenko, Olena and Zerr, Joseph and Liu, Jason},
booktitle = {Proceedings of the 2018 Winter Simulation Conference (WSC 2018)},
month = {December},
year = {2018}
}

SUSCOM’18 Paper: Program Power Profiling Based on Phase Behaviors

Program Power Profiling Based on Phase Behaviors, Xiaobin Ma, Zhihui Du, and Jason Liu. Sustainable Computing, Informatics and Systems, doi:10.1016/j.suscom.2018.05.001 – 17 May 2018. To appear. [preprint]

Abstract

Power profiling tools based on fast and accurate workload analysis can be useful for job scheduling and resource allocation aiming to optimize the power consumption of large-scale, high-performance computer systems. In this article, we propose a novel method for predicting the power consumption of a complete workload or application by extrapolating from the measured power consumption of only a few code segments of the same application. As such, it provides a fast yet effective way of predicting the power consumption of both single- and multi-threaded programs on arbitrary architectures without having to profile the entire program's execution, which would be costly, especially for a long-running program. Our method employs a set of code analysis tools to capture the program's phase behavior and then uses a multi-variable linear regression method to estimate the power consumption of the entire program. For validation, we select the SPEC 2006 benchmark suite and the NAS parallel benchmarks to evaluate the accuracy and effectiveness of our method. Experimental results on three generations of multicore processors show that our power profiling method achieves good accuracy in predicting a program's energy use, with relatively small errors.
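
As a rough illustration of the regression step, the following Python sketch fits a multi-variable linear model from a handful of profiled segments and extrapolates to the whole program; the feature set and all numbers are hypothetical, not the ones used in the paper.

import numpy as np

# Hypothetical per-phase features extracted by code-analysis tools
# (e.g., instructions, memory accesses, cache misses per segment);
# the actual feature set used in the paper may differ.
segment_features = np.array([
    [2.1e9, 4.0e8, 1.2e7],
    [1.5e9, 6.5e8, 3.0e7],
    [3.2e9, 2.2e8, 0.8e7],
    [0.9e9, 7.1e8, 4.4e7],
])
segment_power = np.array([68.0, 74.5, 63.2, 79.1])   # measured watts for each profiled segment

# Fit a multi-variable linear model: power ~ w0 + w1*f1 + w2*f2 + w3*f3
X = np.hstack([np.ones((segment_features.shape[0], 1)), segment_features])
coeffs, *_ = np.linalg.lstsq(X, segment_power, rcond=None)

# Extrapolate to the whole program from its aggregate phase profile
whole_program_features = np.array([1.0, 2.4e9, 5.0e8, 2.0e7])
predicted_power = whole_program_features @ coeffs
print(f"predicted average power: {predicted_power:.1f} W")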

Bibtex

@article{suscom18,
title = {Program Power Profiling Based on Phase Behaviors},
author = {Ma, Xiaobin and Du, Zhihui and Liu, Jason},
journal = {Sustainable Computing, Informatics and Systems},
year = {2018},
url = {https://doi.org/10.1016/j.suscom.2018.05.001},
doi = {10.1016/j.suscom.2018.05.001}
}

HPCC’18 Paper: HPC Demand Response via Power Capping and Node Scaling

Enabling Demand Response for HPC Systems Through Power Capping and Node Scaling, Kishwar Ahmed, Jason Liu, and Kazutomo Yoshii. In Proceedings of the 20th IEEE International Conference on High Performance Computing and Communications (HPCC-2018), June 2018. [to appear]

Abstract

Demand response is an increasingly popular program for ensuring power grid stability during a sudden surge in power demand. We expect high-performance computing (HPC) systems to be valued participants in such programs because of their massive power consumption. In this paper, we propose an emergency demand-response model exploiting both power capping of HPC systems and node scaling of HPC applications. First, we present power and performance prediction models for HPC systems with only power capping, upon which we build our demand-response model. We validate the models with real-life measurements of application characteristics. Next, we present and validate models to predict energy-to-solution for HPC applications with different numbers of nodes and power-capping values. Based on the prediction models, we propose an emergency demand-response participation model for HPC systems that determines the optimal resource allocation based on power capping and node scaling. Finally, we demonstrate the effectiveness of our proposed demand-response model using real-life measurements and trace data. We show that our approach can reduce energy consumption with only a slight increase in execution time for HPC applications during critical demand-response periods.
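
The following Python sketch conveys the flavor of the final selection step, picking the (node count, power cap) pair with the lowest predicted energy-to-solution that fits within a demand-response power budget; the runtime and energy models and all constants are made up for illustration and are not the paper's fitted models.

def runtime_s(nodes, cap_w):
    """Hypothetical runtime model: sublinear scaling from a 64-node baseline,
    plus a slowdown when the per-node power cap drops below 115 W."""
    base = 3600.0 * (64.0 / nodes) ** 0.9
    slowdown = 1.0 + max(0.0, (115.0 - cap_w) / 115.0)
    return base * slowdown

def energy_j(nodes, cap_w):
    return runtime_s(nodes, cap_w) * nodes * cap_w

def best_config(power_budget_w, node_options, cap_options):
    """Pick the (nodes, cap) pair with the lowest predicted energy-to-solution
    whose total draw fits within the demand-response power budget."""
    feasible = [(n, c) for n in node_options for c in cap_options if n * c <= power_budget_w]
    return min(feasible, key=lambda nc: energy_j(*nc)) if feasible else None

if __name__ == "__main__":
    choice = best_config(power_budget_w=6000,
                         node_options=[16, 32, 64],
                         cap_options=[80, 95, 115])
    print("selected (nodes, cap):", choice, "-> energy",
          round(energy_j(*choice) / 1e6, 1), "MJ")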

Bibtex

@inproceedings{hpcc18-power,
title = {Enabling Demand Response for HPC Systems Through Power Capping and Node Scaling},
author = {Kishwar Ahmed and Jason Liu and Kazutomo Yoshii},
booktitle = {Proceedings of the 20th IEEE International Conference on High Performance Computing and Communications (HPCC'18)},
month = {June},
year = {2018}
}

SIGSIM-PADS’18 Paper: Parallel Application Performance Prediction

Parallel Application Performance Prediction Using Analysis Based Models and HPC Simulations, Mohammad Abu Obaida, Jason Liu, Gopinath Chennupati, Nandakishore Santhi, and Stephan Eidenbenz. In Proceedings of the 2018 SIGSIM Principles of Advanced Discrete Simulation (SIGSIM-PADS’18), May 2018. [paper]

Abstract

Parallel application performance models provide valuable insight into the performance of real systems. Tools capable of fast, accurate, and comprehensive prediction and evaluation of high-performance computing (HPC) applications and system architectures are therefore of great value. This paper presents PyPassT, an analysis-based modeling framework built on static program analysis and integrated simulation of target HPC architectures. More specifically, the framework analyzes application source code written in C with OpenACC directives and transforms it into an application model describing its computation and communication behavior (including CPU and GPU workloads, memory accesses, and message-passing transactions). The application model is then executed on a simulated HPC architecture for performance analysis. Preliminary experiments demonstrate that the proposed framework can represent the runtime behavior of benchmark applications with good accuracy.
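
The sketch below illustrates, in very simplified form, the idea of an application model that a source-analysis pass could emit and a simulator could replay against a hardware description; the task fields, timing formulas, and hardware numbers are invented placeholders, not PyPassT's actual representation.

# A toy application model of the kind a source-analysis pass might emit:
# an ordered list of abstract compute and communication tasks.
app_model = [
    {"kind": "cpu", "flops": 2.0e9},
    {"kind": "gpu", "flops": 5.0e10},
    {"kind": "mpi", "bytes": 8.0e6},
    {"kind": "cpu", "flops": 1.0e9},
]

# Hypothetical target-machine parameters (peak rates and network characteristics).
hardware = {"cpu_flops": 5.0e10, "gpu_flops": 1.0e12,
            "net_bw": 1.0e10, "net_lat": 2.0e-6}

def simulate(model, hw):
    """Walk the task list and accumulate predicted time on the target machine."""
    t = 0.0
    for task in model:
        if task["kind"] == "cpu":
            t += task["flops"] / hw["cpu_flops"]
        elif task["kind"] == "gpu":
            t += task["flops"] / hw["gpu_flops"]
        elif task["kind"] == "mpi":
            t += hw["net_lat"] + task["bytes"] / hw["net_bw"]
    return t

print(f"predicted runtime: {simulate(app_model, hardware) * 1e3:.2f} ms")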

Bibtex

@inproceedings{pads18-hpcpred,
title = {Parallel Application Performance Prediction Using Analysis Based Models and HPC Simulations},
author = {Mohammad Abu Obaida and Jason Liu and Gopinath Chennupati and Nandakishore Santhi and Stephan Eidenbenz},
booktitle = {Proceedings of the 2018 SIGSIM Principles of Advanced Discrete Simulation (SIGSIM-PADS’18)},
pages = {49--59},
month = {May},
year = {2018},
doi = {10.1145/3200921.3200937}
}

Slides

WSC’17 Paper: HPC Job Scheduling Simulation

Simulation of HPC Job Scheduling and Large-Scale Parallel Workloads, Mohammad Abu Obaida and Jason Liu. In Proceedings of the 2017 Winter Simulation Conference (WSC 2017), W. K. V. Chan, A. D’Ambrogio, G. Zacharewicz, N. Mustafee, G. Wainer, and E. Page, eds., December 2017. [paper]

Abstract

The paper presents a simulator designed specifically for evaluating job scheduling algorithms on large-scale HPC systems. The simulator was developed based on the Performance Prediction Toolkit (PPT), a parallel discrete-event simulator written in Python for rapid assessment and performance prediction of large-scale scientific applications on supercomputers. The proposed job scheduler simulator incorporates PPT's application models and, when coupled with sufficiently detailed architecture models, can represent more realistic job runtime behaviors. Consequently, the simulator can more accurately evaluate different job scheduling and task mapping algorithms on specific target HPC platforms.
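
As a simplified illustration of what such a scheduler simulator evaluates, the following Python sketch replays a tiny job trace under a first-come-first-served policy; the trace, machine size, and policy are made up for illustration and do not reflect the simulator's actual interface.

import heapq

# Toy trace: (submit_time_s, requested_nodes, runtime_s); a real study would use
# job logs for the trace and application/architecture models for the runtimes.
jobs = [(0, 64, 3600), (10, 128, 1800), (20, 32, 600), (30, 256, 7200)]
TOTAL_NODES = 256

def fcfs(trace, total_nodes):
    """First-come-first-served: start each job, in submission order, as soon as
    enough nodes are free; return per-job (start, finish) times."""
    running = []          # heap of (finish_time, nodes_released)
    free = total_nodes
    clock = 0.0
    schedule = []
    for submit, nodes, runtime in trace:
        clock = max(clock, submit)
        # wait for enough running jobs to finish before this one can start
        while free < nodes:
            finish, released = heapq.heappop(running)
            clock = max(clock, finish)
            free += released
        heapq.heappush(running, (clock + runtime, nodes))
        free -= nodes
        schedule.append((clock, clock + runtime))
    return schedule

for job, (start, finish) in zip(jobs, fcfs(jobs, TOTAL_NODES)):
    print(f"job {job}: start={start:.0f}s finish={finish:.0f}s")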

Bibtex

@inproceedings{wsc17-jobsched,
title = {Simulation of HPC Job Scheduling and Large-Scale Parallel Workloads}, 
author = {Mohammad Abu Obaida and Jason Liu},
booktitle = {Proceedings of the 2017 Winter Simulation Conference (WSC 2017)}, 
editor = {W. K. V. Chan and A. D’Ambrogio and G. Zacharewicz and N. Mustafee and G. Wainer and E. Page},
month = {December},
year = {2017}
}

WSC’17 Paper: HPC Simulation History

A Brief History of HPC Simulation and Future Challenges, Kishwar Ahmed, Jason Liu, Abdel-Hameed Badawy, and Stephan Eidenbenz. In Proceedings of the 2017 Winter Simulation Conference (WSC 2017), W. K. V. Chan, A. D’Ambrogio, G. Zacharewicz, N. Mustafee, G. Wainer, and E. Page, eds., December 2017. [paper]

Abstract

High-performance computing (HPC) systems have gone through many changes in their architectural design during the past two decades to satisfy increasingly large-scale scientific computing demands. Accurate, fast, and scalable performance models and simulation tools are essential for evaluating alternative architecture design decisions for these massive-scale computing systems. This paper recounts some of the influential work in modeling and simulation for HPC systems and applications, identifies some of the major challenges, and outlines future research directions that we believe are critical to the HPC modeling and simulation community.

Bibtex

@inproceedings{wsc17-history,
title = {A Brief History of HPC Simulation and Future Challenges}, 
author = {Kishwar Ahmed and Jason Liu and Abdel-Hameed Badawy and Stephan Eidenbenz},
booktitle = {Proceedings of the 2017 Winter Simulation Conference (WSC 2017)}, 
editor = {W. K. V. Chan and A. D’Ambrogio and G. Zacharewicz and N. Mustafee and G. Wainer and E. Page},
month = {December},
year = {2017}
}

MASCOTS’17 Paper: Energy Demand Response Scheduling

An Energy Efficient Demand-Response Model for High Performance Computing Systems, Kishwar Ahmed, Jason Liu, and Xingfu Wu. In Proceedings of the 25th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS 2017), September 2017. [paper]

Abstract

Demand response refers to reducing the energy consumption of participating systems in response to a transient surge in power demand or other emergency events. Demand response is particularly important for maintaining power grid transmission stability, as well as for achieving overall energy savings. High-performance computing (HPC) systems can be considered ideal participants for demand-response programs, due to their massive energy demand. However, the potential loss of performance must be weighed against the possible gain in power system stability and energy reduction. In this paper, we explore the opportunity of demand response on HPC systems by proposing a new HPC job scheduling and resource provisioning model. More specifically, the proposed model applies power-bound, energy-conserving job scheduling during critical demand-response events, while maintaining the traditional performance-optimized job scheduling during normal periods. We expect such a model to attract willing participation of HPC systems in demand-response programs, as it can improve both power stability and energy savings without significantly compromising application performance. We implement the proposed method in a simulator and compare it with the traditional scheduling approach. Using trace-driven simulation, we demonstrate that HPC demand response is a viable approach toward power stability and energy savings with only a marginal increase in the jobs' execution time.
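
A minimal sketch of the two-mode idea, switching to a tighter power bound only during demand-response windows, is shown below; the window times, cap values, and function names are placeholders rather than the paper's settings.

# Illustrative sketch only: power-bound, energy-conserving operation during
# demand-response events, performance-optimized operation otherwise.
def in_demand_response(now_s, dr_windows):
    """Return True if the current time falls inside any demand-response window."""
    return any(start <= now_s < end for start, end in dr_windows)

def pick_power_cap(now_s, dr_windows, normal_cap_w=115, dr_cap_w=80):
    """Return the per-node power cap the scheduler should enforce right now."""
    return dr_cap_w if in_demand_response(now_s, dr_windows) else normal_cap_w

dr_windows = [(14 * 3600, 18 * 3600)]       # e.g., a 2pm-6pm grid emergency
for t in (8 * 3600, 15 * 3600, 20 * 3600):
    print(f"t={t // 3600:02d}:00 -> cap {pick_power_cap(t, dr_windows)} W/node")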

Bibtex

@inproceedings{mascots17-energy,
  title={An Energy Efficient Demand-Response Model for High Performance Computing Systems},
  author={Ahmed, Kishwar and Liu, Jason and Wu, Xingfu},
  booktitle={Proceedings of the 25th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS 2017)},
  pages={175--186},
  month={September},
  year={2017}
}

Slides

HPPAC’17 Paper: Energy-Aware Scheduling

When Good Enough Is Better: Energy-Aware Scheduling for Multicore Servers, Xinning Hui, Zhihui Du, Jason Liu, Hongyang Sun, Yuxiong He, and David A. Bader. In Proceedings of the 13th Workshop on High-Performance, Power-Aware Computing (HPPAC 2017), held in conjunction with the 31st IEEE International Parallel and Distributed Processing Symposium (IPDPS 2017), May 2017. [paper]

Abstract

Power is a primary concern for mobile, cloud, and high-performance computing applications. Approximate computing refers to running applications to obtain results with tolerable errors under resource constraints, and it can be applied to balance energy consumption with service quality. In this paper, we propose a “Good Enough (GE)” scheduling algorithm that uses approximate computing to provide satisfactory QoS (Quality of Service) for interactive applications with significant energy savings. Given a user-specified quality level, the GE algorithm works in the AES (Aggressive Energy Saving) mode for the majority of the time, neglecting the low-quality portions of the workload. When the perceived quality falls below the required level, the algorithm switches to the BQ (Best Quality) mode with a compensation policy. To avoid core speed thrashing between the two modes, GE employs a hybrid power distribution scheme that uses the Equal-Sharing (ES) policy to distribute power among the cores when the workload is light (to save energy) and the Water-Filling (WF) policy when the workload is high (to improve quality). We conduct simulations to compare the performance of GE with existing scheduling algorithms. Results show that the proposed algorithm can provide large energy savings with satisfactory user experience.
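
The following Python sketch illustrates, with invented numbers, the hybrid power-distribution idea: Equal-Sharing when the load is light and Water-Filling when it is heavy. It is not the GE algorithm itself, only the distribution step in simplified form; the demands, budget, and threshold are placeholders.

def equal_sharing(demands, budget):
    """ES: split the power budget evenly, never giving a core more than it asks for."""
    share = budget / len(demands)
    return [min(d, share) for d in demands]

def water_filling(demands, budget, step=0.01):
    """WF: raise a common power level until the budget is exhausted, so lightly
    loaded cores are capped by their own demand and the rest fill up equally."""
    level = 0.0
    while sum(min(d, level) for d in demands) < budget and level < max(demands):
        level += step
    return [min(d, level) for d in demands]

def distribute(demands, budget, heavy_threshold=0.8):
    """Pick ES under light load (save energy) and WF under heavy load (improve quality)."""
    load = sum(demands) / budget
    policy = water_filling if load > heavy_threshold else equal_sharing
    return policy(demands, budget), policy.__name__

alloc, policy = distribute(demands=[5.0, 20.0, 35.0, 60.0], budget=80.0)
print(policy, [round(a, 1) for a in alloc])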

Bibtex

@INPROCEEDINGS{ipdpsw17-approx,
author={X. Hui and Z. Du and J. Liu and H. Sun and Y. He and D. A. Bader},
booktitle={Proceedings of the 13th Workshop on High-Performance, Power-Aware Computing (HPPAC 2017), held in conjunction with 31st IEEE International Parallel and Distributed Processing Symposium (IPDPS 2017)},
title={When Good Enough Is Better: Energy-Aware Scheduling for Multicore Servers},
pages={984--993},
doi={10.1109/IPDPSW.2017.38},
month={May},
year={2017}
}

Invited Talk: Symbiotic Modeling and High-Performance Simulation

Symbiotic Modeling and High-Performance Simulation

January 19, 2017

Department of Computer Science, Colorado School of Mines
Host: Professor Tracy Camp

Abstract: Modeling and simulation plays an important role in the design, analysis, and performance evaluation of complex systems. Many of these systems, such as the Internet and high-performance computing systems, involve a huge number of interrelated components and processes. Complex behaviors emerge as these components and processes inter-operate across multiple scales at various granularities. Modeling and simulation must be able to provide sufficiently accurate results while coping with the scale and complexity of these systems. My talk will focus on some of our latest advances in high-performance modeling and simulation techniques. I will discuss two specific case studies, one on network emulation and the other on high-performance computing (HPC) modeling.
In the first case, I will present a novel distributed network emulation mechanism based on modeling symbiosis. Mininet is a container-based emulation environment that can study networks consisting of virtual hosts and OpenFlow-enabled virtual switches on Linux. It is well known, however, that experiments using Mininet may lose fidelity for large-scale networks and heavy traffic loads. We propose a symbiotic approach, where an abstract network model is used to coordinate the distributed emulation instances superimposed to represent the target network. In doing so, we can effectively study the behavior of real implementations of network applications on large-scale networks in a distributed environment.
In the second case, I will present our latest work on performance modeling of HPC architectures and applications. In collaboration with the Los Alamos National Laboratory, we have developed a highly efficient simulator, called Performance Prediction Toolkit (PPT), which can facilitate rapid and accurate performance prediction of large-scale scientific applications on existing and future HPC architectures.

HPCC’16 Paper: HPC Interconnect Model

Scalable Interconnection Network Models for Rapid Performance Prediction of HPC Applications, Kishwar Ahmed, Jason Liu, Stephan Eidenbenz, and Joe Zerr. In Proceedings of the 18th International Conference on High Performance Computing and Communications (HPCC 2016), December 2016. [paper] [slides]

Abstract

The Performance Prediction Toolkit (PPT) is a simulator developed mainly at Los Alamos National Laboratory to facilitate rapid and accurate performance prediction of large-scale scientific applications on existing and future HPC architectures. In this paper, we present three interconnect models for performance prediction of large-scale HPC applications. They are based on interconnect topologies widely used in HPC systems: torus, dragonfly, and fat-tree. We conduct extensive validation tests of our interconnect models, in particular using configurations of existing HPC systems. Results show that our models provide good accuracy in predicting network behavior. We also present a performance study of a parallel computational physics application to show that our model can accurately predict the parallel behavior of large-scale applications.
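
For a flavor of what a topology-level interconnect model computes, the Python sketch below estimates minimal hop counts on a 3D torus with wrap-around links and derives a first-order message latency; the dimensions, per-hop delay, and bandwidth are placeholders, not PPT's model parameters.

def torus_hops(src, dst, dims):
    """Minimal hop count between two nodes on a torus with wrap-around links."""
    hops = 0
    for s, d, n in zip(src, dst, dims):
        delta = abs(s - d)
        hops += min(delta, n - delta)      # go the shorter way around each ring
    return hops

def message_latency(src, dst, dims, nbytes, per_hop_s=50e-9, bandwidth_bps=10e9):
    """First-order latency estimate: per-hop delay plus serialization time."""
    return torus_hops(src, dst, dims) * per_hop_s + nbytes / bandwidth_bps

dims = (8, 8, 8)
print("hops:", torus_hops((0, 0, 0), (5, 7, 2), dims))
print(f"latency for 1 MiB: {message_latency((0, 0, 0), (5, 7, 2), dims, 2**20) * 1e6:.1f} us")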

Bibtex

@INPROCEEDINGS{Ahmed2016:scale-intercon,
author={K. Ahmed and J. Liu and S. Eidenbenz and J. Zerr},
booktitle={Proceedings of the IEEE 18th International Conference on High Performance Computing and Communications (HPCC)},
title={Scalable Interconnection Network Models for Rapid Performance Prediction of HPC Applications},
year={2016},
pages={1069--1078},
doi={10.1109/HPCC-SmartCity-DSS.2016.0151},
month={December}
}