Invited Talk: Faster and Better Hybrid Testbeds for Future Network Research

June 1, 2017

Future Network Theory and Application Laboratory (FNL)
Beijing University of Posts and Telecommunications, Beijing, China
Host: Professor Tao Huang (黄韬)

Abstract: Modeling and simulation (M&S) plays an important role in the design, analysis, and performance evaluation of computer networks. The ability to execute large-scale simulation on high-end computing systems has enabled us to model detailed and complex network behaviors. However, the difficulty in reproducing realistic large-scale network phenomena goes beyond designing efficient parallel algorithms. This talk will cover some of the recent high-performance network modeling and simulation techniques, particularly in the context of developing testbeds for future network research. We will focus specifically on our recent research in real-time simulation, hybrid network traffic modeling, and symbiotic simulation and emulation.

HPPAC’17 Paper: Energy-Aware Scheduling

When Good Enough Is Better: Energy-Aware Scheduling for Multicore Servers, Xinning Hui, Zhihui Du, Jason Liu, Hongyang Sun, Yuxiong He, David A. Bader. In Proceedings of the 13th Workshop on High-Performance, Power-Aware Computing (HPPAC 2017), held in conjunction with the 31st IEEE International Parallel and Distributed Processing Symposium (IPDPS 2017), May 2017. [paper]

Abstract

Power is a primary concern for mobile, cloud, and high-performance computing applications. Approximate computing refers to running applications to obtain results with tolerable errors under resource constraints, and it can be applied to balance energy consumption with service quality. In this paper, we propose a “Good Enough (GE)” scheduling algorithm that uses approximate computing to provide satisfactory QoS (Quality of Service) for interactive applications with significant energy savings. Given a user-specified quality level, the GE algorithm works in the AES (Aggressive Energy Saving) mode for the majority of the time, neglecting the low-quality portions of the workload. When the perceived quality falls below the required level, the algorithm switches to the BQ (Best Quality) mode with a compensation policy. To avoid core speed thrashing between the two modes, GE employs a hybrid power distribution scheme that uses the Equal-Sharing (ES) policy to distribute power among the cores when the workload is light (to save energy) and the Water-Filling (WF) policy when the workload is high (to improve quality). We conduct simulations to compare the performance of GE with existing scheduling algorithms. Results show that the proposed algorithm can provide large energy savings with satisfactory user experience.
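As a rough illustration of the two power-distribution policies and the mode switching described above, the sketch below implements equal sharing, water-filling over per-core power demands, and a toy AES/BQ loop. The function names, thresholds, and loop structure are illustrative simplifications, not the paper's actual algorithm.

```python
def equal_sharing(demands, budget):
    """ES policy: split the power budget evenly across all cores."""
    return [budget / len(demands)] * len(demands)

def water_filling(demands, budget):
    """WF policy: fill the least-demanding cores first, capping each
    core at its demand, so heavily loaded cores receive the surplus."""
    n = len(demands)
    alloc = [0.0] * n
    remaining = budget
    for k, i in enumerate(sorted(range(n), key=lambda i: demands[i])):
        share = remaining / (n - k)        # even split of what is left
        alloc[i] = min(demands[i], share)  # capped at the core's demand
        remaining -= alloc[i]
    return alloc

def ge_modes(jobs, q_target):
    """Toy mode-switching loop: skip low-quality work in AES mode and
    fall back to BQ mode when perceived quality drops below target.
    Each job is a (quality_contribution, is_low_quality) pair."""
    executed, done, total, mode = [], 0.0, 0.0, "AES"
    for quality, low in jobs:
        total += quality
        if not (mode == "AES" and low):
            executed.append(quality)
            done += quality
        mode = "BQ" if done / total < q_target else "AES"
    return executed, mode
```

Note how water-filling degenerates to equal sharing when every demand exceeds the fair share, which is why the hybrid scheme can switch between the two based on load without discontinuities.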

Bibtex

@INPROCEEDINGS{ipdpsw17-approx,
  author={X. Hui and Z. Du and J. Liu and H. Sun and Y. He and D. A. Bader},
  booktitle={Proceedings of the 13th Workshop on High-Performance, Power-Aware Computing (HPPAC 2017), held in conjunction with 31st IEEE International Parallel and Distributed Processing Symposium (IPDPS 2017)},
  title={When Good Enough Is Better: Energy-Aware Scheduling for Multicore Servers},
  pages={984-993},
  doi={10.1109/IPDPSW.2017.38},
  month={May},
  year={2017}
}

Invited Talk: High-Performance Modeling and Simulation of Computer Networks

May 26, 2017

Department of Computer Science
Tsinghua University, Beijing, China
Host: Professor Zhihui Du (都志辉)

Abstract: Modeling and simulation (M&S) plays an important role in the design, analysis, and performance evaluation of complex systems. Many of these systems, such as computer networks, involve a large number of interrelated components and processes. Complex behaviors emerge as these components and processes inter-operate across multiple scales at various granularities. M&S must be able to provide sufficiently accurate results while coping with the scale and complexity.

My talk will focus on two novel techniques in high-performance network modeling and simulation. The first is a GPU-assisted hybrid network traffic modeling method. The hybrid approach offloads the computationally intensive bulk-traffic calculations to the GPU in the background, while keeping the detailed simulation of network transactions on the CPU in the foreground. Our experiments show that the CPU-GPU hybrid approach achieves significant performance improvement over the CPU-only approach.
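The CPU-GPU split can be pictured with a small sketch: background flows evolve as one data-parallel fluid update over all links (the part that would run as a GPU kernel), while individual foreground packets consult the fluid backlog for their delay. Here NumPy arrays stand in for GPU buffers, and the fluid model is a deliberate simplification rather than the model from the talk.

```python
import numpy as np

def background_step(offered, capacity, backlog, dt):
    """Data-parallel fluid update for all links at once (the
    GPU-friendly part): each link's backlog grows by offered load
    minus drained capacity, floored at zero."""
    return np.maximum(backlog + (offered - capacity) * dt, 0.0)

def foreground_delay(pkt_bits, link, backlog, capacity):
    """Per-packet delay computed on the CPU side: the packet joins
    the queue implied by the background fluid backlog on its link."""
    return (backlog[link] + pkt_bits) / capacity[link]
```

The point of the split is that the vectorized `background_step` touches every link uniformly (ideal for a GPU), while `foreground_delay` is invoked only for the comparatively few packets simulated in detail.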

The second technique is a distributed network emulation method based on simulation symbiosis. Mininet is a container-based emulation environment for studying networks consisting of virtual hosts and OpenFlow-enabled virtual switches on Linux. It is well-known, however, that experiments using Mininet may lose fidelity for large-scale networks under heavy traffic loads. The proposed symbiotic approach uses an abstract network model to coordinate distributed Mininet instances with superimposed traffic to represent large-scale network scenarios.

ICC’17 Paper: Mininet Symbiosis

Distributed Mininet with Symbiosis, Rong Rong and Jason Liu. In Proceedings of the IEEE International Conference on Communications (ICC 2017), May 2017. [paper]

Abstract

Mininet is a container-based emulation environment for studying networks with virtual hosts and OpenFlow-enabled virtual switches on Linux. However, it is well-known that experiments using Mininet may lose fidelity for large-scale networks and heavy traffic loads. One solution is a distributed setup in which an experiment comprises multiple instances of Mininet running on a cluster, each handling a subset of the virtual hosts and switches. Such an arrangement, however, is still constrained by the bandwidth and latency limitations of the physical connections between the instances. In this paper, we propose a novel method of integrating distributed Mininet instances using a symbiotic approach, which extends an existing method for combining real-time simulation and emulation. We use an abstract network model to coordinate the distributed instances, which are superimposed to represent the target network. In this way, one can more effectively study the behavior of real implementations of network applications on large-scale networks, since the interaction between the Mininet instances captures only the effect of contention among network flows in shared queues, as opposed to exchanging individual network packets, which can be limited by bandwidth or sensitive to latency. We provide a prototype implementation of the new approach and present validation studies showing that it achieves accurate results. We also present a case study that successfully replicates the behavior of a denial-of-service (DoS) attack protocol.
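The key idea, instances exchanging aggregate flow rates through an abstract model of the shared queues rather than individual packets, can be sketched with a max-min fair allocation on a single shared link. The symbiosis protocol itself is more involved; this is only an illustration of the kind of computation the abstract model performs.

```python
def maxmin_shares(demands, capacity):
    """Max-min fair rates for flows contending on one shared link.
    In a symbiotic setup, each Mininet instance would report the
    aggregate demands of its flows and receive back rate caps like
    these, instead of exchanging individual packets."""
    alloc, active, cap = {}, dict(demands), float(capacity)
    while active:
        share = cap / len(active)
        satisfied = {f: d for f, d in active.items() if d <= share}
        if not satisfied:                     # all remaining flows bottlenecked
            alloc.update({f: share for f in active})
            return alloc
        for f, d in satisfied.items():        # grant small demands in full
            alloc[f] = d
            cap -= d
            del active[f]
    return alloc
```

Because only a handful of per-flow rates cross instance boundaries, the coordination traffic is insensitive to the packet-level load inside each instance, which is precisely what the abstract affirms about bandwidth and latency limits.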

Bibtex

@INPROCEEDINGS{icc2017-symbiosis,
  author={R. Rong and J. Liu},
  booktitle={2017 IEEE International Conference on Communications (ICC)},
  title={Distributed Mininet with Symbiosis},
  pages={1-6},
  doi={10.1109/ICC.2017.7996343},
  month={May},
  year={2017}
}

Slides

Invited Talk: High-Performance Modeling and Simulation of Computer Networks

April 26, 2017

Laboratory of Information, Networking and Communication Sciences (LINCS), Paris, France
Host: Professor Dario Rossi

Abstract: Modeling and simulation (M&S) plays an important role in the design, analysis, and performance evaluation of complex systems. Many of these systems, such as computer networks, involve a large number of interrelated components and processes. Complex behaviors emerge as these components and processes inter-operate across multiple scales at various granularities. M&S must be able to provide sufficiently accurate results while coping with the scale and complexity.

My talk will focus on two novel techniques in high-performance network modeling and simulation. The first is a GPU-assisted hybrid network traffic modeling method. The hybrid approach offloads the computationally intensive bulk-traffic calculations to the GPU in the background, while keeping the detailed simulation of network transactions on the CPU in the foreground. Our experiments show that the CPU-GPU hybrid approach achieves significant performance improvement over the CPU-only approach.

The second technique is a distributed network emulation method based on simulation symbiosis. Mininet is a container-based emulation environment for studying networks consisting of virtual hosts and OpenFlow-enabled virtual switches on Linux. It is well-known, however, that experiments using Mininet may lose fidelity for large-scale networks under heavy traffic loads. The proposed symbiotic approach uses an abstract network model to coordinate distributed Mininet instances with superimposed traffic to represent large-scale network scenarios.

Invited Talk: Extending PrimoGENI for Symbiotic Distributed Network Emulation

March 13, 2017

GENI Regional Workshop (GRW), held in conjunction with GEC25, Miami, Florida, USA

The talk covers recent developments in hybrid at-scale network experimentation, extending the earlier PrimoGENI project.

[slides]

ICBDA’17 Paper: MOOC Learning Zipf Law

Zipf’s Law in MOOC Learning Behavior, Chang Men, Xiu Li, Zhihui Du, Jason Liu, Manli Li, and Xiaolei Zhang. In Proceedings of the 2nd IEEE International Conference on Big Data Analysis (ICBDA 2017), March 2017. [paper]

Abstract

Learners participating in Massive Open Online Courses (MOOC) have a wide range of backgrounds and motivations. Many MOOC learners sign up for courses only to take a brief look; only a few go through the entire content, and even fewer eventually obtain a certificate. We discovered this phenomenon after examining 76 courses on the xuetangX platform. More specifically, we found that in many courses the learning coverage (one of the metrics used to estimate the learners' active engagement with the online courses) follows a Zipf distribution. We apply the maximum likelihood estimation method to fit Zipf's law and test our hypothesis using a chi-square test. The results of our study are expected to offer insight into the unique learning behavior on MOOC platforms and thus help improve their effectiveness and the design of courses.
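As a rough illustration of the fitting procedure, the sketch below estimates the Zipf exponent by maximum likelihood over rank-frequency counts (using a simple grid search as a stand-in for whatever optimizer the paper uses) and computes the Pearson chi-square statistic against the fitted law.

```python
import math

def zipf_loglik(counts, s):
    """Log-likelihood of rank-frequency counts under a Zipf law
    p(k) proportional to k^-s over ranks 1..N."""
    n = len(counts)
    H = sum(k ** -s for k in range(1, n + 1))  # normalizing constant
    return sum(c * (-s * math.log(k) - math.log(H))
               for k, c in enumerate(counts, start=1))

def fit_zipf(counts, grid=None):
    """MLE of the Zipf exponent via grid search over candidate s."""
    grid = grid or [i / 100 for i in range(50, 301)]
    return max(grid, key=lambda s: zipf_loglik(counts, s))

def chi_square(counts, s):
    """Pearson chi-square statistic of the counts against the
    fitted Zipf law; small values indicate a good fit."""
    n, total = len(counts), sum(counts)
    H = sum(k ** -s for k in range(1, n + 1))
    expected = [total * (k ** -s) / H for k in range(1, n + 1)]
    return sum((o - e) ** 2 / e for o, e in zip(counts, expected))
```

For example, counts exactly proportional to 1/k (such as 60, 30, 20, 15, 12) recover an exponent of 1 with a near-zero chi-square statistic.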

Bibtex

@inproceedings{mooczipf,
  title = {Zipf's Law in MOOC Learning Behavior},
  author = {Chang Men and Xiu Li and Zhihui Du and Jason Liu and Manli Li and Xiaolei Zhang},
  booktitle = {Proceedings of the 2nd IEEE International Conference on Big Data Analysis (ICBDA 2017)},
  month = {March},
  year = {2017}
}