MULTI-TENANCY

Introduction to CloudMix

Background

CloudMix focuses on synthesizing and replicating the resource usage and micro-architectural behaviors of a diverse range of cloud workloads. This is motivated by the fact that in today's cloud data centers, workloads come from a wide range of applications. Typical examples are:

–     Long-running service workloads. These workloads offer online services, such as web search engines and e-commerce sites, to end users, and the services usually keep running for months or years.
–     Short-term data analytic workloads. These workloads process input data of different scales (from KB to PB) within relatively short periods (from a few seconds to several hours). Example workloads are Hadoop, Spark, and Shark jobs.
–     Other workloads. Examples are workloads of storage and monitoring services, and testing and development jobs.

Motivation

Cloud workloads have inherent heterogeneity and dynamicity in their workload characteristics, including the usage of virtualized resources (CPU and memory) and their corresponding micro-architectural behaviors: cycles per instruction (CPI) and memory accesses per instruction (MAI). We use an example of one-hour workloads from the Google cluster trace to demonstrate this.
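To make the two micro-architectural metrics concrete, here is a minimal sketch of how CPI and MAI are derived from raw hardware counter readings; the counter values are invented for illustration and are not figures from the trace.

```python
# Minimal sketch: deriving CPI and MAI from hardware counter readings.
# The sample counter values below are invented for illustration.

def cpi(cycles: int, instructions: int) -> float:
    """Cycles per instruction: CPU cycles / retired instructions."""
    return cycles / instructions

def mai(mem_accesses: int, instructions: int) -> float:
    """Memory accesses per instruction: memory accesses / retired instructions."""
    return mem_accesses / instructions

# Hypothetical counters sampled over a one-second window.
cycles, instructions, mem_accesses = 3_200_000_000, 2_000_000_000, 260_000_000
print(f"CPI = {cpi(cycles, instructions):.2f}")        # 1.60
print(f"MAI = {mai(mem_accesses, instructions):.2f}")  # 0.13
```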

–     Workload Heterogeneity. Figure 1 displays the probability density distributions of the cloud workloads in terms of their CPU usage, CPI, memory usage, and MAI. We can see that the values of these workload behaviors are widely distributed.

Figure 1. Workload heterogeneity of one-hour Google cloud workloads

–     Workload Dynamicity. Figure 2 illustrates the fluctuations of accumulated CPU usage, CPI, memory usage, and MAI in the cloud workloads over the course of one hour (3600 seconds). We can observe that most of these workload behaviors vary every five minutes.

Figure 2. Workload dynamicity of one-hour Google cloud workloads

–     Requirements for fast evaluation. In benchmarking, workloads spanning sufficient durations (hours or days) are usually required to achieve representative evaluation results. In many practical scenarios, however, evaluations must be completed one or two orders of magnitude faster than the original trace duration, as sketched below.
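As a rough illustration of such duration scaling (not CloudMix's actual mechanism, which is described in the steps later), trace timestamps can be divided by a speed-up factor so that the same sequence of behaviors replays in a fraction of the original time:

```python
# Minimal sketch of duration scaling: replay a one-hour trace faster by
# dividing record timestamps by a speed-up factor. Record fields and the
# factor are illustrative assumptions.

def scale_trace(records, speedup):
    """Rescale timestamps so the replay runs `speedup` times faster."""
    return [{**r, "time": r["time"] / speedup} for r in records]

# Hypothetical usage records at 5-minute intervals over one hour.
trace = [{"time": t, "cpu": 0.4, "mem": 0.3} for t in range(0, 3600, 300)]

fast = scale_trace(trace, speedup=10.0)
print(fast[-1]["time"])  # 330.0: the last record now starts at 330s, not 3300s
```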

Targets

To support successful and efficient evaluation of cloud systems, CloudMix pursues two objectives.

–     Promoting the development of data center technology. Developing new architectures (processors, memory systems, and network systems), innovative theories, algorithms, techniques, and software stacks to manage big data and extract its value and hidden knowledge.
–     System optimization. Assisting system owners in making decisions when planning system features, tuning system configurations, validating deployment strategies, and conducting other efforts to improve systems. For example, benchmarking results can identify performance bottlenecks in big data systems, thus guiding the optimization of system configuration and resource allocation.

Related Work

Existing cloud benchmarks can be divided into two categories: application benchmarks, which provide workloads for specific application scenarios, and synthetic benchmarks, which generate synthetic I/O workloads according to real-world workload traces.

Table 1. Existing benchmarks

Benchmarks | Diverse workload behaviors | CPU and memory resource usage | Micro-architectural operations
Application benchmarks: YCSB [1], Cloudstone [2], SPEC Cloud IaaS 2016 [3], CloudSuite [4], BigDataBench [5] | No | Yes | Yes
Synthetic benchmarks: GridMix [6], SWIM [7] | Yes | No | No

[1] B. F. Cooper, A. Silberstein, E. Tam, R. Ramakrishnan, and R. Sears. Benchmarking cloud serving systems with YCSB. In SoCC '10, pages 143–154, New York, NY, USA, 2010. ACM.
[2] Will Sobel, Shanti Subramanyam, Akara Sucharitakul, Jimmy Nguyen, Hubert Wong, Arthur Klepchukov, Sheetal Patil, Armando Fox, and David Patterson. Cloudstone: Multi-platform, multi-language benchmark and measurement tools for web 2.0. In CCA’08, volume 8, 2008.
[3] The SPEC Cloud IaaS 2016 benchmark. [Online]. Available: https://www.spec.org/cloud_iaas2016/.
[4] M. Ferdman, A. Adileh, O. Kocberber, S. Volos, M. Alisafaee, D. Jevdjic, C. Kaynak, A. D. Popescu, A. Ailamaki, and B. Falsafi. Clearing the clouds: a study of emerging scale-out workloads on modern hardware. In ACM SIGPLAN Notices, volume 47, pages 37–48. ACM, 2012.
[5] Lei Wang, Jianfeng Zhan, Chunjie Luo, Yuqing Zhu, Qiang Yang, Yongqiang He, Wanling Gao, Zhen Jia, Yingjie Shi, Shujie Zhang, Cheng Zhen, Gang Lu, Kent Zhan, Xiaona Li, and Bizhu Qiu. BigDataBench: a big data benchmark suite from internet services. In the 20th IEEE International Symposium on High Performance Computer Architecture (HPCA-2014), February 15-19, 2014, Orlando, Florida, USA.
[6] GridMix. [Online]. Available: http://hadoop.apache.org/docs/stable1/gridmix.html.
[7] Y. Chen, S. Alspaugh, and R. Katz. Interactive analytical processing in big data systems: A cross-industry study of mapreduce workloads. Proceedings of the VLDB Endowment, 5(12):1802–1813, 2012.

System Overview

CloudMix is a benchmarking tool that generates synthetic workloads to mimic the resource usage and micro-architectural behaviors of diverse cloud workloads. The basic idea of CloudMix is to employ a repository of basic building blocks, called Reducible Workload Blocks (RWBs), as a high-level representation of cloud workloads, and then to combine RWBs to form workloads with different behaviors.
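The sketch below shows what such a repository might look like; the field names and example blocks are assumptions for illustration, not CloudMix's actual RWB format. Note that while the resource usages of combined blocks add up, combining CPI and MAI would require instruction-weighted averaging.

```python
# Hypothetical sketch of an RWB repository; field names are assumptions.
from dataclasses import dataclass

@dataclass
class RWB:
    name: str    # identifier of the reducible workload block
    cpu: float   # CPU usage the block produces (cores)
    mem: float   # memory usage the block produces (GB)
    cpi: float   # cycles per instruction the block exhibits
    mai: float   # memory accesses per instruction the block exhibits

repository = [
    RWB("cpu-intensive", cpu=0.9, mem=0.1, cpi=0.8, mai=0.02),
    RWB("mem-intensive", cpu=0.3, mem=1.5, cpi=2.1, mai=0.30),
]

# Combining blocks: resource usages are additive.
combo = repository[:2]
print(f"combined CPU: {sum(b.cpu for b in combo):.1f}")  # 1.2
print(f"combined mem: {sum(b.mem for b in combo):.1f}")  # 1.6
```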

The overall processing of CloudMix is shown in Figure 3.

Figure 3. The CloudMix Overview



–     Step 1: trace selection. This step allows benchmark users to select traces according to their benchmarking requirements, including the machine type (target platform), the number of machines, and the evaluation duration. The selected trace is then divided into multiple segments (a sketch of this step follows Figure 4).

Figure 4. Step 1 (trace Selection)

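A hypothetical sketch of Step 1 follows. The table layout and column names are assumptions, and SQLite stands in for the MySQL database the released trace ships in, so the example stays self-contained:

```python
# Hypothetical sketch of Step 1: select trace records matching the
# benchmarking requirements and cut them into fixed-length segments.
import sqlite3  # SQLite stand-in keeps the example self-contained

def select_and_segment(conn, machine_type, duration_s, segment_s=300):
    rows = conn.execute(
        "SELECT time, cpu, mem FROM usage_trace "
        "WHERE machine_type = ? AND time < ? ORDER BY time",
        (machine_type, duration_s)).fetchall()
    segments = {}
    for t, cpu, mem in rows:  # group records into segment_s-second bins
        segments.setdefault(int(t // segment_s), []).append((t, cpu, mem))
    return [segments[k] for k in sorted(segments)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usage_trace (machine_type TEXT, time REAL, cpu REAL, mem REAL)")
conn.executemany("INSERT INTO usage_trace VALUES (?, ?, ?, ?)",
                 [("A", t, 0.5, 0.4) for t in range(0, 3600, 60)])
print(len(select_and_segment(conn, "A", 3600)))  # 12 five-minute segments
```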

–     Step 2: RWB profiling. This step profiles all the CPU and memory RWBs on the target platform and collects their workload behaviors, including the usage of the two most important cloud resources (CPU and memory) and their corresponding micro-architectural operations (CPI and MAI). A sketch of the profiling loop follows Figure 5.

Figure 5. Step 2 (RWB profiling)
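The loop below sketches what this profiling step might do. Real profiling would read hardware performance counters (e.g., via perf); the run_and_measure hook here is a hypothetical stand-in:

```python
# Hypothetical sketch of Step 2: run each RWB on the target platform and
# record its four behavior metrics. `run_and_measure` is a made-up hook.

def profile_rwbs(rwb_names, run_and_measure):
    profiles = {}
    for name in rwb_names:
        c = run_and_measure(name)  # would execute the RWB and read counters
        profiles[name] = {
            "cpu": c["cpu_usage"],
            "mem": c["mem_usage"],
            "cpi": c["cycles"] / c["instructions"],
            "mai": c["mem_accesses"] / c["instructions"],
        }
    return profiles

# Toy measurement stub standing in for real counter collection.
def fake_measure(name):
    return {"cpu_usage": 0.5, "mem_usage": 0.2,
            "cycles": 1.5e9, "instructions": 1e9, "mem_accesses": 2e8}

print(profile_rwbs(["rwb-cpu-1", "rwb-mem-1"], fake_measure))
```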



–     Step 3: workload replication script generation. This step generates the workload replication scripts for all the segments in the selected trace. Each script consists of an optimal combination of CPU and memory RWBs whose behaviors have the minimal estimated error with respect to the real workloads in the trace segment, as sketched below.
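A toy version of this selection problem is sketched below: it exhaustively searches small RWB combinations for the one whose predicted behavior is closest to a segment's measured behavior. CloudMix's actual optimization may differ; the blocks, metrics, and error measure are illustrative.

```python
# Illustrative sketch of Step 3: pick the RWB combination with minimal
# estimated error against one trace segment's behavior.
from itertools import combinations

def estimate(combo):
    """Predict combined resource usage as the sum over chosen RWBs."""
    return {k: sum(rwb[k] for rwb in combo) for k in ("cpu", "mem")}

def error(predicted, target):
    """Sum of absolute errors across the tracked behaviors."""
    return sum(abs(predicted[k] - target[k]) for k in target)

def best_combo(rwbs, target, max_blocks=2):
    candidates = (c for r in range(1, max_blocks + 1)
                  for c in combinations(rwbs, r))
    return min(candidates, key=lambda c: error(estimate(c), target))

rwbs = [{"name": "a", "cpu": 0.4, "mem": 0.1},
        {"name": "b", "cpu": 0.2, "mem": 0.5},
        {"name": "c", "cpu": 0.7, "mem": 0.2}]
segment_target = {"cpu": 0.6, "mem": 0.6}
print([r["name"] for r in best_combo(rwbs, segment_target)])  # ['a', 'b']
```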

–     Step 4: scalable workload generation. This step enables users to generate synthetic workloads while flexibly controlling the evaluation duration according to their benchmarking requirements (a sketch follows Figure 6).

Figure 6. Step 3 (workload replication script generation) and Step 4 (scalable workload generation)
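Finally, the replay loop below sketches how Steps 3 and 4 might fit together: each segment's replication script runs for a scaled-down share of the original segment length. The run_script and stop_script hooks are hypothetical stubs, not CloudMix's API.

```python
# Hypothetical sketch of Step 4: replay per-segment replication scripts
# while compressing the evaluation duration by a speed-up factor.
import time

def run_script(script):   # made-up stub: would launch the segment's RWB mix
    print("start", script)

def stop_script(script):  # made-up stub: would tear the RWB mix down
    print("stop", script)

def replay(scripts, segment_s=300, speedup=10.0):
    """Hold each segment's workload for segment_s / speedup seconds."""
    for script in scripts:
        run_script(script)
        time.sleep(segment_s / speedup)
        stop_script(script)

# With speedup=10, each 300s segment replays in 30s; a 12-segment
# one-hour trace would finish in 6 minutes. Tiny demo values here:
replay(["segment-0", "segment-1"], segment_s=10, speedup=10.0)
```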

Download 

Download the CloudMix software package (238 KB) [CloudMix]

Download the CloudMix-based job scheduling optimizer on Hadoop YARN (136 KB) [Job Scheduling Optimizer]

Download the 24-hour Google trace stored in a MySQL database (1.8 GB) [Google Trace]

(Please contact us if you need the full version of the workload trace stored in Impala (57 GB).)

Download the CloudMix user manual [User Manual]

Contacts

Email:
hanrui@ict.ac.cn

People

Rui Han
Zang Zon
Fan Zhang
Lei Wang
Zhen Jia

Alumni

Chenrong Shao
Shulin Zhan
Junwei Wang
Jiangtao Xu
Jianping Luo
Wenqian Zhang
Gang Lu
Zhentao Wang

For Citations

If you need a citation for the multi-tenancy version of BigDataBench, please cite the following papers related to your work:

BigDataBench: a Big Data Benchmark Suite from Internet Services. [PDF]

Lei Wang, Jianfeng Zhan, Chunjie Luo, Yuqing Zhu, Qiang Yang, Yongqiang He, Wanling Gao, Zhen Jia, Yingjie Shi, Shujie Zhang, Cheng Zhen, Gang Lu, Kent Zhan, Xiaona Li, and Bizhu Qiu. The 20th IEEE International Symposium on High Performance Computer Architecture (HPCA-2014), February 15-19, 2014, Orlando, Florida, USA.

BigDataBench-MT: A Benchmark Tool for Generating Realistic Mixed Data Center Workloads. [PDF][Software page]

Rui Han, Shulin Zhan, Chenrong Shao, Junwei Wang, Lizy K. John, Gang Lu, Lei Wang. The 2015 ACM Symposium on Cloud Computing (SoCC 2015) [Poster]. Published by Springer, LNCS, Volume 9495, Pages 7-18 [PDF].