High Volume Computing: The Motivations, Metrics, and Benchmark Suites for Data Center Computer Systems

in conjunction with the 19th IEEE International Symposium on High Performance Computer Architecture (HPCA 2013)

Overview

Data center workloads consist not of a single big job but of a large number of loosely coupled jobs. This class of workloads is throughput-oriented by nature, and the goal of the data center computer systems designed for them is to increase the volume of throughput, measured in terms of requests or data processed in data centers. To draw attention to this class of workloads and the systems designed for them, we coin a new term, high volume computing (HVC in short), to describe them.

This tutorial presents our efforts in benchmarking data center, big data, and cloud computer systems. We will release several benchmark suites related to data center computing, including big data benchmarks (BigDataBench), cloud benchmarks (CloudRank), and data center benchmarks (DCBench). In this tutorial, we present the details of these benchmarks and demonstrate how to use them in research and evaluation efforts.

Date & Location

Date: February 23, 2013

Location: Room Madrid 1, InterContinental Shenzhen, China (in conjunction with HPCA 2013)


Agenda



HPCA-19: February 23-27, 2013, Shenzhen, China