Carnegie Mellon University

Large Scale Studies of Memory, Storage, and Network Failures in a Modern Data Center

Thesis posted on 2018-12-01, authored by Justin Meza
The workloads running in the modern data centers of large scale Internet service providers (such as
Alibaba, Amazon, Baidu, Facebook, Google, and Microsoft) support billions of users and span globally
distributed infrastructure. Yet, the devices used in modern data centers fail due to a variety of causes, from
faulty components to bugs to misconfiguration. Faulty devices make operating large scale data centers
challenging because the workloads running in modern data centers consist of interdependent programs
distributed across many servers, so failures that are isolated to a single device can still have a widespread
effect on a workload.
In this dissertation, we measure and model device failures at a large scale Internet service company,
Facebook. We focus on three device types that form the foundation of Internet service data center
infrastructure: DRAM for main memory, SSDs for persistent storage, and switches and backbone links
for network connectivity. For each of these device types, we analyze long term device failure data broken
down by important device attributes and operating conditions, such as age, vendor, and workload. We
also build and release statistical models of the failure trends for the devices we analyze.
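As a rough illustration of the kind of statistical model referred to above, the sketch below fits a logistic regression relating per-device failure outcomes to attributes such as age, vendor, and utilization. The data, attribute names, and coefficients are synthetic placeholders, not the models released with the dissertation.

```python
# Illustrative sketch only: a logistic regression of device failure against
# attributes such as age, vendor, and utilization, in the spirit of the
# statistical failure models described above. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

age_months = rng.uniform(0, 48, n)   # device age in months
vendor = rng.integers(0, 4, n)       # anonymized vendor code (kept numeric for brevity)
utilization = rng.random(n)          # fraction of time under heavy load

# Assumed ground-truth relationship, used only to generate example outcomes.
logit = -5.0 + 0.05 * age_months + 0.3 * vendor + 1.5 * utilization
failed = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([age_months, vendor, utilization])
model = LogisticRegression().fit(X, failed)

for name, coef in zip(["age_months", "vendor", "utilization"], model.coef_[0]):
    print(f"{name:>12}: {coef:+.3f}")
```

A real model would treat vendor as a categorical variable and account for exposure time, but the fit-and-inspect workflow is the same.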
For DRAM devices, we analyze the memory errors in the entire fleet of servers at Facebook over the
course of fourteen months, representing billions of device days of operation. The systems we examine
cover a wide range of devices commonly used in modern servers, with DIMMs that use the modern
DDR3 communication protocol, manufactured by four vendors in capacities ranging from 2 GB to 24 GB.
We observe several new reliability trends for memory systems that have not been discussed before in the
literature, develop a model for memory reliability, and show how system design choices such as using
lower-density DIMMs and fewer cores per chip can reduce the failure rate of a baseline server by up to 57.7%.
We perform the first implementation and real-system analysis of page offlining at scale, on a cluster of
thousands of servers, identify several real-world impediments to the technique, and show that it can
reduce the memory error rate by 67%. We also examine the efficacy of physical page randomization, a new
technique for reducing DRAM faults, evaluating both its potential for improving reliability and its overheads.
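To make the page-offlining idea concrete, here is a minimal sketch of a threshold-based retirement policy: once a physical page accumulates enough correctable errors, it is taken out of service. The error-log format, threshold value, and offlining hook are hypothetical stand-ins (a real deployment would use a platform mechanism such as the kernel's page soft-offline interface); only the policy logic is illustrated.

```python
# Minimal, hypothetical sketch of threshold-based page offlining.
from collections import defaultdict

CE_THRESHOLD = 3  # correctable errors tolerated before a page is retired (assumed value)

corrected_errors = defaultdict(int)  # physical page frame number -> correctable error count
offlined_pages = set()

def record_correctable_error(pfn: int) -> None:
    """Count a correctable error and retire the page once it crosses the threshold."""
    if pfn in offlined_pages:
        return
    corrected_errors[pfn] += 1
    if corrected_errors[pfn] >= CE_THRESHOLD:
        offline_page(pfn)

def offline_page(pfn: int) -> None:
    """Placeholder for the platform-specific page-retirement mechanism."""
    offlined_pages.add(pfn)
    print(f"offlining page frame {pfn:#x} after {corrected_errors[pfn]} correctable errors")

# Example: a page that keeps reporting correctable errors gets retired.
for _ in range(4):
    record_correctable_error(0x1A2B3)
```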
For SSD devices, we perform a large scale study of flash-based SSD reliability at Facebook. We analyze
data collected across a majority of flash-based solid state drives over nearly four years and many
millions of operational hours in order to understand failure properties and trends of flash-based SSDs.
Our study considers a variety of SSD characteristics, including: the amount of data written to and read
from flash chips; how data is mapped within the SSD address space; the amount of data copied, erased,
and discarded by the flash controller; and flash board temperature and bus power. Based on our field
analysis of how flash memory errors manifest when running modern workloads on modern SSDs, we make
several major observations and find that SSD failure rates do not increase monotonically with flash chip
wear, but instead go through several distinct periods corresponding to how failures emerge and are
subsequently detected.
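A toy version of the bucketed analysis behind that observation is sketched below: group drives by lifetime data written and compare failure rates across the buckets. The drive records and bucket width are illustrative placeholders, not fleet data; the values are chosen only to echo the early-failure and wear-out pattern described above.

```python
# Illustrative sketch: failure rate bucketed by drive wear (lifetime data written).
from collections import defaultdict

# Hypothetical per-drive records: (terabytes written over the drive's life, failed?)
drives = [(5, True), (8, True), (40, False), (70, False),
          (120, False), (150, True), (180, False),
          (310, True), (340, True), (360, False)]

buckets = defaultdict(lambda: [0, 0])  # wear bucket -> [failures, drives]
for tbw, failed in drives:
    bucket = tbw // 100  # 100 TB-wide wear buckets (assumed granularity)
    buckets[bucket][1] += 1
    if failed:
        buckets[bucket][0] += 1

for bucket in sorted(buckets):
    failures, total = buckets[bucket]
    lo, hi = bucket * 100, (bucket + 1) * 100
    print(f"{lo:>4}-{hi:<4} TB written: {failures}/{total} drives failed "
          f"({100.0 * failures / total:.0f}%)")
```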
For network devices, we perform a large scale, longitudinal study of data center network reliability
based on operational data collected from the production network infrastructure at Facebook. Our study
covers reliability characteristics of both intra- and inter-data center networks. For intra-data center networks,
we study seven years of operational data comprising thousands of network incidents across two
different data center network designs, a cluster network design and a state-of-the-art fabric network design.
For inter-data center networks, we study eighteen months of recent repair tickets from the field to
understand the reliability of Wide Area Network (WAN) backbones. In contrast to prior work, we study
the effects of network reliability on software systems, and how these reliability characteristics evolve over
time. We discuss the implications of network reliability on the design, implementation, and operation of
large scale data center systems and how the network affects highly available web services.
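As a small, hypothetical example of the kind of longitudinal metric such repair-ticket data supports, the sketch below computes mean time between incidents per device type from incident timestamps. The ticket records and device-type names are made up for illustration.

```python
# Illustrative sketch: mean time between incidents per device type.
from datetime import datetime
from collections import defaultdict

# Hypothetical incident tickets: (device_type, incident timestamp)
tickets = [
    ("rack_switch", datetime(2017, 1, 10)),
    ("rack_switch", datetime(2017, 3, 2)),
    ("rack_switch", datetime(2017, 7, 19)),
    ("backbone_link", datetime(2017, 2, 1)),
    ("backbone_link", datetime(2017, 11, 20)),
]

by_type = defaultdict(list)
for device_type, ts in tickets:
    by_type[device_type].append(ts)

for device_type, times in by_type.items():
    times.sort()
    gaps = [(b - a).days for a, b in zip(times, times[1:])]
    if gaps:
        print(f"{device_type}: mean time between incidents = {sum(gaps) / len(gaps):.0f} days")
```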
Our key conclusion in this dissertation is that we can gain a deep understanding of why devices
fail—and how to predict their failure—using measurement and modeling. We hope that the analysis,
techniques, and models we present in this dissertation will enable the community to better measure,
understand, and prepare for the hardware reliability challenges we face in the future.

History

Date

2018-12-01

Degree Type

  • Dissertation

Department

  • Electrical and Computer Engineering

Degree Name

  • Doctor of Philosophy (PhD)

Advisor(s)

Onur Mutlu
