Software:Quantcast File System

Short description: Distributed file system
Quantcast File System (QFS)
Developer(s): Sriram Rao, Michael Ovsiannikov, Quantcast
Stable release: 1.1.4 / March 5, 2015[1]
Written in: C++
Type: Distributed file system
License: Apache License 2.0
Website: quantcast.github.com/qfs

Quantcast File System (QFS) is an open-source distributed file system software package for large-scale MapReduce or other batch-processing workloads. It was designed as an alternative to the Apache Hadoop Distributed File System (HDFS), intended to deliver better performance and cost-efficiency for large-scale processing clusters.

Design

QFS is software that runs on a cluster of hundreds or thousands of commodity Linux servers and allows other software layers to interact with them as if they were one giant hard drive. It has three components:

  • A chunk server runs on each machine that will host data, manages I/O to its hard drives, and monitors its activity and capacity.
  • A central process called the metaserver keeps the directory structure and maps of files to physical storage. It coordinates activities of all the chunk servers and monitors the overall health of the file system. For high performance it holds all its data in memory, writing checkpoints and transaction logs to disk for recovery.
  • A client component is the interface point that presents a file system application programming interface (API) to other layers of the software. It asks the metaserver which chunk servers hold (or will hold) its data, then interacts with those chunk servers directly to read and write, as sketched in the example after this list.
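
The C++ sketch below is a minimal, self-contained model of that interaction: a metaserver that keeps only the file-to-chunk mapping in memory, chunk servers that hold the chunk bytes, and a client that asks the metaserver where the data lives and then reads the chunks directly. The class and function names (Metaserver, ChunkServer, ClientRead, LocateChunks, ReadChunk) are illustrative assumptions, not the actual QFS client API.

 // Minimal sketch of the read path described above; not QFS source code.
 #include <cstdint>
 #include <iostream>
 #include <map>
 #include <string>
 #include <vector>
 
 // A chunk server owns the bytes of the chunks stored on its disks.
 struct ChunkServer {
     std::string host;
     std::map<int64_t, std::string> chunks;  // chunk id -> chunk data
 
     std::string ReadChunk(int64_t chunkId) const {
         auto it = chunks.find(chunkId);
         return it == chunks.end() ? std::string() : it->second;
     }
 };
 
 // The metaserver keeps only metadata in memory: which chunks make up a file
 // and which chunk server holds each chunk. It never touches file payloads.
 struct Metaserver {
     struct ChunkLocation { int64_t chunkId; const ChunkServer* server; };
     std::map<std::string, std::vector<ChunkLocation>> fileTable;
 
     std::vector<ChunkLocation> LocateChunks(const std::string& path) const {
         auto it = fileTable.find(path);
         return it == fileTable.end() ? std::vector<ChunkLocation>{} : it->second;
     }
 };
 
 // The client asks the metaserver where the data lives, then reads the chunks
 // directly from the chunk servers and reassembles the file.
 std::string ClientRead(const Metaserver& meta, const std::string& path) {
     std::string result;
     for (const auto& loc : meta.LocateChunks(path)) {
         result += loc.server->ReadChunk(loc.chunkId);
     }
     return result;
 }
 
 int main() {
     ChunkServer cs1{"chunk1.example", {{1, "Hello, "}}};
     ChunkServer cs2{"chunk2.example", {{2, "QFS!"}}};
     Metaserver meta;
     meta.fileTable["/demo/greeting.txt"] = {{1, &cs1}, {2, &cs2}};
 
     std::cout << ClientRead(meta, "/demo/greeting.txt") << "\n";  // Hello, QFS!
     return 0;
 }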

In a cluster of hundreds or thousands of machines, the odds are low that all will be running and reachable at any given moment, so fault tolerance is the central design challenge. QFS meets it with Reed–Solomon error correction. By default, QFS stripes each file across nine physically different machines: six stripes hold the original data and three hold parity information, and the file can be reconstructed from any six of the nine stripes.[2] Any three of those machines can therefore become unavailable without losing data. The result is fault tolerance at a cost of a 50% expansion of the stored data.
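
The storage cost quoted above follows directly from the 6 + 3 striping arithmetic. The short C++ program below works it out; it is only an illustration of that arithmetic, not code taken from QFS.

 // Back-of-the-envelope check of the storage cost, assuming the default
 // 6 data + 3 parity striping described in the text.
 #include <iostream>
 
 int main() {
     const double dataStripes   = 6.0;  // stripes holding original data
     const double parityStripes = 3.0;  // stripes holding parity information
 
     // Bytes stored per byte of user data: (6 + 3) / 6 = 1.5, i.e. 50% expansion.
     const double expansion = (dataStripes + parityStripes) / dataStripes;
 
     // For comparison, keeping three full copies of every byte would cost
     // 3.0x, i.e. a 200% expansion.
     std::cout << "Storage expansion: " << expansion << "x ("
               << (expansion - 1.0) * 100.0 << "% overhead)\n";
 
     // Any parityStripes of the nine stripes may be lost; the file is still
     // recoverable from the remaining six.
     std::cout << "Stripes that may fail: " << parityStripes << " of "
               << dataStripes + parityStripes << "\n";
     return 0;
 }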

QFS is written in the programming language C++, operates within a fixed memory footprint, and uses direct input and output (I/O).
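
As a rough illustration of what direct I/O means in practice, the sketch below opens a file with the Linux O_DIRECT flag and an aligned buffer so the read bypasses the kernel page cache. It is a generic POSIX/Linux example with assumed alignment and path values, not QFS source code.

 // Generic direct I/O sketch (Linux). O_DIRECT support and the required
 // alignment depend on the underlying filesystem and device.
 #ifndef _GNU_SOURCE
 #define _GNU_SOURCE 1   // needed for O_DIRECT on Linux/glibc
 #endif
 #include <fcntl.h>
 #include <unistd.h>
 #include <cstdio>
 #include <cstdlib>
 #include <iostream>
 
 int main(int argc, char** argv) {
     const size_t kAlignment = 512;   // assumed device sector size
     const size_t kBlockSize = 4096;  // read size, a multiple of the alignment
 
     // Hypothetical demo path; pass a real file as the first argument.
     const char* path = argc > 1 ? argv[1] : "/tmp/qfs-direct-io-demo";
 
     int fd = open(path, O_RDONLY | O_DIRECT);
     if (fd < 0) { perror("open"); return 1; }
 
     void* buf = nullptr;
     if (posix_memalign(&buf, kAlignment, kBlockSize) != 0) {
         close(fd);
         return 1;
     }
 
     ssize_t n = read(fd, buf, kBlockSize);  // served directly from the device
     if (n < 0) perror("read");
     else std::cout << "read " << n << " bytes\n";
 
     free(buf);
     close(fd);
     return 0;
 }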

History

QFS evolved from the Kosmos File System (KFS), an open-source project started by Kosmix in 2005. Quantcast adopted KFS in 2007, built its own improvements on it over the next several years, and released QFS 1.0 as an open-source project in September 2012.[3]

References
