IBM Spectrum Scale
Developer(s) | IBM |
---|---|
Full name | IBM Spectrum Scale |
Introduced | 1998 with AIX |
Operating system | AIX / Linux / Windows Server |
Type | File system |
License | Proprietary |
Website | IBM Spectrum Scale |
Limits | |
Max. volume size | 8 YB |
Max. file size | 8 EB |
Max. number of files | 2⁶⁴ per file system |
Features | |
File system permissions | POSIX |
Transparent encryption | yes |
Other | |
Supported operating systems | AIX, Linux, Windows Server |
IBM Spectrum Scale is high-performance clustered file system software developed by IBM. It can be deployed in shared-disk or shared-nothing distributed parallel modes. It is used by many of the world's largest commercial companies, as well as some of the supercomputers on the Top 500 List.[1] For example, it was the filesystem of the ASC Purple Supercomputer,[2] which was composed of more than 12,000 processors and had 2 petabytes of total disk storage spanning more than 11,000 disks.
Before 2015, Spectrum Scale was known as IBM General Parallel File System (GPFS).[3]
Like typical cluster filesystems, Spectrum Scale provides concurrent high-speed file access to applications executing on multiple nodes of clusters. It can be used with AIX 5L clusters, Linux clusters,[4] on Microsoft Windows Server, or a heterogeneous cluster of AIX, Linux and Windows nodes. In addition to providing filesystem storage capabilities, Spectrum Scale provides tools for management and administration of the Spectrum Scale cluster and allows for shared access to file systems from remote Spectrum Scale clusters.
History
Spectrum Scale, then known as GPFS, began as the Tiger Shark file system, a research project at IBM's Almaden Research Center as early as 1993. Tiger Shark was initially designed to support high throughput multimedia applications. This design turned out to be well suited to scientific computing.[5]
Another ancestor of Spectrum Scale is IBM's Vesta filesystem, developed as a research project at IBM's Thomas J. Watson Research Center between 1992 and 1995.[6] Vesta introduced the concept of file partitioning to accommodate the needs of parallel applications that run on high-performance multicomputers with parallel I/O subsystems. With partitioning, a file is not a sequence of bytes, but rather multiple disjoint sequences that may be accessed in parallel. The partitioning is such that it abstracts away the number and type of I/O nodes hosting the filesystem, and it allows a variety of logically partitioned views of files, regardless of the physical distribution of data within the I/O nodes. The disjoint sequences are arranged to correspond to individual processes of a parallel application, allowing for improved scalability.[7][8]
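The partitioning concept can be illustrated with a minimal sketch in which a file's blocks are divided into disjoint, strided sub-sequences, one per parallel process; the function, the strided layout and the example sizes are illustrative assumptions, not Vesta's actual interface.

```python
# Illustrative model of Vesta-style file partitioning (not the real API):
# the file's blocks form disjoint, strided sub-sequences, so each parallel
# process can read or write its own partition independently.

def partition_blocks(total_blocks: int, num_partitions: int, rank: int) -> list[int]:
    """Return the block indices belonging to partition `rank` of `num_partitions`."""
    return list(range(rank, total_blocks, num_partitions))

# Example: a 12-block file viewed as 3 logical partitions.
for rank in range(3):
    print(rank, partition_blocks(12, 3, rank))
# 0 [0, 3, 6, 9]
# 1 [1, 4, 7, 10]
# 2 [2, 5, 8, 11]
```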
Vesta was commercialized as the PIOFS filesystem around 1994,[9] and was succeeded by GPFS around 1998.[10][11] The main difference between the older and newer filesystems was that GPFS replaced the specialized interface offered by Vesta/PIOFS with the standard Unix API: all the features to support high performance parallel I/O were hidden from users and implemented under the hood.[5][11]
Spectrum Scale has been available on IBM's AIX since 1998, on Linux since 2001, and on Windows Server since 2008.
Spectrum Scale was offered as part of the IBM System Cluster 1350.
Today, Spectrum Scale is used by many of the top 500 supercomputers listed on the Top 500 Supercomputing Sites web site. Since inception, Spectrum Scale has been successfully deployed for many commercial applications including digital media, grid analytics, and scalable file services.
In 2010, IBM previewed a version of GPFS that included a capability known as GPFS-SNC, where SNC stands for Shared Nothing Cluster. This was officially released with GPFS 3.5 in December 2012 and is now known as FPO[12] (File Placement Optimizer). FPO allows Spectrum Scale to use locally attached disks on a cluster of network-connected servers rather than requiring dedicated servers with shared disks (e.g. using a SAN). It is suited to workloads with high data locality, such as shared-nothing database clusters like SAP HANA and DB2 DPF, and can be used as an HDFS-compatible filesystem.
Features
Features of Spectrum Scale file systems include high availability, support for heterogeneous clusters, disaster recovery, security, DMAPI, hierarchical storage management (HSM) and information lifecycle management (ILM).
Architecture
Spectrum Scale is a clustered file system. It breaks a file into blocks of a configured size and distributes them across multiple cluster nodes.
The system stores data on standard block storage volumes, but includes an internal RAID layer (called Spectrum Scale RAID) that can virtualize those volumes for redundancy and parallel access, much like a RAID block storage system. It can also replicate data across volumes at the higher file level.
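The striping and file-level replication described above can be sketched as follows; this is a minimal illustration assuming round-robin placement, a fixed block size and hypothetical node names, not Spectrum Scale's actual placement algorithm.

```python
# Illustrative sketch of block striping with file-level replication
# (round-robin placement; not Spectrum Scale's actual algorithm).

BLOCK_SIZE = 1 << 20  # 1 MiB blocks, chosen arbitrarily for the example


def place_blocks(file_size: int, nodes: list[str], replicas: int = 2) -> dict[int, list[str]]:
    """Map each block index of a file to `replicas` distinct nodes."""
    num_blocks = -(-file_size // BLOCK_SIZE)  # ceiling division
    return {
        block: [nodes[(block + r) % len(nodes)] for r in range(replicas)]
        for block in range(num_blocks)
    }


# Example: a 5 MiB file striped over four nodes with two replicas per block.
print(place_blocks(5 * BLOCK_SIZE, ["node1", "node2", "node3", "node4"]))
# {0: ['node1', 'node2'], 1: ['node2', 'node3'], 2: ['node3', 'node4'],
#  3: ['node4', 'node1'], 4: ['node1', 'node2']}
```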
Features of the architecture include
- Distributed metadata, including the directory tree. There is no single "directory controller" or "index server" in charge of the filesystem.
- Efficient indexing of directory entries for very large directories. Many filesystems are limited to a small number of files in a single directory (often 65,536 or a similar power of two). Spectrum Scale does not have such limits.
- Distributed locking. This allows for full POSIX filesystem semantics, including locking for exclusive file access.
- Partition awareness. A failure of the network may partition the filesystem into two or more groups of nodes that can only see the nodes in their own group. This can be detected through a heartbeat protocol, and when a partition occurs the filesystem remains live for the largest partition formed. This offers graceful degradation: some machines will remain working (see the sketch after this list).
- Filesystem maintenance can be performed online. Most filesystem maintenance chores (adding new disks, rebalancing data across disks) can be performed while the filesystem is live, so the filesystem is available more often and the cluster it serves stays usable for longer.
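A minimal sketch of the partition-awareness behaviour described above, assuming a simple heartbeat timeout and a "largest group stays live" rule; the timeout value, function names and node names are illustrative assumptions, not Spectrum Scale's actual protocol.

```python
# Illustrative sketch of partition handling after a network split
# (not Spectrum Scale's actual heartbeat/quorum implementation).

HEARTBEAT_TIMEOUT = 10.0  # hypothetical: seconds of silence before a peer is presumed unreachable


def reachable_peers(last_heartbeat: dict[str, float], now: float) -> set[str]:
    """Peers this node has heard from recently enough to count as reachable."""
    return {peer for peer, t in last_heartbeat.items() if now - t < HEARTBEAT_TIMEOUT}


def live_partition(partitions: list[set[str]]) -> set[str]:
    """After a split, the filesystem stays live for the largest group of
    mutually reachable nodes; the remaining groups are fenced off."""
    return max(partitions, key=len) if partitions else set()


# Example: a network failure splits an 8-node cluster into two groups.
groups = [{"n1", "n2", "n3", "n4", "n5"}, {"n6", "n7", "n8"}]
print(sorted(live_partition(groups)))  # ['n1', 'n2', 'n3', 'n4', 'n5'] stay live
```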
Compared to HDFS
Hadoop's HDFS filesystem is designed to store similar or greater quantities of data on commodity hardware, that is, datacenters without RAID disks or a storage area network (SAN). Compared to Spectrum Scale:
- HDFS also breaks files up into blocks, and stores them on different filesystem nodes.
- Spectrum Scale has full POSIX filesystem semantics. HDFS and GFS are not fully POSIX compliant.
- Spectrum Scale distributes its directory indices and other metadata across the filesystem. Hadoop, in contrast, keeps this on the primary and secondary NameNodes, large servers which must store all index information in RAM.
- Spectrum Scale breaks files up into small blocks. Hadoop HDFS prefers blocks of 64 MB or more, as this reduces the storage requirements of the NameNode. Small blocks or many small files quickly fill up a filesystem's indices, limiting the filesystem's size (see the worked example after this list).
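To make the block-size trade-off concrete, the back-of-the-envelope sketch below estimates how much metadata a NameNode-style server must hold in memory; the 150-bytes-per-block figure is a commonly quoted rule of thumb used purely for illustration, and the helper function is hypothetical.

```python
# Back-of-the-envelope: how block size affects metadata held by a NameNode-style server.
# The 150 bytes/block figure is a commonly quoted rule of thumb, used here only for
# illustration; real memory usage depends on the implementation and configuration.

BYTES_PER_BLOCK_RECORD = 150


def metadata_bytes(data_bytes: int, block_size: int) -> int:
    """Approximate in-memory metadata needed to track `data_bytes` of file data."""
    num_blocks = -(-data_bytes // block_size)  # ceiling division
    return num_blocks * BYTES_PER_BLOCK_RECORD


PB = 10 ** 15
for block_size in (1 << 20, 64 << 20, 256 << 20):   # 1 MiB, 64 MiB, 256 MiB
    print(f"{block_size >> 20:>4} MiB blocks -> "
          f"{metadata_bytes(10 * PB, block_size) / 10**9:.1f} GB of metadata for 10 PB")
# Roughly: 1 MiB blocks need ~1430 GB, 64 MiB need ~22 GB, 256 MiB need ~5.6 GB.
```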
Information lifecycle management
Storage pools allow for the grouping of disks within a file system. An administrator can create tiers of storage by grouping disks based on performance, locality or reliability characteristics. For example, one pool could be high-performance Fibre Channel disks and another more economical SATA storage.
A fileset is a sub-tree of the file system namespace and provides a way to partition the namespace into smaller, more manageable units. Filesets provide an administrative boundary that can be used to set quotas and be specified in a policy to control initial data placement or data migration. Data in a single fileset can reside in one or more storage pools. Where the file data resides and how it is migrated is based on a set of rules in a user defined policy.
There are two types of user-defined policies in Spectrum Scale: file placement and file management. File placement policies direct file data to the appropriate storage pool as files are created; placement rules are selected by attributes such as the file name, the user name or the fileset. File management policies allow the file's data to be moved or replicated, or files to be deleted, and can be used to move data from one pool to another without changing the file's location in the directory structure. File management policies are driven by file attributes such as last access time, path name or file size.
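Spectrum Scale expresses such rules in its own SQL-like policy language, whose exact syntax is not reproduced here; the sketch below merely models a hypothetical placement rule keyed on file name and fileset, and a management rule keyed on last access time. All names, extensions, pools and thresholds are illustrative assumptions.

```python
# Illustrative model of ILM policy rules (not Spectrum Scale's actual policy language).
from dataclasses import dataclass
import time


@dataclass
class FileInfo:
    name: str
    fileset: str
    size: int
    last_access: float  # Unix timestamp


def placement_pool(f: FileInfo) -> str:
    """Placement rule: choose the initial storage pool when a file is created."""
    if f.name.endswith((".mp4", ".avi")):   # hypothetical rule for media files
        return "sata_pool"
    if f.fileset == "scratch":
        return "fc_pool"
    return "system"


def should_migrate(f: FileInfo, now: float, days: int = 30) -> bool:
    """Management rule: migrate files not accessed for `days` days to a cheaper pool."""
    return now - f.last_access > days * 86_400


# Example: a file untouched for 90 days is a migration candidate.
f = FileInfo("results.csv", "projects", 4_096, time.time() - 90 * 86_400)
print(placement_pool(f), should_migrate(f, time.time()))   # system True
```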
The Spectrum Scale policy processing engine is scalable and can be run on many nodes at once. This allows management policies to be applied to a single file system with billions of files and complete in a few hours.[citation needed]
See also
- ASM Cluster File System (ACFS)
- Veritas Cluster Filesystem / Veritas InfoScale Enterprise (VxFS)
- Alluxio
- Scale-out File Services – IBM's NAS-grid product using GPFS
- List of file systems
- Shared disk file system
- Google File System
- GFS2
- MapR FS
- OCFS2
- ZFS
- QFS
- Lustre (file system)
- Gluster
- BeeGFS
References
1. Schmuck, Frank; Roger Haskin (January 2002). "GPFS: A Shared-Disk File System for Large Computing Clusters". Proceedings of the FAST'02 Conference on File and Storage Technologies. Monterey, California, US: USENIX. pp. 231–244. ISBN 1-880446-03-0. http://www.usenix.org/events/fast02/full_papers/schmuck/schmuck.pdf. Retrieved 2008-01-18.
2. "Storage Systems – Projects – GPFS". IBM. http://www.almaden.ibm.com/StorageSystems/projects/gpfs/. Retrieved 2008-06-18.
3. "IBM Redefines Storage Economics with New Software". http://www-03.ibm.com/press/us/en/pressrelease/46093.wss.
4. "BPAR: A Bundle-Based Parallel Aggregation Framework for Decoupled I/O Execution" (PDF). IEEE. November 2014. https://ieeexplore.ieee.org/document/7079023/.
5. May, John M. (2000). Parallel I/O for High Performance Computing. Morgan Kaufmann. p. 92. ISBN 978-1-55860-664-7. https://books.google.com/books?id=iLj516DOIKkC&pg=PA92&lpg=PA92&dq=shark+vesta+gpfs. Retrieved 2008-06-18.
6. Corbett, Peter F.; Feitelson, Dror G.; Prost, J.-P.; Baylor, S. J. (1993). "Parallel access to files in the Vesta file system". Supercomputing. Portland, Oregon, United States: ACM/IEEE. pp. 472–481. doi:10.1145/169627.169786. ISBN 978-0818643408.
7. Corbett, Peter F.; Feitelson, Dror G. (August 1996). "The Vesta parallel file system". Transactions on Computer Systems 14 (3): 225–264. doi:10.1145/233557.233558. http://www.cs.umd.edu/class/fall2002/cmsc818s/Readings/vesta-tocs96.pdf. Retrieved 2008-06-18.
8. Wang, Teng; Kevin Vasko; Zhuo Liu; Hui Chen; Weikuan Yu (2016). "Enhance parallel input/output with cross-bundle aggregation". The International Journal of High Performance Computing Applications 30 (2): 241–256. doi:10.1177/1094342015618017.
9. Corbett, P. F.; D. G. Feitelson; J.-P. Prost; G. S. Almasi; S. J. Baylor; A. S. Bolmarcich; Y. Hsu; J. Satran et al. (1995). "Parallel file systems for the IBM SP computers". IBM Systems Journal 34 (2): 222–248. doi:10.1147/sj.342.0222. http://www.research.ibm.com/journal/sj/342/corbett.pdf. Retrieved 2008-06-18.
10. Barris, Marcelo; Terry Jones; Scott Kinnane; Mathis Landzettel; Safran Al-Safran; Jerry Stevens; Christopher Stone; Chris Thomas; Ulf Troppens (September 1999). Sizing and Tuning GPFS. IBM Redbooks, International Technical Support Organization. See page 1 ("GPFS is the successor to the PIOFS file system"). http://www.redbooks.ibm.com/redbooks/pdfs/sg245610.pdf.
11. Snir, Marc (June 2001). "Scalable parallel systems: Contributions 1990–2000". HPC seminar, Computer Architecture Department, Universitat Politècnica de Catalunya. http://research.ac.upc.edu/HPCseminar/SEM0001/snir.pdf. Retrieved 2008-06-18.
12. "IBM GPFS FPO (DCS03038-USEN-00)". IBM Corporation. 2013. http://public.dhe.ibm.com/common/ssi/ecm/en/dcs03038usen/DCS03038USEN.PDF. Retrieved 2012-08-12.
External links
- Spectrum Scale official homepage
- Spectrum Scale resources (including download)
- Spectrum Scale at Almaden
- Spectrum Scale Mailing List
- Spectrum Scale User Group
- IBM Spectrum Scale Product Documentation
- IBM Spectrum Scale Wiki