NetApp filer
In computer storage, the term NetApp "filer" originally referred to NetApp's storage systems product before block protocols were supported. A filer serves storage over a network using file-based protocols such as NFS, SMB, FTP, TFTP, and HTTP, but these so-called "filers" can also serve data over block-based protocols, such as the SCSI command protocol over the Fibre Channel Protocol on a Fibre Channel network, Fibre Channel over Ethernet (FCoE), FC-NVMe, or the iSCSI transport layer.
The product line is also known as NetApp Fabric-Attached Storage (FAS) and NetApp All Flash FAS (AFF).[1]
NetApp Filers implement their physical storage in large disk arrays.
While most large-storage filers are implemented with commodity computers running an operating system such as Microsoft Windows Server, VxWorks or tuned Linux, NetApp filers use highly customized hardware and the proprietary Data ONTAP operating system with the WAFL file system, all originally designed by NetApp founders David Hitz and James Lau specifically for storage-serving purposes. Data ONTAP is NetApp's internal operating system, specially optimized for storage functions at both high and low level. It boots from FreeBSD as a stand-alone kernel-space module and uses some functions of FreeBSD (the command interpreter and driver stack, for example).
All filers have battery-backed non-volatile random access memory (NVRAM) or NVDIMM,[citation needed] which allows them to commit writes to stable storage more quickly than traditional systems with only volatile memory. Early filers connected to external disk enclosures via parallel SCSI, while modern models (as of 2009) use Fibre Channel and SAS (Serial Attached SCSI) transport protocols. The disk enclosures (shelves) use Fibre Channel hard disk drives, as well as parallel ATA, Serial ATA and Serial Attached SCSI drives. Starting with the AFF A800, an NVRAM PCI card is no longer used for NVLOGs; it was replaced with NVDIMM memory connected directly to the memory bus.
Implementers often organize two filers in a high-availability cluster with a private high-speed link, either Fibre Channel, InfiniBand, 10 Gigabit Ethernet, 40 Gigabit Ethernet or 100 Gigabit Ethernet. One can additionally group such clusters together under a single namespace when running in the "cluster mode" of the Data ONTAP 8 operating system.
Internal architecture
Modern NetApp filers consist of customized computers with Intel processors using PCI. Each filer has non-volatile random access memory, called NVRAM, in the form of a proprietary PCI NVRAM adapter or NVDIMM-based memory, to log all writes for performance and to play the data log forward in the event of an unplanned shutdown. One can link two filers together as a cluster, which NetApp (as of 2009) refers to using the less ambiguous term "Active/Active".
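The write-logging behaviour described above can be illustrated with a minimal sketch (assumptions: the class and method names below, such as `NVLog` and `consistency_point`, are hypothetical and do not correspond to NetApp internals): writes are acknowledged as soon as they are committed to non-volatile memory, flushed to disk in batches, and replayed from the log after an unplanned shutdown.

```python
# Minimal illustrative sketch of NVRAM-style write journaling (not NetApp code).
# Writes are acknowledged once logged to non-volatile memory; the log is
# flushed to disk at a "consistency point" and replayed after a crash.

class NVLog:
    def __init__(self):
        self.entries = []          # survives a reboot (battery-backed / NVDIMM)

    def append(self, offset, data):
        self.entries.append((offset, data))

class Filer:
    def __init__(self):
        self.nvlog = NVLog()
        self.disk = {}             # stands in for the on-disk file system

    def write(self, offset, data):
        self.nvlog.append(offset, data)   # commit to stable NVRAM first
        return "ack"                      # client sees a fast acknowledgement

    def consistency_point(self):
        for offset, data in self.nvlog.entries:
            self.disk[offset] = data      # flush logged writes to disk
        self.nvlog.entries.clear()

    def replay_after_crash(self):
        # Writes acknowledged but not yet flushed are still in the non-volatile log.
        self.consistency_point()

filer = Filer()
filer.write(0, b"hello")
filer.replay_after_crash()        # data survives an unplanned shutdown
assert filer.disk[0] == b"hello"
```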
Hardware
Each filer model comes with a set configuration of processor, RAM, and non-volatile memory, which users cannot expand after purchase. With the exception of some entry-level storage controllers, NetApp filers have at least one PCIe-based slot available for additional network, tape and/or disk connections. In June 2008 NetApp announced the Performance Acceleration Module (or PAM) to optimize the performance of workloads that carry out intensive random reads. This optional card fits into a PCIe slot and provides additional cache memory between the disks and the filer's system memory, thus improving performance.
All-Flash FAS
Also known as the AFF A-series. AFF systems are usually based on the same hardware as FAS, but are optimized for, and work only with, SSD drives on the back end; for example, the AFF A700 & FAS9000, A300 & FAS8200, A200 & FAS2600 and A220 & FAS2700 use the same hardware, but AFF systems do not include Flash Cache cards. AFF systems also do not support the FlexArray third-party storage array virtualization functionality. Both AFF and FAS use the same firmware image, and nearly all functionality visible to the end user is the same on both storage systems. Internally, however, ONTAP processes and handles data differently on AFF systems; for example, it uses different write-allocation algorithms than on FAS systems. Because AFF systems have faster underlying SSD drives, inline data deduplication in ONTAP is nearly unnoticeable (about a 2% performance impact on low-end systems).[2]
Storage
NetApp uses SATA, Fibre Channel, SAS or SSD disk drives, which it groups into RAID (Redundant Array of Inexpensive Disks or Redundant Array of Independent Disks) groups of up to 28 drives (26 data disks plus 2 parity disks). NetApp FAS storage systems that contain only SSD drives and run the SSD-optimized ONTAP OS are called All Flash FAS (AFF).
Disks
FAS and AFF filers use enterprise-level HDD and SSD (including NVMe SSD) drives with two ports, each port connected to one controller of an HA pair. HDD and SSD drives can be bought only from NetApp and installed in NetApp's disk shelves for the FAS/AFF platform. Physical HDD and SSD drives, partitions on them, and LUNs imported from third-party arrays with FlexArray functionality are all considered disks in ONTAP. In SDS systems such as ONTAP Select and ONTAP Cloud, logical block storage such as a virtual disk or an RDM is likewise treated as a disk within ONTAP. The general term "disk drive" should therefore not be confused with the disk term used in ONTAP, which may refer to an entire physical HDD or SSD drive, a LUN, or a partition on a physical HDD or SSD drive. LUNs imported from third-party arrays with FlexArray functionality in an HA pair configuration must be accessible from both nodes of the HA pair. Each disk has an ownership assignment that indicates which controller owns and serves it. An aggregate can include only disks owned by a single node; therefore each aggregate is owned by one node, and any higher-level objects such as FlexVol volumes, LUNs and file shares are served by a single controller. Each controller can have its own disks and aggregates and serve them, so both nodes can be utilized simultaneously even though they do not serve the same data.
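The ownership rule above can be sketched roughly as follows (an illustrative model only; the `Disk` structure and `build_aggregate` helper are hypothetical, not ONTAP internals): every disk carries an owner, and an aggregate may only be assembled from disks owned by one node.

```python
# Illustrative sketch of ONTAP-style disk ownership (not actual ONTAP code).
# An aggregate may contain only disks owned by a single node.

from dataclasses import dataclass

@dataclass
class Disk:
    name: str
    owner: str        # controller (node) that owns and serves this disk

def build_aggregate(name, node, disks):
    foreign = [d.name for d in disks if d.owner != node]
    if foreign:
        raise ValueError(f"disks {foreign} are not owned by node {node}")
    return {"aggregate": name, "owner": node, "disks": [d.name for d in disks]}

disks = [Disk("1.0.0", "node-a"), Disk("1.0.1", "node-a"), Disk("1.0.2", "node-a")]
aggr = build_aggregate("aggr1", "node-a", disks)   # served only by node-a
```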
NetApp RAID in ONTAP
RAID and WAFL are tightly integrated in ONTAP systems. Several RAID types are available on NetApp FAS/AFF systems: RAID-4, with 1 dedicated parity drive, which allows any 1 drive to fail in a RAID group; RAID-DP (US patent 7409625), with 2 dedicated parity drives, which allows any 2 drives to fail simultaneously in a RAID group;[3] and RAID-TEC (US patent 7640484), with 3 dedicated parity drives, which allows any 3 drives to fail simultaneously in a RAID group.[4] RAID-DP is similar to RAID-6 in that it has the same resiliency of 2 disk drives, but all of NetApp's RAID types use dedicated parity disks; in combination with NetApp's implementation of non-volatile memory and WAFL's characteristic of always writing to a new place, the dedicated parity disks never become a bottleneck on write/rewrite operations, unlike traditional RAID-4 and RAID-6.[5] Each aggregate consists of one or two plexes, and a plex consists of one or more RAID groups. A typical NetApp FAS or AFF storage system has only one plex in each aggregate; two plexes are used in local SyncMirror or MetroCluster configurations. Each RAID group consists of disk drives of the same type, speed, geometry and capacity, although NetApp Support can allow a user to temporarily add a drive of the same or larger size but different type, speed or geometry to a RAID group. Ordinary data aggregates containing more than one RAID group must use the same RAID type across the aggregate; the same RAID group size is recommended, but NetApp allows an exception for the last RAID group, which may be as small as half of the RAID group size used across the aggregate. For example, such an aggregate might consist of 3 RAID groups: RG0:16+2, RG1:16+2, RG2:7+2. Aggregates enabled for Flash Pool and containing both HDD and SSD drives are called hybrid aggregates. In Flash Pool hybrid aggregates the same rules apply as to ordinary aggregates, but separately to the HDD and SSD drives; it is thus allowed to have two different RAID types: one RAID type for all HDD drives and one RAID type for all SSD drives in a single hybrid aggregate. For example, SAS HDDs with RAID-TEC (RG0:18+3, RG1:18+3) and SSDs with RAID-DP (RG3:6+2). NetApp filers combine the underlying RAID groups similarly to RAID-0. In NetApp FAS systems with the FlexArray feature, third-party LUNs can likewise be combined in a plex similarly to RAID-0. NetApp filers can be deployed in MetroCluster and SyncMirror configurations, which use a technique comparable to RAID-1, mirroring data between two plexes in an aggregate.
RAID Group Size (in number of drives) for Data Aggregates in AFF & FAS systems

| Drive Type | RAID-4 Min | RAID-4 Default | RAID-4 Max | RAID-DP Min | RAID-DP Default | RAID-DP Max | RAID-TEC Min | RAID-TEC Default | RAID-TEC Max |
|---|---|---|---|---|---|---|---|---|---|
| NVMe SSD | 3 | 8 | 14 | 5 | 24 | 28 | 7 | 25 | 29 |
| SSD | | | | | | | | | |
| SAS | 16 | 24 | | | | | | | |
| SATA or NL-SAS < 6TB | 7 | 14 | 20 | 21 | | | | | |
| SATA or NL-SAS (6TB, 8TB) | 14 | | | | | | | | |
| MSATA (6TB, 8TB) | Not possible | | | | | | | | |
| MSATA < 6TB | 20 | | | | | | | | |
| MSATA >= 10TB | Not possible | | | | | | | | |
| SATA or NL-SAS >= 10TB | | | | | | | | | |
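As a rough illustration of the dedicated-parity layouts and RAID-0-style concatenation described above, the following sketch computes how many data drives remain in a plex for a given RAID type and set of RAID group sizes (illustrative arithmetic only; real ONTAP sizing also accounts for drive right-sizing, spares and WAFL reserve).

```python
# Rough sketch of data vs. parity drives for NetApp RAID group layouts
# (illustrative arithmetic only, not an ONTAP sizing tool).

PARITY_DRIVES = {"RAID-4": 1, "RAID-DP": 2, "RAID-TEC": 3}

def data_drives(raid_type, group_sizes):
    """Data drives across all RAID groups of one plex (concatenated like RAID-0)."""
    parity = PARITY_DRIVES[raid_type]
    return sum(size - parity for size in group_sizes)

# Example aggregate from the text: RG0:16+2, RG1:16+2, RG2:7+2 with RAID-DP.
print(data_drives("RAID-DP", [18, 18, 9]))   # -> 39 data drives out of 45 total
```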
Flash Pool
NetApp Flash Pool is a feature of hybrid NetApp FAS systems that allows the creation of a hybrid aggregate with HDD drives and SSD drives in a single data aggregate. Both HDD and SSD drives form separate RAID groups. Since the SSDs are also used for write operations, they require RAID redundancy, in contrast to Flash Cache, but different RAID types can be used for the HDDs and SSDs; for example, it is possible to have 20 × 8 TB HDDs in RAID-TEC and 4 × 960 GB SSDs in RAID-DP in a single aggregate. The SSD RAID is used as a cache and improves performance of read and write operations for FlexVol volumes on the aggregate to which the SSD cache was added. Flash Pool caching, similarly to Flash Cache, has policies for read operations, but it also covers write operations, and the policies can be applied separately to each FlexVol volume located on the aggregate; caching can therefore be disabled on some volumes while others benefit from the SSD cache. Both Flash Cache and Flash Pool can be used simultaneously to cache data from a single FlexVol. To enable Flash Pool on an aggregate, a minimum of 4 SSD disks is required (2 data, 1 parity, and 1 hot spare); it is also possible to use ADP technology to partition the SSDs into 4 pieces (Storage Pool) and distribute those pieces between two controllers, so that each controller benefits from the SSD cache when only a small number of SSDs is available. Flash Pool is not available with FlexArray and is available only with NetApp FAS native disk drives in NetApp's disk shelves.
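A simplified sketch of the Storage Pool idea mentioned above (assumptions: the partition-to-allocation-unit mapping shown here is a simplification, and the `storage_pool` helper is hypothetical, not ONTAP code): each SSD is cut into four partitions, and the resulting allocation units are divided between the two controllers of the HA pair.

```python
# Illustrative sketch of an ADP SSD storage pool (simplified, not ONTAP
# internals): each SSD is cut into 4 partitions; partitions with the same
# index across all SSDs form an allocation unit, and the allocation units
# are split between the two controllers of the HA pair.

def storage_pool(ssds, partitions_per_ssd=4, nodes=("node-a", "node-b")):
    allocation_units = [
        [f"{ssd}:p{p}" for ssd in ssds]          # one partition from every SSD
        for p in range(1, partitions_per_ssd + 1)
    ]
    half = len(allocation_units) // 2
    return {nodes[0]: allocation_units[:half], nodes[1]: allocation_units[half:]}

pool = storage_pool(["ssd1", "ssd2", "ssd3", "ssd4"])
# Each node gets a share of the SSD cache even with only four SSDs.
print({node: len(units) for node, units in pool.items()})   # {'node-a': 2, 'node-b': 2}
```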
FlexArray
FlexArray is NetApp FAS functionality that allows third-party storage systems and other NetApp storage systems to be virtualized over SAN protocols and used in place of NetApp's disk shelves. With FlexArray, RAID protection must be provided by the third-party storage array; NetApp's RAID-4, RAID-DP and RAID-TEC are therefore not used in such configurations. One or more LUNs from third-party arrays can be added to a single aggregate, similarly to RAID-0. FlexArray is a licensed feature.
NetApp Storage Encryption
NetApp Storage Encryption (NSE) uses special-purpose disks with low-level hardware-based full disk encryption (FDE/SED). It is compatible with nearly all NetApp ONTAP features and protocols, but does not support MetroCluster. The NSE feature has nearly zero overall performance impact on the storage system. NSE, similarly to NetApp Volume Encryption (NVE), can store encryption keys locally in the Onboard Key Manager or on dedicated key management systems using the KMIP protocol, such as IBM Security Key Lifecycle Manager and SafeNet KeySecure. NSE provides data-at-rest encryption, which means it protects only against physical disk theft and does not add a further level of data security protection on a normally operating, running system.
PAM / Flash Cache
A NetApp filer can have a PAM (Performance Acceleration Module) or Flash Cache (PAM II) card, which can reduce read latencies and allows the filer to process more read-intensive work without adding further disks to the underlying RAID, since read operations do not require redundancy in case of a Flash Cache failure. Flash Cache works at the controller level and accelerates only read operations. Each volume on the controller can have a different caching policy, or read caching can be disabled for a volume; Flash Cache caching policies are applied at the FlexVol level. Flash Cache technology is compatible with the FlexArray feature. Starting with ONTAP 9.1, a single FlexVol volume can benefit from both Flash Pool and Flash Cache caches simultaneously.
MetroCluster
MetroCluster (MC) is free functionality for FAS and AFF systems that provides metro high availability with synchronous replication between two sites; this configuration requires additional equipment. It is available in both modes: 7-Mode (the old OS) and Cluster-Mode (cDOT, the newer version of the ONTAP OS). MetroCluster uses the SyncMirror and plex technique: at one site a number of disks form one or more RAID groups aggregated in a plex, while the second site has the same number of disks of the same type and RAID configuration. One plex synchronously replicates to the other, in combination with non-volatile memory. The two plexes form an aggregate in which the data is stored, and in the event of a disaster at one site, the second site provides read-write access to the data. MetroCluster supports FlexArray technology. MetroCluster configurations are possible only with mid-range and high-end models, which provide the ability to install the additional network cards required for MC to function.
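A minimal sketch of the plex-mirroring idea (illustrative only; the classes below are hypothetical and not NetApp code): every write is applied synchronously to both plexes, and if one site's plex is lost, reads continue from the surviving plex.

```python
# Minimal sketch of SyncMirror-style plex mirroring (illustrative only).

class Plex:
    def __init__(self, site):
        self.site = site
        self.blocks = {}
        self.online = True

class MirroredAggregate:
    def __init__(self):
        self.plex0 = Plex("site-A")
        self.plex1 = Plex("site-B")

    def write(self, block, data):
        for plex in (self.plex0, self.plex1):
            if plex.online:
                plex.blocks[block] = data     # synchronous write to both sites

    def read(self, block):
        for plex in (self.plex0, self.plex1):
            if plex.online:
                return plex.blocks[block]
        raise IOError("no plex available")

aggr = MirroredAggregate()
aggr.write(0, b"data")
aggr.plex0.online = False                     # disaster at site A
assert aggr.read(0) == b"data"                # site B still serves the data
```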
Cluster-Mode MetroCluster
With MetroCluster it is possible to have one or more storage nodes per site forming a cluster, also called a Clustered MetroCluster (MCC). The remote and local HA partner nodes must be the same model. An MCC consists of two clusters, each located at one of the two sites; there can be only two sites. In an MCC configuration, one remote and one local storage node form a Metro HA or Disaster Recovery Pair (DR Pair) across the two sites, while two local nodes (if there is a partner) form a local HA pair; thus each node synchronously replicates the data in its non-volatile memory to two nodes: one remote and one local (if there is one). It is possible to use only one storage node at each site (two single-node clusters) configured as an MCC. An 8-node MCC consists of two clusters of 4 nodes each (2 HA pairs); each storage node has only one remote partner and only one local HA partner, and in such a configuration each site's cluster can consist of two different storage node models. For small distances, MetroCluster requires at least one FC-VI or newer iWARP card per node. FAS and AFF systems with ONTAP software versions 9.2 and older use FC-VI cards and, for long distances, require 4 dedicated Fibre Channel switches (2 at each site) and 2 FC-SAS bridges per disk shelf stack (thus a minimum of 4 in total for 2 sites), plus a minimum of 2 dark fiber ISL links, with optional DWDMs for long distances. Data volumes, LUNs and LIFs can be migrated online across storage nodes in the cluster only within the single site where the data originated: it is not possible to migrate individual volumes, LUNs or LIFs across sites using cluster capabilities, unless the MetroCluster switchover operation is used, which disables the entire half of the cluster at one site and, transparently to clients and applications, switches access to all of the data to the other site.
MetroCluster over IP
Starting with ONTAP 9.3, MetroCluster over IP was introduced, removing the need for dedicated back-end Fibre Channel switches, FC-SAS bridges and dedicated dark fiber ISLs. MetroCluster over IP requires Ethernet cluster switches with installed ISLs and uses iWARP cards in each storage controller for synchronous replication.
Data ONTAP OS
NetApp filers use a proprietary OS called ONTAP (previously Data ONTAP). The main purpose of the OS in a storage system is to serve data to clients in a non-disruptive manner using data protocols such as CIFS, NFS, iSCSI, Fibre Channel and NVMe over Fabrics (NVMe-oF; currently only FC-NVMe is supported), and to provide enterprise features such as high availability, disaster recovery and data backup. The ONTAP OS provides enterprise-level data management features such as FlexClone, SnapMirror, SnapLock and MetroCluster; most of them are snapshot-based capabilities of the WAFL file system.
WAFL File System
WAFL, the robust versioning filesystem in NetApp's proprietary ONTAP OS, provides snapshots, which allow end-users to see earlier versions of files in the file system. Snapshots appear in a hidden directory: ~snapshot for Windows (SMB) or .snapshot for Unix (NFS). Up to 255 snapshots can be made of any traditional or flexible volume. Snapshots are read-only, although ONTAP provides an additional ability to make writable "virtual clones", based on the "WAFL snapshots" technique, called "FlexClones".
ONTAP implements snapshots by tracking changes to disk blocks between snapshot operations. It can create a snapshot in seconds because it only needs to take a copy of the root inode in the filesystem. This differs from the snapshots provided by some other storage vendors, in which every block of storage has to be copied, which can take many hours.
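A simplified copy-on-write sketch of this idea (assuming a toy `Volume` model rather than the actual WAFL implementation): new data is always written to new blocks, so taking a snapshot only requires copying the root mapping, and earlier file versions remain readable through the snapshot's root.

```python
# Illustrative copy-on-write sketch of a snapshot taken by copying only the
# root pointer (simplified; not the actual WAFL implementation).

class Volume:
    def __init__(self):
        self.blocks = {}                     # block number -> data
        self.active_root = {}                # file name -> block number
        self.snapshots = {}                  # snapshot name -> copy of the root
        self.next_block = 0

    def write(self, name, data):
        # "Write anywhere": new data always goes to a new block, so blocks
        # referenced by snapshots are never overwritten.
        self.blocks[self.next_block] = data
        self.active_root[name] = self.next_block
        self.next_block += 1

    def snapshot(self, snap_name):
        # Taking a snapshot copies only the root, not every data block.
        self.snapshots[snap_name] = dict(self.active_root)

    def read(self, name, snap_name=None):
        root = self.snapshots[snap_name] if snap_name else self.active_root
        return self.blocks[root[name]]

vol = Volume()
vol.write("file.txt", b"v1")
vol.snapshot("hourly.0")
vol.write("file.txt", b"v2")
print(vol.read("file.txt"))                  # b'v2'  (active file system)
print(vol.read("file.txt", "hourly.0"))      # b'v1'  (earlier version)
```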
7MTT
Each filer running Data ONTAP 8 could operate in one of two modes: 7-Mode or Cluster-Mode. In reality, each mode was a separate OS with its own version of WAFL; both 7-Mode and Cluster-Mode were shipped in a single firmware image for filers until version 8.3, in which 7-Mode was deprecated. It is possible to switch between modes on a filer, but all data on the disks must be destroyed first, since the WAFL versions are not compatible. A server-based application called the 7MTT tool was introduced to migrate data from old 7-Mode filers to new Cluster-Mode filers:
- Copy-based transition uses SnapMirror-based replication to migrate all of the data with planned downtime, using only storage vendor capabilities. Copy-based transition requires new controllers and disks with no less space than on the source system if all the data is to be migrated. Both SAN and NAS data are supported.
- Starting with 7-Mode 8.2.1 and Cluster-Mode 8.3.2, WAFL compatibility was introduced along with a new 7MTT feature called Copy-free transition, which replaces old controllers running 7-Mode with new controllers running Cluster-Mode with planned downtime; the new system requires additional system disks with root aggregates for the new controllers (as few as 6 disks). Since no data copying is required with Copy-free transition, the 7MTT tool assists only with reconfiguration of the new controllers. Both SAN and NAS data conversion are supported.
In addition to 7MTT, there are two other paths to migrate data, depending on the protocol type:
- SAN data can be copied with the Foreign LUN Import (FLI) functionality integrated in NetApp filer systems, which copies data over a SAN protocol while the new filer is placed as a SAN proxy between the hosts and the old storage system; this requires host reconfiguration and minimal downtime. FLI is available for old 7-Mode systems and for some storage system models from competitors.
- NAS data can be copied with NetApp XCP, a free host-based utility: the utility performs a host-based copy from any source server over the SMB or NFS protocols to the ONTAP system, with minimal downtime to reconfigure client systems for the new NAS server.
Previous limitations
Before the release of ONTAP 8, individual aggregate sizes were limited to a maximum of 2TB for FAS250 models and 16TB for all other models.
The limitation on aggregate size, coupled with the increasing density of disk drives, served to limit the performance of the overall system. NetApp, like most storage vendors, increases overall system performance by parallelizing disk writes to many different spindles (disk drives). Large-capacity drives therefore limit the number of spindles that can be added to a single aggregate, and thereby limit aggregate performance.
Each aggregate also incurs a storage capacity overhead of approximately 7-11%, depending on the disk type. On systems with many aggregates this can result in lost storage capacity.
However, this overhead arises from additional block checksumming at the disk level as well as the usual file system overhead, similar to that of file systems like NTFS or ext3. Block checksumming helps to ensure that data errors at the disk drive level do not result in data loss.
Data ONTAP 8.0 uses a new 64-bit aggregate format, which increases the size limit of FlexVol volumes to approximately 100 TB (depending on storage platform) and also increases the size limit of aggregates to more than 100 TB on newer models (depending on storage platform), thus restoring the ability to configure large spindle counts to increase performance and storage efficiency.
Performance
AI performance tests (image distortion disabled):
| AI | ResNet-50: 4 GPU | 8 GPU | 16 GPU | 32 GPU | ResNet-152: 4 GPU | 8 GPU | 16 GPU | 32 GPU |
|---|---|---|---|---|---|---|---|---|
| NetApp A700 Nvidia | 1131 | 2048 | 4870 | | | | | |
| NetApp A800 Nvidia | 6000 | 11200 | 22500 | | | | | |
| AI | AlexNet: 4 GPU | 8 GPU | 16 GPU | 32 GPU |
|---|---|---|---|---|
| NetApp A700 Nvidia | 4243 | 4929 | | |
| NetApp A800 Nvidia | | | | |
Model history
This list may omit some models. Information taken from spec.org, netapp.com and storageperformance.org
Model | Status | Released | CPU | Main system memory | Nonvolatile memory | Raw capacity | Benchmark | Result |
---|---|---|---|---|---|---|---|---|
FASServer 400 | Discontinued | Jan 1993 | 50 MHz Intel i486 | ? MB | 4 MB | 14 GB | ? | |
FASServer 450 | Discontinued | Jan 1994 | 50 MHz Intel i486 | ? MB | 4 MB | 14 GB | ? | |
FASServer 1300 | Discontinued | Jan 1994 | 50 MHz Intel i486 | ? MB | 4 MB | 14 GB | ? | |
FASServer 1400 | Discontinued | Jan 1994 | 50 MHz Intel i486 | ? MB | 4 MB | 14 GB | ? | |
FASServer | Discontinued | Jan 1995 | 50 MHz Intel i486 | 256 MB | 4 MB | ? GB | 640 | |
F330 | Discontinued | Sept 1995 | 90 MHz Intel Pentium | 256 MB | 8 MB | 117 GB | 1310 | |
F220 | Discontinued | Feb 1996 | 75 MHz Intel Pentium | 256 MB | 8 MB | ? GB | 754 | |
F540 | Discontinued | June 1996 | 275 MHz DEC Alpha 21064A | 256 MB | 8 MB | ? GB | 2230 | |
F210 | Discontinued | May 1997 | 75 MHz Intel Pentium | 256 MB | 8 MB | ? GB | 1113 | |
F230 | Discontinued | May 1997 | 90 MHz Intel Pentium | 256 MB | 8 MB | ? GB | 1610 | |
F520 | Discontinued | May 1997 | 275 MHz DEC Alpha 21064A | 256 MB | 8 MB | ? GB | 2361 | |
F630 | Discontinued | June 1997 | 500 MHz DEC Alpha 21164A | 512 MB | 32 MB | 464 GB | 4328 | |
F720 | Discontinued | Aug 1998 | 400 MHz DEC Alpha 21164A | 256 MB | 8 MB | 464 GB | 2691 | |
F740 | Discontinued | Aug 1998 | 400 MHz DEC Alpha 21164A | 512 MB | 32 MB | 928 GB | 5095 | |
F760 | Discontinued | Aug 1998 | 600 MHz DEC Alpha 21164A | 1 GB | 32 MB | 1.39 TB | 7750 | |
F85 | Discontinued | Feb 2001 | 256 MB | 64 MB | 648 GB | |||
F87 | Discontinued | Dec 2001 | 1.13 GHz Intel P3 | 256 MB | 64 MB | 576 GB | ||
F810 | Discontinued | Dec 2001 | 733 MHz Intel P3 Coppermine | 512 MB | 128 MB | 1.5 TB | 4967 | |
F820 | Discontinued | Dec 2000 | 733 MHz Intel P3 Coppermine | 1 GB | 128 MB | 3 TB | 8350 | |
F825 | Discontinued | Aug 2002 | 733 MHz Intel P3 Coppermine | 1 GB | 128 MB | 3 TB | 8062 | |
F840 | Discontinued | Aug/Dec? 2000 | 733 MHz Intel P3 Coppermine | 3 GB | 128 MB | 6 TB | 11873 | |
F880 | Discontinued | July 2001 | Dual 733 MHz Intel P3 Coppermine | 3 GB | 128 MB | 9 TB | 17531 | |
FAS920 | Discontinued | May 2004 | 2.0 GHz Intel P4 Xeon | 2 GB | 256 MB | 7 TB | 13460 | |
FAS940 | Discontinued | Aug 2002 | 1.8 GHz Intel P4 Xeon | 3 GB | 256 MB | 14 TB | 17419 | |
FAS960 | Discontinued | Aug 2002 | Dual 2.2 GHz Intel P4 Xeon | 6 GB | 256 MB | 28 TB | 25135 | |
FAS980 | Discontinued | Jan 2004 | Dual 2.8 GHz Intel P4 Xeon MP 2 MB L3 | 8 GB | 512 MB | 50 TB | 36036 | |
FAS250 | EOA 11/08 | Jan 2004 | 600 MHz Broadcom BCM1250 dual core MIPS | 512 MB | 64 MB | 4 TB | ||
FAS270 | EOA 11/08 | Jan 2004 | 650 MHz Broadcom BCM1250 dual core MIPS | 1 GB | 128 MB | 16 TB | 13620* | |
FAS2020 | EOA 8/12 | June 2007 | 2.2 GHz Mobile Celeron | 1 GB | 128 MB | 68 TB | ||
FAS2040 | EOA 8/12 | Sept 2009 | 1.66 GHz Intel Xeon | 4 GB | 512 MB | 136 TB | ||
FAS2050 | EOA 5/11 | June 2007 | 2.2 GHz Mobile Celeron | 2 GB | 256 MB | 104 TB | 20027* | |
FAS2220 | EOA 3/15 | June 2012 | 1.73 GHz Dual Core Intel Xeon C3528 | 6 GB | 768 MB | 180 TB | ||
FAS2240 | EOA 3/15 | November 2011 | 1.73 GHz Dual Core Intel Xeon C3528 | 6 GB | 768 MB | 432 TB | 38000 | |
FAS2520 | EOA 12/17 | June 2014 | 1.73 GHz Dual Core Intel Xeon C3528 | 36 GB | 4 GB | 840 TB | ||
FAS2552 | EOA 12/17 | June 2014 | 1.73 GHz Dual Core Intel Xeon C3528 | 36 GB | 4 GB | 1243 TB | ||
FAS2554 | EOA 12/17 | June 2014 | 1.73 GHz Dual Core Intel Xeon C3528 | 36 GB | 4 GB | 1440 TB | ||
FAS2620 | Nov 2016 | 1 x 6-core Intel Xeon D-1528 @ 1.90GHz | 32 GB | 8 GB | 1440 TB | |||
FAS2650 | Nov 2016 | 1 x 6-core Intel Xeon D-1528 @ 1.90GHz | 32 GB | 8 GB | 1243 TB | |||
FAS2720 | May 2018 | 1 x 12 core 1.50 GHz Xeon D-1557 | 32 GB | 8 GB | ||||
FAS2750 | May 2018 | 1 x 12 core 1.50 GHz Xeon D-1557 | 32 GB | 8 GB | ||||
FAS3020 | EOA 4/09 | May 2005 | 2.8 GHz Intel Xeon | 2 GB | 512 MB | 84 TB | 34089* | |
FAS3040 | EOA 4/09 | Feb 2007 | Dual 2.4 GHz AMD Opteron 250 | 4 GB | 512 MB | 336 TB | 60038* | |
FAS3050 | Discontinued | May 2005 | Dual 2.8 GHz Intel Xeon | 4 GB | 512 MB | 168 TB | 47927* | |
FAS3070 | EOA 4/09 | Nov 2006 | Dual 1.8 GHz AMD dual core Opteron | 8 GB | 512 MB | 504 TB | 85615* | |
FAS3140 | EOA 2/12 | June 2008 | Single 2.4 GHz AMD Opteron Dual Core 2216 | 4 GB | 512 MB | 420 TB | SFS2008 | 40109* |
FAS3160 | EOA 2/12 | Dual 2.6 GHz AMD Opteron Dual Core 2218 | 8 GB | 2 GB | 672 TB | SFS2008 | 60409* | |
FAS3170 | EOA 2/12 | June 2008 | Dual 2.6 GHz AMD Opteron Dual Core 2218 | 16 GB | 2 GB | 840 TB | SFS97_R1 | 137306* |
FAS3210 | EOA 11/13 | Nov 2010 | Single 2.3 GHz Intel Xeon(tm) Processor (E5220) | 8 GB | 2 GB | 480 TB | SFS2008 | 64292 |
FAS3220 | EOA 12/14 | Nov 2012 | Single 2.3 GHz Intel Xeon(tm) Quad Processor (L5410) | 12 GB | 3.2GB | 1.44 PB | ?? | ?? |
FAS3240 | EOA 11/13 | Nov 2010 | Dual 2.33 GHz Intel Xeon(tm) Quad Processor (L5410) | 16 GB | 2 GB | 1.20 PB | ?? | ?? |
FAS3250 | EOA 12/14 | Nov 2012 | Dual 2.33 GHz Intel Xeon(tm) Quad Processor (L5410) | 40 GB | 4 GB | 2.16 PB | SFS2008 | 100922 |
FAS3270 | EOA 11/13 | Nov 2010 | Dual 3.0 GHz Intel Xeon(tm) Processor (E5240) | 40 GB | 4 GB | 1.92 PB | SFS2008 | 101183 |
FAS6030 | EOA 6/09 | Mar 2006 | Dual 2.6 GHz AMD Opteron | 32 GB | 512 MB | 840 TB | SFS97_R1 | 100295* |
FAS6040 | EOA 3/12 | Dec 2007 | 2.6 GHz AMD dual core Opteron | 16 GB | 512 MB | 840 TB | ||
FAS6070 | EOA 6/09 | Mar 2006 | Quad 2.6 GHz AMD Opteron | 64 GB | 2 GB | 1.008 PB | 136048* | |
FAS6080 | EOA 3/12 | Dec 2007 | 2 x 2.6 GHz AMD dual core Opteron 280 | 64 GB | 4 GB | 1.176 PB | SFS2008 | 120011* |
FAS6210 | EOA 11/13 | Nov 2010 | 2 x 2.27 GHz Intel Xeon(tm) Processor E5520 | 48 GB | 8 GB | 2.40 PB | ||
FAS6220 | EOA 3/15 | Feb 2013 | 2 x 64-bit 4-core Intel Xeon(tm) Processor E5520 | 96 GB | 8 GB | 4.80 PB | ||
FAS6240 | EOA 11/13 | Nov 2010 | 2 x 2.53 GHz Intel Xeon(tm) Processor E5540 | 96 GB | 8 GB | 2.88 PB | SFS2008 | 190675 |
FAS6250 | EOA 3/15 | Feb 2013 | 2 x 64-bit 4-core | 144 GB | 8 GB | 5.76 PB | ||
FAS6280 | EOA 11/13 | Nov 2010 | 2 x 2.93 GHz Intel Xeon(tm) Processor X5670 | 192 GB | 8 GB | 2.88 PB | ||
FAS6290 | EOA 3/15 | Feb 2013 | 2 x 2.93 GHz Intel Xeon(tm) Processor X5670 | 192 GB | 8 GB | 5.76 PB | ||
FAS8020 | EOA 12/17 | Mar 2014 | 1 x Intel Xeon CPU E5-2620 @ 2.00GHz | 24 GB | 8 GB | 1.92 PB | SFS2008 | 110281 |
FAS8040 | EOA 12/17 | Mar 2014 | 1 x 64-bit 8-core 2.10 GHz E5-2658 | 64 GB | 16 GB | 2.88 PB | ||
FAS8060 | EOA 12/17 | Mar 2014 | 2 x 64-bit 8-core 2.10 GHz E5-2658 | 128 GB | 16 GB | 4.80 PB | ||
FAS8080EX | EOA 12/17 | Jun 2014 | 2 x 64-bit 10-core 2.80 GHz E5-2680 v2 | 256 GB | 32 GB | 8.64 PB | SPC-1 IOPS | 685,281.71* |
FAS8200 | Nov 2016 | 1 x 16 core 1.70 GHz D-1587 | 128 GB | 16 GB | 4.80 PB | SPEC SFS2014_swbuild | 4130 MBps / 260 020 IOPS @2.7ms (ORT = 1.04 ms) | |
FAS9000 | Nov 2016 | 2 x 18-core 2.30 GHz E5-2697 v4 | 512 GB | 64 GB | 14.4 PB | |||
AFF8040 | EOA 10/17 | Mar 2014 | 1 x 64-bit 8-core 2.10 GHz E5-2658 | 64 GB | 16 GB | |||
AFF8060 | EOA 11/16 | Mar 2014 | 2 x 64-bit 8-core 2.10 GHz E5-2658 | 128 GB | 16 GB | |||
AFF8080 | EOA 10/17 | Jun 2014 | 2 x 64-bit 10-core 2.80 GHz E5-2680 v2 | 256 GB | 32 GB | |||
AFF A200 | 2017 | 1 x 6-core Intel Xeon D-1528 @ 1.90GHz | 32 GB | 8 GB | ||||
AFF A220 | May 2018 | 1 x 12 core 1.50 GHz Xeon D-1557 | 32 GB | 8 GB | ||||
AFF A300 | 2016 | 1 x 16-core Intel Xeon D-1587 @ 1.70GHz | 128 GB | 16 GB | ||||
AFF A700 | 2016 | 2 x 18-core 2.30 GHz E5-2697 v4 | 512 GB | 64 GB | ||||
AFF A700s | 2017 | 2 x 18-core 2.30 GHz E5-2697 v4 | 512 GB | 32 GB | SPC-1 | 2 400 059 IOPS @0.69ms | ||
AFF A800 | May 2018 | 2 x 24-core 2.10 GHz 8160 Skylake | 1280 GB | 64 GB | SPC-1 v3.6 | 2 401 171 IOPS @0.59ms with FC protocol | ||
EOA = End of Availability
SPECsfs results marked with "*" are clustered results. The SPECsfs benchmarks performed include SPECsfs93, SPECsfs97, SPECsfs97_R1 and SPECsfs2008. Results from different benchmark versions are not comparable.
See also
- Network attached storage
- NetApp
- ONTAP operating system, used in NetApp storage systems
- Write Anywhere File Layout (WAFL), used in NetApp storage systems
References
- ↑ Nabrzyski, Jarek; Schopf, Jennifer M.; Węglarz, Jan (2004). Grid Resource Management: State of the Art and Future Trends. Springer. p. 342. ISBN 978-1-4020-7575-9. https://books.google.com/books?id=o5J8kyfreXAC&pg=PA535. Retrieved 11 June 2012.
- ↑ "NetApp AFF A200 VMmark 3 Results Published". Storage Review. 31 January 2018. http://www.storagereview.com/netapp_aff_a200_vmmark_3_results_publisheddf. Retrieved 1 June 2018.
- ↑ "TR-3298. RAID-DP: NetApp Implementation of Double-Parity RAID for Data Protection". NetApp. 1 March 2013. https://www.netapp.com/us/media/tr-3298.pdf. Retrieved 29 January 2018.
- ↑ "RAID Triple Parity". NetApp. https://atg.netapp.com/wp-content/uploads/2012/12/RTP_Goel.pdf. Retrieved 29 January 2018.
- ↑ "Back to Basics: RAID-DP". NetApp. 11 October 2013. http://community.netapp.com/t5/Tech-OnTap-Articles/Back-to-Basics-RAID-DP/ta-p/86123. Retrieved 24 January 2018.
External links
- Storage Filer (definitions)
- SnapLock Technical Report
- NetApp training videos
- NETWORK-APPLIANCE (Mib file)
- NetApp end of availability information