Dell EMC ScaleIO

Dell EMC
Type: Subsidiary
Traded as: NYSE: EMC (1986–2016)[1]
Industry: Computer storage
Founded: 1979
Founders: Richard Egan, Roger Marino
Headquarters: Hopkinton, Massachusetts, United States
Area served: Worldwide
Key people: Jeff Clarke (President, Infrastructure Solutions Group, Dell EMC)
Products: See EMC products
Parent: Dell Technologies
Website: dellemc.com

Dell EMC PowerFlex (previously known as ScaleIO and VxFlex OS) is a commercial software-defined storage product from Dell EMC that creates a server-based storage area network (SAN) from the local storage of x86 servers. It converts this direct-attached storage into shared block storage that runs over an IP-based network.

PowerFlex can scale from three compute/storage nodes to over 1,000 nodes, driving up to 240 million IOPS. PowerFlex is also bundled with Dell EMC commodity servers (officially called VxFlex Ready Nodes, the PowerFlex appliance, and the PowerFlex rack).

PowerFlex can be deployed as storage only or as a converged infrastructure that combines storage, compute, and networking resources into a single block. Capacity and performance of all available resources are aggregated and made available to every participating PowerFlex server and application. Storage tiers can be created from media and drive types whose performance or capacity characteristics best suit application needs.

History

ScaleIO was founded in 2011 by Boaz Palgi, Erez Webman, Lior Bahat, Eran Borovik, and Erez Ungar in Israel.[2] The software was designed for high performance and large systems.[3]

ScaleIO announced its first product in November 2012.[4]

EMC Corporation bought ScaleIO in June 2013 for about $200 million, only about six months after the company emerged from stealth mode.[5][2] EMC began promoting ScaleIO in 2014 and 2015, marketing it in competition with EMC’s own data storage arrays. Also in 2015, EMC introduced a model of its VCE converged infrastructure hardware that supported ScaleIO storage.

At its 2015 trade show, EMC announced that ScaleIO would be made freely available to developers for testing, and by May 2015 developers could download the software. In September 2015, EMC announced the availability of the previously software-only ScaleIO pre-bundled on EMC commodity hardware, called the EMC ScaleIO Node.

In May 2017, Dell EMC announced ScaleIO.Next, featuring inline compression, thin provisioning, and flash-based snapshots. The release also added enhanced snapshot tooling, full support for VMware Virtual Volumes (VVols), and volume migration for deployments that want to move low-priority data onto lower-cost media.

In March 2018, ScaleIO was rebranded to VxFlex OS and continued to serve as the software-defined storage for VxFlex Ready Nodes, the VxFlex appliance, and the VxFlex integrated system (VxRack FLEX).

In April 2019, VxFlex OS 3.0 (ScaleIO 3.0) was released.

In June 2020, VxFlex OS was rebranded to PowerFlex, and version 3.5 was released with updates such as native asynchronous replication, an HTML5 web UI, secure snapshots, and other core improvements.

In June 2021, PowerFlex 3.6 was launched, adding replication for HCI with VMware SRM support, 15-second RPO timing, CloudIQ integration, Oracle Linux Virtualization support, network resiliency enhancements, and support for up to 2,000 SDCs.

Architecture

PowerFlex uses storage and compute resources of commodity hardware. It combines HDDs, SSDs, and PCIe flash cards to create a virtual pool of block storage with varying performance tiers. It features on-demand performance and storage scalability, as well as enterprise-grade data protection, multi-tenant capabilities, and add-on enterprise features such as QoS, thin provisioning and snapshots. PowerFlex operates on multiple hardware platforms and supports physical and/or virtual application servers.

PowerFlex works by installing software components on application hosts. Hosts contribute their internal disks and any other direct-attached storage to the PowerFlex cluster by running the SDS software, and can then map volumes from the cluster by running the SDC software. These components can run alongside other applications on any server (physical, virtual, or cloud) using any type of storage media (disk drives, flash drives, PCIe flash cards, or cloud storage), as the sketch below illustrates.
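As a rough illustration of this deployment model, the following Python sketch shows hosts that run the SDS role (contributing local disks), the SDC role (consuming volumes), or both. The class and field names are purely illustrative and are not part of any PowerFlex software interface:

```python
# Minimal sketch of the PowerFlex deployment model: a host may run the SDS
# role (contributing its local disks to the shared pool), the SDC role
# (mapping and consuming cluster volumes), or both. All names here are
# illustrative, not actual PowerFlex interfaces.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    runs_sds: bool = False                 # contributes local storage
    runs_sdc: bool = False                 # consumes cluster volumes
    local_disks_gb: list = field(default_factory=list)

def pooled_capacity_gb(hosts):
    # Raw capacity is aggregated from every host running the SDS role.
    return sum(sum(h.local_disks_gb) for h in hosts if h.runs_sds)

# Hyper-converged node: runs the application (SDC) and serves storage (SDS).
node1 = Host("node1", runs_sds=True, runs_sdc=True, local_disks_gb=[960, 960])
# Two-layer deployment: a dedicated storage node plus a compute-only consumer.
node2 = Host("node2", runs_sds=True, local_disks_gb=[1920] * 4)
node3 = Host("node3", runs_sdc=True)

print(f"Pooled raw capacity: {pooled_capacity_gb([node1, node2, node3])} GB")
# Pooled raw capacity: 9600 GB
```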

The PowerFlex architecture is built on two components: a data client and a data server. The Storage Data Client (SDC) is a lightweight device driver installed on each host whose application or file system needs access to PowerFlex virtual SAN block devices. The SDC exposes block devices representing the PowerFlex volumes currently mapped to that host. Each SDC maintains a small in-memory map, able to track petabytes of data with just megabytes of RAM. The inter-node protocol used by SDCs is simpler than iSCSI and uses fewer network resources.[3]
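A back-of-the-envelope calculation makes the memory claim plausible. The allocation-unit size and per-entry cost below are assumptions chosen for illustration, not published PowerFlex internals:

```python
# Why a coarse-grained map stays small: mapping 1 PB of volume address
# space at an assumed 1 GB granularity needs only about a million entries.
# Unit size and bytes-per-entry are illustrative assumptions.
PB = 2**50
GB = 2**30
MB = 2**20

volume_space = 1 * PB          # total mapped volume address space
allocation_unit = 1 * GB       # assumed granularity of the SDC map
bytes_per_entry = 16           # assumed: target SDS id + offset + flags

entries = volume_space // allocation_unit
map_size_mb = entries * bytes_per_entry / MB
print(f"{entries:,} map entries -> {map_size_mb:.1f} MB of RAM for 1 PB")
# 1,048,576 map entries -> 16.0 MB of RAM for 1 PB
```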

The Storage Data Server (SDS) is situated in each host and contributes local storage to the central PowerFlex virtual SAN. Each node is part of a loosely coupled cluster.[3]

Performance is expected to increase as servers and storage devices are added to the cluster; additional storage and compute resources (i.e., additional servers and drives) can be added modularly. Every server in the PowerFlex cluster participates in processing I/O, making the full aggregate throughput available to any application in the cluster, and any needed rebuilds and rebalances are processed in the background. Workloads are shared evenly through a parallel I/O architecture.
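A simple model illustrates this scale-out behavior. The per-node IOPS and efficiency factor below are assumed figures, not published PowerFlex performance data:

```python
# Illustrative scale-out model: every node serves I/O, so aggregate
# performance grows roughly linearly with node count. Per-node IOPS and
# the efficiency factor are assumptions, not PowerFlex specifications.
def aggregate_iops(nodes, iops_per_node=250_000, efficiency=0.95):
    # efficiency approximates protocol, rebuild, and rebalance overhead
    return int(nodes * iops_per_node * efficiency)

for n in (3, 16, 128, 1000):
    print(f"{n:>4} nodes -> {aggregate_iops(n):>13,} IOPS")
# With these assumed figures, 1,000 nodes land near the 240 million IOPS
# headline number quoted above.
```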

PowerFlex can be deployed either as a "two-layer" multi-server cluster, in which the application and storage run on separate servers, or as a "hyper-converged" option, where the application and storage run on the same servers, creating a low-footprint, low-cost, scalable single-layer architecture.

Storage and compute resources can be added to or removed from the PowerFlex cluster as needed, with no downtime and minimal impact on application performance. The cluster's self-healing, auto-balancing capability ensures that data is automatically rebuilt and rebalanced across resources when components are added, removed, or fail; the rebalancing behavior is sketched below. Because every server and local storage device in the cluster is used in parallel to process I/O and protect data, system performance scales linearly as servers and storage devices are added.
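The following sketch illustrates the rebalancing idea when capacity is added: nodes converge on an even share, and only surplus data needs to move. It models the behavior only; it is not PowerFlex's actual placement algorithm:

```python
# Illustrative auto-rebalance: when a node joins, chunks migrate until
# every node holds an even share, moving only the minimum necessary data.
# Behavioral sketch only, not PowerFlex's placement algorithm.
def rebalance(chunks_per_node):
    total = sum(chunks_per_node)
    target = total // len(chunks_per_node)
    # Only surpluses above the even share need to move.
    moved = sum(max(0, c - target) for c in chunks_per_node)
    return [target] * len(chunks_per_node), moved  # remainder ignored

# Four nodes each hold 1,000 chunks; a fifth, empty node joins.
before = [1000, 1000, 1000, 1000, 0]
after, moved = rebalance(before)
print(f"{before} -> {after}, moving {moved} chunks ({moved / sum(before):.0%})")
# [1000, 1000, 1000, 1000, 0] -> [800, 800, 800, 800, 800],
# moving 800 chunks (20%)
```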

PowerFlex spreads each chunk of written data across many nodes and mirrors it. This makes rebuilds after a disk loss very fast, since many nodes each contribute a small, parallel share of the rebuild effort. PowerFlex supports the VMware, Hyper-V, Xen, and KVM hypervisors, as well as OpenStack, Windows, Red Hat Enterprise Linux, SLES, CentOS, and CoreOS (Docker). Any application that needs block storage can use it, including Oracle and other high-performance databases.
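The sketch below shows why this declustered mirroring rebuilds quickly: the surviving copies of a failed node's chunks are scattered across all peers, so each peer re-protects only a small fraction of the lost data. Chunk size, node count, and rebuild bandwidth are assumed figures:

```python
# Why declustered mirroring rebuilds fast: each chunk's second copy lands
# on a random peer, so a failed node's data is re-protected by all peers
# in parallel. All figures below are assumptions for illustration.
import random
from collections import Counter

random.seed(42)
NODES = 20
CHUNKS = 10_000                  # chunks whose primary copy was on node 0
CHUNK_MB = 1                     # assumed chunk size

# The surviving copy of each of node 0's chunks lives on some other node.
survivors = Counter(random.randrange(1, NODES) for _ in range(CHUNKS))

worst_mb = max(survivors.values()) * CHUNK_MB
lost_mb = CHUNKS * CHUNK_MB
print(f"Worst-loaded peer re-mirrors only ~{worst_mb} MB of {lost_mb} MB lost")

rebuild_bw = 200                 # assumed per-node rebuild bandwidth, MB/s
print(f"Parallel rebuild ~{worst_mb / rebuild_bw:.1f} s "
      f"vs ~{lost_mb / rebuild_bw:.1f} s if a single node did it alone")
```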

References