Open-channel SSD

An open-channel solid-state drive is a solid-state drive that does not have a firmware flash translation layer implemented on the device, but instead leaves the management of the physical solid-state storage to the computer's operating system.[1][2] The Linux 4.4 kernel is an example of an operating system kernel that supports open-channel SSDs which follow the NVM Express specification. The interface used by the operating system to access open-channel solid-state drives is called LightNVM.[3][4][5]

NAND Flash Characteristics

Since SSDs use NAND flash memory for storing data, it is important to understand the characteristics of this medium. NAND flash provides a read/write/erase interface. A NAND package is organized into a hierarchy of dies, planes, blocks and pages. There may be one or several dies within a single physical package. A die allows a single I/O command to be executed at a time. A plane allows similar flash commands to be executed in parallel within a die. There are three fundamental programming constraints that apply to NAND:

  1. a write command must always contain enough data to program one (or several) full flash page(s),
  2. writes must be sequential within a block,
  3. an erase must be performed before a page within a block can be (re)written.
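
The following is a minimal sketch of these constraints in Python; the page and block sizes are illustrative assumptions, not values taken from any particular device.

  # Minimal sketch of an erase block and the three programming constraints above.
  # The page and block sizes are illustrative assumptions, not real device values.

  PAGE_SIZE = 16 * 1024        # bytes per flash page (assumed)
  PAGES_PER_BLOCK = 256        # pages per erase block (assumed)

  class Block:
      """One erase block: pages are programmed full, in order, after an erase."""
      def __init__(self):
          self.pages = [None] * PAGES_PER_BLOCK
          self.next_page = 0                      # write pointer within the block

      def program(self, page_index, data):
          # Constraint 1: a write must cover a full page.
          assert len(data) == PAGE_SIZE, "must program a full page"
          # Constraint 2: writes within a block must be sequential.
          assert page_index == self.next_page, "pages must be programmed in order"
          # Constraint 3: the block must be erased before a page is rewritten.
          assert self.pages[page_index] is None, "erase required before rewriting"
          self.pages[page_index] = data
          self.next_page += 1

      def erase(self):
          # Erasing resets all pages and the write pointer (and costs one PE cycle).
          self.pages = [None] * PAGES_PER_BLOCK
          self.next_page = 0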

The number of program/erase (PE) cycles is limited. Because of these constraints, SSD controllers write data to NAND flash memory in a different order than the logical block order. This implies that the SSD controller must maintain a mapping table from host (logical) to NAND (physical) addresses, usually called the L2P table. The layer that performs this translation from logical to physical addresses is called the flash translation layer, or FTL.[6]
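
As a hedged illustration (not the algorithm of any specific FTL), the L2P table can be modelled as an array indexed by logical block number that is updated on every out-of-place write.

  # Sketch of an L2P (logical-to-physical) table with out-of-place writes.
  # Garbage collection, wear leveling and error handling are omitted.

  NUM_LOGICAL_BLOCKS = 1024        # illustrative logical capacity, in 4 KB blocks
  UNMAPPED = -1

  l2p = [UNMAPPED] * NUM_LOGICAL_BLOCKS   # logical block -> physical page
  flash = {}                              # stand-in for the physical pages
  next_free_page = 0                      # append-only write pointer

  def write(logical_block, data):
      """Program the next free physical page and update the mapping."""
      global next_free_page
      flash[next_free_page] = data         # data lands in NAND write order...
      l2p[logical_block] = next_free_page  # ...and the table records where it went
      next_free_page += 1                  # any previously mapped page becomes stale

  def read(logical_block):
      """Translate the logical address through the L2P table."""
      physical_page = l2p[logical_block]
      return None if physical_page == UNMAPPED else flash[physical_page]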

Comparison with Traditional SSDs

Open-channel SSDs provide more flexibility with regard to data placement decisions, overprovisioning, scheduling, garbage collection and wear leveling.[7] Open-channel SSDs cannot, however, be considered a uniform class of devices, as critical device characteristics such as the minimum unit of read and the minimum unit of write vary from device to device.[8] It is therefore not possible to design a single FTL that automatically works on all open-channel SSDs.
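
For illustration only, the geometry a host FTL must adapt to can be summarized in a small record; the field names and values below are assumptions, not terms or figures from the Open-Channel SSD specification.

  # Two hypothetical devices with different minimum read/write units; a data
  # layout tuned for one does not automatically fit the other.

  from dataclasses import dataclass

  @dataclass
  class Geometry:
      parallel_units: int      # channels/LUNs the host can address independently
      min_read_bytes: int      # smallest unit the device can read
      min_write_bytes: int     # smallest unit the device can program
      erase_block_bytes: int   # unit that must be erased before rewriting

  device_a = Geometry(parallel_units=32, min_read_bytes=4096,
                      min_write_bytes=16384, erase_block_bytes=4 * 2**20)
  device_b = Geometry(parallel_units=128, min_read_bytes=512,
                      min_write_bytes=65536, erase_block_bytes=16 * 2**20)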

Traditional SSDs maintain the L2P table in DRAM on the SSD and use the drive's own controller CPU to maintain it. With open-channel SSDs, the L2P table is stored in host memory and maintained by the host CPU. While the open-channel approach is more flexible, significant amounts of host memory and host CPU cycles are required for L2P management. With an average write size of 4 KB, almost 3 GB of RAM is required for an SSD with a capacity of 1 TB.[9]
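
A rough back-of-the-envelope calculation illustrates the order of magnitude; the per-entry cost used here is an assumption (actual mapping structures are implementation-specific), not a figure from the cited release notes.

  # Rough, hedged estimate of host RAM for a flat L2P table on a 1 TB drive.
  capacity_bytes  = 10**12          # 1 TB of logical capacity
  mapping_unit    = 4 * 1024        # one entry per 4 KB written
  bytes_per_entry = 12              # assumed: physical address plus structure overhead

  entries = capacity_bytes // mapping_unit       # about 244 million entries
  ram_gib = entries * bytes_per_entry / 2**30    # about 2.7 GiB, i.e. close to 3 GB
  print(f"{entries:,} entries, roughly {ram_gib:.1f} GiB of host RAM")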

File Systems for Open-Channel SSDs

With open-channel SSDs, the L2P mapping can be directly integrated or merged with storage management in file systems.[10] This avoids the redundancy between system software and SSD firmware, and thus improves performance and endurance. Furthermore, open-channel SSDs enable more flexible control over flash memory. The internal parallelism can be exploited by coordinating the data layout, garbage collection and request scheduling of both system software and SSD firmware, removing conflicts between them and thereby improving and smoothing performance.[11]
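
As a hedged sketch of one such coordination point (the round-robin policy below is illustrative and not the placement algorithm of any particular file system), a file system that knows how many parallel units the device exposes can stripe consecutive allocations across them.

  # Parallelism-aware placement: consecutive segment allocations are spread
  # round-robin over the device's parallel units instead of queuing on one unit.

  PARALLEL_UNITS = 8          # assumed number of channels/LUNs exposed by the device

  def parallel_unit_for(segment_index):
      """Round-robin: neighbouring segments land on different parallel units."""
      return segment_index % PARALLEL_UNITS

  # Eight consecutive allocations use all eight units once each.
  print([parallel_unit_for(i) for i in range(8)])   # [0, 1, 2, 3, 4, 5, 6, 7]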

References

  1. Matias Bjørling (March 12, 2015). "Open-Channel Solid State Drives". https://events.static.linuxfound.org/sites/events/files/slides/LightNVM-Vault2015.pdf. 
  2. Lu, Youyou; Shu, Jiwu; Zheng, Weimin (2013). "Extending the Lifetime of Flash-based Storage through Reducing Write Amplification from File Systems". FAST. https://www.usenix.org/system/files/conference/fast13/fast13-final110.pdf. 
  3. Corbet, Jonathan (22 April 2015). "Taking control of SSDs with LightNVM". https://lwn.net/Articles/641247/. 
  4. Michael Larabel (15 November 2015). "A Look At The New Features Of The Linux 4.4 Kernel". Phoronix. https://www.phoronix.com/scan.php?page=article&item=linux-44-features&num=1. 
  5. Michael Larabel (3 November 2015). "LightNVM Support Is Going Into Linux 4.4". Phoronix. https://www.phoronix.com/scan.php?page=news_item&px=Linux-4.4-LightNVM. 
  6. Bjørling, Matias; Gonzalez, Javier; Bonnet, Philippe (2017). "LightNVM: The Linux Open-Channel SSD Subsystem". USENIX FAST. pp. 359–374. https://www.usenix.org/system/files/conference/fast17/fast17-bjorling.pdf. 
  7. Bjørling, Matias (12 March 2015). "Open-Channel Solid State Drives". Vault. https://events.static.linuxfound.org/sites/events/files/slides/LightNVM-Vault2015.pdf. Retrieved 3 February 2019. 
  8. Picoli, Ivan Luiz; Hedam, Niclas; Bonnet, Philippe; Tözün, Pınar (12 January 2020). "Open-Channel SSD (What is it Good For)". CIDR. http://cidrdb.org/cidr2020/papers/p17-picoli-cidr20.pdf. Retrieved 4 March 2020. 
  9. "Fusion ioMemory™ VSL® 3.2.15". SanDisk, a Western Digital Brand. https://www.osslab.com.tw/wp-content/uploads/2018/02/ioMemory_VSL_3.2.15_Release_Notes_2017-11-07.pdf. 
  10. Lu, Youyou; Shu, Jiwu; Zheng, Weimin (2013). "Extending the Lifetime of Flash-based Storage through Reducing Write Amplification from File Systems". FAST. https://www.usenix.org/system/files/conference/fast13/fast13-final110.pdf. 
  11. Zhang, Jiacheng; Shu, Jiwu; Lu, Youyou (2016). "ParaFS: A Log-Structured File System to Exploit the Internal Parallelism of Flash Devices". USENIX ATC. https://www.usenix.org/system/files/conference/atc16/atc16-paper-zhang.pdf.