Fusion-io NVMFS
SanDisk/Fusion-io's NVMFS file system, formerly known as Direct File System (DFS),[1][2] accesses flash memory through a virtualized flash storage layer instead of the traditional block layer API. NVMFS has two main novel features. First, it lays out files directly in a very large virtual storage address space. Second, it delegates block allocation and atomic updates to the virtualized flash storage layer. As a result, NVMFS performs better and is much simpler than a traditional Unix file system with similar functionality. This approach also avoids the log-on-log performance problems triggered by log-structured file systems.[3]

Microbenchmark results show that NVMFS can deliver 94,000 I/O operations per second (IOPS) for direct reads and 71,000 IOPS for direct writes with the virtualized flash storage layer on top of a first-generation Fusion-io ioDrive. For direct access, NVMFS is consistently faster than ext3 on the same platform, sometimes by 20%; for buffered access, it is also consistently faster, sometimes by over 149%. Application benchmarks show that NVMFS outperforms ext3 by 7% to 250% while using less CPU.[1] I/O latency is also lower with NVMFS than with ext3.[4]
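The direct-I/O figures above refer to microbenchmarks that bypass the page cache. As a rough illustration of what such a test can look like, below is a minimal sketch of a random direct-read benchmark. It assumes Linux, a pre-created test file at a placeholder path, and a 4 KiB request size; none of these values are taken from the DFS paper.

```c
/* Minimal sketch of a 4 KiB random direct-read microbenchmark.
 * The path and sizes below are placeholders, not values from the DFS paper.
 * Build with: cc -O2 -o dread dread.c */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/mnt/nvmfs/testfile";  /* hypothetical test file */
    const size_t blk = 4096;                   /* 4 KiB I/O size */
    const long iters = 100000;

    int fd = open(path, O_RDONLY | O_DIRECT);  /* bypass the page cache */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
    off_t nblocks = st.st_size / (off_t)blk;
    if (nblocks == 0) { fprintf(stderr, "test file too small\n"); return 1; }

    void *buf;
    if (posix_memalign(&buf, blk, blk) != 0) return 1;  /* O_DIRECT needs an aligned buffer */

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < iters; i++) {
        off_t off = (off_t)(rand() % nblocks) * (off_t)blk;  /* random block-aligned offset */
        if (pread(fd, buf, blk, off) != (ssize_t)blk) { perror("pread"); return 1; }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.0f IOPS (4 KiB random direct reads)\n", iters / secs);

    free(buf);
    close(fd);
    return 0;
}
```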
Flash Memory API
The API used by NVMFS to access flash memory consists of the following (see the interface sketch after this list):[5]
- An address space that is several orders of magnitude larger than the storage capacity of the flash memory.
- Read, append and trim/deallocate/discard primitives.
- Atomic writes.[6]
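The sources do not reproduce the exact programming interface, so the following is a hypothetical C sketch of what such an interface could look like. The names and signatures are illustrative only (declarations, not a working driver) and are not the actual Fusion-io/SanDisk API.

```c
/* Hypothetical sketch of a virtualized flash storage interface with the
 * primitives listed above. Names and signatures are illustrative only. */
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>

typedef uint64_t vfs_addr_t;   /* address in the sparse virtual flash address space */

typedef struct {
    vfs_addr_t  addr;          /* target virtual address */
    const void *buf;           /* data to write */
    size_t      len;           /* length in bytes */
} vfs_iov_t;

/* Read len bytes at a virtual address. */
ssize_t vfs_read(int dev, vfs_addr_t addr, void *buf, size_t len);

/* Append (write) data for a range of the virtual address space; physically
 * the layer appends to flash and updates its own address mapping. */
ssize_t vfs_append(int dev, vfs_addr_t addr, const void *buf, size_t len);

/* Trim/deallocate/discard a range so the layer can garbage-collect it. */
int vfs_trim(int dev, vfs_addr_t addr, size_t len);

/* Apply a batch of writes atomically: either all vectors persist or none do. */
int vfs_atomic_write(int dev, const vfs_iov_t *iov, int iov_count);
```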
The layer that provides this API is called the virtualized flash storage layer in the DFS paper.[1] This layer is responsible for block allocation, wear leveling, garbage collection, crash recovery, and address translation, as well as for making the address-translation data structures persistent.
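To illustrate why these responsibilities sit below the file system, the following is a toy sketch, with assumed names and sizes, of the kind of address translation such a layer performs: each write remaps a virtual block to a fresh physical block, leaving the old block as garbage for the layer's garbage collector and giving it freedom to place writes for wear leveling. This is not Fusion-io's implementation.

```c
/* Toy address-translation sketch: virtual block -> physical flash block.
 * Illustrative only; the real layer uses far more sophisticated structures. */
#include <stdint.h>
#include <stdio.h>

#define NUM_VBLOCKS 1024u             /* toy virtual block count */
#define UNMAPPED    UINT32_MAX

static uint32_t map[NUM_VBLOCKS];     /* virtual block -> physical block */
static uint32_t next_free_pblock = 0; /* toy allocator: hand out blocks in order */

/* On a write, allocate a fresh physical block and update the mapping.
 * The previously mapped physical block becomes garbage. */
static uint32_t translate_on_write(uint32_t vblock)
{
    uint32_t old = map[vblock];
    map[vblock] = next_free_pblock++;
    if (old != UNMAPPED)
        printf("pblock %u is now garbage\n", old);
    return map[vblock];
}

int main(void)
{
    for (uint32_t i = 0; i < NUM_VBLOCKS; i++)
        map[i] = UNMAPPED;

    /* Two writes to the same virtual block land on different physical blocks. */
    printf("write 1 -> pblock %u\n", translate_on_write(42));
    printf("write 2 -> pblock %u\n", translate_on_write(42));
    return 0;
}
```

In the real layer, this mapping must also be made persistent and recoverable after a crash, which is why crash recovery is listed among its responsibilities.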
References
- ↑ 1.0 1.1 1.2 Josephson, William K.; Bongo, Lars A.; Flynn, David; Li, Kai (September 2010). "DFS: A File System for Virtualized Flash Storage". ACM Transactions on Storage 6 (3). doi:10.1145/1837915.1837922. https://www.usenix.org/legacy/events/fast10/tech/full_papers/josephson.pdf.
- ↑ Talagala, Nisha (24 August 2012). "Native Flash Support For Applications". Flash Memory Summit. http://www.flashmemorysummit.com/English/Collaterals/Proceedings/2012/20120823_S304B_Talagala.pdf.
- ↑ Yang, Jingpei; Plasson, Ned; Gillis, Greg; Talagala, Nisha; Sundararaman, Swaminathan (5 October 2014). "Don't stack your Log on my Log". 2nd Workshop on Interactions of NVM/Flash with Operating Systems and Workloads (INFLOW 14). https://www.usenix.org/system/files/conference/inflow14/inflow14-yang.pdf.
- ↑ Rochner, Thomas (19 September 2013). "Running NoSQL natively on flash". NoSQL Search Roadshow Zurich. http://nosqlroadshow.com/dl/basho-roadshow-zurich-2013/slides/ThomasRochner_RunningNoSQLNativelyOnFlash.pdf.
- ↑ Das, Dhananjoy (14 November 2014). "In a Battle of Hardware, Software Innovation Comes Out On Top". http://itblog.sandisk.com/in-a-battle-of-hardware-software-innovation-comes-out-on-top/.
- ↑ Ouyang, Xiangyong; Nellans, David; Wipfel, Robert; Flynn, David; Panda, Dhabaleswar K. (February 2011). "Beyond block I/O: Rethinking traditional storage primitives". 2011 IEEE 17th International Symposium on High Performance Computer Architecture. pp. 301–311. doi:10.1109/HPCA.2011.5749738. ISBN 978-1-4244-9432-3.