Fast path
Fast path is a term used in computer science to describe a path through a program with a shorter instruction path length than the normal path. For a fast path to be effective, it must handle the most commonly occurring tasks more efficiently than the normal path, leaving the latter to handle uncommon cases, corner cases, error handling, and other anomalies. Fast paths are a form of optimization.[1] For example, dedicated packet-routing hardware used to build computer networks often supports software dedicated to handling the most common kinds of packets; other kinds, such as packets carrying control information or packets directed at the device itself rather than being routed elsewhere, are put on the metaphorical "slow path", in this case usually implemented by software running on the control processor.
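To illustrate the general idea, the following C sketch shows a hypothetical pool allocator (the names, sizes and thresholds are illustrative assumptions, not taken from the cited source): the common case of a small allocation that fits in a preallocated block is handled in a few instructions, while every other request falls through to a general-purpose slow path.

```c
#include <stddef.h>
#include <stdlib.h>

static unsigned char pool[4096];
static size_t pool_used;

/* Slow path: general-purpose fallback for uncommon requests
 * (large sizes, pool exhaustion, error handling, ...). */
static void *slow_path_alloc(size_t n)
{
    return malloc(n);
}

/* Fast path: a handful of instructions covering the common case. */
void *fast_alloc(size_t n)
{
    if (n <= 64 && pool_used + n <= sizeof pool) {
        void *p = pool + pool_used;
        pool_used += n;
        return p;
    }
    return slow_path_alloc(n);
}
```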
Specific networking software architectures have been developed that leverage the concept of a fast path to maximize the performance of packet processing. In these implementations, the networking stack is split into two layers, and the lower layer, typically called the fast path, processes the majority of incoming packets outside the OS environment without incurring any of the OS overheads that degrade overall performance. Only those rare packets that require complex processing are forwarded to the OS networking stack, which performs the necessary management, signaling and control functions.
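A minimal sketch of this split might look like the following C fragment; the structure fields and function names are illustrative assumptions rather than any particular vendor's API. Each incoming packet is classified, and only exceptional packets are punted to the full OS stack.

```c
#include <stdint.h>
#include <stdio.h>

struct packet {
    uint8_t dst_is_local;   /* addressed to this device itself             */
    uint8_t has_options;    /* unusual header options requiring inspection */
    uint8_t is_control;     /* routing or signaling protocol traffic       */
    /* ... headers and payload would follow ... */
};

/* Fast path: simple table lookup and transmit, outside the OS stack. */
static void forward_in_fast_path(const struct packet *p)
{
    (void)p;
    puts("fast path: packet forwarded");
}

/* Slow path: hand the packet to the full OS networking stack. */
static void punt_to_os_stack(const struct packet *p)
{
    (void)p;
    puts("slow path: packet punted to OS stack");
}

void process_packet(const struct packet *p)
{
    /* Exceptional packets take the slow path. */
    if (p->dst_is_local || p->has_options || p->is_control) {
        punt_to_os_stack(p);
        return;
    }
    /* Common case: plain transit traffic handled entirely in the fast path. */
    forward_in_fast_path(p);
}
```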
Some hardware RAID controllers implement a "fast path" for write-through access which bypasses the controller's cache in certain situations. This tends to increase IOPS, particularly for solid-state drives.
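As a rough sketch of such a decision (the condition names and the size threshold are assumptions for illustration, not any specific controller's firmware logic), a write might bypass the controller cache only when nothing about the volume or the request requires the full cached path:

```c
#include <stdbool.h>
#include <stddef.h>

struct volume {
    bool write_through;   /* cache policy is write-through                    */
    bool ssd_backed;      /* backed by low-latency solid-state drives         */
    bool degraded;        /* a degraded/rebuilding array needs the cache path */
};

struct write_io {
    size_t length;        /* bytes to write               */
    bool   unaligned;     /* not aligned to a stripe unit */
};

/* Return true when the write may bypass the controller cache ("fast path"). */
bool use_fast_path(const struct volume *v, const struct write_io *req)
{
    return v->write_through && v->ssd_backed && !v->degraded
        && !req->unaligned && req->length <= 4096;
}
```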
References
Original source: https://en.wikipedia.org/wiki/Fast_path