Byzantine fault tolerance

From HandWiki
Revision as of 16:23, 6 February 2024 by Wikisleeper

Byzantine fault tolerance (BFT) is the dependability of a fault-tolerant computer system, particularly distributed computing systems, where components may fail and there is imperfect information on whether a component has failed. In a "Byzantine failure", a component such as a server can inconsistently appear both failed and functioning to failure-detection systems, presenting different symptoms to different observers.

It is difficult for the other components to declare it failed and shut it out of the network, because they must first reach a consensus regarding which component has failed in the first place. The term is derived from the Byzantine Generals' Problem,[1] where actors must agree on a concerted strategy to avoid catastrophic system failure, but some of the actors are unreliable. Byzantine fault tolerance has also been referred to as interactive consistency, source congruency, error avalanche, the Byzantine agreement problem, the Byzantine generals problem, and Byzantine failure.[2]

Background

A Byzantine fault is any fault presenting different symptoms to different observers.[3] A Byzantine failure is the loss of a system service due to a Byzantine fault in systems that require consensus.[4]

The objective of Byzantine fault tolerance is to be able to defend against failures of system components with or without symptoms that prevent other components of the system from reaching an agreement among themselves, where such an agreement is needed for the correct operation of the system.

Remaining correctly operational components of a Byzantine fault tolerant system will be able to continue providing the system's service as originally intended, assuming there are sufficiently many accurately operating components to maintain the service.

Byzantine failures are considered the most general and most difficult class of failures among the failure modes. The so-called fail-stop failure mode occupies the simplest end of the spectrum. Whereas the fail-stop failure mode simply means that the only way to fail is a node crash, detected by other nodes, Byzantine failures imply no restrictions: the failed node can generate arbitrary data while pretending to be correct. Thus, Byzantine failures can confuse failure detection systems, which makes fault tolerance difficult. Despite the analogy, a Byzantine failure is not necessarily a security problem involving hostile human interference: it can arise purely from electrical faults.

The terms fault and failure are used here according to the standard definitions[5] originally created by a joint committee on "Fundamental Concepts and Terminology" formed by the IEEE Computer Society's Technical Committee on Dependable Computing and Fault-Tolerance and IFIP Working Group 10.4 on Dependable Computing and Fault Tolerance.[6] A version of these definitions is also described in the Dependability Wikipedia page.

Byzantine Generals' Problem

Byzantine refers to the Byzantine Generals' Problem, an agreement problem (described by Leslie Lamport, Robert Shostak and Marshall Pease in their 1982 paper, "The Byzantine Generals Problem")[7] in which a group of generals, each commanding a portion of the Byzantine army, encircle a city. These generals wish to formulate a plan for attacking the city. In its simplest form, the generals must decide only whether to attack or retreat. Some generals may prefer to attack, while others prefer to retreat. The important thing is that every general agree on a common decision, for a halfhearted attack by a few generals would become a rout, and would be worse than either a coordinated attack or a coordinated retreat.

The problem is complicated by the presence of treacherous generals, who may not only cast a vote for a suboptimal strategy but may do so selectively. For instance, if nine generals are voting, four of whom support attacking while four others are in favor of retreat, the ninth general may send a vote of retreat to those generals in favor of retreat, and a vote of attack to the rest. Those who received a retreat vote from the ninth general will retreat, while the rest will attack (which may not go well for the attackers). The problem is complicated further by the generals being physically separated and having to send their votes via messengers who may fail to deliver votes or may forge false votes.
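The nine-generals scenario above can be sketched as a small simulation (a hypothetical illustration, not from the cited paper): the traitor tells each camp what it wants to hear, so the two camps compute different local majorities.

```python
def tally(votes):
    """Count votes and return the winning decision."""
    counts = {}
    for v in votes:
        counts[v] = counts.get(v, 0) + 1
    return max(counts, key=counts.get)

loyal_votes = ["attack"] * 4 + ["retreat"] * 4

# The ninth, treacherous general sends "attack" to the attackers and
# "retreat" to the retreaters, so each camp sees a different majority.
view_of_attackers = loyal_votes + ["attack"]
view_of_retreaters = loyal_votes + ["retreat"]

print(tally(view_of_attackers))   # attack  (5 to 4)
print(tally(view_of_retreaters))  # retreat (5 to 4)
```

The two local views disagree even though every loyal general voted honestly, which is exactly the confusion the problem asks us to prevent.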

Byzantine fault tolerance can be achieved if the loyal (non-faulty) generals have a majority agreement on their strategy. There can be a default vote value given to missing messages. For example, missing messages can be given the value <Null>. Further, if the agreement is that the <Null> votes are in the majority, a pre-assigned default strategy can be used (e.g., retreat).[7]
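The default-vote rule can be sketched as follows (a minimal illustration; the `decide` function and the choice of "retreat" as the pre-assigned default are assumptions for the example, not taken from the cited paper):

```python
NULL = "<Null>"
DEFAULT_STRATEGY = "retreat"  # assumed pre-assigned default for this sketch

def decide(received):
    """Replace missing messages with <Null>, then take the majority;
    if <Null> wins, fall back to the pre-assigned default strategy."""
    votes = [v if v is not None else NULL for v in received]
    counts = {v: votes.count(v) for v in set(votes)}
    winner = max(counts, key=counts.get)
    return DEFAULT_STRATEGY if winner == NULL else winner

# Two messengers failed to deliver; "attack" still has the majority.
print(decide(["attack", "attack", "attack", None, None]))  # attack
# Most messages were lost, so the default strategy applies.
print(decide([None, None, None, "attack", "retreat"]))     # retreat
```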

The typical mapping of this story onto computer systems is that the computers are the generals and their digital communication links are the messengers. Although the problem is formulated in the analogy as a decision-making and security problem, in electronics it cannot be solved simply by cryptographic digital signatures, because failures such as incorrect voltages can propagate through the encryption process. Thus, a component may appear functioning to one component and faulty to another, which prevents forming a consensus on whether the component is faulty.

Examples of Byzantine failures

Several examples of Byzantine failures that have occurred are given in two equivalent journal papers.[3][4] These and other examples are described on the NASA DASHlink web pages.[8] These web pages also describe some phenomenology that can cause Byzantine faults.

Byzantine errors were observed infrequently and at irregular points during endurance testing for the newly constructed Virginia class submarines, at least through 2005 (when the issues were publicly reported).[9]

A similar problem faces honeybee swarms. They have to find a new home, and the many scouts and wider participants have to reach consensus about which of perhaps several candidate homes to fly to. And then they all have to fly there, with their queen.[10]

Early solutions

Several solutions were described by Lamport, Shostak, and Pease in 1982.[7] They began by noting that the Generals' Problem can be reduced to solving a "Commander and Lieutenants" problem where loyal Lieutenants must all act in unison and that their action must correspond to what the Commander ordered in the case that the Commander is loyal.

  • One solution considers scenarios in which messages may be forged, but which will be Byzantine-fault-tolerant as long as the number of traitorous generals does not equal or exceed one third of the generals. The impossibility of dealing with one-third or more traitors ultimately reduces to proving that the one Commander and two Lieutenants problem cannot be solved, if the Commander is traitorous. To see this, suppose we have a traitorous Commander A, and two Lieutenants, B and C: when A tells B to attack and C to retreat, and B and C send messages to each other, forwarding A's message, neither B nor C can figure out who is the traitor, since it is not necessarily A—another Lieutenant could have forged the message purportedly from A. It can be shown that if n is the number of generals in total, and t is the number of traitors in that n, then there are solutions to the problem only when n > 3t and the communication is synchronous (bounded delay).[11]
  • A second solution requires unforgeable message signatures. For security-critical systems, digital signatures (in modern computer systems, this may be achieved in practice using public-key cryptography) can provide Byzantine fault tolerance in the presence of an arbitrary number of traitorous generals. However, for safety-critical systems (where "security" addresses intelligent threats while "safety" addresses the inherent dangers of an activity or mission), simple error detecting codes, such as CRCs, provide weaker but often sufficient coverage at a much lower cost. This is true for both Byzantine and non-Byzantine faults. Furthermore, sometimes security measures weaken safety and vice versa. Thus, cryptographic digital signature methods are not a good choice for safety-critical systems unless there is also a specific security threat.[12] While error detecting codes, such as CRCs, are better than cryptographic techniques, neither provides adequate coverage for active electronics in safety-critical systems. This is illustrated by the Schrödinger CRC scenario, where a CRC-protected message with a single Byzantine faulty bit presents different data to different observers and each observer sees a valid CRC.[3][4]
  • Also presented is a variation on the first two solutions allowing Byzantine-fault-tolerant behavior in some situations where not all generals can communicate directly with each other.
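The first solution can be sketched as a compact simulation of the OM(m) oral-messages algorithm (our own Python sketch, not the paper's pseudocode; the specific equivocation behavior of the traitor, flipping the value for every other lieutenant, is an assumption for illustration):

```python
from collections import Counter

def flip(v):
    return "retreat" if v == "attack" else "attack"

def om(m, commander, lieutenants, value, traitors):
    """Run OM(m); return {lieutenant: decided value}."""
    # The commander sends `value` to each lieutenant. A traitorous
    # commander equivocates, flipping the value for every other lieutenant.
    sent = {lt: (flip(value) if commander in traitors and i % 2 else value)
            for i, lt in enumerate(lieutenants)}
    if m == 0:
        return sent
    decided = {}
    for lt in lieutenants:
        relayed = []
        for peer in lieutenants:
            if peer == lt:
                continue
            # Each peer acts as the commander of OM(m - 1), relaying the
            # value it received to the remaining lieutenants.
            sub = om(m - 1, peer, [x for x in lieutenants if x != peer],
                     sent[peer], traitors)
            relayed.append(sub[lt])
        # Each lieutenant decides by majority over its own value and the relays.
        decided[lt] = Counter([sent[lt]] + relayed).most_common(1)[0][0]
    return decided

# n = 4 generals, t = 1 traitor (the commander): n > 3t, so the three
# loyal lieutenants still reach agreement despite the equivocation.
print(om(1, 0, [1, 2, 3], "attack", traitors={0}))
```

With n = 3 (one commander, two lieutenants) the same traitorous commander would leave the loyal lieutenants unable to agree, matching the impossibility argument above.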

Several system architectures were designed c. 1980 that implemented Byzantine fault tolerance. These include: Draper's FTMP,[13] Honeywell's MMFCS,[14] and SRI's SIFT.[15]

Practical Byzantine fault tolerance

In 1999, Miguel Castro and Barbara Liskov introduced the "Practical Byzantine Fault Tolerance" (PBFT) algorithm,[16] which provides high-performance Byzantine state machine replication, processing thousands of requests per second with sub-millisecond increases in latency.
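The replica arithmetic behind PBFT-style protocols can be illustrated with the standard sizing formulas from the BFT literature (a back-of-the-envelope sketch, not code from Castro and Liskov's implementation): tolerating f Byzantine replicas requires n = 3f + 1 replicas, with quorums of 2f + 1 matching messages.

```python
def pbft_sizes(f):
    """Standard PBFT sizing: replicas and quorum needed to tolerate f faults."""
    n = 3 * f + 1        # total replicas required
    quorum = 2 * f + 1   # matching prepare/commit messages needed
    return n, quorum

for f in range(1, 4):
    n, q = pbft_sizes(f)
    print(f"tolerate f={f} faults: n={n} replicas, quorum={q}")
```

Any two quorums of size 2f + 1 out of 3f + 1 replicas intersect in at least f + 1 replicas, so they share at least one correct replica even in the worst case.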

After PBFT, several BFT protocols were introduced to improve its robustness and performance. For instance, Q/U,[17] HQ,[18] Zyzzyva,[19] and ABsTRACTs[20] addressed performance and cost issues, whereas other protocols, like Aardvark[21] and RBFT,[22] addressed robustness issues. Adapt[23] tried to make use of existing BFT protocols, switching between them in an adaptive way to improve system robustness and performance as the underlying conditions change. In addition, BFT protocols were introduced that leverage trusted components to reduce the number of replicas, e.g., A2M-PBFT-EA[24] and MinBFT.[25]

Software

UpRight[26] is an open source library for constructing services that tolerate both crashes ("up") and Byzantine behaviors ("right") that incorporates many of these protocols' innovations.

In addition to PBFT and UpRight, there is the BFT-SMaRt library,[27] a high-performance Byzantine fault-tolerant state machine replication library developed in Java. This library implements a protocol very similar to PBFT's, plus complementary protocols that offer state transfer and on-the-fly reconfiguration of hosts. BFT-SMaRt is a more recent effort to implement state machine replication and is still actively maintained.

Archistar[28] utilizes a slim BFT layer[29] for communication. It prototypes a secure multi-cloud storage system using Java, licensed under LGPLv2. The focus lies on simplicity and readability; it aims to be the foundation for further research projects.

Askemos[30] is a concurrent, garbage-collected, persistent programming platform atop replicated state machines that tolerates Byzantine faults. It prototypes an execution environment facilitating smart contracts.

Tendermint[31] is general purpose software for BFT state machine replication. Using a socket protocol, it enables state machines to be written in any programming language, and provides means for the state machine to influence elements of the consensus, such as the list of active processes. Tendermint is implemented in the style of a blockchain, which amortizes the overhead of BFT and allows for faster recovery from failure.

Product implementations

One example of BFT in use is bitcoin, a peer-to-peer digital cash system[citation needed]. The bitcoin network works in parallel to generate a blockchain with proof-of-work allowing the system to overcome Byzantine failures and reach a coherent global view of the system's state.

Some aircraft systems, such as the Boeing 777 Aircraft Information Management System (via its ARINC 659 SAFEbus network),[32][33] the Boeing 777 flight control system,[34] and the Boeing 787 flight control systems use Byzantine fault tolerance; because these are real-time systems, their Byzantine fault tolerance solutions must have very low latency. For example, SAFEbus can achieve Byzantine fault tolerance with on the order of a microsecond of added latency.

Some spacecraft, such as the SpaceX Dragon,[35] consider Byzantine fault tolerance in their design.

Byzantine fault tolerance mechanisms use components that repeat an incoming message (or just its signature) to other recipients of that incoming message. All these mechanisms make the assumption that the act of repeating a message blocks the propagation of Byzantine symptoms. For systems that have a high degree of safety or security criticality, these assumptions must be proven to be true to an acceptable level of fault coverage. When providing proof through testing, one difficulty is creating a sufficiently wide range of signals with Byzantine symptoms.[36] Such testing likely will require specialized fault injectors.[37][38]
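The rebroadcast idea can be shown with a toy sketch (our own illustration, not a mechanism from the cited sources): each receiver echoes the value it got from the sender, and disagreement among the echoes exposes a sender that presented different values to different observers.

```python
def detect_equivocation(delivered):
    """delivered maps receiver -> value as that receiver saw it from the sender.
    Each receiver echoes its copy to the others; if the echoes disagree,
    the sender must have equivocated (assuming echoes are relayed faithfully)."""
    echoes = set(delivered.values())
    return len(echoes) > 1

# A Byzantine sender told A "commit" but told B and C "abort".
print(detect_equivocation({"A": "commit", "B": "abort", "C": "abort"}))   # True
# A consistent sender is not flagged.
print(detect_equivocation({"A": "commit", "B": "commit", "C": "commit"})) # False
```

Note the caveat stated above: this only works if repeating a message really does block the propagation of Byzantine symptoms, i.e., if a faulty sender cannot also corrupt the echoes, which is precisely the assumption that must be validated for safety- or security-critical systems.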

References

  1. Lamport, L.; Shostak, R.; Pease, M. (1982). "The Byzantine Generals Problem". ACM Transactions on Programming Languages and Systems 4 (3): 382–401. doi:10.1145/357172.357176. Archived from the original on 13 June 2018. https://web.archive.org/web/20180613015025/https://www.microsoft.com/en-us/research/publication/byzantine-generals-problem/?from=http%3A%2F%2Fresearch.microsoft.com%2Fen-us%2Fum%2Fpeople%2Flamport%2Fpubs%2Fbyz.pdf. 
  2. Kirrmann, Hubert (n.d.). "Fault Tolerant Computing in Industrial Automation". Switzerland: ABB Research Center. p. 94. http://lamspeople.epfl.ch/kirrmann/Pubs/FaultTolerance/Fault_Tolerance_Tutorial_HK.pdf#page=94. 
  3. Driscoll, K.; Hall, B.; Paulitsch, M.; Zumsteg, P.; Sivencrona, H. (2004). The Real Byzantine Generals. pp. 6.D.4–61-11. doi:10.1109/DASC.2004.1390734. 
  4. Driscoll, Kevin; Hall, Brendan; Sivencrona, Håkan; Zumsteg, Phil (2003). Byzantine Fault Tolerance, from Theory to Reality. 2788. pp. 235–248. doi:10.1007/978-3-540-39878-3_19. ISSN 0302-9743. 
  5. Avizienis, A.; Laprie, J.-C.; Randell, B.; Landwehr, C. (2004). "Basic Concepts and Taxonomy of Dependable and Secure Computing". IEEE Transactions on Dependable and Secure Computing 1 (1): 11–33. doi:10.1109/TDSC.2004.2. 
  6. "Dependable Computing and Fault Tolerance". http://www.dependability.org. Retrieved 2015-03-02. 
  7. Lamport, L.; Shostak, R.; Pease, M. (1982). "The Byzantine Generals Problem". ACM Transactions on Programming Languages and Systems 4 (3): 387–389. doi:10.1145/357172.357176. Archived from the original on 7 February 2017. https://web.archive.org/web/20170207104645/http://research.microsoft.com/en-us/um/people/lamport/pubs/byz.pdf. 
  8. Driscoll, Kevin (2012-12-11). "Real System Failures". NASA. https://c3.nasa.gov/dashlink/resources/624/. 
  9. Walter, C.; Ellis, P.; LaValley, B. (2005). The Reliable Platform Service: A Property-Based Fault Tolerant Service Architecture. pp. 34–43. doi:10.1109/HASE.2005.23. 
  10. Seeley, Thomas D. (2010). Honeybee democracy. Princeton, N.J.: Princeton University Press. ISBN 9780691147215. OCLC 587249075. https://www.worldcat.org/oclc/587249075. 
  11. Feldman, P.; Micali, S. (1997). "An optimal probabilistic protocol for synchronous Byzantine agreement". SIAM J. Comput. 26 (4): 873–933. doi:10.1137/s0097539790187084. http://people.csail.mit.edu/silvio/Selected%20Scientific%20Papers/Distributed%20Computation/An%20Optimal%20Probabilistic%20Algorithm%20for%20Byzantine%20Agreement.pdf. 
  12. Paulitsch, M.; Morris, J.; Hall, B.; Driscoll, K.; Latronico, E.; Koopman, P. (2005). Coverage and the Use of Cyclic Redundancy Codes in Ultra-Dependable Systems. pp. 346–355. doi:10.1109/DSN.2005.31. 
  13. Hopkins, Albert L.; Lala, Jaynarayan H.; Smith, T. Basil (1987). The Evolution of Fault Tolerant Computing at the Charles Stark Draper Laboratory, 1955–85. 1. pp. 121–140. doi:10.1007/978-3-7091-8871-2_6. ISSN 0932-5581. 
  14. Driscoll, Kevin; Papadopoulos, Gregory; Nelson, Scott; Hartmann, Gary; Ramohalli, Gautham (1984), Multi-Microprocessor Flight Control System, Wright-Patterson Air Force Base, OH 45433, USA: AFWAL/FIGL U.S. Air Force Systems Command, AFWAL-TR-84-3076 
  15. "SIFT: design and analysis of a fault-tolerant computer for aircraft control". Microelectronics Reliability 19 (3): 190. 1979. doi:10.1016/0026-2714(79)90211-7. ISSN 0026-2714. 
  16. Castro, M.; Liskov, B. (2002). "Practical Byzantine Fault Tolerance and Proactive Recovery". ACM Transactions on Computer Systems (Association for Computing Machinery) 20 (4): 398–461. doi:10.1145/571637.571640. 
  17. Abd-El-Malek, M.; Ganger, G.; Goodson, G.; Reiter, M.; Wylie, J. (2005). Fault-scalable Byzantine Fault-Tolerant Services. Association for Computing Machinery. doi:10.1145/1095809.1095817. 
  18. Cowling, James; Myers, Daniel; Liskov, Barbara; Rodrigues, Rodrigo; Shrira, Liuba (2006). "HQ Replication: A Hybrid Quorum Protocol for Byzantine Fault Tolerance". Proceedings of the 7th USENIX Symposium on Operating Systems Design and Implementation. pp. 177–190. ISBN 1-931971-47-1. http://portal.acm.org/citation.cfm?id=1298455.1298473. 
  19. Kotla, Ramakrishna; Alvisi, Lorenzo; Dahlin, Mike; Clement, Allen; Wong, Edmund (December 2009). "Zyzzyva: Speculative Byzantine Fault Tolerance". ACM Transactions on Computer Systems (Association for Computing Machinery) 27 (4). doi:10.1145/1658357.1658358. 
  20. Guerraoui, Rachid; Kneževic, Nikola; Vukolic, Marko; Quéma, Vivien (2010). "The Next 700 BFT Protocols". Proceedings of the 5th European conference on Computer systems. EuroSys. http://infoscience.epfl.ch/record/144158. 
  21. Clement, A.; Wong, E.; Alvisi, L.; Dahlin, M.; Marchetti, M. (April 22–24, 2009). "Making Byzantine Fault Tolerant Systems Tolerate Byzantine Faults". Symposium on Networked Systems Design and Implementation. USENIX. http://www.usenix.org/events/nsdi09/tech/full_papers/clement/clement.pdf. 
  22. Aublin, P.-L.; Ben Mokhtar, S.; Quéma, V. (July 8–11, 2013). "RBFT: Redundant Byzantine Fault Tolerance". 33rd IEEE International Conference on Distributed Computing Systems. International Conference on Distributed Computing Systems. Archived from the original on August 5, 2013. https://web.archive.org/web/20130805115252/http://www.temple.edu/cis/icdcs2013/program.html. 
  23. Bahsoun, J. P.; Guerraoui, R.; Shoker, A. (2015-05-01). "Making BFT Protocols Really Adaptive". Parallel and Distributed Processing Symposium (IPDPS), 2015 IEEE International: 904–913. doi:10.1109/IPDPS.2015.21. http://ieeexplore.ieee.org/document/7161576/. 
  24. Chun, Byung-Gon; Maniatis, Petros; Shenker, Scott; Kubiatowicz, John (2007-01-01). "Attested Append-only Memory: Making Adversaries Stick to Their Word". Proceedings of Twenty-first ACM SIGOPS Symposium on Operating Systems Principles. SOSP '07 (New York, NY, USA: ACM): 189–204. doi:10.1145/1294261.1294280. ISBN 9781595935915. http://doi.acm.org/10.1145/1294261.1294280. 
  25. Veronese, G. S.; Correia, M.; Bessani, A. N.; Lung, L. C.; Verissimo, P. (2013-01-01). "Efficient Byzantine Fault-Tolerance". IEEE Transactions on Computers 62 (1): 16–30. doi:10.1109/TC.2011.221. ISSN 0018-9340. http://ieeexplore.ieee.org/document/6081855/. 
  26. UpRight. Google Code repository for the UpRight replication library.
  27. BFT-SMaRt. Google Code repository for the BFT-SMaRt replication library.
  28. Archistar. github repository for the Archistar project.
  29. Archistar-bft BFT state-machine. github repository for the Archistar project.
  30. Askemos/BALL project home page
  31. Tendermint github repository for the Tendermint project
  32. Paulitsch, M.; Driscoll, K. (9 January 2015). "Chapter 48: SAFEbus". in Zurawski, Richard. Industrial Communication Technology Handbook, Second Edition. CRC Press. pp. 48-1–48-26. ISBN 978-1-4822-0733-0. https://books.google.com/books?id=ppzNBQAAQBAJ. 
  33. Thomas A. Henzinger; Christoph M. Kirsch (26 September 2001). Embedded Software: First International Workshop, EMSOFT 2001, Tahoe City, CA, USA, October 8-10, 2001. Proceedings. Springer Science & Business Media. pp. 307–. ISBN 978-3-540-42673-8. http://www.csl.sri.com/papers/emsoft01/emsoft01.pdf. 
  34. Yeh, Y.C. (2001). Safety critical avionics for the 777 primary flight controls system. 1. pp. 1C2/1–1C2/11. doi:10.1109/DASC.2001.963311. 
  35. ELC: SpaceX lessons learned [LWN.net]
  36. Nanya, T.; Goosen, H.A. (1989). "The Byzantine hardware fault model". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 8 (11): 1226–1231. doi:10.1109/43.41508. ISSN 0278-0070. 
  37. Martins, Rolando; Gandhi, Rajeev; Narasimhan, Priya; Pertet, Soila; Casimiro, António; Kreutz, Diego; Veríssimo, Paulo (2013). Experiences with Fault-Injection in a Byzantine Fault-Tolerant Protocol. 8275. pp. 41–61. doi:10.1007/978-3-642-45065-5_3. ISSN 0302-9743. 
  38. Kevin R. Driscoll, "Method for testing the sensitive input range of Byzantine filters", US patent 7475318, issued 2009-01-06, assigned to Honeywell International Inc.

External links

  • Byzantine Fault Tolerance in the RKBExplorer
  • UpRight is an open source library for Crash-tolerant and Byzantine-tolerant state machine replication.
  • Bft-SMaRt is a high-performance Byzantine fault-tolerant state machine replication library developed in Java with simplicity and robustness as primary requirements.