Optimistic replication
Optimistic replication, also known as lazy replication,[1][2] is a strategy for replication in which replicas are allowed to diverge.[3]
Traditional pessimistic replication systems try to guarantee from the beginning that all of the replicas are identical to each other, as if there were only a single copy of the data all along. Optimistic replication does away with this in favor of eventual consistency, meaning that replicas are guaranteed to converge only when the system has been quiesced for a period of time. As a result, there is no longer a need to wait for all of the copies to be synchronized when updating data, which improves concurrency and parallelism. The trade-off is that different replicas may require explicit reconciliation later on, which might then prove difficult or even insoluble.
Algorithms
An optimistic replication algorithm consists of five elements (a minimal sketch of how they fit together follows the list):
- Operation submission: Users submit operations at independent sites.
- Propagation: Each site shares the operations it knows about with the rest of the system.
- Scheduling: Each site decides on an order for the operations it knows about.
- Conflict resolution: If there are any conflicts among the operations a site has scheduled, it must modify them in some way.
- Commitment: The sites agree on a final schedule and conflict resolution result, and the operations are made permanent.
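The following sketch is hypothetical code, not drawn from any particular system; it shows how the five elements can fit together for a toy replicated key-value map. It uses operation transfer, a purely syntactic schedule ordered by (timestamp, site id), and last-writer-wins conflict resolution. All class and method names are invented for illustration.

```python
import time

class Site:
    """A single replica in a toy optimistic-replication scheme (illustrative only)."""

    def __init__(self, site_id):
        self.site_id = site_id
        self.ops = set()    # operations known at this site: (timestamp, site_id, key, value)
        self.state = {}     # committed key-value state

    # 1. Operation submission: users submit operations at this site independently.
    def submit(self, key, value):
        self.ops.add((time.time(), self.site_id, key, value))

    # 2. Propagation: share the operations this site knows about with another site.
    def propagate_to(self, other):
        other.ops |= self.ops

    # 3. Scheduling: order the known operations syntactically by (timestamp, site id).
    def schedule(self):
        return sorted(self.ops)

    # 4 & 5. Conflict resolution and commitment: later writes to the same key win,
    # and the resulting state is made permanent.
    def commit(self):
        self.state = {}
        for _ts, _site, key, value in self.schedule():
            self.state[key] = value   # last writer (in schedule order) wins
        return self.state


# Two sites accept writes independently, then exchange operations and converge.
a, b = Site("A"), Site("B")
a.submit("color", "red")
b.submit("color", "blue")
a.propagate_to(b)
b.propagate_to(a)
assert a.commit() == b.commit()   # replicas converge on the same state
```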
There are two strategies for propagation: state transfer, where sites propagate a representation of the current state, and operation transfer, where sites propagate the operations that were performed (essentially, a list of instructions on how to reach the new state).
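As a rough illustration of the difference, assuming a hypothetical key-value data model and message format, a state-transfer message carries a representation of the state itself, while an operation-transfer message carries the instructions that the receiver replays on its own copy:

```python
def apply_state(replica, snapshot):
    """State transfer: overwrite the local state with the received representation."""
    replica.clear()
    replica.update(snapshot)

def apply_ops(replica, operations):
    """Operation transfer: replay the received instructions on the local state."""
    for op, key, value in operations:
        if op == "set":
            replica[key] = value
        elif op == "append":
            replica.setdefault(key, []).append(value)

replica = {}
apply_ops(replica, [("set", "title", "Draft v2"), ("append", "tags", "wip")])
# apply_state(replica, {"title": "Draft v2", "tags": ["wip"]}) reaches the same state
```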
Scheduling and conflict resolution can be either syntactic or semantic. Syntactic systems rely on general information, such as when or where an operation was submitted. Semantic systems are able to make use of application-specific information to make smarter decisions. Note that state-transfer systems generally have no information about the semantics of the data being transferred, and so they have to use syntactic scheduling and conflict resolution.
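A hypothetical example of the distinction: for two concurrent updates to a meeting's attendee list, a syntactic policy such as last-writer-wins can only pick one update by its metadata, whereas a semantic policy that knows the field is a set of attendees can merge both.

```python
# Two concurrent updates to the same calendar entry, each tagged with a timestamp.
update_a = {"timestamp": 10, "attendees": {"alice", "bob"}}
update_b = {"timestamp": 12, "attendees": {"alice", "carol"}}

def resolve_syntactic(a, b):
    """Syntactic: use only general metadata (here, last-writer-wins by timestamp)."""
    return max(a, b, key=lambda u: u["timestamp"])["attendees"]

def resolve_semantic(a, b):
    """Semantic: use application knowledge (attendee sets can simply be unioned)."""
    return a["attendees"] | b["attendees"]

print(resolve_syntactic(update_a, update_b))  # {'alice', 'carol'} -- bob's addition is lost
print(resolve_semantic(update_a, update_b))   # {'alice', 'bob', 'carol'}
```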
Examples
One well-known example of a system based on optimistic replication is the CVS version control system, or any other version control system that uses the copy-modify-merge paradigm. CVS covers each of the five elements (a simplified sketch of the merge step follows this list):
- Operation submission: Users edit local versions of files.
- Propagation: Users manually pull updates from a central server, or push their changes out once they feel the changes are ready.
- Scheduling: Operations are scheduled in the order that they are received by the central server.
- Conflict resolution: When a user pushes to or pulls from the central repository, any conflicts will be flagged for that user to fix manually.
- Commitment: Once the central server accepts the changes which a user pushes, they are permanently committed.
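The sketch below is a deliberately simplified, line-by-line caricature of the copy-modify-merge step, not the actual algorithm used by CVS or diff3: edits that touch different lines merge cleanly, while edits to the same line are flagged as conflicts for manual resolution.

```python
def merge_lines(base, local, remote):
    """Toy three-way merge over same-length line lists: keep whichever side changed
    each line, and flag a conflict if both sides changed the same line differently."""
    merged, conflicts = [], []
    for i, (b, l, r) in enumerate(zip(base, local, remote)):
        if l == r or r == b:
            merged.append(l)          # both agree, or only the local side changed
        elif l == b:
            merged.append(r)          # only the remote side changed
        else:
            merged.append(f"<<< {l} ||| {r} >>>")
            conflicts.append(i)       # both changed the line: needs manual resolution
    return merged, conflicts

base   = ["title: draft", "status: open", "owner: none"]
local  = ["title: draft", "status: closed", "owner: none"]    # user's working copy
remote = ["title: final", "status: open", "owner: alice"]     # latest on the server
merged, conflicts = merge_lines(base, local, remote)
print(merged)      # non-overlapping edits merge cleanly
print(conflicts)   # [] here; overlapping edits would be listed for manual fixing
```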
A special case of replication is synchronization, where there are only two replicas. For example, personal digital assistants (PDAs) allow users to edit data either on the PDA or a computer, and then to merge these two datasets together. Note, however, that replication is a broader problem than synchronization, since there may be more than two replicas.
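A minimal sketch of such a two-replica synchronization, assuming a hypothetical record format with a per-record modification timestamp and a simple newest-copy-wins rule:

```python
def synchronize(device, desktop):
    """Toy two-way sync: for every record id, keep whichever copy was modified
    most recently and write it back to both replicas (hypothetical data model)."""
    for record_id in set(device) | set(desktop):
        a, b = device.get(record_id), desktop.get(record_id)
        newest = max((r for r in (a, b) if r is not None),
                     key=lambda r: r["modified"])
        device[record_id] = desktop[record_id] = newest

device  = {"c1": {"name": "Ann",  "phone": "555-0100", "modified": 5}}
desktop = {"c1": {"name": "Ann",  "phone": "555-0199", "modified": 9},
           "c2": {"name": "Bela", "phone": "555-0123", "modified": 2}}
synchronize(device, desktop)
assert device == desktop   # after synchronization the two replicas are identical
```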
Other examples include:
- Usenet, and other systems that use the Thomas write rule (see RFC 677)
- Multi-master database replication[4]
- The Coda distributed filesystem
- Operational Transformation, a theoretical framework for group editing
- Peer-to-peer wikis
- Conflict-free replicated data types
- The Bayou[5] distributed database
- IceCube[6]
Implications
Applications built on top of optimistically replicated databases need to be careful to ensure that the delays with which updates are observed do not impair the correctness of the application.
As a simple example, if an application contains a way of viewing some part of the database state, and a way of editing it, then users may edit that state but then not see it changing in the viewer. Alarmed that their edit "didn't work", they may try it again, potentially more than once. If the updates are not idempotent (e.g., they increment a value), this can lead to disaster. Even if they are idempotent, the spurious database updates can lead to performance bottlenecks, especially when the database systems are processing heavy loads; this can become a vicious circle.
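A small illustration, with invented function names, of why idempotence matters when users retry an update they cannot yet see:

```python
balance = {"points": 100}

def add_points_non_idempotent(db, amount):
    """Increment: applying a retried update twice changes the result."""
    db["points"] += amount

def set_points_idempotent(db, new_total):
    """Absolute write: applying a retried update twice is harmless."""
    db["points"] = new_total

# A user who does not yet see their update reflected retries it:
add_points_non_idempotent(balance, 10)
add_points_non_idempotent(balance, 10)   # retry -> points are now 120, not 110

balance = {"points": 100}
set_points_idempotent(balance, 110)
set_points_idempotent(balance, 110)      # retry -> still 110
```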
Testing of applications is often done in a testing environment that is smaller in size (perhaps only a single server) and less loaded than the "live" environment. The replication behaviour of such an installation may differ from that of the live environment in ways that make replication lag unlikely to be observed in testing, masking replication-sensitive bugs. Application developers must be very careful about the assumptions they make about the effect of a database update, and must be sure to simulate lag in their testing environments.
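One way to do this, sketched below with a hypothetical test double rather than a real database driver, is to make the lag explicit: writes go to a primary copy, while reads are served from a replica that only catches up when the test says so.

```python
class LaggyReplicatedStore:
    """Test double that makes replication lag explicit: writes go to the primary,
    reads come from a replica that only catches up when sync() is called."""

    def __init__(self):
        self.primary = {}
        self.replica = {}
        self.pending = []

    def write(self, key, value):
        self.primary[key] = value
        self.pending.append((key, value))

    def read(self, key):
        return self.replica.get(key)      # stale until sync() runs

    def sync(self):
        for key, value in self.pending:
            self.replica[key] = value
        self.pending.clear()


store = LaggyReplicatedStore()
store.write("status", "shipped")
assert store.read("status") is None       # the application must tolerate this lag window
store.sync()
assert store.read("status") == "shipped"
```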
Optimistically replicated databases have to be very careful about offering features such as validity constraints on data. If any given update may or may not be accepted based on the current state of the record, then two updates (A and B) may be individually legal against the starting state of the system, but one or more of the updates may not be legal against the state of the system after the other update (e.g., A and B are both legal, but AB or BA are illegal). If A and B are both initiated at roughly the same time within the database, then A may be successfully applied on some nodes and B on others, but as soon as A and B "meet" and one is attempted on a node which has already applied the other, a conflict will be found. The system must, in this case, decide which update finally "wins", and arrange for any nodes that have already applied the losing update to revert it. However, some nodes may temporarily expose the state with the reverted update, and there may be no way to inform the user who initiated the update of its failure, without requiring them to wait (potentially forever) for confirmation of acceptance at every node.
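A concrete, hypothetical instance of the problem: if each replica enforces a "balance must not go negative" constraint, two withdrawals that are each legal against the starting balance can both be accepted on different nodes, and the violation only appears when the two updates meet.

```python
START_BALANCE = 100

def legal(balance, withdrawal):
    """Validity constraint: the balance may never go negative."""
    return balance - withdrawal >= 0

update_a, update_b = 70, 60

# Each update is checked, and accepted, against the starting state on its own node.
assert legal(START_BALANCE, update_a)              # node 1 accepts A
assert legal(START_BALANCE, update_b)              # node 2 accepts B

# When the updates meet, whichever arrives second now violates the constraint,
# so one of them must be chosen as the loser and rolled back wherever it was applied.
assert not legal(START_BALANCE - update_a, update_b)
assert not legal(START_BALANCE - update_b, update_a)
```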
References
- [1] Ladin, R.; Liskov, B.; Shrira, L.; Ghemawat, S. (1992). "Providing high availability using lazy replication". ACM Transactions on Computer Systems 10 (4): 360–391. doi:10.1145/138873.138877.
- [2] Ladin, R.; Liskov, B.; Shrira, L. (1990). "Lazy replication: exploiting the semantics of distributed services". Proceedings of the Ninth Annual ACM Symposium on Principles of Distributed Computing. pp. 43–57. doi:10.1145/93385.93399.
- [3] Saito, Yasushi; Shapiro, Marc (2005). "Optimistic replication". ACM Computing Surveys 37 (1): 42–81. doi:10.1145/1057977.1057980.
- [4] Gray, J.; Helland, P.; O’Neil, P.; Shasha, D. (1996). "The dangers of replication and a solution". Proceedings of the 1996 ACM SIGMOD International Conference on Management of Data. pp. 173–182. doi:10.1145/233269.233330. ftp://ftp.research.microsoft.com/pub/tr/tr-96-17.pdf.
- [5] Terry, D.B.; Theimer, M.M.; Petersen, K.; Demers, A.J.; Spreitzer, M.J.; Hauser, C.H. (1995). "Managing update conflicts in Bayou, a weakly connected replicated storage system". Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles. pp. 172–182. doi:10.1145/224056.224070.
- [6] Kermarrec, A.M.; Rowstron, A.; Shapiro, M.; Druschel, P. (2001). "The IceCube approach to the reconciliation of divergent replicas". Proceedings of the Twentieth Annual ACM Symposium on Principles of Distributed Computing. pp. 210–218. doi:10.1145/383962.384020.
External links
- Saito, Yasushi; Shapiro, Marc (September 2003). "Optimistic Replication". Microsoft. https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-2003-60.pdf.
Original source: https://en.wikipedia.org/wiki/Optimistic_replication.