Tapestry (DHT)

Tapestry
Written in: C++
Type: Peer-to-peer
Website: current.cs.ucsb.edu/projects/chimera/

Tapestry is a peer-to-peer overlay network which provides a distributed hash table, routing, and multicasting infrastructure for distributed applications.[1] The Tapestry peer-to-peer system offers efficient, scalable, self-repairing, location-aware routing to nearby resources.

Introduction

The first generation of peer-to-peer applications, including Napster and Gnutella, had limitations that restricted scalability: Napster depended on a central directory, and Gnutella on scoped broadcast queries. To address these problems, a second generation of P2P applications was developed, including Tapestry, Chord, Pastry, and CAN. These overlays implement a basic key-based routing mechanism, which allows deterministic routing of messages and adaptation to node failures in the overlay network. Of the named networks, Pastry is the closest to Tapestry, as both adopt the routing algorithm of Plaxton et al.

Tapestry is an extensible infrastructure that provides decentralized object location and routing with a focus on efficiency and minimal message latency. It achieves this by constructing locally optimal routing tables at initialization and maintaining them afterwards to reduce routing stretch. Furthermore, Tapestry lets applications determine how objects are distributed according to their needs, and it likewise lets applications implement multicasting in the overlay network.

Algorithm

API

Each node is assigned a unique nodeID uniformly distributed in a large identifier space. Tapestry uses SHA-1 to produce a 160-bit identifier space, represented as a 40-digit hexadecimal key. Application-specific endpoints are assigned unique identifiers (GUIDs) in the same way. NodeIDs and GUIDs are roughly evenly distributed in the overlay network, with each node storing several different IDs. Experiments show that Tapestry's efficiency increases with network size, so multiple applications sharing the same overlay network increase efficiency; a unique application identifier differentiates between them. Tapestry publishes and routes objects on a best-effort basis.
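
As a concrete illustration, an identifier of this form can be derived by hashing a stable name (such as a node's network address) and printing the 20-byte digest as hexadecimal. The sketch below is an assumption-laden example using OpenSSL's SHA1, not the project's actual code:

    #include <openssl/sha.h>
    #include <cstdio>
    #include <string>

    // Illustrative only: derive a 160-bit Tapestry-style identifier
    // from an arbitrary name (e.g. an "ip:port" string) and render
    // it as the 40 hex digits the overlay routes on.
    std::string make_node_id(const std::string& name) {
        unsigned char digest[SHA_DIGEST_LENGTH];   // 20 bytes = 160 bits
        SHA1(reinterpret_cast<const unsigned char*>(name.data()),
             name.size(), digest);

        char hex[2 * SHA_DIGEST_LENGTH + 1];
        for (int i = 0; i < SHA_DIGEST_LENGTH; ++i)
            std::snprintf(hex + 2 * i, 3, "%02x", digest[i]);
        return std::string(hex, 2 * SHA_DIGEST_LENGTH);
    }

The API exposes four operations: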

  • PublishObject
  • UnPublishObject
  • RouteToObject
  • RouteToNode (routes to an exact match rather than the closest match)
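
A minimal sketch of how an application might see these operations follows; the signatures and the Msg type are illustrative assumptions, not the actual Tapestry interface:

    #include <string>

    using GUID   = std::string;  // 40-hex-digit object identifier
    using NodeID = std::string;  // 40-hex-digit node identifier

    struct Msg { std::string payload; };  // placeholder message type

    // Hypothetical interface mirroring the four operations above.
    class TapestryAPI {
    public:
        // Advertise a local object under its GUID (best-effort).
        virtual void PublishObject(const GUID& g) = 0;
        // Withdraw a previously published object.
        virtual void UnPublishObject(const GUID& g) = 0;
        // Route a message to some server holding the object.
        virtual void RouteToObject(const GUID& g, const Msg& m) = 0;
        // Route to the exactly matching node, not the closest match.
        virtual void RouteToNode(const NodeID& n, const Msg& m) = 0;
        virtual ~TapestryAPI() = default;
    };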

Routing

Routing mesh

Each identifier is mapped to a live node called its root. If a node's nodeID equals G, that node is the root; otherwise, the routing table's nodeIDs and IP addresses are used to forward the message toward the root. At each hop the message is routed progressively closer to G by incremental prefix routing. Each neighbor map has multiple levels, where each level contains links to nodes matching the ID up to a certain digit position. The primary ith entry in the jth level is the ID and location of the closest node whose ID begins with prefix(N, j-1)+i. This means that level 1 holds links to nodes sharing no prefix digits, level 2 holds links to nodes sharing the first digit, and so on. As a result, routing takes approximately $\log_B N$ hops in a network of size N with IDs of base B (hex: B = 16). If an exact ID cannot be found, the routing table routes to the closest matching node. For fault tolerance, nodes keep $c$ secondary links, so the routing table has size $c \cdot B \cdot \log_B N$.
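
To make the per-hop rule concrete, here is a small sketch of next-hop selection over a level/digit-indexed table. The table layout, shared_prefix_len, and the fallback noted in the final comment are illustrative assumptions, not the actual implementation:

    #include <array>
    #include <optional>
    #include <string>

    constexpr int BASE   = 16;  // hex digits, B = 16
    constexpr int DIGITS = 40;  // 160-bit IDs

    // table[l][d] holds the primary neighbor whose ID shares the
    // first l digits with our own ID and has digit d at position l
    // (0-indexed; the text's "level j" matches j-1 digits).
    using RoutingTable =
        std::array<std::array<std::optional<std::string>, BASE>, DIGITS>;

    int shared_prefix_len(const std::string& a, const std::string& b) {
        int n = 0;
        while (n < DIGITS && a[n] == b[n]) ++n;
        return n;
    }

    int hex_digit(char c) {  // assumes lowercase hex IDs
        return (c <= '9') ? c - '0' : c - 'a' + 10;
    }

    // One hop of prefix routing: extend the matched prefix by one digit.
    std::optional<std::string> next_hop(const RoutingTable& table,
                                        const std::string& self,
                                        const std::string& dest) {
        int level = shared_prefix_len(self, dest);
        if (level == DIGITS) return std::nullopt;  // we are the root
        return table[level][hex_digit(dest[level])];
        // A real implementation falls back to the closest matching
        // entry when this slot is empty, and to secondary links on
        // failure of the primary.
    }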

Object publication and location

Participants in the network can publish objects by periodically routing a publish message toward the object's root node. Each node along the path stores a pointer mapping the object's ID to the publisher. Multiple servers can publish pointers to the same object; the redundant links are prioritized by latency and/or locality.

Objects are located by routing a message toward the root of the object. Each node along the path checks its pointer mappings and redirects the request appropriately. The effect of this routing is that paths from nearby sources heading to the same destination converge quickly.
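
A rough sketch of both halves, assuming each node keeps a local map from GUIDs to publisher addresses (all names here are illustrative):

    #include <map>
    #include <string>
    #include <vector>

    // Per-node pointer cache: object GUID -> publisher addresses,
    // kept ordered by latency/locality (illustrative assumption).
    std::map<std::string, std::vector<std::string>> pointer_cache;

    // Run at each node a publish message passes through on its way
    // to the object's root: remember who is serving the object.
    void on_publish(const std::string& guid, const std::string& server) {
        pointer_cache[guid].push_back(server);
    }

    // Run at each node a locate message passes through: if a pointer
    // is already cached, redirect to that server instead of routing
    // all the way to the root.
    std::string on_locate(const std::string& guid,
                          const std::string& hop_toward_root) {
        auto it = pointer_cache.find(guid);
        if (it != pointer_cache.end() && !it->second.empty())
            return it->second.front();  // closest known server
        return hop_toward_root;         // keep routing toward root
    }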

Dynamic nodes

Node insertion

A new node becomes the root for its own nodeID. The existing root for that ID finds the length of the longest prefix it shares with the new nodeID and then sends a multicast message that reaches all existing nodes sharing the same prefix. These nodes add the new node to their routing tables, and the new node may take over as root for some of the old root's objects. The contacted nodes provide the new node with a temporary neighborhood list, and the new node then performs an iterative nearest-neighbor search to fill every level of its routing table.
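
The following self-contained sketch mimics those steps over an in-memory node list standing in for the acknowledged multicast; every name in it is hypothetical:

    #include <string>
    #include <vector>

    struct Node {
        std::string id;                // hex nodeID
        std::vector<Node*> neighbors;  // flattened routing state
    };

    int prefix_len(const std::string& a, const std::string& b) {
        size_t n = 0;
        while (n < a.size() && n < b.size() && a[n] == b[n]) ++n;
        return static_cast<int>(n);
    }

    // `root` is the current root for joining.id; `all` stands in for
    // the prefix multicast that reaches nodes sharing the prefix.
    void insert_node(Node& joining, const Node& root,
                     std::vector<Node*>& all) {
        int p = prefix_len(root.id, joining.id);
        for (Node* n : all) {
            if (prefix_len(n->id, joining.id) >= p) {
                n->neighbors.push_back(&joining);  // add to routing tables
                joining.neighbors.push_back(n);    // temporary neighbor list
            }
        }
        // The real protocol then runs iterative nearest-neighbor
        // searches to fill every level of the new node's table, and
        // hands over objects the newcomer is now root for.
    }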

Node departure

To leave the network, a node broadcasts its intention to leave and transmits a replacement node for each level in the routing tables of the other nodes. Objects held at the leaving node are redistributed or replenished from redundant copies.

Node failure

Unexpected node failure is handled through redundancy in the network and backup pointers to reestablish damaged links.

Applications

Tapestry provides an overlay routing network that is stable under a variety of network conditions. This makes it an ideal infrastructure for distributed applications and services. Applications based on Tapestry include:

  • OceanStore − Distributed storage utility on PlanetLab
  • Mnemosyne − Steganographic file system
  • Bayeux − Self-organizing multicasting application
  • SpamWatch − Decentralized spam filter

Developers

Tapestry was developed by Ben Y. Zhao, Ling Huang, Jeremy Stribling, Sean C. Rhea, Anthony D. Joseph and John D. Kubiatowicz.

References

  1. Zhao, Ben Y.; Huang, Ling; Stribling, Jeremy; Rhea, Sean C.; Joseph, Anthony D.; Kubiatowicz, John D. (2004). "Tapestry: A Resilient Global-scale Overlay for Service Deployment". IEEE Journal on Selected Areas in Communications 22 (1): 41–53. doi:10.1109/JSAC.2003.818784. http://pdos.csail.mit.edu/~strib/docs/tapestry/tapestry_jsac03.pdf. Retrieved 13 January 2015. 
