Building a Distributed Hash Table with the DRT: Part 2

In the previous article in this series, we introduced the concept of a Distributed Hash Table (DHT) and explained that the DRT makes it easier to build a DHT by offering key-based routing. In this article, we’ll describe an architecture for a DHT that uses key-based routing. Next time, we’ll start looking at the actual DRT data structures and API calls in a real DHT implementation.

Like every other DHT, ours will allow machines to store and retrieve BLOBs (Binary Large Objects) using the operations put and get. This is really no different from a traditional hash table that runs on a single machine. When an application calls put, it must supply the object that it wishes to store and the key that represents that object. When an app calls get, it must supply the key representing the object it wishes to retrieve. Unlike a traditional hash table, a DHT allows many machines to execute get and put operations simultaneously. BLOBs can be accessed and modified by multiple computers, and machines may join and leave the DHT without any BLOBs being lost.
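To make the shape of this interface concrete, here’s a minimal sketch in C. The handle type dht_t and the functions dht_put and dht_get are hypothetical names invented for this series; they are not part of the DRT API.

```c
#include <stddef.h>

typedef struct dht dht_t;  /* opaque handle to a running DHT node */

/* Store a BLOB under a 256-bit key. */
int dht_put(dht_t *dht, const unsigned char key[32],
            const void *blob, size_t blob_len);

/* Retrieve the BLOB stored under a 256-bit key. On success, *blob
   points to a freshly allocated buffer the caller must free. */
int dht_get(dht_t *dht, const unsigned char key[32],
            void **blob, size_t *blob_len);
```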

In our DHT, each node will represent itself with a node ID. The node ID will be a 256-bit number that must be unique amongst all the machines participating in the DHT network. When a node starts up, it will use the DRT API to publish its node ID so that it can be discovered by peers. Using the DRT API, it is possible to search for other machines by node ID. It is also possible to search for the machine whose node ID is closest to an arbitrary number. Just to be crystal clear, here’s an example: in my DHT I have six computers with node IDs 100, 261, 301, 455, 671 and 802. If I use the DRT API to find the machine publishing the node ID closest to the number 300, I will find the node with node ID 301.
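Here’s a toy C program that shows the “closest node ID” rule in action, using small integers in place of 256-bit IDs. None of this touches the DRT API; it just mirrors the matching logic of the example above.

```c
#include <stdio.h>
#include <stddef.h>

/* Absolute distance between two IDs. */
static unsigned long long dist(unsigned long long a, unsigned long long b)
{
    return a > b ? a - b : b - a;
}

int main(void)
{
    unsigned long long ids[] = { 100, 261, 301, 455, 671, 802 };
    unsigned long long target = 300;
    size_t best = 0;

    /* Linear scan for the ID numerically closest to the target. */
    for (size_t i = 1; i < sizeof ids / sizeof ids[0]; i++)
        if (dist(ids[i], target) < dist(ids[best], target))
            best = i;

    printf("closest to %llu is %llu\n", target, ids[best]); /* prints 301 */
    return 0;
}
```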

This ability to find the machine with a node ID closest to a given number is critical in the implementation of a DHT.

Let’s look under the hood of a put operation. The application calls our DHT put method, supplying the BLOB to store and the key to identify it. Our DHT uses the DRT API to find the computer with the node ID closest to the key supplied by the application; this computer will become the new owner of the BLOB. The source computer retrieves the IP address and port of the target machine using the DRT API, establishes a TCP connection, and sends the key and the BLOB to the target computer.
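Here’s a rough sketch of that flow in C. All three helpers below (drt_find_closest, node_connect, send_put) are hypothetical placeholders standing in for the real DRT search calls and socket code; this shows the shape of the operation, not a working implementation of the wire protocol.

```c
#include <stddef.h>

typedef struct dht dht_t;  /* opaque handle, as in the interface sketch */
typedef struct { char ip[46]; unsigned short port; } node_addr_t;

/* Hypothetical: run a DRT search for the node whose published node ID
   is closest to 'key', and return its IP address and port. */
int drt_find_closest(dht_t *dht, const unsigned char key[32],
                     node_addr_t *addr);

/* Hypothetical: open a TCP connection to the target node. */
int node_connect(const node_addr_t *addr);

/* Hypothetical: transmit the key and the BLOB over the connection. */
int send_put(int sock, const unsigned char key[32],
             const void *blob, size_t blob_len);

int dht_put(dht_t *dht, const unsigned char key[32],
            const void *blob, size_t blob_len)
{
    node_addr_t owner;
    if (drt_find_closest(dht, key, &owner) != 0) /* 1. find the new owner */
        return -1;

    int sock = node_connect(&owner);             /* 2. connect to it */
    if (sock < 0)
        return -1;

    return send_put(sock, key, blob, blob_len);  /* 3. hand over the BLOB */
}
```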

The get operation is very similar. The application calls our DHT get method, supplying the key of the object it wishes to retrieve. Our DHT uses the DRT API to locate the computer with the node ID closest to the key. It retrieves the IP address and port that the target computer is listening on and establishes a TCP connection. It communicates the key representing the object it wishes to retrieve, then downloads the object and supplies it back to the application.
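Continuing the sketch from the put example, here is the matching get flow. Again, drt_find_closest, node_connect, and recv_object are hypothetical placeholders, redeclared here so the fragment stands on its own.

```c
#include <stddef.h>

typedef struct dht dht_t;
typedef struct { char ip[46]; unsigned short port; } node_addr_t;

int drt_find_closest(dht_t *dht, const unsigned char key[32],
                     node_addr_t *addr);
int node_connect(const node_addr_t *addr);

/* Hypothetical: send the key, then download the BLOB into a freshly
   allocated buffer that the caller must free. */
int recv_object(int sock, const unsigned char key[32],
                void **blob, size_t *blob_len);

int dht_get(dht_t *dht, const unsigned char key[32],
            void **blob, size_t *blob_len)
{
    node_addr_t holder;
    if (drt_find_closest(dht, key, &holder) != 0) /* 1. locate the holder */
        return -1;

    int sock = node_connect(&holder);             /* 2. connect to it */
    if (sock < 0)
        return -1;

    return recv_object(sock, key, blob, blob_len); /* 3. download and return */
}
```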

This is all well and good, but what happens if a node goes offline? All of the BLOBs that it had previously stored disappear, right? To prevent this from happening, each node replicates its BLOBs to the machines with node IDs numerically closest to its own (its neighbors). If a node disappears, its neighbors are ready to take over: they can service get requests for keys that the departed node used to be responsible for. The DRT API raises events when neighboring nodes leave and when new neighbors join. For each published key, the DRT API tracks the five computers publishing keys numerically greater and the five machines publishing keys numerically smaller; this group of ten machines is called the leaf set.
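Here’s one way the replication logic might hang off those notifications. The callback below and its two helpers are hypothetical; how the DRT actually surfaces leaf set changes (and what a node should re-replicate in each case) is exactly the sort of detail we’ll dig into next time.

```c
typedef enum { NEIGHBOR_JOINED, NEIGHBOR_LEFT } neighbor_change_t;

/* Hypothetical: push copies of every BLOB we own to a new neighbor. */
void replicate_local_blobs_to(const unsigned char neighbor_id[32]);

/* Hypothetical: re-replicate the keys a departed neighbor was
   responsible for, so the leaf set still holds enough copies. */
void rereplicate_keys_of(const unsigned char neighbor_id[32]);

/* Hypothetical callback, invoked when the DRT reports that a
   neighbor has joined or left our leaf set. */
void on_leafset_changed(neighbor_change_t change,
                        const unsigned char neighbor_id[32])
{
    if (change == NEIGHBOR_JOINED)
        replicate_local_blobs_to(neighbor_id);
    else
        rereplicate_keys_of(neighbor_id);
}
```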

That’s all for now. Next time we’ll start looking at the DRT data structures and function calls that make all this actually happen.

Have fun!

-Tyler