Class Summary |
DataPublisherJob |
|
DataRepublishingSelectorJob |
|
ExpireLeasesJob |
Periodically search through all leases to find expired ones, failing those
keys and firing off a new search for each (in case we want the key later, we
might as well prefetch it) |
ExpireRoutersJob |
Go through the routing table and pick routers that are performing poorly or
are out of date, but don't expire routers we're actively tunneling through. |
ExploreJob |
Search for a particular key iteratively until we either find a value, we run
out of peers, or the bucket the key belongs in has sufficient values in it. |
ExploreKeySelectorJob |
Go through the kbuckets and generate random keys for routers in buckets not
yet full, attempting to keep a pool of keys we can explore with (at least one
per bucket) |
FloodfillDatabaseLookupMessageHandler |
Build a HandleDatabaseLookupMessageJob whenever a DatabaseLookupMessage arrives |
FloodfillDatabaseStoreMessageHandler |
Create a HandleDatabaseStoreMessageJob whenever a DatabaseStoreMessage arrives |
FloodfillNetworkDatabaseFacade |
|
FloodfillPeerSelector |
|
FloodfillStoreJob |
|
FloodfillVerifyStoreJob |
Send a netDb lookup to a random floodfill peer; if the key is found, great,
but if the peer replies that it doesn't know the key, queue up another store
of the key to a random floodfill peer (via FloodfillStoreJob) |
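The verify-then-restore flow can be sketched in plain Java. Everything below is an illustrative stand-in, not the real I2P API: the actual job exchanges netDb messages asynchronously, while this mock `Floodfill` class just answers lookups from an in-memory set.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of the store-verify loop: after a store, ask a random floodfill
// peer whether it has the key; if not, queue a re-store to another peer.
// All names here are hypothetical, not the real FloodfillVerifyStoreJob API.
public class VerifyStoreSketch {
    static class Floodfill {
        final java.util.Set<String> db = new java.util.HashSet<>();
        boolean knows(String key) { return db.contains(key); }
        void store(String key) { db.add(key); }
    }

    /** Returns true if the probed peer already had the key (store verified). */
    static boolean verifyAndMaybeRestore(String key, List<Floodfill> peers, Random rnd) {
        Floodfill probe = peers.get(rnd.nextInt(peers.size()));
        if (probe.knows(key))
            return true;                                     // verified, done
        peers.get(rnd.nextInt(peers.size())).store(key);     // re-store (FloodfillStoreJob)
        return false;
    }

    public static void main(String[] args) {
        List<Floodfill> peers = new ArrayList<>();
        peers.add(new Floodfill());
        Random rnd = new Random(1);
        // First probe misses and triggers a re-store; the second then verifies.
        System.out.println(verifyAndMaybeRestore("k", peers, rnd)); // false
        System.out.println(verifyAndMaybeRestore("k", peers, rnd)); // true
    }
}
```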
FloodLookupMatchJob |
|
FloodLookupSelector |
|
FloodLookupTimeoutJob |
|
FloodOnlyLookupMatchJob |
|
FloodOnlyLookupSelector |
|
FloodOnlyLookupTimeoutJob |
|
FloodOnlySearchJob |
Try sending a search to some floodfill peers, failing completely if we don't get
a match from one of those peers, with no fallback to the kademlia search |
FloodSearchJob |
Try sending a search to some floodfill peers, but if we don't get a successful
match within half the allowed lookup time, give up and start querying through
the normal (kademlia) channels. |
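The split between FloodOnlySearchJob and FloodSearchJob comes down to what happens after the floodfill phase. A minimal sketch of FloodSearchJob's two-phase time budget, with hypothetical names (the real job is asynchronous and event-driven, not a blocking call):

```java
// Sketch of the two-phase lookup policy: spend half the timeout asking
// floodfill peers, then fall back to the iterative kademlia search.
// Lookup, search(), and the blocking style are illustrative assumptions.
public class FloodSearchSketch {
    interface Lookup {
        boolean run(long timeoutMs); // returns true on a successful match
    }

    static boolean search(Lookup floodfill, Lookup kademlia, long totalTimeoutMs) {
        // Phase 1: query a few floodfill peers, allowing half the budget.
        if (floodfill.run(totalTimeoutMs / 2))
            return true;
        // Phase 2: give up on floodfills and use the normal kademlia
        // channels for the remaining time (FloodOnlySearchJob skips this).
        return kademlia.run(totalTimeoutMs - totalTimeoutMs / 2);
    }

    public static void main(String[] args) {
        boolean found = search(t -> false, t -> true, 20_000);
        System.out.println(found); // floodfill misses, kademlia succeeds: true
    }
}
```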
HandleFloodfillDatabaseLookupMessageJob |
Handle a lookup for a key received from a remote peer. |
HandleFloodfillDatabaseStoreMessageJob |
Receive DatabaseStoreMessage data and store it in the local net db |
HarvesterJob |
Simple job to try to keep our peer references up to date by aggressively
requerying them every few minutes. |
KademliaNetworkDatabaseFacade |
Kademlia based version of the network database |
KBucketImpl |
|
KBucketSet |
In memory storage of buckets sorted by the XOR metric from the local router's
identity, with bucket N containing routers BASE^N through BASE^(N+1) away, up
through a distance of 2^256 (since keys are SHA256 hashes). |
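The bucket index computation can be sketched as follows, assuming BASE = 2 for simplicity: the bucket for a key is determined by the bit length of its XOR distance from the local identity. The names below are illustrative, not the real KBucketSet API.

```java
import java.math.BigInteger;

// Sketch of the kbucket index computation with BASE = 2:
// bucket N holds peers whose XOR distance is in [2^N, 2^(N+1)).
public class BucketIndexDemo {
    static int bucketIndex(byte[] local, byte[] key) {
        byte[] xor = new byte[local.length];
        for (int i = 0; i < xor.length; i++)
            xor[i] = (byte) (local[i] ^ key[i]);
        // The bit length of the distance picks the bucket;
        // treat distance 0 (our own key) as bucket 0.
        int bits = new BigInteger(1, xor).bitLength();
        return bits == 0 ? 0 : bits - 1;
    }

    public static void main(String[] args) {
        byte[] local = {0x00};
        System.out.println(bucketIndex(local, new byte[]{0x01})); // distance 1 -> bucket 0
        System.out.println(bucketIndex(local, new byte[]{0x05})); // distance 5 -> bucket 2
    }
}
```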
OnRepublishFailure |
|
OnRepublishSuccess |
|
PeerSelector |
|
PersistentDataStore |
Write out keys to disk when we get them and periodically read ones we don't
know about into memory; newly read routers are also added to the routing table. |
ReplyNotVerifiedJob |
The peer gave us a reference to a new router, but we were NOT able to fetch it |
ReplyVerifiedJob |
The peer gave us a reference to a new router, and we were able to fetch it |
RepublishLeaseSetJob |
Run periodically for each locally created leaseSet to cause it to be republished
if the client is still connected. |
RouterGenerator |
|
SearchJob |
Search for a particular key iteratively until we either find a value or we
run out of peers |
SearchMessageSelector |
Check whether the message is a reply from the peer regarding the current
search
SearchReplyJob |
|
SearchState |
Data related to a particular search |
SearchUpdateReplyFoundJob |
Called after a match to a db search is found |
StartExplorersJob |
Fire off search jobs for random keys from the explore pool, up to MAX_PER_RUN
at a time. |
StoreJob |
|
StoreMessageSelector |
Check whether the message is a reply from the peer regarding the current
store
StoreState |
|
TransientDataStore |
|
XORComparator |
Help sort Hashes in relation to a base key using the XOR metric |
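The XOR-metric ordering can be sketched with plain byte arrays. This is an illustrative stand-in, assuming equal-length hashes; the real XORComparator operates on I2P Hash objects.

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch of sorting hashes by XOR distance from a base key:
// compare byte-by-byte, most significant byte first.
public class XorDistanceDemo {
    static Comparator<byte[]> xorComparator(final byte[] base) {
        return (a, b) -> {
            for (int i = 0; i < base.length; i++) {
                int da = (a[i] ^ base[i]) & 0xff; // unsigned distance byte
                int db = (b[i] ^ base[i]) & 0xff;
                if (da != db)
                    return da - db;               // smaller distance sorts first
            }
            return 0;
        };
    }

    public static void main(String[] args) {
        byte[] base = {0x00, 0x00};
        byte[][] hashes = {{0x0f, 0x00}, {0x01, 0x00}, {0x03, 0x00}};
        Arrays.sort(hashes, xorComparator(base));
        // Closest to the base key (smallest XOR distance) comes first.
        System.out.println(String.format("%02x %02x %02x",
                hashes[0][0], hashes[1][0], hashes[2][0])); // 01 03 0f
    }
}
```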