
Dissecting the Ethereum Networking Stack: Node Discovery


In the last few days, I got a chance to spin up an Eth2 validator node and dive into the depths of the Ethereum n/w stack using some traditional (ahem!) tools such as tcpdump. One of the things I learnt from this exercise is that the Eth n/w stack is continuously being worked on and upgraded, with a lot of EIPs (Ethereum Improvement Proposals) floating around.

As you know, the latest “Merge” didn’t actually signify the deployment of a new Eth network; it was purely an upgrade of the existing Eth n/w. In other words, Eth1 is now known as the “execution layer”, where smart contracts & n/w protocols reside & operate, while Eth2 is referred to as the “consensus layer”, which ensures that participants of the n/w behave in accordance with the incentive structure & consensus rules (PoW replaced by PoS).

In this blog, we will look at three things:

  • A quick overview of the Eth Networking Stack
  • Ethereum Node/Peer Discovery over devp2p
  • Tcpdump traces to observe packet level details and verify the stack behavior

So basically, an Ethereum node (e.g. one running geth) has two main layers/clients sitting on the same machine, each with its own networking stack and subprotocols.

  1. Execution Layer
  2. Consensus Layer

What happens in the Execution Layer?

  • This is where the EVM resides
  • Responsible for transaction building, execution & state management
  • Also gossips transactions over the p2p n/w, with encrypted communication amongst peers

What happens in the Consensus Layer?

  • Responsible for maintaining the consensus chain (beacon chain) and processing consensus blocks (beacon blocks) and attestations received from other peers

Now, when/how do these two layers interact with each other?

  • Each beacon block contains an execution payload. This payload contains a list of transactions and other data required to execute and validate the payload
  • In order to check this validity condition, the consensus layer sends the payload to the execution layer over a local RPC connection
  • The execution layer assembles the execution block, verifies the pre-conditions, executes the transactions, and verifies the post-conditions
  • The execution layer then passes the validation result back to the consensus layer over the same local RPC connection; the block is now considered validated (see the sketch below)
  • The consensus layer adds the block to the head of its own blockchain and attests to it, broadcasting the attestation over the network (consensus p2p)
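
To make that local RPC interaction a bit more concrete, here is a minimal Go sketch of such an exchange using go-ethereum’s rpc client. This is purely an illustration, not how a real consensus client is wired up: actual Engine API calls go over a JWT-authenticated endpoint (geth’s default authrpc port is 8551), and the execution payload below is an empty stub.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// Connect to the execution client's local Engine API endpoint.
	// (Real connections must present a JWT secret; omitted here.)
	client, err := rpc.Dial("http://127.0.0.1:8551")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// engine_newPayloadV1 hands an execution payload (taken from a
	// beacon block) to the execution layer for validation.
	payload := map[string]any{ /* executionPayload fields elided */ }

	var result map[string]any
	if err := client.CallContext(context.Background(), &result, "engine_newPayloadV1", payload); err != nil {
		log.Fatal(err)
	}
	fmt.Println("payload status:", result["status"]) // e.g. VALID or INVALID
}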

I like the representation of this flow in the official Ethereum docs

Now, the exec layer’s n/w protocol is divided into two stacks, both running in parallel:

  1. Node Discovery — Allows a new node to find peers to connect to (over UDP)
  2. Information Exchange — Enables nodes to exchange chain information (over TCP)

Here’s what the high-level n/w architecture looks like

The Eth client (node) sits on top of this stack

In this blog, we will look into the “discv4 Node Discovery” in detail

What is “Node Discovery” and why do we need it?

  • The process of finding other peers in the n/w for data exchange and communication. It is implemented by the client s/w running on the p2p network. In our case, it is one of the Eth clients (geth is used in our example below)
  • The client uses a bootstrapping mechanism to connect to a small set of bootnodes, which are hardcoded into the client
  • A bootnode’s only responsibility is to introduce a new node to a set of peers. It doesn’t participate in chain tasks like syncing the chain, and is used only when the client is spun up for the first time

How does the discovery work under the hood?

  • A modified form of Kademlia is used, where every node shares its list of nodes via a distributed hash table
  • Each node has a list of the closest nodes in its table. This closeness is not ‘geographical’, but defined by the similarity of the nodes’ IDs (XOR is used to determine the closeness; see the sketch after this list, and more details here)
  • The discovery process happens over UDP. Unlike TCP, UDP does not have any additional overhead like error checking, retransmissions etc. This makes the discovery fast.
  • On a high level, the process begins with the client playing PING/PONG with the bootnode. Once connected, it gets a list of nodes/neighbors from this node and keeps traversing through the list until a sufficient number of peers have been added to the peer list
  • start client → connect to bootnode (Ping/Pong) → bond to bootnode (Ping/Pong) → find neighbors (FindNode) → bond to neighbors → exchange information
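
To make the XOR metric concrete, here is a minimal, self-contained Go sketch (geth’s own implementation lives in its p2p/enode package; the two 32-byte IDs below are made-up sample values, not real hashed node keys):

package main

import (
	"fmt"
	"math/bits"
)

// xorDistance returns the byte-wise XOR of two 32-byte node IDs.
// In discv4, the values compared are keccak256 hashes of the nodes'
// public keys.
func xorDistance(a, b [32]byte) [32]byte {
	var d [32]byte
	for i := range a {
		d[i] = a[i] ^ b[i]
	}
	return d
}

// logDist is the Kademlia "log distance": the bit length of the XOR,
// i.e. 256 minus the number of leading zero bits. A smaller log
// distance means a longer shared ID prefix, i.e. a "closer" node.
func logDist(a, b [32]byte) int {
	d := xorDistance(a, b)
	lz := 0
	for _, x := range d {
		if x == 0 {
			lz += 8
			continue
		}
		lz += bits.LeadingZeros8(x)
		break
	}
	return 256 - lz
}

func main() {
	a := [32]byte{0x0f} // 0x0f00...00
	b := [32]byte{0x0e} // 0x0e00...00
	fmt.Println(logDist(a, b)) // 249: the IDs share a long prefix, so the nodes are "close"
}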

Let’s break this whole process down step by step now. I used the latest geth client and Wireshark to capture these traces. Also, you will have to install the ethereum dissector plugin in Wireshark to decode the devp2p traffic. A scriptable alternative to this capture setup is sketched below.
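
If you’d rather script the capture than click through Wireshark, here is a rough Go equivalent of running tcpdump with the filter "udp port 30303", using the gopacket library. The interface name "eth0" is an assumption; adjust it for your machine.

package main

import (
	"fmt"
	"log"

	"github.com/google/gopacket"
	"github.com/google/gopacket/pcap"
)

func main() {
	// Open the capture interface ("eth0" is a placeholder).
	handle, err := pcap.OpenLive("eth0", 1600, true, pcap.BlockForever)
	if err != nil {
		log.Fatal(err)
	}
	defer handle.Close()

	// discv4 discovery runs over UDP port 30303 by default.
	if err := handle.SetBPFFilter("udp port 30303"); err != nil {
		log.Fatal(err)
	}

	// Print each raw discovery datagram; Wireshark's dissector plugin
	// is still the easiest way to decode the devp2p payloads.
	src := gopacket.NewPacketSource(handle, handle.LinkType())
	for pkt := range src.Packets() {
		fmt.Println(pkt)
	}
}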

  1. Once the client starts, the bootnodes are identified from the hardcoded list. Since I’m running geth on the Goerli testnet, here’s the list of bootnodes pertaining to Goerli. You can find the full list here
// GoerliBootnodes are the enode URLs of the P2P bootstrap nodes running on the
// Görli test network.
var GoerliBootnodes = []string{
	// Upstream bootnodes
	"enode://011f758e6552d105183b1761c5e2dea0111bc20fd5f6422bc7f91e0fabbec9a6595caf6239b37feb773dddd3f87240d99d859431891e4a642cf2a0a9e6cbb98a@51.141.78.53:30303",
	"enode://176b9417f511d05b6b2cf3e34b756cf0a7096b3094572a8f6ef4cdcb9d1f9d00683bf0f83347eebdf3b81c3521c2332086d9592802230bf528eaf606a1d9677b@13.93.54.137:30303",
	"enode://46add44b9f13965f7b9875ac6b85f016f341012d84f975377573800a863526f4da19ae2c620ec73d11591fa9510e992ecc03ad0751f53cc02f7c7ed6d55c7291@94.237.54.114:30313",
	"enode://b5948a2d3e9d486c4d75bf32713221c2bd6cf86463302339299bd227dc2e276cd5a1c7ca4f43a0e9122fe9af884efed563bd2a1fd28661f3b5f5ad7bf1de5949@18.218.250.66:30303",

	// Ethereum Foundation bootnode
	"enode://a61215641fb8714a373c80edbfa0ea8878243193f57c96eeb44d0bc019ef295abd4e044fd619bfc4c59731a73fb79afe84e9ab6da0c743ceb479cbb6d263fa91@3.11.147.67:30303",

	// Goerli Initiative bootnodes
	"enode://d4f764a48ec2a8ecf883735776fdefe0a3949eb0ca476bd7bc8d0954a9defe8fea15ae5da7d40b5d2d59ce9524a99daedadf6da6283fca492cc80b53689fb3b3@46.4.99.122:32109",
	"enode://d2b720352e8216c9efc470091aa91ddafc53e222b32780f505c817ceef69e01d5b0b0797b69db254c586f493872352f5a022b4d8479a00fc92ec55f9ad46a27e@88.99.70.182:30303",
}
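
Each of those entries is an enode:// URL: the hex blob before the “@” is the node’s secp256k1 public key, followed by its IP address and port. As a quick sanity check, you can parse one with go-ethereum’s enode package; a small sketch:

package main

import (
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/p2p/enode"
)

func main() {
	// The first Goerli bootnode from the list above.
	url := "enode://011f758e6552d105183b1761c5e2dea0111bc20fd5f6422bc7f91e0fabbec9a6595caf6239b37feb773dddd3f87240d99d859431891e4a642cf2a0a9e6cbb98a@51.141.78.53:30303"

	// ParseV4 decodes an enode:// URL into a node record.
	node, err := enode.ParseV4(url)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ID: ", node.ID()) // 32-byte node ID derived from the public key
	fmt.Println("IP: ", node.IP())
	fmt.Println("UDP:", node.UDP()) // discovery port
	fmt.Println("TCP:", node.TCP()) // RLPx port
}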

2. Our node tries to connect to the bootnodes, but gets responses from only two of them

Bootnode — 94.237.54.114

The PING and FindNode UDP requests end up with ICMP Destination Unreachable responses; the router/proxy is clearly blocking this traffic. The TCP SYN requests on port 30303 are also getting reset by the proxy.

Bootnode — 51.141.78.53

All TCP SYNs are getting dropped somewhere in the n/w. Our node keeps retransmitting the SYN packets, with no luck.

Bootnode — 13.93.54.137

No response to devp2p’s PING and FindNode UDP requests; the router/proxy is clearly blocking this traffic. All TCP SYNs are also getting dropped somewhere in the n/w. Moving on.

Bootnode — 3.11.147.67

Clearly, same issue.

Bootnodes — 18.218.250.66 and 88.99.70.182

Finally, we are able to see how our node was able to connect to the peers in the n/w.

start client → connect to bootnode (Ping/Pong) → bond to bootnode (Ping/Pong) → find neighbors (FindNode) → bond to neighbors

The client (geth) will keep searching for nodes by following the same mechanism (connect/bond/find_neighbors) until all outgoing peer slots are filled.

Note

  • The number of these slots is calculated using this formula: maxpeers * dialRatio. The default values of maxpeers and dialRatio are 25 and 1/3 respectively, which works out to eight outgoing slots (a worked example follows this list).
  • When the desired number of peers is reached, geth will stop looking. However, there will still be discovery traffic, because the discovery system requires maintenance to stay working. The node can also lose a peer connection, and then that slot needs to be refilled. For more details about the specific values around slots and timeouts, check out the go code here
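
As a tiny worked example of that slot arithmetic, assuming the defaults above (note that geth expresses the one-third ratio as the integer divisor 3):

package main

import "fmt"

func main() {
	// geth defaults (subject to change between releases)
	maxPeers := 25
	dialRatio := 3 // "1/3" expressed as a divisor

	// Outgoing (dialed) peer slots; the remaining slots serve inbound peers.
	dialSlots := maxPeers / dialRatio
	fmt.Println(dialSlots) // 8
}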

Now, let’s go further. What exactly gets exchanged in the discovery process?

Let’s take a working example. Our node was able to identify a list of neighbors from the bootnode (as seen in the last screenshots above). Now, it is trying to bond with one of those neighbors (143.244.60.51) and get another list of neighbors (it keeps iterating until it hits max peers)

Zooming into the payload & fields

  • ^ We can clearly see that a PING was sent from our node 10.128.0.20 to the peer 143.244.60.51. This PING includes hashed information about the new node, the peer, and an expiry timestamp
  • ^ A PONG was sent back by the peer 143.244.60.51 to our node 10.128.0.20, containing the PING hash (6cf6d111948bbe5e212ebc9401c7936b3b2df53e7699b1ddf9830c460affd70e)

Since the PING and PONG hashes have matched, the connection between our node and peer is verified and they are said to have “bonded”.

^ Once bonded, our node sends a FindNode request to 143.244.60.51. The data returned by 143.244.60.51 includes a list of peers that our node can further connect to. The data is split across two UDP packets: one contains 14 neighbors and the other contains four.
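
For reference, here is a rough Go sketch of the four discv4 packet types we have seen in these traces, following the devp2p wire spec. geth’s real definitions live in its p2p/discover package; the field names below are illustrative.

package discv4

import "net"

// Every discovery packet shares the same header:
//   hash      (32 bytes) keccak256 of everything after it
//   signature (65 bytes) secp256k1 signature over type || data
//   type      (1 byte)   0x01..0x04, as below
// followed by the RLP-encoded payload.

type Endpoint struct {
	IP  net.IP
	UDP uint16 // discovery port
	TCP uint16 // RLPx port
}

// 0x01 Ping: sent to start (or refresh) a bond.
type Ping struct {
	Version    uint
	From, To   Endpoint
	Expiration uint64 // unix timestamp; stale packets are dropped
}

// 0x02 Pong: reply proving the sender saw our Ping.
type Pong struct {
	To         Endpoint
	ReplyTok   []byte // hash of the Ping being answered (the value matched in the trace above)
	Expiration uint64
}

// 0x03 FindNode: ask a bonded peer for nodes close to Target.
type FindNode struct {
	Target     [64]byte // a public key; results are sorted by XOR distance to it
	Expiration uint64
}

// 0x04 Neighbors: the response, split across multiple UDP packets
// when needed to stay under the 1280-byte packet size limit.
type Neighbors struct {
	Nodes      []Node
	Expiration uint64
}

type Node struct {
	Endpoint
	ID [64]byte
}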

Now, our node begins a PING-PONG exchange with each of them. Successful PING-PONGs bond our node with its neighbors, enabling message exchange. Again, this continues until the maxpeers * dialRatio limit is reached. And if one of these neighbors goes offline, the peer slot needs to be refilled, so the process is repeated all over again.

The client then starts initiating TCP connections over port 30303 with the corresponding nodes and exchanges state information. More about this and the node addressing scheme used to exchange data in a later post.

Coming back to discovery now, there are some unanswered questions

  • Why is geth trying to connect to nodes by sending FindNode and initiating TCP connections that just go stale, when it clearly hasn’t gotten any response to the initial PING? Does this not make the n/w unnecessarily chatty?
  • The UDP packets seem to be more than 1000 bytes. I learnt that there’s a fixed maximum packet size of 1280 bytes, and it has worked OK so far in all tested networks. However, a lot of legacy routers on the Internet still don’t handle bigger UDP packets well; usually, the safest option is 512 bytes. Modern routers, however, can handle bigger packets.

Finally, the interesting part: this is not all of it. There’s another, parallel DNS-based discovery process that the Eth client carries out based on EIP-1459. This is a backup discovery mechanism that clients can leverage, especially under restrictive network policies that might break discv4-based discovery and prevent nodes from joining the DHT. More about this and the Ethereum Node Records (ENR) addressing scheme in a separate post.
