Data Transfer Over A Network

As the data from the Application Layer (AL) on your computer moves down the TCP/IP stack, each layer “wraps” the data in its own package. The Transport Layer (TL) adds a TCP or UDP header (depending on the type of data transfer requested by the AL) with a source and destination port number. The Internet Layer (IL) adds an IP address (source and destination). The Network Layer (NL) adds a Media Access Control (MAC) address (source and destination). The packet is then transmitted over the network to another node on the same network as your computer …
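The wrapping described above can be sketched in a few lines of Python. This is a toy model, not real packet construction: each header is reduced to just the addresses and ports named above, and every value (ports, IPs, MACs) is made up for illustration.

```python
# Toy sketch of TCP/IP encapsulation: each layer prepends its own header
# to the payload handed down from the layer above. All values are made up.

def transport_wrap(data, src_port, dst_port):
    # Transport Layer: prepend source/destination port numbers (2 bytes each)
    return src_port.to_bytes(2, "big") + dst_port.to_bytes(2, "big") + data

def internet_wrap(segment, src_ip, dst_ip):
    # Internet Layer: prepend source/destination IP addresses (4 bytes each)
    return src_ip + dst_ip + segment

def link_wrap(packet, src_mac, dst_mac):
    # Network (link) Layer: prepend source/destination MAC addresses (6 bytes each)
    return src_mac + dst_mac + packet

app_data = b"GET / HTTP/1.1"
segment = transport_wrap(app_data, 49152, 80)
packet = internet_wrap(segment, bytes([192, 168, 1, 10]), bytes([203, 0, 113, 5]))
frame = link_wrap(packet, bytes(6), bytes.fromhex("001122334455"))
print(len(frame))  # 38 bytes on the wire for 14 bytes of application data
```

Note how the frame is bigger than the application data alone – that per-packet overhead matters later when we talk about fragmentation.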

What?! It’s not transmitted directly to www.dalantech.com you ask? Nope, it’s sent to the default gateway that you have configured in your TCP/IP settings, or the one supplied by your Internet Service Provider (ISP) when you logged in. You see, MAC addresses are used to communicate with devices that are on the same logical and physical network. IP addresses are used to communicate with devices on different logical networks (Note: you could have two logical IP networks on the same physical cable, but if you don’t have a router or a bridge connecting them, the nodes on one IP network will not be able to send data to nodes on the other). So the source and destination MAC addresses are constantly changing as a packet is routed across a network (or the Internet), while the source and destination IP addresses remain the same.
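The decision between “deliver directly” and “send to the gateway” boils down to a subnet membership test. A minimal Python sketch, with an assumed local network of 192.168.1.0/24 and a made-up gateway address:

```python
import ipaddress

# If the destination is outside the local subnet, the frame is addressed to
# the default gateway's MAC instead of the destination's. Addresses here
# are illustrative, not real configuration.
local_net = ipaddress.ip_network("192.168.1.0/24")

def next_hop(dst, gateway="192.168.1.1"):
    # Deliver directly on the local network; otherwise hand off to the gateway
    if ipaddress.ip_address(dst) in local_net:
        return dst
    return gateway

print(next_hop("192.168.1.42"))  # same subnet: deliver directly
print(next_hop("203.0.113.5"))   # different network: send via the gateway
```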

As the packet is moving up the TCP/IP stack on the receiving computer, each layer “unwraps” the package. The Network Layer removes its own “packaging”: the source and destination MAC addresses and the Frame Check Sequence (FCS), which is used for error detection. The Network Layer calculates a new FCS value and compares it with the FCS that was transmitted with the packet; if the two don’t match, the packet is ignored. The packet is then passed to the Internet Layer, which removes the IP header (source and destination IP, for example) and passes the packet to the Transport Layer. The Transport Layer removes the Transport Layer header and passes the data to the correct application (determined by the destination port number).
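The FCS check described above is a checksum comparison. Here is a small Python sketch using CRC-32 (the checksum Ethernet actually uses for its FCS), with the framing reduced to payload-plus-checksum:

```python
import zlib

# Simplified FCS handling: the sender appends a CRC-32 over the payload,
# the receiver recomputes it and silently drops the frame on a mismatch.

def make_frame(payload):
    fcs = zlib.crc32(payload).to_bytes(4, "big")
    return payload + fcs

def check_frame(frame):
    payload, fcs = frame[:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != fcs:
        return None  # FCS mismatch: the frame is ignored
    return payload

frame = make_frame(b"hello")
print(check_frame(frame))               # b'hello' - checksum matches
print(check_frame(b"jello" + frame[5:]))  # None - corrupted in transit
```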

Now let’s back up a little. Earlier I told you that the Transport Layer on your PC negotiates a connection with the remote computer that you want to connect to – and that is true. But what about all of the routers, bridges, and other pieces of routing hardware that your data has to pass through to reach the destination IP address? What if one of them had a lower MTU than what you negotiated? Would you still get a connection and be able to transfer data? Sure, but not without a few penalties … keep reading.

For the purpose of explanation I’m gonna keep it simple and use a small network. But what I’m about to tell you applies to all networks, regardless of their size. Take a look at the network below:

    Your PC --- R1 --- R2 --- R3 --- Web Server

Router one (R1) and Router three (R3) both have an MTU of 1500 bytes (the Ethernet maximum) but Router two (R2) has an MTU of 1000 bytes. Your PC can sense the MTU of R1 because both of them are on the same network. Likewise for the web server and R3. So your PC is going to negotiate an MTU of 1500 bytes, and the web server is going to accept.

As data is being sent from the web server to your PC, R3 will receive the packets first. R3 can sense the MTU of R2 because they each have a connection on a common network. R3 will then fragment every packet it gets from the web server into two packets at the IP layer (one of roughly 1000 bytes and one carrying the remaining 500 or so bytes) before they are transmitted to R2. R2 will forward the fragments to R1, and your PC will reassemble them (in IP, reassembly happens at the final destination, not at an intermediate router). Works great huh? Keep reading …
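The fragmentation step can be sketched as a function that splits a packet’s payload to fit a smaller MTU. This is a sketch, not a full IP implementation: it assumes a 20-byte IP header and follows IP’s rule that every fragment except the last must carry a multiple of 8 payload bytes.

```python
# Sketch of IP-style fragmentation: a 1500-byte packet with a 20-byte IP
# header carries 1480 payload bytes; to cross a 1000-byte-MTU link the
# payload is split into chunks that (each with a new header) fit the link.
IP_HEADER = 20

def fragment(payload_len, mtu):
    # All fragments except the last must be a multiple of 8 payload bytes
    max_data = (mtu - IP_HEADER) // 8 * 8
    sizes = []
    while payload_len > 0:
        take = min(max_data, payload_len)
        sizes.append(take)
        payload_len -= take
    return sizes

print(fragment(1480, 1000))  # [976, 504] - the "1000 and 500" split above
```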

Let’s say that we wanted to transmit 14,600 bytes of data. If the MTU remained constant across all of the equipment between sender and receiver, only 10 packets would have been transferred (MTU – 40 = MSS, or 1500 – 40 = 1460 bytes of application data per packet). But since the packets had to be fragmented between R3 and R2, we had to send 20 packets across the R3 to R2 link. Each packet had to carry its own MAC and IP headers – bits that don’t count toward transferring application data. In fact, the extra packaging decreases the Effective Bandwidth (EB) of the data transfer (EB is the actual amount of application data that is transferred between two points).
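The packet arithmetic above is easy to check. The 40-byte overhead is the 20-byte IP header plus the 20-byte TCP header:

```python
import math

# MSS = MTU - 40 (20 bytes IP header + 20 bytes TCP header), so the number
# of packets needed for a transfer depends directly on the path's MTU.
def packets_needed(data_bytes, mtu):
    mss = mtu - 40
    return math.ceil(data_bytes / mss)

print(packets_needed(14600, 1500))  # 10 packets at the full Ethernet MTU
# After fragmentation at the 1000-byte link each packet becomes two,
# so 20 packets cross the R3-to-R2 link, each with its own headers.
```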

But wait! It gets worse … let’s say that during the course of the data transfer one of the fragments gets corrupted or lost. The IP layer (where fragmentation occurs) is connectionless – it can detect corrupt packets and fragments, but it cannot request the retransmission of missing or corrupt data. If the IP layer detects a corrupt packet (or fragment), the entire packet is just “dropped” – the IP layer clears out the offending data. The Transport Layer then has to request that the entire packet be retransmitted. Since the number of packets needed to transmit a given amount of data increases when the packets are fragmented, the chances that a packet will be lost go up …
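One way to see why more packets means more retransmissions: if each packet is lost independently with some probability p, the chance that at least one packet in the transfer is lost grows with the packet count. The 1% loss rate below is made up purely for illustration.

```python
# Chance that at least one of n packets is lost, assuming each packet is
# lost independently with probability p (p = 0.01 is an arbitrary example).
def loss_chance(n, p=0.01):
    return 1 - (1 - p) ** n

print(round(loss_chance(10), 3))  # 10 unfragmented packets
print(round(loss_chance(20), 3))  # 20 fragments: nearly double the risk
```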

Think it can’t get any worse? Bzzzzzt! Wrong! There is another factor that impacts effective bandwidth, and it’s called propagation delay: the amount of time it takes a signal (data) to pass through a device. The more fragmentation and reassembly the packets go through, the longer it takes them to get from point A (a web server) to point B (you) – and as propagation delay increases, effective bandwidth decreases (ouch!).

Fragmentation was incorporated into the Internet Layer to allow for possible differences in MTU – but it’s not a good thing. If at all possible you do not want to fragment your data. It is faster to use a lower MTU – one that every piece of gear between the sender and receiver can handle – than it is to send the TCP maximum and force fragmentation. But, if the transport layer on the sender and receiver negotiates the MTU, how can we figure out what the best MTU is between the two points? Hmmm …

The way to figure out the best MTU between two points is to use Path Maximum Transmission Unit Discovery (PMTUD). Here is how it works: before a connection is negotiated, your PC sends out a packet with an MTU of 1500 bytes – but with one minor difference from before. Every IP packet has a special “flag” (a control bit in the IP header) called the “Don’t Fragment” (DF) bit, and in this packet the bit is set (active). As the packet moves from router to router it may reach a router that can’t forward it – the MTU of the next link in the path is lower than the size of the packet, and since the “Don’t Fragment” bit is set, the transfer fails. The router that could not forward the data sends an Internet Control Message Protocol (ICMP) error message (trivia: ICMP Type 3, Code 4 – Fragmentation Needed) back to your PC letting you know that the transfer failed because the packet was too big. Your PC then sets a lower MTU (how much lower varies depending on who coded the PMTUD function of the TCP/IP stack on your computer) and the process starts all over. Eventually, your PC will get a packet all the way to the server you were trying to reach. Your PC then initiates a TCP session with the correct MTU. Sounds time consuming? Well, it is – but it works, and you get faster data transfers.
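The probe-and-back-off loop can be sketched as a toy simulation. The path MTUs and the 100-byte back-off step are made up, and a real stack would use the next-hop MTU reported inside the ICMP message rather than blindly stepping down:

```python
# Toy PMTUD simulation: probe with the Don't Fragment bit set, and on an
# ICMP "fragmentation needed" error, lower the size and retry.
PATH = [1500, 1000, 1500]  # link MTUs along the path (R1, R2, R3)

def send_probe(size):
    # True if every link can carry the packet without fragmenting it
    return all(size <= mtu for mtu in PATH)

def discover_path_mtu(start=1500, step=100):
    mtu = start
    while not send_probe(mtu):
        mtu -= step  # ICMP Type 3, Code 4 came back: try a smaller packet
    return mtu

print(discover_path_mtu())  # 1000 - the smallest MTU on the path wins
```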

You have now established a connection and the data is transferred. What happens next? Well, for a TCP connection the process that ends it is very similar to the process that kicked it off (SYNs and SYN-ACKs). Only now your PC is going to send a Finish (FIN) packet, the server you are ending the connection with will acknowledge it and send a Finish of its own (FIN-ACK), and your PC will send a final Acknowledgement (ACK) – and the connection is gone.
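That FIN exchange is what happens under the hood when a program shuts down a socket cleanly. A small Python sketch over the loopback interface: the client’s shutdown() sends the FIN, and the server observes it as an empty read (end of stream).

```python
import socket
import threading

# Demonstration of a graceful TCP close on loopback: shutdown(SHUT_WR)
# sends a FIN; the peer sees recv() return b"" and closes its own side.
def demo_close():
    server = socket.socket()
    server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]
    result = {}

    def serve():
        conn, _ = server.accept()
        result["eof"] = conn.recv(1024) == b""  # empty read: peer sent FIN
        conn.close()                            # server's FIN finishes teardown
        server.close()

    t = threading.Thread(target=serve)
    t.start()
    client = socket.create_connection(("127.0.0.1", port))
    client.shutdown(socket.SHUT_WR)  # send FIN: no more data from this side
    client.close()
    t.join()
    return result["eof"]

print(demo_close())  # True: the server saw end-of-stream after the FIN
```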
