The Network Layer

You might have noticed in the original OSI model that “IP” was part of Layer 3, and protocol stacks like UDP and TCP were part of Layer 4. It’s a little bit confusing that we say “TCP/IP” when the “IP” really underlies so many other protocols, like UDP and ICMP. There are certainly other protocols and protocol stacks, but for the purposes of this discussion, we’re talking almost exclusively about TCP/IP.

The network layer does not guarantee delivery. Essentially, it makes every effort to deliver IP datagrams (packets) to the destination, but its error handling is pretty simple: just toss the packet into the bit-bucket. It’s also a connectionless layer, meaning the packets making up a message aren’t treated as part of an ongoing conversation. They can be split up, sent separately by different routes, and arrive completely out of order. Packets can also get duplicated or corrupted along the way.

Figuring all this out is the job of the protocol stack (e.g., TCP) in Layer 4. The network layer, L3, just delivers packets.

Network byte order: a rarely needed (but useful) fact is that the network sends multi-byte values in big-endian order. That means the most significant byte of a 32-bit word is transmitted first, working down to the least significant byte, usually eight bits at a time.

A lot of the computers on the Internet use little endian encoding, which starts at the other end of the word. In those cases, the byte order has to be reversed somewhere between the computer’s memory and Layer 3. For most situations, that fact isn’t particularly useful, but there is the occasional fault that involves failure to reverse byte order along the path from RAM to NIC.
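A quick sketch of the difference, using Python’s struct module (the “!” format prefix means network byte order, the same conversion htons()/htonl() perform in C):

```python
import struct
import sys

# A 32-bit value as it sits in a register: 0x0A is the most significant byte.
value = 0x0A0B0C0D

# Network (big-endian) byte order: the most significant byte goes on the wire first.
print(struct.pack("!I", value).hex())   # -> 0a0b0c0d

# Little-endian order, the way most desktop CPUs store the word in RAM.
print(struct.pack("<I", value).hex())   # -> 0d0c0b0a

# Which convention this machine uses natively:
print(sys.byteorder)  # "little" on most common hardware
```

If the two hex strings above ever come out identical on a little-endian host, something along the RAM-to-NIC path forgot to swap bytes; that’s exactly the class of fault mentioned above.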

What is This “Packets” You Speak of, Kimosabe?

Packets are basic Internet Protocol (IP) message units. A message will probably be split into multiple packets by L4 (the transport layer) so it can be efficiently sent. For example, imagine that you’re sending a very long letter to your friend, and all you have are lots of envelopes and first-class stamps.

If you’ve ever done a lot of mailing, you’ll know that mailing a one-ounce letter costs you, say, fifty-eight cents. If you add another ounce of paper to it, that second ounce only costs you, say, twenty cents. But all you have are first class (i.e., fifty-eight-cent) stamps. If you don’t want to waste your money, you can either cram more pages in the envelope, until you’re at three ounces (the most you can get with two stamps), or send two letters, each with one ounce in it.

The way envelopes go through the mailing system, you’re better off not over-stuffing an envelope. So what do you do? You sit down and write the letter to your friend, carefully numbering the pages. Then you divide it into piles of pages that are just under one ounce. Finally, you put each pile into an addressed, stamped envelope and mail each letter separately. When your friend gets the letters, it doesn’t matter which one gets there first, because they can reassemble your message, using the page numbers.

Fixed Packet Lengths and Segmented Messaging

We could have designed computer networks to take messages of indeterminate lengths, but that presents some unique challenges when trying to manage network traffic. For example, suppose you send seven overstuffed letters to your friend, and so does everyone else on your block? All these huge letters aren’t going to fit in one letter-carrier’s bag, so the post office will have to either send out two delivery people, or wait until tomorrow to deliver someone’s letters.

Choosing a fixed (relatively short) length makes it statistically possible for everyone’s letters (everyone’s messages) to be delivered at a fairly constant, reliable rate. That rate will vary with the size of the overall message, not with who threw their message on the Internet first.

A larger message takes longer to send. Messages are split into packets of consistent length before they’re passed to L3, so larger messages simply take more packets, and more time. Splitting messages into equally-sized packets and interleaving them on shared links tends to get the highest count of complete messages through the network in a given amount of time. In network terminology, it’s a high-throughput approach to network traffic. Specifically, this technique is called statistical multiplexing.
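The “numbered pages in separate envelopes” idea can be sketched in a few lines of Python. This is a toy model, not real IP fragmentation: the chunk size and function names are made up for illustration.

```python
import random

MTU = 8  # hypothetical fixed chunk size, in bytes

def split(message: bytes, size: int = MTU):
    """Divide a message into fixed-size chunks, each tagged with a sequence number."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(chunks):
    """Sort by sequence number and glue the pieces back together."""
    return b"".join(data for _, data in sorted(chunks))

message = b"a fairly long message that will not fit in one packet"
packets = split(message)
random.shuffle(packets)                  # the network may reorder packets
assert reassemble(packets) == message    # the sequence numbers save the day
```

Because every chunk carries its own “page number,” the order of arrival doesn’t matter, which is exactly why a connectionless Layer 3 can get away with delivering packets by different routes.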

IP Packets

The IP datagram (packet) is the backbone of most modern networks. Each datagram begins with an IPv4 header, followed by the data; header and data together can be up to about 65K long. Here are the header fields and what information they carry:

  • IP Protocol Version: This is “4” for IPv4 and “6” for IPv6. There are lots of others, but they generally don’t touch a typical network.

  • Internet Header Length: The number of 32-bit words in the header, including the options (but not including the data, since it’s just the header). Most of the time, this will have the value “5”, but options do exist and are sometimes included.

  • Differentiated Services Code Point: This is used to specify special classes of service. Normally, IP packets are delivered on a “best-effort” basis, that is, Layer 3 will make a reasonable attempt to deliver a packet but offers no guarantees. You can ask the routers along the path to give packets higher-priority handling by using a different DSCP.

  • ECN = Explicit Congestion Notification: These two bits are marked by an ECN-capable router when that router is above a certain traffic threshold. They are there to alert the endpoints to slow down (or expect delays) when the network segment in use is particularly congested.

  • Total Length of IP Packet: This field indicates the length of the entire packet, including the data. This makes it possible to calculate the byte offset of the data within the datagram.

  • Identification: This is a serial number, generated by the sending host’s IP stack, that helps the participants uniquely identify the datagram. In a sense, it works like the little “take-a-number” tickets you get at the hamburger stand: eventually, the number will repeat, but the repeat cycle is long enough that there’s little chance of confusing packets. This field, used in concert with the Flags and Fragmentation Offset fields, lets the receiver correctly reassemble a fragmented datagram.

  • Flags: This field is basically used to control fragmentation: one bit (“Don’t Fragment”) forbids splitting the packet at all, and another (“More Fragments”) indicates that this packet is a fragment of a longer datagram, with more pieces to follow.

  • Fragmentation Offset: Used with the Identification number, this field records where this fragment’s data belongs (counted in 8-byte units) within the original datagram, so the system knows how to put the pieces back together when re-assembling the message.

  • Time to Live (TTL): This indicates the number of routers (hops) that a datagram can pass through before it’s discarded. Each router decrements this field by one and drops the packet when it reaches zero, which keeps misrouted packets from circling the network forever. Most RFC documents suggest a starting value of 64, though it’s often set to something like 255 without any real problems.

  • Protocol: This field indicates the higher-level protocol (the protocol stack) that generated this message’s payload. Examples are given for TCP and UDP in the figure.

  • Header Checksum: This field carries a checksum of the header only (it has to be recomputed at each hop, since the TTL changes). It’s only used in IPv4. Doing integrity-checking on the data is the responsibility of Layer 4.

  • Source Address: This is the IP address of the packet’s original sender. As shown in the figure below, a router performing Network Address Translation (NAT) will rewrite this address on the way out so it can get the answer back; ordinary routers leave it alone.

  • Destination Address: This is the IP address of the packet’s final destination. Routers use this field to decide where to forward the packet next; again, only a NAT device actually rewrites it.
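The fields above can be pulled straight out of the 20-byte fixed header with Python’s struct module. The sample bytes here are hand-built for illustration, not captured from a real network:

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Decode the fixed 20-byte portion of an IPv4 header."""
    (ver_ihl, dscp_ecn, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl": ver_ihl & 0x0F,               # header length, in 32-bit words
        "dscp": dscp_ecn >> 2,
        "ecn": dscp_ecn & 0x03,
        "total_length": total_len,           # header + data, in bytes
        "identification": ident,
        "flags": flags_frag >> 13,           # 2 = Don't Fragment
        "frag_offset": flags_frag & 0x1FFF,  # in 8-byte units
        "ttl": ttl,
        "protocol": proto,                   # 6 = TCP, 17 = UDP
        "checksum": checksum,
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# Hand-built sample: IPv4, IHL=5, 40 bytes total, DF set, TTL=64, TCP, 10.0.0.1 -> 10.0.0.2
sample = bytes([0x45, 0x00, 0x00, 0x28, 0x1C, 0x46, 0x40, 0x00,
                0x40, 0x06, 0x00, 0x00, 10, 0, 0, 1, 10, 0, 0, 2])
h = parse_ipv4_header(sample)
print(h["version"], h["ttl"], h["protocol"], h["src"], h["dst"])
# -> 4 64 6 10.0.0.1 10.0.0.2
```

Note how the data offset falls right out of the header: the data starts at ihl × 4 bytes, and runs to total_length, just as described above.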

Routing

We now have enough concepts in play to talk about routing. Routing takes place at the network layer: each router looks up the destination address in its routing table, decrements the TTL, and forwards the packet out the appropriate interface; the addresses themselves are normally left alone. The address-rewriting trick described above is Network Address Translation (NAT), which your home router performs in addition to routing. The process looks something like this: the NAT router assigns a unique port number to the outbound message, rewrites the source address to its own public address, and records the original source IP and port against that port number. When the answer comes back on that port number, it can look up the internal address of the host that sent the packet and route the answer back.
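That bookkeeping amounts to a simple port-to-address table. Here is a toy sketch (hypothetical class and method names; a real router does this in the kernel or in hardware, not in Python):

```python
import itertools

class NatTable:
    """Minimal model of NAT port mapping: one public port per outbound flow."""

    def __init__(self):
        self._next_port = itertools.count(40000)  # assumed ephemeral port range
        self._by_port = {}

    def outbound(self, src_ip: str, src_port: int) -> int:
        """Assign a public port for an outbound flow and remember who sent it."""
        public_port = next(self._next_port)
        self._by_port[public_port] = (src_ip, src_port)
        return public_port

    def inbound(self, public_port: int):
        """Look up which internal host a reply on this port belongs to."""
        return self._by_port.get(public_port)

nat = NatTable()
port = nat.outbound("192.168.1.23", 51515)           # host behind the router sends
assert nat.inbound(port) == ("192.168.1.23", 51515)  # the reply finds its way home
```

As with the take-a-number tickets, the port numbers will eventually wrap around, but the cycle is long enough that active flows don’t collide in practice.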


Copyright (C) 2024 by Bill Wear; All Rights Reserved