Welcome. I write about networks for a living; this is the friendly version I wish more people had stumbled into early on. If you have fuzzy areas or blind spots, you’re in the right place. We’ll keep jargon minimal, diagrams mental, and the troubleshooting practical.
How we got here
The long arc
1870s–1930s: Telephony emerges: local exchanges, manual switchboards, then electromechanical step-by-step and crossbar switches. The core idea is the circuit: a dedicated path held open for the duration of a call.
1960s–1970s: Packet switching is proposed (Baran, Davies). ARPANET links UCLA, SRI, Utah, UCSB. Circuits are reliable, but packets are flexible and resilient.
1980s: TCP/IP standardizes across ARPANET (1983). Ethernet becomes the dominant LAN. The OSI model appears (a teaching model), but the pragmatic TCP/IP stack wins deployment.
1990s: NSFNET decommissions; commercial Internet takes over. “NAPs” (Network Access Points) like FIX-West/East, MAE-East/West, and CIX facilitate interconnection. Today we call these IXPs (Internet Exchange Points). The web arrives; BGP4 becomes the Internet’s inter-domain routing glue.
2000s–present: CDNs, massive data centers, merchant-silicon switches, and Clos spine-leaf fabrics replace tall, bespoke hierarchies. Cloud scales by repetition, not snowflakes.
Correcting an old myth
There’s no rule that a provider “must connect to three NAPs.” That phrasing came from 1990s procurement checklists. In practice, networks interconnect at many IXPs and private interconnects based on cost, performance, and geography — not a magic number.
Modern network architecture
Imagine two hosts: SanDiego and Bangor. Early on you could rent a private line, but one cut and you’re dark. The Internet’s genius was to forward packets hop-by-hop until they find a working path. Inside data centers, the most economical way to scale that forwarding is a Clos (spine-leaf) fabric:
- Leaf/TOR switches connect servers.
- Spines connect leaves. Add leaves to scale out; spines to add bandwidth.
- Same building blocks, repeated. Reliability comes from many cheap paths instead of one heroic box.
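To make "many cheap paths" concrete, here's a toy sketch (names like `leaf1`/`spine0` are hypothetical): in a two-tier Clos, every leaf uplinks to every spine, so the number of equal-cost leaf-to-leaf paths equals the number of spines, and losing one spine removes exactly one path.

```python
def clos_paths(spines: int, src: str = "leaf1", dst: str = "leaf2") -> list[list[str]]:
    """All src -> spine -> dst paths in a two-tier Clos: one per spine."""
    return [[src, f"spine{i}", dst] for i in range(spines)]

paths = clos_paths(4)
print(len(paths))  # 4 equal-cost paths with 4 spines

# A single spine failure removes one path; the other three keep carrying traffic.
alive = [p for p in paths if p[1] != "spine2"]
print(len(alive))  # 3
```

Adding a spine adds bandwidth to every leaf pair at once, which is why the fabric scales by repetition.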
Merchant silicon + NOS
Modern switches often use “merchant” ASICs, such as Broadcom’s Trident/Tomahawk lines or Intel’s Tofino, sold under many brands. A Network Operating System (NOS) provides BGP/OSPF/IS-IS, telemetry, and automation hooks. Interop has improved; lock-in is less absolute than it was.
Internet infrastructure: ASes, BGP, IXPs
At Internet scale we speak of autonomous systems (ASes) — networks under one admin policy — stitched together with BGP. Interconnection happens at:
- IXPs: shared fabrics where many ASes peer.
- Private interconnects: direct cross-connects in a facility.
- Transit: you pay an upstream to carry traffic to the rest of the Internet.
Peering vs. transit
Peering is usually settlement-free swaps of traffic between networks of roughly equal value; transit is paid. CDNs and hyperscalers peer broadly to reduce latency and cost.
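One way this preference shows up in practice: BGP speakers commonly assign a higher local preference to routes learned from settlement-free peers than from paid transit, so the peer path wins whenever both exist. The values below are illustrative, not any vendor's defaults.

```python
# Hypothetical local-preference policy: customers > peers > transit.
LOCAL_PREF = {"customer": 300, "peer": 200, "transit": 100}

def best_route(routes: list[dict]) -> dict:
    """Pick the route with the highest local preference (one simplified BGP step)."""
    return max(routes, key=lambda r: LOCAL_PREF[r["via"]])

routes = [
    {"prefix": "203.0.113.0/24", "via": "transit", "next_hop": "upstream-A"},
    {"prefix": "203.0.113.0/24", "via": "peer", "next_hop": "ix-fabric"},
]
print(best_route(routes)["via"])  # peer
```

The peer route is chosen even if the transit path is shorter in AS hops, because local preference is evaluated first; that is the policy lever behind "peer broadly to cut cost and latency."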
Layers without the hand-waving
The OSI 7-layer model is a teaching aid; the deployed Internet stack is simpler, but OSI gives us a shared vocabulary:
| Layer | What to know | Troubleshooting tools |
|---|---|---|
| 1 Physical | Bits on a wire/fiber; power; optics; RF. | `ethtool`, link LEDs, cable testers, Wi-Fi analyzers. |
| 2 Data Link | Ethernet, MACs, VLANs, STP. | `ip link`, `bridge`, `tcpdump -e`. |
| 3 Network | IP addressing, routing, ARP/ND. | `ip addr`, `ip route`, `ping`, `traceroute`. |
| 4 Transport | TCP/UDP, ports, MTU, congestion. | `ss`, `iperf3`, `tracepath`. |
| 5–7 | Sessions, TLS, HTTP/DNS/SSH, apps. | `curl`, `dig`, `openssl s_client`, browser tools. |
A practical toolbelt
Your everyday tools
- Link & address: `ip link`, `nmcli`, `ethtool`.
- Reachability: `ping`, `traceroute`, `mtr`.
- Name resolution: `dig`, `resolvectl`.
- Ports & sockets: `ss -ltnp`, `lsof -i`.
- Traffic capture: `tcpdump`, Wireshark.
- Throughput: `iperf3`; check duplex and MTU.
- HTTP/S: `curl -v`, `openssl s_client`.
- Service pokes: `nc`, `nmap`.
- ARP/ND: `ip neigh`, `arping`.
MTU pain, quick test
```shell
# find the max ICMP payload that fits without fragmentation
ping -M do -s 1472 8.8.8.8   # 1472 payload + 28 bytes of IP/ICMP headers = 1500
# if it fails, lower -s until it succeeds; that size + 28 is your path MTU
```
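The lower-until-it-works loop can be automated as a binary search. This sketch fakes the probe for determinism (a real one would shell out to `ping -M do -s <size>`), and `PATH_MTU` is a made-up value for illustration.

```python
PATH_MTU = 1400          # pretend the path clamps the MTU here (hypothetical)
ICMP_IP_OVERHEAD = 28    # 20-byte IPv4 header + 8-byte ICMP header

def probe(payload: int) -> bool:
    """Stand-in for `ping -M do -s <payload>`: True if it fits unfragmented."""
    return payload + ICMP_IP_OVERHEAD <= PATH_MTU

def find_max_payload(lo: int = 0, hi: int = 1472) -> int:
    """Binary-search the largest payload that survives with DF set."""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if probe(mid):
            lo = mid        # fits: try larger
        else:
            hi = mid - 1    # fragmented/dropped: try smaller
    return lo

payload = find_max_payload()
print(payload, payload + ICMP_IP_OVERHEAD)  # 1372 1400
```

About eleven probes cover the whole 0–1472 range, versus hundreds for a linear walk.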
Step-by-step troubleshooting
- Power & link: check LEDs, `ip link`, `ethtool`.
- Addressing: `ip addr`, `ip route`.
- Local reach: `ping` the gateway; check VLANs, `ip neigh`, `tcpdump -e arp`.
- DNS: `dig`, `dig +trace`.
- Path: `traceroute`, `mtr`.
- Ports: `ss -ltnp`, firewall/SG rules.
- Throughput: `iperf3`; check duplex, MTU, CPU.
- Capture, then hypothesize: `tcpdump` with narrow filters.
Wi-Fi specifics
- Survey channels; avoid overlapping 2.4 GHz.
- RSSI/SNR > “bars.” Move or add APs if needed.
- Test WPA2/WPA3 fast-roam carefully.
Two sticky topics: ARP and DNS
ARP in plain English
ARP maps IP→MAC on IPv4 LANs. When host A wants to reach B, it broadcasts “Who has 10.0.0.42?” and B replies. Common failure: wrong VLAN or stale cache.
```shell
# watch ARP while you ping the gateway
sudo tcpdump -n -e arp or icmp
ip neigh show
sudo arping -I eth0 10.0.0.1
```
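To make the broadcast concrete, here's a sketch that packs a who-has ARP request with the standard field layout from RFC 826. The addresses are documentation examples, and actually sending it would need a raw socket; this just shows what's on the wire.

```python
import struct

def arp_request(src_mac: bytes, src_ip: bytes, target_ip: bytes) -> bytes:
    """Pack an ARP 'who-has' request payload (RFC 826 field layout)."""
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,               # hardware type: Ethernet
        0x0800,          # protocol type: IPv4
        6, 4,            # MAC length, IPv4 address length
        1,               # opcode 1 = request ("who has ...?")
        src_mac, src_ip,
        b"\x00" * 6,     # target MAC all zeros: that's the question being asked
        target_ip,
    )

pkt = arp_request(bytes.fromhex("020000000001"),
                  bytes([10, 0, 0, 7]),
                  bytes([10, 0, 0, 42]))
print(len(pkt))  # 28
```

The reply reuses the same 28-byte layout with opcode 2 and the sought MAC filled in, which is exactly the exchange `tcpdump -e arp` shows you.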
DNS in plain English
Names resolve to IPs through recursive resolvers. Separate “can I reach the resolver?” from “can the resolver answer?”
```shell
# does your resolver respond?
dig @192.0.2.53 example.com
# follow the chain yourself
dig +trace example.com
# HTTPS reachability with DNS override
curl -sv --resolve example.com:443:203.0.113.10 https://example.com/
```
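Under the hood, `dig` sends a small binary query. This sketch packs a minimal A-record query per RFC 1035; it's deterministic (no network), and the fixed transaction ID is an arbitrary choice for readability.

```python
import struct

def dns_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query: 12-byte header + QNAME + QTYPE/QCLASS (RFC 1035)."""
    # ID, flags (RD=1 asks for recursion), QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed, terminated by a zero byte
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split(".")) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)  # QTYPE=A, QCLASS=IN

q = dns_query("example.com")
print(len(q))  # 29: 12-byte header + 13-byte QNAME + 4 bytes of type/class
```

Everything `dig +trace` prints is variations on this packet sent to root, TLD, and authoritative servers in turn.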
Keep it simple
The Internet grew not because every path was optimal, but because any one path being broken didn’t matter. Prefer simple, repeated designs; let redundancy and good telemetry carry the weight.
Last updated 2025-09-22.