IPv8 Would Fix My Routing Tables. It Will Never Ship.
Anyone who worked on a service provider backbone in the late 90s or early 2000s remembers the squeeze. Cisco 7500s and early GSRs came with RAM budgets that looked generous at install and felt terrifying three years later, and the global BGP table kept growing faster than the hardware refresh cycle. Providers started summarizing aggressively, pushing back on customers who wanted to advertise /24s for traffic engineering, and progressively tightening the longest prefix they’d accept at the edge. It was a real problem on both sides of every BGP session, and the fix was always the same conversation: “we’d love to carry all your cute disparate /25 CIDR blocks, but my RIBs are a little sore.”

Twenty-five years later, the table is pushing toward a million prefixes. The hardware got bigger, and we quietly learned to live with a routing system whose growth has no architectural ceiling.

So when I read this IPv8 draft and got to the part where the /16 minimum injectable prefix rule effectively caps the global table on the order of one entry per originating ASN, dropping us from ~900K prefixes to something closer to ~150–200K in steady state, I performed Balki’s dance of joy in my head (look it up, youngins!) and was ready to sign up on the spot. A bounded global routing table, WHOIS8 validation that meaningfully raises the bar on prefix hijacking, and a Cost Factor metric that actually accumulates end-to-end across AS boundaries instead of stopping at the edge. That’s three things the younger me wanted twenty years ago, bundled into one draft.

Of note: this is an individual -00 Internet-Draft with no working-group adoption or visible industry backing yet. Plenty of RFCs started this way, but it’s a design document at this stage, not a standards-track specification.

But it’s not all puppies and rainbows. And the reason why gets to something more important: IPv6 didn’t struggle because it solved the wrong problem.
It struggled because it solved only one problem in a system where operators needed several to be solved concurrently.

Before I dig into why that matters for IPv8, let’s take a step back and consider the history of IP in general, because the reasons a proposal like this is hard to ship are the same reasons IPv6 is still stuck at half the internet three decades in. IPv4 has been carrying the internet since 1981, and its 32-bit address space, roughly 4.3 billion addresses, was declared exhausted at the IANA level in 2011. IPv6 was ratified as the official successor back in 1998 with a 128-bit address space, and despite nearly three decades of standards work, deployment campaigns, and World IPv6 Day t-shirts (who doesn’t love a good #nerd shirt?), it still carries a minority of overall traffic, even if the telecom percentage is now more than half. That’s the backdrop against which IPv8 is proposed.

IPv8 is a proposed 64-bit successor to IPv4 that pairs an expanded address space with a unified management architecture. Addresses take the form r.r.r.r.n.n.n.n, where the first 32 bits encode an ASN and the last 32 are an IPv4-semantic host address. When r.r.r.r = 0.0.0.0, the address is IPv4, which the draft leans on to argue IPv4 is a proper subset of IPv8 and no flag day or dual-stack phase is needed. Beyond addressing, the draft specifies a “Zone Server” that collapses DHCP, DNS, NTP, OAuth2 auth, telemetry, route validation, ACLs, and IPv4↔IPv8 translation onto one platform. It also introduces an end-to-end Cost Factor routing metric, the /16 minimum prefix rule mentioned above, and mandatory egress validation that drops any packet without a matching DNS lookup and WHOIS8-registered route.

How IPv8 Differs From IPv6

IPv6 was scoped narrowly: solve address exhaustion.
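The draft’s addressing claim is concrete enough to sketch. Here is a minimal illustration of the r.r.r.r.n.n.n.n layout described above: 64 bits, upper 32 encoding the originating ASN, lower 32 an IPv4-semantic host address. The function names are my own, since no implementation exists.

```python
# Hypothetical sketch of the draft's 64-bit address layout. The helper
# names (pack_ipv8, unpack_ipv8, to_dotted) are my own illustration;
# the draft describes the layout, not an API.
import ipaddress

def pack_ipv8(asn: int, host_v4: str) -> int:
    """Encode an ASN plus an IPv4-semantic host part into 64 bits."""
    host = int(ipaddress.IPv4Address(host_v4))
    return (asn << 32) | host          # upper 32 = ASN, lower 32 = host

def unpack_ipv8(addr: int) -> tuple[int, str]:
    """Split a 64-bit IPv8 address back into (ASN, host)."""
    return addr >> 32, str(ipaddress.IPv4Address(addr & 0xFFFFFFFF))

def is_plain_ipv4(addr: int) -> bool:
    # The compatibility claim: a zero ASN prefix (r.r.r.r = 0.0.0.0)
    # means the address is just IPv4, so IPv4 is a proper subset of IPv8.
    return addr >> 32 == 0

def to_dotted(addr: int) -> str:
    """Render in the draft's r.r.r.r.n.n.n.n notation."""
    return ".".join(str((addr >> s) & 0xFF) for s in range(56, -1, -8))
```

Under these assumptions, `to_dotted(pack_ipv8(0, "192.0.2.1"))` yields `0.0.0.0.192.0.2.1`, the encoding of a plain IPv4 address, while `pack_ipv8(64512, "198.51.100.7")` (64512 is a private-use ASN) carries the originating AS in the address itself.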
It went to 128 bits, modernized some header mechanics, and left DHCP, DNS, auth, telemetry, and routing security to evolve on their own, which, thirty years later, they mostly still haven’t in any coordinated way. Transition assumed dual-stack everywhere until IPv4 could eventually be retired. Eventually is doing some heavy lifting in that sentence.

IPv8’s authors argue exhaustion is only one of three structural IPv4 failures, the others being management fragmentation and unbounded, unvalidated BGP, and they try to solve all three at once while rejecting dual-stack outright. Addressing, routing, identity, policy, and telemetry are treated as one system. That’s either exactly what the industry needed, or exactly why it won’t ship.

Why IPv6 Adoption Stalled

We’ll get into the reasons below, but it’s worth looking at where things actually stand. The headline numbers people quote for IPv6 come from Google, APNIC, and Cloudflare, all of which measure eyeball-to-content traffic, users reaching public services. Here’s how that breaks down by country as of early 2026 (Is that FRANCE leading the way?!?):

[Chart: IPv6 adoption by country, early 2026]

Dual-stack did most of the damage. Running both protocols in parallel roughly doubled the config, monitoring, firewall, and troubleshooting surface area with nothing new to show for it operationally. Cost was immediate; benefit was deferred to a day that kept sliding to the right. Every network engineer who has debugged a dual-stack MTU issue at 2am has opinions about this.

Carrier-grade NAT finished the job. Once ISPs could stretch IPv4 with CGNAT, the exhaustion crisis stopped being acute and quietly became someone else’s problem, specifically the problem of whoever was trying to run a peer-to-peer protocol through three layers of translation. Add a non-backward-compatible header and 128-bit colon-hex notation that fights operator muscle memory, and the business case never really came together.
We’ve spent three decades turning the “IPv6 is coming” war cry into the networking equivalent of fusion power.

The Enterprise Internal-Network Blind Spot

The country-level numbers above tell you what mobile carriers and residential ISPs have shipped. They don’t tell you anything about the LAN side of the corporate firewall, which is a completely different story. Internal enterprise IPv6 adoption is sitting somewhere between 20% and 30% and has barely moved in a decade, a gap the headline statistics quietly gloss over. A few data points worth knowing:

- RFC 9386 (IPv6 Deployment Status), the closest thing to an official IETF status report, surveyed European service providers in 2020 and found the enterprise segment lagging mobile and fixed broadband even when measured from the provider’s perspective. Internal deployment numbers were mostly not collected because they were understood to be negligible.
- HexaBuild’s IPv6 Adoption Reports from 2018 and 2020 explicitly call out that “many commercial enterprises still lack IPv6 connectivity at their Internet perimeters and don’t have any IPv6 network connectivity in their internal networks.” Follow-on coverage hasn’t meaningfully changed that framing.
- OMB Memorandum M-21-07 required US federal agencies to hit 80% IPv6-only on internal assets by September 30, 2025. As of October 2025, no federal agency has publicly announced reaching that threshold. This is a mandate with five years of runway, presidential-memo weight, and FAR procurement backing, and it still missed its own deadline across essentially every agency.

The reasons internal adoption is stuck are painfully mundane, and every network engineer reading this will recognize them:

RFC 1918 solved the address problem thirty years ago. 10.0.0.0/8 gives you 16 million addresses. Unless you’re a hyperscaler or you’ve acquired your way into overlapping subnet hell, that’s functionally infinite.
It’s hard to sell a renumbering project to a CFO when the existing scheme has never once failed to have enough addresses.

Every piece of tooling assumes IPv4. Firewalls, load balancers, IPAM, NetFlow collectors, ACL generators, SIEM parsers, monitoring dashboards, runbooks, change management templates, and the regex in that one critical Perl script from 2008, all of it was written for dotted-quad.

Dual-stack means maintaining two of everything with no operational payoff. Troubleshooting costs roughly double on top of that. Anyone who has tried to correlate a dual-stack flow across a load balancer, a WAF, and three microservices knows exactly why executives didn’t approve the project. The failure modes aren’t symmetric, either: an IPv6-only path can break in ways that leave the IPv4 path working, which means “it works on my machine” becomes “it works on my protocol family.”

Security teams often see IPv6 as a new attack surface rather than modernized infrastructure. Auto-configuration and neighbor discovery behave differently enough from ARP that existing segmentation, spoofing, and rogue-device playbooks need to be rewritten. For a team already underwater on IPv4 incidents, opting into a second set of attack patterns is a hard sell.

There’s no customer-visible benefit. The user doesn’t care what protocol their apps run on internally. The CIO/CISO might (ok, for sure) have an opinion, but the CFO definitely doesn’t.

This is actually a stronger argument for the IPv8 approach than the draft itself makes. The reason IPv6 bounced off the enterprise LAN is that it offered zero operational improvement over what RFC 1918 and NAT were already providing. IPv8’s pitch, that IPv4 is a proper subset, that internal networks keep their existing addressing, and that the management story is the value proposition rather than the address space, is at least aimed at the right problem.

Pros & Cons

No proposal this ambitious gets everything right or everything wrong, and IPv8 is no exception.
A few things it nails, a few things it doesn’t, and one quiet standout worth calling out even if the rest of the draft never ships.

Pros

Backward compatibility is the one thing this gets right that IPv6 got wrong. Encoding IPv4 as IPv8 with a zero ASN prefix means existing applications, RFC 1918 networks, and CGNAT deployments don’t need to change to keep working. If that claim holds up in implementation, it sidesteps the single biggest political failure of the IPv6 transition, the one where you had to convince every stakeholder in the chain to move at the same time for anyone to benefit.

The management-fragmentation critique is strong, and the answer makes a lot of sense. Managing a network through protocols specified from disparate angles doesn’t exactly evoke a thoughtful design pattern; it feels more like whack-a-mole. DHCP, DNS, syslog, SNMP, and auth really were specified independently over four decades with no shared identity or telemetry model, and anyone who’s ever tried to correlate an incident across them knows the pain. A Zone Server with OAuth2/JWT as the common substrate is a reasonable swing at it, and it’s refreshing to see a proposal treat operations as a first-class concern instead of an exercise left to the reader.

Cost Factor is the routing metric OSPF and EIGRP always wanted to be. CF accumulates seven signals (RTT, loss, congestion window state, session stability, link capacity, economic policy, and great-circle distance as a physics floor) end-to-end across AS boundaries, which is exactly where OSPF and EIGRP stop being useful. The geographic component is the clever bit: no path can measure faster than the speed of light over the great-circle distance allows, so a path that appears better than physics permits is flagged as an anomaly instead of silently poisoning route selection. That’s a better hijack detector than most of what we have today, and it falls out of the metric for free.

Honorable mention: bounded routing table.
Already covered in the intro, but worth restating that the /16 minimum-prefix rule plus mandatory WHOIS8 validation is the structural fix for both unbounded RIB growth and prefix hijacking. If any single piece of this draft gets adopted à la carte, this is the one I’d bet on.

Cons

“No dual stack” understates the deployment reality. IPv4 packets transit an IPv8 router fine, but anything that actually uses the ASN prefix (new header fields, A8 records, AF_INET8 sockets, 8to4 tunneling, WHOIS8 egress validation) requires updated stacks, resolvers, middleboxes, firewalls, and applications. Backward-compatible is not the same as zero deployment cost, and the draft blurs the two in a way that will feel familiar to anyone who remembers the original “IPv6 is a drop-in replacement” sales pitch.

The Zone Server is a massive trust and failure domain. This is the part that should make operators nervous. We’ve spent the last twenty years decomposing monoliths, breaking apart control planes, distributing systems, and reducing blast radius. The Zone Server pulls DHCP, DNS, auth, telemetry, validation, and policy back into a single logical system. Even with active/active HA, it’s a high-value target, it expands the trust boundary significantly, and a bad day becomes a very bad day. We’ve seen this pattern before in other control-plane centralizations. It works great…until it doesn’t.

The scope is probably fatal to adoption. Ten companion drafts covering a new IP version, five routing protocols, a new exchange-point architecture, a zone-server platform, support protocols, a MIB, WiFi8, and mandatory NIC certification with hardware-enforced rate limits is the opposite of how the IETF actually ships things. The institutional motto is “rough consensus and running code”, not “ten coordinated drafts and a reference architecture.” I love the crazy ambition, but narrow, incrementally deployable specs get adopted. Monolithic suites rarely do; just ask OSI.
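Since the /16 rule and the Cost Factor physics floor are the pieces I’d bet on surviving, here is a minimal sketch of how an edge validator could apply both. Everything here (the function names, the 200 km/ms fiber constant, the ASN-count arithmetic) is my own illustrative assumption; the draft specifies the rules, not this code.

```python
# Hypothetical sketch of edge validation under the draft's /16 rule and
# Cost Factor's great-circle physics floor. Function names, the 200 km/ms
# fiber figure, and the thresholds are illustrative assumptions.
import math

MAX_INJECTABLE_PREFIX_LEN = 16   # /17 and longer never enter the global table

def accept_announcement(prefix_len: int) -> bool:
    """Edge filter: only /16 or shorter prefixes are injectable."""
    # With a handful of /16-or-shorter aggregates per origin ASN and
    # very roughly 80K active ASNs, the steady-state table lands in the
    # low hundreds of thousands of entries instead of ~900K.
    return prefix_len <= MAX_INJECTABLE_PREFIX_LEN

def great_circle_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Haversine distance between two points on Earth, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def rtt_floor_ms(distance_km: float) -> float:
    """Minimum possible RTT: light in fiber covers roughly 200 km per ms."""
    return 2 * distance_km / 200.0

def path_is_plausible(measured_rtt_ms: float,
                      lat1: float, lon1: float,
                      lat2: float, lon2: float) -> bool:
    # An RTT below the physics floor means the advertised metrics are
    # impossible: flag the path as an anomaly instead of preferring it.
    return measured_rtt_ms >= rtt_floor_ms(great_circle_km(lat1, lon1, lat2, lon2))
```

New York to London is roughly 5,600 km great-circle, which puts the RTT floor around 56 ms; a path over that geography advertising a 20 ms RTT would be flagged rather than selected, which is the hijack-detection property falling out of the metric for free.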
The Real Roadblock: Incentives

IPv8 won’t fail because it’s too ambitious. It will fail because no one with budget authority is experiencing enough pain to justify replacing the system.

For it to succeed, the RIRs would need to stand up WHOIS8 as a high-availability egress-gating service, and RPKI, a much narrower version of the same idea, is still partially deployed fifteen years in (don’t get Chase started). At least one major vendor (Cisco, Juniper, Arista, Nokia, or the merchant-silicon ecosystem) would need to publicly commit to shipping IPv8 forwarding, certified NIC firmware, and Zone Server reference code, while somehow reconciling the “just a software update” framing with the mandatory NIC certification and hardware rollback prevention the draft also requires. And the hyperscalers, who have already solved VPC overlap and multi-cloud routing on their own terms, would need a reason to adopt a standard that constrains their existing architecture.

Meanwhile, CGNAT works well enough. Hyperscalers have already built their own solutions. And operational pain sits with engineers, not executives, which is the same incentive gap that killed IPv6 momentum. The draft’s answer, that Cost Factor will naturally incentivize IPv4 transit ASNs to upgrade because 8to4 paths measure slower, is clever but requires enough IPv8 traffic to exist for the signal to register, which is the same chicken-and-egg problem IPv6 has been losing for thirty years.

There’s a faint echo here of other efforts like segment routing and SD-WAN, where pieces of this vision are already being adopted, just not as a single unified system. That’s probably the shape of whatever actually ships.

Bottom Line

The diagnosis is on point. Management fragmentation, unbounded BGP, unauthenticated routing, and CGNAT’s drag on peer-to-peer protocols are real problems that IPv6 didn’t address and that the industry has mostly absorbed as permanent friction in its engineering and operational playbooks.
IPv6 addressed only one of IPv4’s problems, exhaustion. IPv8 tries to address all of them at once, and that’s both its strength and the reason it probably won’t ship. If anything from this proposal survives, it will likely be the smaller pieces (stronger route validation, better routing metrics, more cohesive management models) adopted incrementally rather than as a full replacement. Which is a bit of a shame, because a bounded routing table alone would have solved one of the hardest conversations of my early career.

IPv8 is what the internet might look like if it were designed today. Unfortunately, the internet we have is the one that has to adopt it.

What do you think? Come at me and my IPv8 hot takes!