IPv8 Would Fix My Routing Tables. It Will Never Ship.
Anyone who worked on a service provider backbone in the late 90s or early 2000s remembers the squeeze. Cisco 7500s and early GSRs came with RAM budgets that looked generous at install and felt terrifying three years later, and the global BGP table kept growing faster than the hardware refresh cycle. Providers started summarizing aggressively, pushing back on customers who wanted to advertise /24s for traffic engineering, and progressively raising the minimum prefix length they’d accept at the edge. It was a real problem on both sides of every BGP session, and the fix was always the same conversation: “we’d love to carry all your cute disparate /25 CIDR blocks, but my RIBs are a little sore.”

Twenty-five years later, the table is pushing toward a million prefixes. The hardware got bigger, and we quietly learned to live with a routing system whose growth has no architectural ceiling. So when I read this IPv8 draft and got to the part where the /16 minimum injectable prefix rule effectively caps the global table on the order of one entry per originating ASN, dropping us from ~900K prefixes to something closer to ~150–200K in steady state, I performed Balki’s dance of joy in my head (look it up, youngins!) and was ready to sign up on the spot. A bounded global routing table, WHOIS8 validation that meaningfully raises the bar on prefix hijacking, and a Cost Factor metric that actually accumulates end-to-end across AS boundaries instead of stopping at the edge. That’s three things the younger me wanted twenty years ago, bundled into one draft.

Of note: this is an individual -00 Internet-Draft with no working-group adoption or visible industry backing yet. Plenty of RFCs started this way, but it's a design document at this stage, not a standards-track specification.

But it’s not all puppies and rainbows. And the reason why gets to something more important: IPv6 didn’t struggle because it solved the wrong problem.
It struggled because it solved only one problem in a system where operators needed several to be solved concurrently. Before I dig into why that matters for IPv8, let’s take a step back and consider the history of IP in general, because the reasons a proposal like this is hard to ship are the same reasons IPv6 is still stuck at half the internet three decades in.

IPv4 has been carrying the internet since 1981, and its 32-bit address space, roughly 4.3 billion addresses, was declared exhausted at the IANA level in 2011. IPv6 was ratified as the official successor back in 1998 with a 128-bit address space, and despite nearly three decades of standards work, deployment campaigns, and World IPv6 Day t-shirts (who doesn’t love a good #nerd shirt?), it still carries a minority of overall traffic, even if the telecom percentage is now more than half. That’s the backdrop against which IPv8 is proposed.

IPv8 is a proposed 64-bit successor to IPv4 that pairs an expanded address space with a unified management architecture. Addresses take the form r.r.r.r.n.n.n.n, where the first 32 bits encode an ASN and the last 32 are an IPv4-semantic host address. When r.r.r.r = 0.0.0.0, the address is IPv4, which the draft leans on to argue IPv4 is a proper subset of IPv8 and no flag day or dual-stack phase is needed. Beyond addressing, the draft specifies a “Zone Server” that collapses DHCP, DNS, NTP, OAuth2 auth, telemetry, route validation, ACLs, and IPv4↔IPv8 translation onto one platform. It also introduces an end-to-end Cost Factor routing metric, the /16 minimum prefix rule mentioned above, and mandatory egress validation that drops any packet without a matching DNS lookup and WHOIS8-registered route.
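To make the r.r.r.r.n.n.n.n format concrete, here is a minimal sketch of the encoding as I read it from the draft. The helper names are mine; the draft defines only the notation, not any API.

```python
import ipaddress

def ipv8_from(asn: int, ipv4: str) -> str:
    """Render a 64-bit IPv8 address as r.r.r.r.n.n.n.n: the high 32 bits
    carry an ASN, the low 32 bits an IPv4-semantic host address."""
    r = ipaddress.IPv4Address(asn)  # reuse dotted-quad rendering for the ASN half
    return f"{r}.{ipaddress.IPv4Address(ipv4)}"

def parse_ipv8(addr: str):
    """Split r.r.r.r.n.n.n.n back into (asn, ipv4).
    A bare IPv4 address is treated as IPv8 with a zero ASN prefix."""
    octets = addr.split(".")
    if len(octets) == 4:
        return 0, addr
    if len(octets) != 8:
        raise ValueError("not an IPv8 address")
    asn = int(ipaddress.IPv4Address(".".join(octets[:4])))
    return asn, ".".join(octets[4:])

# AS 65536 renders as 0.1.0.0 in dotted-quad, so:
print(ipv8_from(65536, "10.0.0.1"))   # -> 0.1.0.0.10.0.0.1
print(parse_ipv8("10.0.0.1"))         # -> (0, '10.0.0.1')
```

The "proper subset" claim falls out of the parser: any legacy dotted-quad is already a valid IPv8 address with ASN 0.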
How IPv8 Differs From IPv6

IPv6 was scoped narrowly: solve address exhaustion. It went to 128 bits, modernized some header mechanics, and left DHCP, DNS, auth, telemetry, and routing security to evolve on their own, which, thirty years later, they mostly still haven’t in any coordinated way. Transition assumed dual-stack everywhere until IPv4 could eventually be retired. Eventually is doing some heavy lifting in that sentence.

IPv8's authors argue exhaustion is only one of three structural IPv4 failures, the others being management fragmentation and unbounded, unvalidated BGP, and they try to solve all three at once while rejecting dual-stack outright. Addressing, routing, identity, policy, and telemetry are treated as one system. That’s either exactly what the industry needed, or exactly why it won’t ship.

Why IPv6 Adoption Stalled

We’ll get into the reasons below, but it’s worth looking at where things actually stand. The headline numbers people quote for IPv6 come from Google, APNIC, and Cloudflare, all of which measure eyeball-to-content traffic, users reaching public services. Here’s how that breaks down by country as of early 2026 (Is that FRANCE leading the way?!?).

Dual-stack did most of the damage. Running both protocols in parallel roughly doubled the config, monitoring, firewall, and troubleshooting surface area with nothing new to show for it operationally. Cost was immediate; benefit was deferred to a day that kept sliding to the right. Every network engineer who has debugged a dual-stack MTU issue at 2am has opinions about this.

Carrier-grade NAT finished the job. Once ISPs could stretch IPv4 with CGNAT, the exhaustion crisis stopped being acute and quietly became someone else’s problem, specifically the problem of whoever was trying to run a peer-to-peer protocol through three layers of translation. Add a non-backward-compatible header and 128-bit colon-hex notation that fights operator muscle memory, and the business case never really came together.
We’ve spent three decades turning the “IPv6 is coming” war cry into the networking equivalent of fusion power.

The Enterprise Internal-Network Blind Spot

The country-level numbers above tell you what mobile carriers and residential ISPs have shipped. They don’t tell you anything about the LAN side of the corporate firewall, which is a completely different story. Internal enterprise IPv6 adoption is sitting somewhere between 20% and 30% and has barely moved in a decade, a gap the headline statistics quietly gloss over. A few data points worth knowing:

RFC 9386 (IPv6 Deployment Status), the closest thing to an official IETF status report, surveyed European service providers in 2020 and found the enterprise segment lagging mobile and fixed broadband even when measured from the provider’s perspective. Internal deployment numbers were mostly not collected because they were understood to be negligible.

HexaBuild's IPv6 Adoption Reports from 2018 and 2020 explicitly call out that “many commercial enterprises still lack IPv6 connectivity at their Internet perimeters and don’t have any IPv6 network connectivity in their internal networks.” Follow-on coverage hasn’t meaningfully changed that framing.

OMB Memorandum M-21-07 required US federal agencies to hit 80% IPv6-only on internal assets by September 30, 2025. As of October 2025, no federal agency has publicly announced reaching that threshold. This is a mandate with five years of runway, presidential-memo weight, and FAR procurement backing, and it still missed its own deadline across essentially every agency.

The reasons internal adoption is stuck are painfully mundane, and every network engineer reading this will recognize them:

RFC 1918 solved the address problem thirty years ago. 10.0.0.0/8 gives you 16 million addresses. Unless you’re a hyperscaler or you’ve acquired your way into overlapping subnet hell, that’s functionally infinite. It’s hard to sell a renumbering project to a CFO when the existing scheme has never once failed to have enough addresses.

Every piece of tooling assumes IPv4. Firewalls, load balancers, IPAM, NetFlow collectors, ACL generators, SIEM parsers, monitoring dashboards, runbooks, change management templates, and the regex in that one critical Perl script from 2008: all of it was written for dotted-quad.

Dual-stack means maintaining two of everything with no operational payoff, and troubleshooting costs roughly double. Anyone who has tried to correlate a dual-stack flow across a load balancer, a WAF, and three microservices knows exactly why executives didn’t approve the project.

The failure modes aren’t symmetric. An IPv6-only path can break in ways that leave the IPv4 path working, which means “it works on my machine” becomes “it works on my protocol family.”

Security teams often see IPv6 as a new attack surface rather than a modernized infrastructure. Auto-configuration and neighbor discovery behave differently enough from ARP that existing segmentation, spoofing, and rogue-device playbooks need to be rewritten. For a team already underwater on IPv4 incidents, opting into a second set of attack patterns is a hard sell.

There’s no customer-visible benefit. The user doesn’t care what protocol their apps run on internally. The CIO/CISO might (ok, for sure) have an opinion, but the CFO definitely doesn’t.

This is actually a stronger argument for the IPv8 approach than the draft itself makes. The reason IPv6 bounced off the enterprise LAN is that it offered zero operational improvement over what RFC 1918 and NAT were already providing. IPv8’s pitch, that IPv4 is a proper subset, that internal networks keep their existing addressing, and that the management story is the value proposition rather than the address space, is at least aimed at the right problem.

Pros & Cons

No proposal this ambitious gets everything right or everything wrong, and IPv8 is no exception.
A few things it nails, a few things it doesn’t, and one quiet standout worth calling out even if the rest of the draft never ships.

Pros

Backward compatibility is the one thing this gets right that IPv6 got wrong. Encoding IPv4 as IPv8 with a zero ASN prefix means existing applications, RFC 1918 networks, and CGNAT deployments don’t need to change to keep working. If that claim holds up in implementation, it sidesteps the single biggest political failure of the IPv6 transition, the one where you had to convince every stakeholder in the chain to move at the same time for anyone to benefit.

The management-fragmentation critique is strong, and the answer makes a lot of sense. Managing a network from disparate angles doesn’t exactly evoke a thoughtful design pattern; it feels more like whack-a-mole. DHCP, DNS, syslog, SNMP, and auth really were specified independently over four decades with no shared identity or telemetry model, and anyone who’s ever tried to correlate an incident across them knows the pain. A Zone Server with OAuth2/JWT as the common substrate is a reasonable swing at it, and it’s refreshing to see a proposal treat operations as a first-class concern instead of an exercise left to the reader.

Cost Factor is the routing metric OSPF and EIGRP always wanted to be. CF accumulates seven signals (RTT, loss, congestion window state, session stability, link capacity, economic policy, and great-circle distance as a physics floor) end-to-end across AS boundaries, which is exactly where OSPF and EIGRP stop being useful. The geographic component is the clever bit: no path can measure faster than the speed of light over the great-circle distance allows, so a path that appears better than physics permits is flagged as an anomaly instead of silently poisoning route selection. That’s a better hijack detector than most of what we have today, and it falls out of the metric for free.

Honorable mention: bounded routing table. Already covered in the intro, but worth restating that the /16 minimum-prefix rule plus mandatory WHOIS8 validation is the structural fix for both unbounded RIB growth and prefix hijacking. If any single piece of this draft gets adopted à la carte, this is the one I’d bet on.

Cons

“No dual stack” understates the deployment reality. IPv4 packets transit an IPv8 router fine, but anything that actually uses the ASN prefix (new header fields, A8 records, AF_INET8 sockets, 8to4 tunneling, WHOIS8 egress validation) requires updated stacks, resolvers, middleboxes, firewalls, and applications. Backward-compatible is not the same as zero deployment cost, and the draft blurs the two in a way that will feel familiar to anyone who remembers the original “IPv6 is a drop-in replacement” sales pitch.

The Zone Server is a massive trust and failure domain. This is the part that should make operators nervous. We've spent the last twenty years decomposing monoliths, breaking apart control planes, distributing systems, and reducing blast radius. The Zone Server pulls DHCP, DNS, auth, telemetry, validation, and policy back into a single logical system. Even with active/active HA, it’s a high-value target, it expands the trust boundary significantly, and a bad day becomes a very bad day. We’ve seen this pattern before in other control-plane centralizations. It works great…until it doesn’t.

The scope is probably fatal to adoption. Ten companion drafts covering a new IP version, five routing protocols, a new exchange-point architecture, a zone-server platform, support protocols, a MIB, WiFi8, and mandatory NIC certification with hardware-enforced rate limits is the opposite of how the IETF actually ships things. The institutional motto is “rough consensus and running code”, not “ten coordinated drafts and a reference architecture.” I love the crazy ambition, but narrow, incrementally deployable specs get adopted. Monolithic suites rarely do; just ask OSI.
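The physics-floor idea behind Cost Factor is easy to demonstrate. A rough sketch, assuming fiber propagation at roughly two-thirds of c; the constants and function names here are mine, not the draft's:

```python
import math

C_FIBER_KM_PER_MS = 200.0  # ~2/3 the speed of light, a common rule of thumb for fiber

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two points on Earth, in km."""
    R = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def rtt_floor_ms(km):
    """Minimum possible round-trip time over the great circle."""
    return 2 * km / C_FIBER_KM_PER_MS

def looks_anomalous(measured_rtt_ms, km):
    """A path that measures faster than light is an anomaly, not a better route."""
    return measured_rtt_ms < rtt_floor_ms(km)

# NYC <-> London is roughly 5,570 km, so the floor is around 56 ms RTT.
km = great_circle_km(40.71, -74.01, 51.51, -0.13)
print(looks_anomalous(20.0, km))  # a 20 ms "path" to London is physically impossible
print(looks_anomalous(80.0, km))  # 80 ms is plausible
```

A hijacked route that detours traffic through a distant AS tends to fail the opposite test (measuring much slower than peers on the same great circle), but the hard floor is what makes impossible claims cheap to reject.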
The Real Roadblock: Incentives

IPv8 won’t fail because it’s too ambitious. It will fail because no one with budget authority is experiencing enough pain to justify replacing the system.

For it to succeed, the RIRs would need to stand up WHOIS8 as a high-availability egress-gating service, and RPKI, a much narrower version of the same idea, is still partially deployed fifteen years in (don’t get Chase started). At least one major vendor (Cisco, Juniper, Arista, Nokia, or the merchant-silicon ecosystem) would need to publicly commit to shipping IPv8 forwarding, certified NIC firmware, and Zone Server reference code, while somehow reconciling the “just a software update” framing with the mandatory NIC certification and hardware rollback prevention the draft also requires. And the hyperscalers, who have already solved VPC overlap and multi-cloud routing on their own terms, would need a reason to adopt a standard that constrains their existing architecture.

Meanwhile, CGNAT works well enough. Hyperscalers have already built their own solutions. And operational pain sits with engineers, not executives, which is the same incentive gap that killed IPv6 momentum. The draft's answer, that Cost Factor will naturally incentivize IPv4 transit ASNs to upgrade because 8to4 paths measure slower, is clever but requires enough IPv8 traffic to exist for the signal to register, which is the same chicken-and-egg problem IPv6 has been losing for thirty years.

There’s a faint echo here of other efforts like segment routing and SD-WAN, where pieces of this vision are already being adopted, just not as a single unified system. That’s probably the shape of whatever actually ships.

Bottom Line

The diagnosis is on point. Management fragmentation, unbounded BGP, unauthenticated routing, and CGNAT's drag on peer-to-peer protocols are real problems that IPv6 didn’t address and that the industry has mostly absorbed as permanent friction in its engineering and operational playbooks.
IPv6 addressed only one of IPv4's structural failures, exhaustion. IPv8 tries to address all of them at once, and that’s both its strength and the reason it probably won’t ship. If anything from this proposal survives, it will likely be the smaller pieces (stronger route validation, better routing metrics, more cohesive management models) adopted incrementally rather than as a full replacement. Which is a bit of a shame, because a bounded routing table alone would have solved one of the hardest conversations of my early career.

IPv8 is what the internet might look like if it were designed today. Unfortunately, the internet we have is the one that has to adopt it. What do you think? Come at me and my IPv8 hot takes!

IPv6 Virtual Server to IPv4 pools translation
Hi all, we are going to configure the scenario below on BIG-IP AWAF VE:

Source (client from IPv6) --> Virtual Server (IPv6) --> Pool (servers in IPv4)

As per the reference article below, BIG-IP will automatically translate as follows: connections to an IPv6 virtual server that are forwarded to an IPv4 destination will be translated to the IPv4 self IP address of the destination VLAN.

Ref Article: https://my.f5.com/manage/s/article/K3326

I want the actual source IP information on the physical server; what is the solution to achieve it? Please let me know if X-Forwarded-For will solve my issue.

F5 LTM SNAT: only 1 outgoing connection, multiple internal clients
I have an F5 LTM SNAT configured:

ltm snat /Common/outgoing_snat_v6 {
    description "IPv6 SNAT translation"
    mirror enabled
    origins {
        ::/0 { }
    }
    snatpool /Common/outgoing_snatpool_v6
    vlans {
        /Common/internal
    }
    vlans-enabled
}

... with a translation configured as:

ltm snat-translation /Common/ext_SNAT_v6 {
    address 2607:f160:c:301d::63
    inherited-traffic-group true
    traffic-group /Common/traffic-group-1
}

... with the snatpool configured as:

ltm snatpool /Common/outgoing_snatpool_v6 {
    members {
        /Common/ext_SNAT_v6
    }
}

... and finally, with the SNAT type set to automap:

vs_pool__snat_type {
    value automap
}

The goal is to achieve a single Diameter connection (single source IP and port) between the F5 and the external element, while internally multiple Diameter clients connect via the F5 to the external element. However, what ends up happening with this SNAT configuration is that multiple outgoing Diameter connections to the external Diameter element are opened, the only difference between them being the source port (source IP, destination IP, and destination port remain the same). The external element cannot handle multiple connections from the same origin IP and the same Diameter entity (the internal clients are all configured to use the same Origin-Host during the Capabilities Exchange phase). Is there a way to configure the F5 to funnel all the internal connections into a single outgoing one?

Create IPv6 self-IP with Route Domains on 10.2.3
We need to create IPv6 self-IPs in a non-default Route Domain, but we are getting the following error:

The vlan () for the specified self IP () must be one of the vlans in the associated route domain (0).

It seems the internal F5 logic interprets this as an IP address from Route Domain 0, although we are in a partition which is mapped to Route Domain 4 (when doing it this way, you normally don't need to append the %RD suffix). I verified this also on version 11.x, and there it's not an issue. So is this a bug in version 10.2.3, or do I need to use a special format? Or isn't this kind of setup supported in such an old version? Thank you! Ciao Stefan 🙂

How to check if a string parameter can be an IPv4 or an IPv6 or nothing in an iRule ?
What is the most cycle-efficient way to code this: "How to check if a string parameter can be an IPv4, an IPv6, or nothing in an iRule?" I have already looked at "IP::addr ... mask ... scan ..." without finding any simple, efficient way. Any help? A few lines, a TCL function, or an undocumented iRule command? Many thanks :-)
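The classification logic itself is small; here it is sketched in Python for clarity (in an iRule the usual shape is a TCL `scan` for the four dotted-quad integers plus a `:` check, with `IP::addr` wrapped in `catch` as the validator, but I have not benchmarked those variants):

```python
import ipaddress

def classify(s: str) -> str:
    """Return 'ipv4', 'ipv6', or 'none' for an arbitrary string."""
    try:
        return "ipv%d" % ipaddress.ip_address(s.strip()).version
    except ValueError:
        return "none"

print(classify("10.1.2.3"), classify("2001:db8::1"), classify("hello"))
# -> ipv4 ipv6 none
```

Letting a real address parser decide (rather than a hand-rolled regex) is what keeps edge cases like "999.1.1.1" or "1.2.3" from slipping through as valid.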
Reverse records (PTR) for IPv6

Hi folks, I have an F5 DNS acting as a nameserver, ready for IPv6, but now we have to create the reverse records (PTR) for the clients' subnets, and those subnets contain millions upon millions of addresses, so millions of records. I don't know how we can solve this, with an iRule I guess, or maybe you guys know another method working somewhere. Thanks guys!
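The usual answer at this scale is to synthesize PTR responses on the fly instead of pre-creating millions of records. The name-generation half of that is purely mechanical, sketched here in Python (an on-box solution would be an iRule answering PTR queries, which this does not show):

```python
import ipaddress

def ptr_name(addr: str) -> str:
    """Build the nibble-format ip6.arpa name for an IPv6 address
    (or the in-addr.arpa name for an IPv4 address)."""
    return ipaddress.ip_address(addr).reverse_pointer

print(ptr_name("2001:db8::1"))
# -> 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa
```

Going the other direction (query name back to address, so the responder can fabricate a matching hostname) is the same 32-nibble string reversed, which is why dynamic PTR synthesis stays cheap even over a /32 of IPv6 space.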
NAT IPv6 to IPv6 (NAT66)

Hi, I have a scenario which requires us to do IPv6 to IPv6 NATing (map a private IPv6 to a public IPv6). We are using software version 13.1.1.4 and it doesn't seem to work properly. We tried the following:

1. We configured a SNAT pool with one IPv6 address and assigned it to our IPv6 virtual server. Troubleshooting with tcpdump shows no translation occurs. I found bug ID 681070 in the 14.x release notes, which seems similar: "NAT66 may fail if configured with a single translation address".

2. We then tried to configure the SNAT pool with an IPv6 /124 prefix, resulting in errors from the F5 saying "01020059:3: IP Address :: is invalid, must not be all zeros."

3. We tried using an iRule with a plain "when CLIENT_ACCEPTED, snat ipv6address"; this didn't work either. We received TCL errors: bad IP address format (line 1) TCL error (line 1) (line 1) invoked from within "snat xxxx:6xx0:0001:0100:00xx:0xx5:0104:0/124"

Did anyone successfully configure something like this? Any ideas will be very much appreciated. thanks,

Is it compulsory to enable DNS IPv6 to IPv4 to host an IPv6 listener?
Comment made 1 day ago by Mihir Joshi

Hi, I have a question. Is it compulsory to enable the option "DNS IPv6 to IPv4" if we host an IPv6 listener on BIG-IP DNS (GTM)? We are experiencing a strange issue: users in one European region are not able to connect to the application when they connect from their home Wi-Fi with IPv6 addresses enabled. On GTM we have an IPv6 listener and an IPv4 listener which share the same DNS profile, with the option "DNS IPv6 to IPv4" enabled (Secondary). Because of this, end users receive two records as IPv6 addresses in the format "::xxx.xxx.xxx.xxx". Do you think this could be the reason for the issue we are currently experiencing? When we ask the client to change their IP schema from IPv6 to IPv4, it works perfectly fine. Regards, Mihir

DNS on the BIG-IP: IPv6 to IPv4 Translation
I've been writing some DNS articles over the past couple of months, and I wanted to keep the momentum going with a discussion on IPv6 translation. As a reminder, my first four articles are:

Let's Talk DNS on DevCentral
DNS The F5 Way: A Paradigm Shift
DNS Express and Zone Transfers
The BIG-IP GTM: Configuring DNSSEC

The Address Space Problem

I'm pretty sure all of you have heard about the problem of IPv4 address depletion, so I won't go too crazy on that. But I did want to share one quick analogy of how the IPv4 address space relates to the IPv6 space. There are ~4 billion possible IPv4 addresses and ~3.4 x 10^38 IPv6 addresses. Sometimes when I see a comparison of large numbers like these, it's hard for me to grasp the magnitude of the difference. Here's the analogy that helped put this in perspective: if the entire IPv4 address space was a single drop of water, the IPv6 address space would be the equivalent of 68 times the entire volume of the world's oceans! I can't imagine ever needing more IP address space than that, but I guess we will see.

As IPv4 address space is used up and new IP-enabled devices continue to hit the market, companies need to support and manage existing IPv4 devices and content while transitioning to IPv6. Just last week, ICANN announced that IPv4 addresses are nearing total exhaustion. Leo Vegoda, operational excellence manager at ICANN, said "Redistributing increasingly small blocks of IPv4 address space is not a sustainable way to grow the Internet. IPv6 deployment is a requirement for any network that needs to survive." As companies transition to IPv6, they still face a real issue of handling IPv4 traffic. Despite the need to move to IPv6, the fact is most Internet traffic today is still IPv4. Google has a really cool graph that tracks IPv6 adoption, and they currently report that only 3.5% of all Internet traffic is IPv6.
You would think that the people who developed IPv6 would have made it backward compatible with IPv4, thus making the transition fairly easy and straightforward...but that's not true. This leaves companies in a tough spot. They need a services fabric that is flexible enough to handle both IPv4 and IPv6 at the same time. The good news is that the BIG-IP is the best in the business at doing just that.

BIG-IP Configuration

Let's say you built an IPv6 network and things are running smoothly within your own network...IPv6 talking to IPv6 and all is well. But remember that statistic I mentioned about most of the Internet traffic running IPv4? That creates a big need for your network to translate from IPv6 to IPv4 and back again. The BIG-IP can do this by configuring a DNS profile and assigning it to a virtual server. You can create this DNS profile by navigating to Local Traffic >> Profiles >> Services >> DNS and creating/modifying a DNS profile. There are several options to configure in the DNS profile, but for this article, we are just going to look at the DNS IPv6 to IPv4 translation part. Notice the three DNS IPv6 to IPv4 settings in the screenshot below: DNS IPv6 to IPv4, IPv6 to IPv4 Prefix, and IPv6 to IPv4 Additional Section Rewrite.

The DNS IPv6 to IPv4 setting specifies whether you want the BIG-IP to convert IPv6-formatted IP addresses to IPv4-formatted IP addresses. It has four options:

Disabled: The BIG-IP does not map IPv4 addresses to IPv6 addresses. This is the default setting.

Secondary: The BIG-IP receives an AAAA (IPv6) query and forwards the query to a DNS server. Only if the server fails to return a response does the BIG-IP system send an A (IPv4) query. If the BIG-IP system receives an A response, it prepends a 96-bit user-configured prefix to the record and forwards it to the client.

Immediate: The BIG-IP system receives an AAAA query and forwards the query to a DNS server. The BIG-IP then forwards the first good response from the DNS server to the client. If the system receives an A response first, it prepends a 96-bit prefix to the record and forwards it to the client. If the system receives an AAAA response first, it simply forwards the response to the client. The system disregards the subsequent response from the DNS server.

v4 Only: The BIG-IP receives an AAAA query, but forwards an A query to a DNS server. After receiving an A response from the server, the BIG-IP system prepends a 96-bit user-configured prefix to the record and forwards it to the client. Only select the v4 Only option if you know that all DNS servers are IPv4-only servers.

When you select one of the options listed above (except the "Disabled" option), you must also provide a prefix in the IPv6 to IPv4 Prefix field and make a selection from the IPv6 to IPv4 Additional Section Rewrite list. The IPv6 to IPv4 Prefix specifies the prefix to use for the IPv6-formatted IP addresses that the BIG-IP converts to IPv4-formatted IP addresses. The default is 0:0:0:0:0:0:0:0.

The IPv6 to IPv4 Additional Section Rewrite allows improved network efficiency for both Unicast and Multicast DNS-SD responses. This setting has four options:

Disabled: The BIG-IP does not perform additional rewrite. This is the default setting.

V4 Only: The BIG-IP accepts only A records. The system prepends the 96-bit user-configured prefix (mentioned previously) to a record and returns an IPv6 response to the client.

V6 Only: The BIG-IP accepts only AAAA records and returns an IPv6 response to the client.

Any: The BIG-IP accepts and returns both A and AAAA records. If the DNS server returns an A record in the Additional section of a DNS message, the BIG-IP prepends the 96-bit user-configured prefix to the record and returns an IPv6 response to the client.

Like any configuration change, I would recommend initial testing in a lab to see how your network performs with these settings.
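The "prepend a 96-bit prefix" arithmetic described above is worth seeing once. A sketch in Python; the function name is mine, and the NAT64 well-known prefix 64:ff9b::/96 is used purely as an example value for the IPv6 to IPv4 Prefix field:

```python
import ipaddress

def synthesize_aaaa(prefix: str, ipv4: str) -> str:
    """Embed an A record's IPv4 address in the low 32 bits of a /96
    IPv6 prefix, producing the synthesized AAAA answer."""
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen == 96, "translation prefix must be a /96"
    combined = int(net.network_address) | int(ipaddress.IPv4Address(ipv4))
    return str(ipaddress.IPv6Address(combined))

print(synthesize_aaaa("64:ff9b::/96", "192.0.2.10"))
# -> 64:ff9b::c000:20a
```

This also explains the "::xxx.xxx.xxx.xxx" answers mentioned in the question above: with the default all-zeros prefix, the synthesized AAAA is just the IPv4 address sitting in the low 32 bits.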
This one is pretty straightforward, though. Hopefully this helps with any hesitation you may have with transitioning to an IPv6 network. Go ahead and take advantage of that vast IPv6 space, and let the BIG-IP take care of all the translation work! Stay tuned for more DNS articles, and let me know if you have any specific topics you'd like to see.

One final and related note: check out the F5 CGNAT products page to learn more about seamless migration to IPv6.