IPv8 Would Fix My Routing Tables. It Will Never Ship.
Anyone who worked on a service provider backbone in the late 90s or early 2000s remembers the squeeze. Cisco 7500s and early GSRs came with RAM budgets that looked generous at install and felt terrifying three years later, and the global BGP table kept growing faster than the hardware refresh cycle. Providers started summarizing aggressively, pushing back on customers who wanted to advertise /24s for traffic engineering, and progressively raising the minimum prefix length they’d accept at the edge. It was a real problem on both sides of every BGP session, and the fix was always the same conversation: “we’d love to carry all your cute disparate /25 CIDR blocks, but my RIBs are a little sore.” Twenty-five years later, the table is pushing toward a million prefixes. The hardware got bigger, and we quietly learned to live with a routing system whose growth has no architectural ceiling. So when I read this IPv8 draft and got to the part where the /16 minimum injectable prefix rule effectively caps the global table on the order of one entry per originating ASN, dropping us from ~900K prefixes to something closer to ~150–200K in steady state, I performed Balki’s dance of joy in my head (look it up, youngins!) and was ready to sign up on the spot. A bounded global routing table, WHOIS8 validation that meaningfully raises the bar on prefix hijacking, and a Cost Factor metric that actually accumulates end-to-end across AS boundaries instead of stopping at the edge. That’s three things the younger me wanted twenty years ago, bundled into one draft. Of note: this is an individual -00 Internet-Draft with no working-group adoption or visible industry backing yet. Plenty of RFCs started this way, but it's a design document at this stage, not a standards-track specification. But it’s not all puppies and rainbows. And the reason why gets to something more important: IPv6 didn’t struggle because it solved the wrong problem. 
It struggled because it solved only one problem in a system where operators needed several to be solved concurrently. Before I dig into why that matters for IPv8, let’s take a step back and consider the history of IP in general, because the reasons a proposal like this is hard to ship are the same reasons IPv6 is still stuck at half the internet three decades in. IPv4 has been carrying the internet since 1981, and its 32-bit address space, roughly 4.3 billion addresses, was declared exhausted at the IANA level in 2011. IPv6 was ratified as the official successor back in 1998 with a 128-bit address space, and despite nearly three decades of standards work, deployment campaigns, and World IPv6 Day t-shirts (who doesn’t love a good #nerd shirt?), it still carries a minority of overall traffic, even if the telecom percentage is now more than half. That’s the backdrop against which IPv8 is proposed.

IPv8 is a proposed 64-bit successor to IPv4 that pairs an expanded address space with a unified management architecture. Addresses take the form r.r.r.r.n.n.n.n, where the first 32 bits encode an ASN and the last 32 are an IPv4-semantic host address. When r.r.r.r = 0.0.0.0, the address is IPv4, which the draft leans on to argue IPv4 is a proper subset of IPv8 and no flag day or dual-stack phase is needed. Beyond addressing, the draft specifies a “Zone Server” that collapses DHCP, DNS, NTP, OAuth2 auth, telemetry, route validation, ACLs, and IPv4↔IPv8 translation onto one platform. It also introduces an end-to-end Cost Factor routing metric, the /16 minimum prefix rule mentioned above, and mandatory egress validation that drops any packet without a matching DNS lookup and WHOIS8-registered route.

How IPv8 Differs From IPv6

IPv6 was scoped narrowly: solve address exhaustion.
It went to 128 bits, modernized some header mechanics, and left DHCP, DNS, auth, telemetry, and routing security to evolve on their own, which, thirty years later, they mostly still haven’t in any coordinated way. Transition assumed dual-stack everywhere until IPv4 could eventually be retired. Eventually is doing some heavy lifting in that sentence. IPv8's authors argue exhaustion is only one of three structural IPv4 failures, the others being management fragmentation and unbounded, unvalidated BGP, and they try to solve all three at once while rejecting dual-stack outright. Addressing, routing, identity, policy, and telemetry are treated as one system. That’s either exactly what the industry needed, or exactly why it won’t ship.

Why IPv6 Adoption Stalled

We’ll get into the reasons below, but it’s worth looking at where things actually stand. The headline numbers people quote for IPv6 come from Google, APNIC, and Cloudflare, all of which measure eyeball-to-content traffic, users reaching public services. Here’s how that breaks down by country as of early 2026 (Is that FRANCE leading the way?!?):

Dual-stack did most of the damage. Running both protocols in parallel roughly doubled the config, monitoring, firewall, and troubleshooting surface area with nothing new to show for it operationally. Cost was immediate; benefit was deferred to a day that kept sliding to the right. Every network engineer who has debugged a dual-stack MTU issue at 2am has opinions about this. Carrier-grade NAT finished the job. Once ISPs could stretch IPv4 with CGNAT, the exhaustion crisis stopped being acute and quietly became someone else’s problem, specifically the problem of whoever was trying to run a peer-to-peer protocol through three layers of translation. Add a non-backward-compatible header and 128-bit colon-hex notation that fights operator muscle memory, and the business case never really came together.
We’ve spent three decades turning the “IPv6 is coming” war cry into the networking equivalent of fusion power.

The Enterprise Internal-Network Blind Spot

The country-level numbers above tell you what mobile carriers and residential ISPs have shipped. They don’t tell you anything about the LAN side of the corporate firewall, which is a completely different story. Internal enterprise IPv6 adoption is sitting somewhere between 20% and 30% and has barely moved in a decade, a gap the headline statistics quietly gloss over. A few data points worth knowing:

RFC 9386 (IPv6 Deployment Status), the closest thing to an official IETF status report, surveyed European service providers in 2020 and found the enterprise segment lagging mobile and fixed broadband even when measured from the provider’s perspective. Internal deployment numbers were mostly not collected because they were understood to be negligible.

HexaBuild's IPv6 Adoption Reports from 2018 and 2020 explicitly call out that “many commercial enterprises still lack IPv6 connectivity at their Internet perimeters and don’t have any IPv6 network connectivity in their internal networks.” Follow-on coverage hasn’t meaningfully changed that framing.

OMB Memorandum M-21-07 required US federal agencies to hit 80% IPv6-only on internal assets by September 30, 2025. As of October 2025, no federal agency has publicly announced reaching that threshold. This is a mandate with five years of runway, presidential-memo weight, and FAR procurement backing, and it still missed its own deadline across essentially every agency.

The reasons internal adoption is stuck are painfully mundane, and every network engineer reading this will recognize them:

RFC 1918 solved the address problem thirty years ago. 10.0.0.0/8 gives you 16 million addresses. Unless you’re a hyperscaler or you’ve acquired your way into overlapping subnet hell, that’s functionally infinite.
It’s hard to sell a renumbering project to a CFO when the existing scheme has never once failed to have enough addresses.

Every piece of tooling assumes IPv4. Firewalls, load balancers, IPAM, NetFlow collectors, ACL generators, SIEM parsers, monitoring dashboards, runbooks, change management templates, and the regex in that one critical Perl script from 2008, all of it was written for dotted-quad.

Dual-stack means maintaining two of everything with no operational payoff, and troubleshooting costs roughly double on top of that. Anyone who has tried to correlate a dual-stack flow across a load balancer, a WAF, and three microservices knows exactly why executives didn’t approve the project.

The failure modes aren’t symmetric. An IPv6-only path can break in ways that leave the IPv4 path working, which means “it works on my machine” becomes “it works on my protocol family.”

Security teams often see IPv6 as a new attack surface rather than a modernized infrastructure. Auto-configuration and neighbor discovery behave differently enough from ARP that existing segmentation, spoofing, and rogue-device playbooks need to be rewritten. For a team already underwater on IPv4 incidents, opting into a second set of attack patterns is a hard sell.

There’s no customer-visible benefit. The user doesn’t care what protocol their apps run on internally. The CIO/CISO might (ok, for sure) have an opinion, but the CFO definitely doesn’t.

This is actually a stronger argument for the IPv8 approach than the draft itself makes. The reason IPv6 bounced off the enterprise LAN is that it offered zero operational improvement over what RFC 1918 and NAT were already providing. IPv8’s pitch, that IPv4 is a proper subset, that internal networks keep their existing addressing, and that the management story is the value proposition rather than the address space, is at least aimed at the right problem.

Pros & Cons

No proposal this ambitious gets everything right or everything wrong, and IPv8 is no exception.
A few things it nails, a few things it doesn’t, and one quiet standout worth calling out even if the rest of the draft never ships.

Pros

Backward compatibility is the one thing this gets right that IPv6 got wrong. Encoding IPv4 as IPv8 with a zero ASN prefix means existing applications, RFC 1918 networks, and CGNAT deployments don’t need to change to keep working. If that claim holds up in implementation, it sidesteps the single biggest political failure of the IPv6 transition, the one where you had to convince every stakeholder in the chain to move at the same time for anyone to benefit.

The management-fragmentation critique is strong, and the answer makes a lot of sense. Networking managed from disparate angles doesn’t exactly evoke a thoughtful design pattern; it feels more like a whack-a-mole approach. DHCP, DNS, syslog, SNMP, and auth really were specified independently over four decades with no shared identity or telemetry model, and anyone who’s ever tried to correlate an incident across them knows the pain. A Zone Server with OAuth2/JWT as the common substrate is a reasonable swing at it, and it’s refreshing to see a proposal treat operations as a first-class concern instead of an exercise left to the reader.

Cost Factor is the routing metric OSPF and EIGRP always wanted to be. CF accumulates seven signals (RTT, loss, congestion window state, session stability, link capacity, economic policy, and great-circle distance as a physics floor) end-to-end across AS boundaries, which is exactly where OSPF and EIGRP stop being useful. The geographic component is the clever bit: no path can measure faster than the speed of light over the great circle distance allows, so a path that appears better than physics permits is flagged as an anomaly instead of silently poisoning route selection. That’s a better hijack detector than most of what we have today, and it falls out of the metric for free.

Honorable mention: bounded routing table.
Already covered in the intro, but worth restating that the /16 minimum-prefix rule plus mandatory WHOIS8 validation is the structural fix for both unbounded RIB growth and prefix hijacking. If any single piece of this draft gets adopted à la carte, this is the one I’d bet on.

Cons

“No dual stack” understates the deployment reality. IPv4 packets transit an IPv8 router fine, but anything that actually uses the ASN prefix (new header fields, A8 records, AF_INET8 sockets, 8to4 tunneling, WHOIS8 egress validation) requires updated stacks, resolvers, middleboxes, firewalls, and applications. Backward-compatible is not the same as zero deployment cost, and the draft blurs the two in a way that will feel familiar to anyone who remembers the original “IPv6 is a drop-in replacement” sales pitch.

The Zone Server is a massive trust and failure domain. This is the part that should make operators nervous. We've spent the last twenty years decomposing monoliths, breaking apart control planes, distributing systems, and reducing blast radius. The Zone Server pulls DHCP, DNS, auth, telemetry, validation, and policy back into a single logical system. Even with active/active HA, it’s a high-value target, it expands the trust boundary significantly, and a bad day becomes a very bad day. We’ve seen this pattern before in other control-plane centralizations. It works great…until it doesn’t.

The scope is probably fatal to adoption. Ten companion drafts covering a new IP version, five routing protocols, a new exchange-point architecture, a zone-server platform, support protocols, a MIB, WiFi8, and mandatory NIC certification with hardware-enforced rate limits is the opposite of how the IETF actually ships things. The institutional motto is “rough consensus and running code”, not “ten coordinated drafts and a reference architecture.” I love the crazy ambition, but narrow, incrementally deployable specs get adopted. Monolithic suites rarely do, just ask OSI.
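To make the physics-floor idea from the Cost Factor discussion concrete: the check is just great-circle distance against the speed of light. Here is a minimal sketch (my own illustration, not code from the draft; the city coordinates are example values):

```python
import math

C_KM_PER_MS = 299_792.458 / 1000  # speed of light in vacuum, km per millisecond

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def rtt_floor_ms(lat1, lon1, lat2, lon2):
    """Minimum physically possible RTT: light traveling the great circle, out and back."""
    return 2 * great_circle_km(lat1, lon1, lat2, lon2) / C_KM_PER_MS

def plausible(measured_rtt_ms, lat1, lon1, lat2, lon2):
    """A path reporting an RTT below the physics floor is an anomaly, not a better route."""
    return measured_rtt_ms >= rtt_floor_ms(lat1, lon1, lat2, lon2)

# New York <-> London: the floor comes out around 37 ms, so a path claiming
# 5 ms is lying -- likely a hijack or a bogus measurement.
ny, ldn = (40.71, -74.00), (51.51, -0.13)
print(round(rtt_floor_ms(*ny, *ldn), 1))  # ~37 ms
print(plausible(5.0, *ny, *ldn))          # False
print(plausible(70.0, *ny, *ldn))         # True
```

Real fiber paths are slower than vacuum light and never follow the great circle exactly, so the floor is deliberately loose; anything under it is provably impossible, which is what makes it useful as a hijack signal.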
The Real Roadblock: Incentives

IPv8 won’t fail because it’s too ambitious. It will fail because no one with budget authority is experiencing enough pain to justify replacing the system. For it to succeed, the RIRs would need to stand up WHOIS8 as a high-availability egress-gating service, and RPKI, a much narrower version of the same idea, is still partially deployed fifteen years in (don’t get Chase started). At least one major vendor (Cisco, Juniper, Arista, Nokia, or the merchant-silicon ecosystem) would need to publicly commit to shipping IPv8 forwarding, certified NIC firmware, and Zone Server reference code, while somehow reconciling the “just a software update” framing with the mandatory NIC certification and hardware rollback prevention the draft also requires. And the hyperscalers, who have already solved VPC overlap and multi-cloud routing on their own terms, would need a reason to adopt a standard that constrains their existing architecture.

Meanwhile, CGNAT works well enough. Hyperscalers have already built their own solutions. And operational pain sits with engineers, not executives, which is the same incentive gap that killed IPv6 momentum. The draft answer, that Cost Factor will naturally incentivize IPv4 transit ASNs to upgrade because 8to4 paths measure slower, is clever but requires enough IPv8 traffic to exist for the signal to register, which is the same chicken-and-egg problem IPv6 has been losing for thirty years. There’s a faint echo here of other efforts like segment routing and SD-WAN where pieces of this vision are already being adopted, just not as a single unified system. That’s probably the shape of whatever actually ships.

Bottom Line

The diagnosis is on point. Management fragmentation, unbounded BGP, unauthenticated routing, and CGNAT's drag on peer-to-peer protocols are real problems that IPv6 didn’t address and that the industry has mostly absorbed as permanent friction in their engineering and operational playbooks.
IPv6 addresses one of them. IPv8 tries to address all of them at once, and that’s both its strength and the reason it probably won’t ship. If anything from this proposal survives, it will likely be the smaller pieces (stronger route validation, better routing metrics, more cohesive management models) adopted incrementally rather than as a full replacement. Which is a bit of a shame, because a bounded routing table alone would have solved one of the hardest conversations of my early career. IPv8 is what the internet might look like if it were designed today. Unfortunately, the internet we have is the one that has to adopt it. What do you think? Come at me and my IPv8 hot takes!

DNS on the BIG-IP: IPv6 to IPv4 Translation
I've been writing some DNS articles over the past couple of months, and I wanted to keep the momentum going with a discussion on IPv6 translation. As a reminder, my first four articles are:

Let's Talk DNS on DevCentral
DNS The F5 Way: A Paradigm Shift
DNS Express and Zone Transfers
The BIG-IP GTM: Configuring DNSSEC

The Address Space Problem

I'm pretty sure all of you have heard about the problem of IPv4 address depletion, so I won't go too crazy on that. But, I did want to share one quick analogy of how the IPv4 address space relates to the IPv6 space. There are ~4 billion possible IPv4 addresses and ~3.4 x 10^38 IPv6 addresses. Sometimes when I see a comparison of large numbers like these, it's hard for me to grasp the magnitude of the difference. Here's the analogy that helped put this in perspective: if the entire IPv4 address space was a single drop of water, the IPv6 address space would be the equivalent of 68 times the entire volume of the world's oceans! I can't imagine ever needing more IP address space than that, but I guess we will see. As IPv4 address space is used up and new IP-enabled devices continue to hit the market, companies need to support and manage existing IPv4 devices and content while transitioning to IPv6. Just last week, ICANN announced that IPv4 addresses are nearing total exhaustion. Leo Vegoda, operational excellence manager at ICANN, said "Redistributing increasingly small blocks of IPv4 address space is not a sustainable way to grow the Internet. IPv6 deployment is a requirement for any network that needs to survive." As companies transition to IPv6, they still face a real issue of handling IPv4 traffic. Despite the need to move to IPv6, the fact is most Internet traffic today is still IPv4. Google has a really cool graph that tracks IPv6 adoption, and they currently report that only 3.5% of all Internet traffic is IPv6.
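The raw arithmetic behind that address-space comparison is easy to sanity-check (plain Python, nothing BIG-IP-specific; the drop-and-oceans conversion depends on how big you assume a drop is, so this only checks the address counts themselves):

```python
# IPv4 is 32 bits, IPv6 is 128 bits; the counts fall straight out.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128

print(f"{ipv4_total:,}")                  # 4,294,967,296 (~4.3 billion)
print(f"{ipv6_total:.2e}")                # 3.40e+38
print(f"{ipv6_total // ipv4_total:.2e}")  # 7.92e+28 IPv6 addresses per IPv4 address
```

That last number is 2^96, which is the same 96 bits that show up later in this article as the translation prefix length: an IPv6 address is exactly one 96-bit prefix plus one 32-bit IPv4-sized chunk.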
You would think that the people who developed IPv6 would have made it backward compatible with IPv4 thus making the transition fairly easy and straightforward...but that's not true. This leaves companies in a tough spot. They need a services fabric that is flexible enough to handle both IPv4 and IPv6 at the same time. The good news is that the BIG-IP is the best in the business at doing just that.

BIG-IP Configuration

Let's say you built an IPv6 network and things are running smoothly within your own network...IPv6 talking to IPv6 and all is well. But remember that statistic I mentioned about most of the Internet traffic running IPv4? That creates a big need for your network to translate from IPv6 to IPv4 and back again. The BIG-IP can do this by configuring a DNS profile and assigning it to a virtual server. You can create this DNS profile by navigating to Local Traffic >> Profiles >> Services >> DNS and create/modify a DNS profile. There are several options to configure in the DNS profile, but for this article, we are just going to look at the DNS IPv6 to IPv4 translation part. Notice the three DNS IPv6 to IPv4 settings in the screenshot below: DNS IPv6 to IPv4, IPv6 to IPv4 Prefix, and IPv6 to IPv4 Additional Section Rewrite. The DNS IPv6 to IPv4 setting has four options. This setting specifies whether you want the BIG-IP to convert IPv6-formatted IP addresses to IPv4-formatted IP addresses. The options for DNS IPv6 to IPv4 are:

Disabled: The BIG-IP does not map IPv4 addresses to IPv6 addresses. This is the default setting.

Secondary: The BIG-IP receives an AAAA (IPv6) query and forwards the query to a DNS server. Only if the server fails to return a response does the BIG-IP system send an A (IPv4) query. If the BIG-IP system receives an A response, it prepends a 96-bit user-configured prefix to the record and forwards it to the client.

Immediate: The BIG-IP system receives an AAAA query and forwards the query to a DNS server.
The BIG-IP then forwards the first good response from the DNS server to the client. If the system receives an A response first, it prepends a 96-bit prefix to the record and forwards it to the client. If the system receives an AAAA response first, it simply forwards the response to the client. The system disregards the subsequent response from the DNS server.

v4 Only: The BIG-IP receives an AAAA query, but forwards an A query to a DNS server. After receiving an A response from the server, the BIG-IP system prepends a 96-bit user-configured prefix to the record and forwards it to the client. Only select the v4 Only option if you know that all DNS servers are IPv4-only servers.

When you select one of the options listed above (except the "Disabled" option), you must also provide a prefix in the IPv6 to IPv4 Prefix field and make a selection from the IPv6 to IPv4 Additional Section Rewrite list. The IPv6 to IPv4 Prefix specifies the prefix to use for the IPv6-formatted IP addresses that the BIG-IP converts to IPv4-formatted IP addresses. The default is 0:0:0:0:0:0:0:0. The IPv6 to IPv4 Additional Section Rewrite allows improved network efficiency for both Unicast and Multicast DNS-SD responses. This setting has four options:

Disabled: The BIG-IP does not perform additional rewrite. This is the default setting.

V4 Only: The BIG-IP accepts only A records. The system prepends the 96-bit user-configured prefix (mentioned previously) to a record and returns an IPv6 response to the client.

V6 Only: The BIG-IP accepts only AAAA records and returns an IPv6 response to the client.

Any: The BIG-IP accepts and returns both A and AAAA records. If the DNS server returns an A record in the Additional section of a DNS message, the BIG-IP prepends the 96-bit user-configured prefix to the record and returns an IPv6 response to the client.

Like any configuration change, I would recommend initial testing in a lab to see how your network performs with these settings.
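Under the hood, the prefix-prepend step those options keep referring to is just bit arithmetic: take the 32-bit address from the A record and embed it in the low 32 bits of an IPv6 address under the configured 96-bit prefix. A rough sketch using Python's standard ipaddress module (illustrative only, not BIG-IP code; 64:ff9b::/96 is the well-known NAT64 prefix, used here as an example value for the IPv6 to IPv4 Prefix field):

```python
import ipaddress

def synthesize_aaaa(a_record: str, prefix96: str) -> str:
    """Prepend a 96-bit prefix to an IPv4 address, yielding the IPv6
    address a translator would hand back to an IPv6-only client."""
    v4 = ipaddress.IPv4Address(a_record)
    net = ipaddress.IPv6Network(prefix96)
    if net.prefixlen != 96:
        raise ValueError("translation prefix must be exactly 96 bits")
    # OR the 32-bit IPv4 value into the low bits of the prefix.
    v6 = ipaddress.IPv6Address(int(net.network_address) | int(v4))
    return str(v6)

# 203.0.113.10 behind the well-known NAT64 prefix:
print(synthesize_aaaa("203.0.113.10", "64:ff9b::/96"))  # 64:ff9b::cb00:710a
```

The reverse direction is just masking off the low 32 bits again, which is why a stateless device can translate in both directions as long as both sides agree on the prefix.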
This one is pretty straightforward, though. Hopefully this helps with any hesitation you may have with transitioning to an IPv6 network. Go ahead and take advantage of that vast IPv6 space, and let the BIG-IP take care of all the translation work! Stay tuned for more DNS articles, and let me know if you have any specific topics you'd like to see. One final and related note: check out the F5 CGNAT products page to learn more about seamless migration to IPv6.

F5 Friday: The Low Down on BIG-IP and VMware Stuff
#vmworld #vCloud #PHC6050 #EUC6104 #sddc How-tos and where to learn more about what's new with F5 and VMware

As we're all gearing up for VMworld (you are gearing up for the event, right?) it seems appropriate to highlight some existing resources for implementing VMware solutions with F5 BIG-IP and let you know where you can find out more at the show (hint: there are going to be sessions and a demo of a new joint solution!) So first, let's check out some recently posted how-tos from VMware folks on configuring BIG-IP and VMware solutions:

First up is a great post on using F5 BIG-IP with Horizon Workspace 1.5 to load balance gateway-VAs for both internal and external access as well as load balancing Kerberos enabled connector VAs. You can download the document here: https://communities.vmware.com/docs/DOC-24577 If you're attending VMworld, you can also attend a session on how to make Horizon View More Secure, Available, Scalable and Usable with F5 (EUC6104) presented by F5's own Paul Pindell Monday, Aug 26, 11:30 AM - 12:30 PM in Moscone West, Room 2005.

Next up is configuring F5 BIG-IP LTM with VMware vCloud Director. This post appears to be the only one available that details how to set up the vCD Console Proxy via F5 BIG-IP. This is an important step that's often overlooked in other how-tos, so you'll want to check it out.

Finally, here's a great post on using F5 BIG-IP LTM with IPv6. The noise around IPv6 has dulled to a quiet roar but it's still an increasingly important protocol to understand and using F5 is an awesome and quick way to enable legacy web applications for IPv6. How's that relate to VMware? Well, once you complete the configuration it will make the web interface of vCD available via IPv6.
Finally, if you're attending the show, you'll want to attend a session presented by F5's own Charlie Cano and VMware Senior Product Manager, Dan Mitchell, on Monday, Aug 26, 3:30 PM - 4:30 PM in Moscone West, Room 3008 on the topic of Moving Beyond Infrastructure: Meeting Demands on App Lifecycle Management in the Dynamic Datacenter (PCH6050). This session is going to dig into some of the details behind the latest joint solution from F5 and VMware, taking the next step toward a Software-Defined Data Center. The solution is based on a new offering being launched by VMware at the show Monday and F5 will be providing a demo at its booth at the show of the joint solution. You don't want to miss it. If you aren't attending the show or can't make the sessions, be sure to check back here Monday for details on the new joint solution.300Views0likes0CommentsDNS Architecture in the 21st Century
It is amazing, if you stop and think about it, how much we utilize DNS services, and how little we think about them. Every organization out there is running DNS, and yet there is not a ton of traction in making certain your DNS implementation is the best it can be. Oh sure, we set up a redundant pair of DNS servers, and some of us (though certainly not all of us) have patched BIND to avoid major vulnerabilities. But have you really looked at how DNS is configured and what you’ll need to keep your DNS moving along? If you’re looking closely at IPv6 or DNSSEC, chances are that you have. If you’re not looking into either of these, you probably aren’t even aware that ISC – the non-profit responsible for BIND – is working on a new version. Or that great companies like Infoblox (fair disclosure, they’re an F5 partner) are out there trying to make DNS more manageable. With the move toward cloud computing and the need to keep multiple cloud providers available (generally so your app doesn’t go offline when a cloud provider does, but at a minimum for a negotiation tool), and the increasingly virtualized nature of our application deployments, DNS is taking on a new importance. In particular, distributed DNS is taking on a new importance. What a company with three datacenters and two cloud providers must do today, only ISPs and a few very large organizations did ten years ago. And that complexity shows no signs of slacking. While the technology that is required to operate in a multiple datacenter (whether those datacenters are in the cloud or on your premise) environment is available today, as I alluded to above, most of us haven’t been paying attention. No surprise with the number of other issues on our plates, eh? So here’s a quick little primer to give you some ideas to start with when you realize you need to change your DNS architecture.
It is not all-inclusive, the point is to give you ideas you can pursue to get started, not teach you all that some of the experts I spent part of last week with could offer.

In a massively distributed environment, DNS will have to direct users to the correct location – which may not be static (Lori tells me the term for this is “hyper-hybrid”)

In an IPv6/IPv4 world, DNS will have to serve up both types of addresses, depending upon the requestor

Increasingly, DNSSEC will be a requirement to play in the global naming game. While most orgs will go there with dragging feet, they will still go

The failure of a cloud, or removal of a cloud from the list of options for an app (as elasticity contracts) will require dynamic changes in DNS. Addition will follow the same rules

Multiple DNS servers in multiple locations will have to remain synched to cover a single domain.

So the question is where do you begin if you’re like so many people and vaguely looked into DNSSEC or DNS for IPv6, but haven’t really stayed up on the topic. That’s a good question. I was lucky enough to get two days worth of firehose from a ton of experts – from developers to engineers configuring modern DNS and even a couple of project managers on DNS projects. I’ll try to distill some of that data out for you. Where it is clearer to use a concrete example or specific terminology, as almost always, that example will be of my employer or a partner. From my perspective it is best to stick to examples I know best, and from yours, simply call your vendor and ask if they have similar functionality.

Massively distributed is tough if you are coming from a traditional DNS environment, because DNS alone doesn’t do it. DNS load balancing helps, but so does the concept of a Wide IP. That’s an IP that is flexible on the back end, but static on the front end.
Just like when load balancing you have a single IP that directs users to multiple servers, a Wide IP is a single IP address that directs people to multiple locations. A Wide IP is a nice abstraction to actively load balance not just between servers but between sites. It also allows DNS to be simplified when dealing with those multiple sites because it can route to the most appropriate instance of an application. Today most appropriate is generally defined by geographically closest, but in some cases it can include things like “send our high-value customers to a different datacenter”. There are a ton of other issues with this type of distribution, not the least of which is database integrity and primary sourcing, but I’m going to focus on the DNS bit today, just remember that DNS is a tool to get users to your systems like a map is a tool to get customers to your business. In the end, you still have to build the destination out. DNS that supports IPv4 and IPv6 both will be mandatory for the foreseeable future, as new devices come online with IPv6 and old devices persist with IPv4. There are several ways to tackle this issue, from the obvious “leave IPv4 running and implement v6 DNS” to the less common “implement a solution that serves up both”. DNSSEC is another tough one. It adds complexity to what has always been a super-simplistic system. But it protects your corporate identity from those who would try to abuse it. That makes DNSSEC inevitable, IMO. Risk management wins over “it’s complex” almost every time. There are plenty of DNSSEC solutions out there, but at this time DNSSEC implementations do not run BIND. The update ISC is working on might change that, we’ll have to see. The ability to change what’s behind a DNS name dynamically is naturally greatly assisted by the aforementioned Wide IPs. By giving a constant IP that has multiple variable IPs behind it, adding or removing those behind the Wide IP does not suffer the latency that DNS propagation requires. 
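The Wide IP behavior described above can be sketched in a few lines (my own illustration with made-up site names and documentation addresses, not GTM configuration): one stable name, a changing pool of sites behind it, and the answer chosen from the healthy members.

```python
# One stable name; the pool of answers behind it changes as sites come and go.
WIDE_IPS = {
    "app.example.com": [
        {"site": "chicago",   "addr": "192.0.2.10",    "healthy": True},
        {"site": "frankfurt", "addr": "198.51.100.10", "healthy": True},
        {"site": "cloud-a",   "addr": "203.0.113.10",  "healthy": False},  # provider outage
    ],
}

def resolve(name, preferred_site=None):
    """Answer a query from healthy members only; clients never see the churn."""
    candidates = [m for m in WIDE_IPS.get(name, []) if m["healthy"]]
    if not candidates:
        return None
    for m in candidates:
        if m["site"] == preferred_site:  # e.g. geographically closest, or policy-based
            return m["addr"]
    return candidates[0]["addr"]

print(resolve("app.example.com", preferred_site="frankfurt"))  # 198.51.100.10
print(resolve("app.example.com", preferred_site="cloud-a"))    # falls back: 192.0.2.10
```

The point of the sketch is the shape of the data: adding or removing a site is an edit to the pool behind the name, not a change to the name itself, which is why the churn never has to propagate through DNS.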
Elasticity of servers servicing a given DNS name becomes real simply by the existence of Wide IPs. Keeping DNS servers synched can be painful in a dynamic environment. But if the dynamism is not in DNS address responses, but rather behind Wide IPs, this issue goes away also. The DNS servers will have the same set of Name/address pairs that require changes only when new applications are deployed (servers is the norm for local DNS, but for Wide-IP based DNS, servers can come and go behind the DNS service with only insertion into local DNS, while a new application might require a new Wide-IP and configuration behind it). Okay, this got long really quickly. I’m going to insert an image or two so that there’s a graphical depiction of what I’m talking about, then I’m going to cut it short. There’s a lot more to say, but don’t want to bore you by putting it all in a single blog. You’ll hear from me again on this topic though, guaranteed.

Related Articles and Blogs

F5 Friday: Infoblox and F5 Do DNS and Global Load Balancing Right.
How to Have Your (VDI) Cake and Deliver it Too
F5 BIG-IP Enhances VMware View 5.0 on FlexPod
Let me tell you Where To Go.
Carrier Grade DNS: Not your Parents DNS
Audio White Paper - High-Performance DNS Services in BIG-IP ...
Enhanced DNS Services: For Administrators, Managers and Marketers
The End of DNS As We Know It
DNS is Like Your Mom
F5 Video: DNS Express—DNS Die Another Day

There is more to it than performance.
Did you ever notice that sometimes, "high efficiency" furnaces aren't? That some things the furnace just doesn't cover – like the quality of your ductwork, for example? The same is true of a "high performance" race car. Yes, it is VERY fast, assuming a nice long flat surface for it to drive on. Put it on a dirt road in the rainy season, and, well, it's just a gas hog. Or worse, a stuck car. I could continue the list. A "high energy" employee can be relatively useless if they are assigned tasks at which brainpower, not activity rate, determines success… But I'll leave it at those three; I think you get the idea.

The same is true of your network. Frankly, increasing your bandwidth in many scenarios will not yield the results you expected. Oh, it will improve traffic flow, and overall the performance of apps on the network will improve; the question is "how much?" It would be reasonable – or at least not unreasonable – to expect that doubling Internet bandwidth should stave off problems until you double bandwidth usage. But often the problems are with the overloading apps we're placing upon the network. Sometimes, it's not the network at all.

Check the ecosystem, not just the network.
When I was the Storage and Servers Editor over at NWC, I reviewed a new (at the time) HP server that was loaded. It had a ton of memory, a ton of cores, and could make practically any application scream. It even had two gigabit NICs in it. But they weren't enough. While I had almost nothing bad to say about the server itself, I did observe in the summary of my article that the network was now officially the bottleneck. Since the machine had high-speed SAS disks, disk I/O was not as big a deal as it traditionally has been; high-speed cached memory meant memory I/O wasn't a problem at all, and multiple cores meant you could cram a ton of processing power in. But team those two NICs and you'd end up with slightly less than 2 Gigabits of network throughput.
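The teamed-NIC math is a one-liner worth sanity-checking (these are ideal numbers, before framing and protocol overhead eat into them):

```python
# Back-of-envelope: two teamed gigabit NICs give roughly 2 Gb/s of raw
# throughput. Divide by 8 bits per byte to express that in bytes/second.
link_bits_per_sec = 2 * 10**9          # 2 Gb/s, ideal, both NICs teamed
bytes_per_sec = link_bits_per_sec / 8  # 250,000,000 B/s = 250 MB/s

print(bytes_per_sec / 10**6)           # -> 250.0 (MB/s, before overhead)
```

Real traffic carries Ethernet, IP, and TCP overhead on top of the payload, which is why the usable figure lands below that ceiling.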
Assuming 100% saturation, that was really less than 250 Megabytes per second, and that counts both in and out. For query-intensive database applications or media streaming servers, that just wasn't keeping pace with the server. Now here we are, six or so years later, and similar servers are in use all over the globe… running VMs. Meaning that several copies of the OS are now carving up that throughput.

So start with your server. Check it first if the rest of the network is performing; it might just be the problem. And while we're talking about servers, the obvious one needs to be mentioned… Don't forget to check CPU usage. You just might need a bigger server or load balancing, or these days, fewer virtuals on your server. Heck, as long as we're talking about servers, let's consider the app too. The last few years, for a variety of reasons, we've seen less focus on apps whose performance is sucktacular, but it still happens. Worth looking into if the server turns out to be the bottleneck.

Old gear is old.
I was working on a network that deployed an ancient Cisco switch. The entire network was 1 Gig, except for that single switch. But tracing wires showed that switch to lie between the Internet and the internal network. A simple error, easily fixed, but an easy error to have in a complex environment, and certainly one to be aware of. That switch was 10/100 only. We pulled it out of the network entirely, and performance instantly improved.

There's necessary traffic, and then there's…
Not all of the traffic on your network needs to be. And all that does need to be doesn't have to be so bloated. Look for sources of UDP broadcasts. More often than you would think, applications broadcast things that you don't care about. Cut them off. For other traffic, well, there is Application Delivery Optimization. ADO is improving application delivery through a variety of technical solutions, but we'll focus on those that make your network and your apps seem faster.
You already know about them – compression, caching, image optimization… and in the case of back-end services, de-duplication. But have you considered what they do other than improve perceived or actual performance?

Free Bandwidth
Anything that reduces the size of application data leaving your network also reduces the burden on your Internet connection. This goes without saying, but as I alluded to above, we sometimes overlook the fact that it is not just application performance we're impacting, but the effectiveness of our connections – connections that grow more expensive by leaps and bounds each time we have to upgrade them. While improving application performance is absolutely a valid reason to seek out ADO, delaying or eliminating the need to upgrade your Internet connection(s) is another. Indeed, in many organizations it is far easier to do TCO justification based upon deferred upgrades than it is based upon "our application will be faster", while both are benefits of ADO.

New stuff!
As time wears on, SPDY, IPv6, and a host of other technologies will be more readily available to help you improve your network. Meanwhile, check out gateways for these protocols to make the transition easier.

In Summation
There are a lot of reasons for apps not to perform, and there are a lot of benefits to ADO. I've listed some of the easier problems to ferret out; the deeper into your particular network you get, the harder it is to generalize problems. But these should give you a taste for the types of things to look for. And a bit more motivation to explore ADO. Of course I hope you choose F5 gear for ADO and overall application networking, but there are other options out there. I think. Maybe.

F5 Friday: In the NOC at Interop
#interop #fasterapp #adcfw #ipv6 Behind the scenes in the Interop network

Interop Las Vegas expects somewhere in the realm of 10,000+ attendees this year. Most of them will no doubt be carrying smart phones, many tablets, and of course the old standby, the laptop. Nearly every one will want access to some service – inside or out. The Interop network provides that access – and more.

F5 solutions will provide IT services, including IPv4–IPv6 translation, firewall, SSL VPN, and web optimization technologies, for the Network Operations Center (NOC) at Interop. The Interop 2012 network is comprised of the show floor Network Operations Center (NOC) and three co-location sites: Colorado (DEN), California (SFO), and New Jersey (EWR). The NOC moves with the show to its four venues: Las Vegas, Tokyo, Mumbai, and New York.

F5 has taken a hybrid application delivery network architectural approach – leveraging both physical devices (in the NOC) and virtual equivalents (in the Denver DC). Both physical and virtual instances of F5 solutions are managed via a BIG-IP Enterprise Manager 4000, providing operational consistency across the various application delivery services provided: DNS, SMTP, NTP, global traffic management (GSLB), remote access via SSL VPNs, local caching of conference materials, and data center firewall services in the NOC DMZ.

Because the Interop network supports both IPv6 and IPv4, F5 is also providing NAT64 and DNS64 services.

NAT64: Network address translation is performed between IPv6 and IPv4 on the Interop network, to allow IPv6-only clients and servers to communicate with hosts on IPv4-only networks.

DNS64: IPv6-to-IPv4 DNS translations are also performed by these BIG-IPs, allowing A records originating from IPv4-only DNS servers to be converted into AAAA records for IPv6 clients.

F5 is also providing SNMP, SYSLOG, and NETFLOW services to vendors at the show for live demonstrations.
This is accomplished by cloning the incoming traffic and replicating it out through the network. At the network layer, such functionality is often implemented by simply mirroring ports. While this is sometimes necessary, it does not necessarily provide the level of granularity (and thus control) required. Mirrored traffic does not distinguish between SNMP and SMTP, for example, unless specifically configured to do so. While cloning via an F5 solution can be configured to act in a manner consistent with port mirroring, cloning via F5 also allows intermediary devices to intelligently replicate traffic based on information gleaned from deep content inspection (DCI). For example, traffic can be cloned to a specific pool of devices based on the URI, client IP address, client device type, or destination IP. Virtually any contextual data can be used to determine whether or not to clone traffic.

You can poke around with more detail, photos, and network diagrams at F5's microsite supporting its Interop network services. Dashboards are available, along with documentation, pictures, and more information in general on the network and the F5 services supporting the show. And of course if you're going to be at Interop, stop by the booth and say "hi"! I'll keep the light on for ya…

F5 Interopportunities at Interop 2012
F5 Secures and Optimizes Application and Network Services for the Interop 2012 Las Vegas Network Operations Center
When Big Data Meets Cloud Meets Infrastructure
Mobile versus Mobile: 867-5309
Why Layer 7 Load Balancing Doesn't Suck
BYOD–The Hottest Trend or Just the Hottest Term
What Does Mobile Mean, Anyway?
Mobile versus Mobile: An Identity Crisis
The Three Axioms of Application Delivery
Don't Let Automation Water Down Your Data Center
The Four V's of Big Data

IPv6: Not a Solution for Security!!!
On April 15th, 2011, the last of the IPv4 address blocks was allocated. Due to IPv4 address depletion, migration to IPv6 is inevitable. This migration will ease IPv4 address depletion, but it does not address other significant networking issues such as security. Networks that have already migrated to IPv6 are starting to experience the first Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks. These attacks can lead to significant amounts of downtime and, especially for Communication Service Providers (CSPs), loss of revenue and increases in subscriber churn. For CSPs to stay competitive and maintain an acceptable Quality of Experience (QoE), security and mitigation of DoS/DDoS attacks must be included in the migration to IPv6.

Throughout the development of IPv6 technology, security was an integrated part of the standards. In the original version of the RFC, IPsec was integrated into the IPv6 header, providing basic security in the IP stack. However, in December 2011 IPsec was changed from a requirement to an optional element in the RFC. This means that all IPv6 networks will have to interoperate with traffic that includes both IPsec and non-IPsec. And even though there is the argument that allowing non-IPsec traffic in IPv6 opens the door for more DoS/DDoS attacks, IPsec is not the ultimate solution to DoS/DDoS attacks.

Migration technologies have been created to make IPv4 and IPv6 networks interoperable. For CSPs, this technology is crucial when their subscribers are on an IPv6 network and the content those subscribers demand is on the IPv4 Internet. Carrier-grade network address translation (CGNAT) is designed to manage address translations and assignments between IPv4 and IPv6 networks. This technology, integrated with Domain Name System 64 (DNS64), ensures that addresses and domains are locatable and accessible from either an IPv4 or IPv6 network.
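To make the DNS64 step concrete: a DNS64 resolver takes the 32-bit IPv4 address from an A record and embeds it in an IPv6 prefix to synthesize a AAAA answer. The sketch below (an illustration, not any vendor's implementation) uses the well-known NAT64 prefix 64:ff9b::/96 from RFC 6052; real deployments may configure a network-specific prefix instead.

```python
import ipaddress

# Well-known NAT64/DNS64 prefix (RFC 6052); deployments may use their own /96.
WKP = ipaddress.IPv6Address("64:ff9b::")

def synthesize_aaaa(ipv4_text):
    """Embed an IPv4 address (from an A record) in the prefix's low 32 bits."""
    v4 = ipaddress.IPv4Address(ipv4_text)
    return ipaddress.IPv6Address(int(WKP) | int(v4))

# 192.0.2.1 is c0.00.02.01 in hex, so the synthetic AAAA is:
print(synthesize_aaaa("192.0.2.1"))  # -> 64:ff9b::c000:201
```

An IPv6-only client then sends traffic to that synthetic address, and the NAT64 device translates it back to the real IPv4 destination.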
Tunneling technologies, such as Dual-Stack Lite and 6rd, encapsulate traffic in tunnels, allowing IPv4 or IPv6 traffic to be delivered across the other network. All of these methods provide different tools for the CSP to migrate all or part of their network to IPv6 and still interoperate with the IPv4 Internet. However, none of these methods address the security threats that exist on the Internet.

DoS/DDoS attacks can never be completely prevented. The only strategy that truly works is using security tools, like IPsec, along with distributed architectures to mitigate the impact of these attacks. While CSPs are migrating to new technologies and upgrading to IPv6, new security architectures should be examined. Since almost every part of the network has to be touched, this is the perfect opportunity for CSPs to update their security architecture along with becoming IPv6 compliant. No matter which technology scheme for migration to IPv6 is used, all elements of the network can be designed to help mitigate the impacts and costs of DoS/DDoS attacks. Whether it is CGNAT, DNS64, an IPv6 gateway, or tunneling methodologies, all of the different IPv6 migration technologies can be deployed to maintain service uptime during a DoS/DDoS attack.

The ultimate goal of mitigating a DoS/DDoS attack is to maintain services for subscribers and minimize degradation of their QoE. The challenge is deploying a network that provides this level of service during an attack without creating a CapEx nightmare. The first step is to create an intelligent IPv6 infrastructure that can scale, perform, and distribute traffic intelligently to mitigate the impacts of an attack.
Deploying IPv6 is not a solution to attacks from the Internet; however, the network architecture can be built to mitigate the impacts of these attacks, and that architecture can be deployed as part of the migration to IPv6.

Related Articles
ZDNet: "First IPv6 Distributed Denial of Service Internet Attacks Seen"
RFC 6434
Pete Silva - ipv6
Ray Vinson - IPv6
Lori MacVittie - DDoS
F5 Friday: 'IPv4 and IPv6 Can Coexist' or 'How to eat your cake and ...
Josh Michaels - DDoS
Mitigating Slow HTTP Post DDoS Attacks With iRules > DevCentral ...
IPv6 - DevCentral - DevCentral Groups - Social Forums ...
IP::addr and IPv6
Audio White Paper - Controlling Migration to IPv6: A Gateway to ...
IPv6: Yeah, we got that

The Mobile Chimera
#mobile #vdi #IPv6 In the case of technology – as with mythology – the whole is often greater (and more challenging) than the sum of its parts.

The chimera is a mythological beast of scary proportions. Not only is it fairly large, but it's also got three independent heads – traditionally a lion, a goat, and a snake. Some variations on this theme exist, but the basic principle remains: it's a three-headed, angry beast that should not be taken lightly should one encounter it in the hallway. Individually, one might have a strategy to meet the challenge of a lion or a goat head on. But when they converge into one very angry and dangerous beast, the strategies and tactics employed to best any one of them will almost certainly not work to address all three of them simultaneously. The world of mobility is rapidly approaching its own technological chimera, one comprised of three individual technology trends. While successful stratagems and tactics exist to address each one individually, taken together they form a new challenge requiring a new strategic approach.

THE MOBILE CHIMERA
Three technology trends – VDI, mobile, and IPv6 – are rapidly converging upon the enterprise. Each is driven in part by the others, and each requires in part the functionality and support of another. Addressing the challenges accompanying this trifecta requires a serious evaluation of the enterprise infrastructure with an eye toward performance, scalability, and flexibility, lest it be overwhelmed by demand originating both internally and externally.

Mobile
The myriad articles, blogs, and editorial orations on mobile device growth have to date focused on the need for organizations to step up and accept the need for device-ready enterprise applications. This focus has thus far ignored the reality of the diversity of the device client base, the ramifications of which those with long careers in IT will painfully recall from the client-server era.
Thus it is no surprise that interest in and adoption of technology such as VDI is on the rise, as virtualization serves as a popular solution to the problem of delivering applications to a highly diverse set of clients. But virtualization, as popular a solution as it may be, is not a panacea. Security and control over corporate resources and applications is a growing necessity today because of the ease with which users can take advantage of mobile technology to access them. Access control does not entirely solve the challenges of a diverse mobile client audience, as attackers turn their attention to mobile platforms as a means to gain access to resources and data previously beyond their reach. The need for endpoint security inspection continues to grow as the threat posed by mobile devices continues to rear its ugly head.

VDI
It was inevitable that as mobile device usage in the enterprise continued to grow, so too would VDI, as the most efficient way to deliver applications without requiring mobile platform-specific versions. The desire by business owners and security practitioners to keep data securely within the data center "walls" is also a factor in the rising desire to deploy VDI. VDI enables organizations to deliver applications remotely while maintaining control over data inside the data center, preserving enforcement of corporate security policies and minimizing risk. But VDI deployments are not trivial, regardless of the virtualization platform chosen. Each virtualization solution has its challenges, and most of those challenges revolve around the infrastructure necessary to support such an initiative. Scalability and flexibility are important facets of VDI delivery infrastructure, and performance cannot be overlooked if such deployments are to be considered successful.

IPv6
Who could forget that the Internet is being pressured to move to IPv6 sooner rather than later, in part because of the growth of mobile clients?
The strain placed on service providers to maintain IPv4 support as a means to not "break the Internet" can only be borne so long before IPv6 becomes, as has been predicted, the Y2K for the network. The ability to deliver applications via VDI to mobile devices will soon require support for IPv6, but will not obviate the need to support IPv4 just yet. A dual-stack approach will be required during the transition period, putting delivery infrastructure again front and center in the battle to deploy and support applications for mobile devices.

With all accounts numbering mobile devices in the four billion range across multiple platforms, and effectively zero IPv4 addresses left to assign to those devices, it should be no surprise that as these three technology trends collide, the result will be the need for a new mobility strategy. This is why solutions are strategic and technology is tactical. There exist individual products that easily solve each of these problems individually, but very few solutions that address the combined juggernaut that is the three together. It is necessary to coordinate and architect a solution that can solve all three challenges simultaneously as a means to combat complexity and its associated best friend forever, operational risk. A flexible and scalable delivery strategy will be necessary to ensure performance and security without sacrificing operational efficiency.

I Scream, You Scream, We all Scream for Ice Cream (Sandwich)
The Full-Proxy Data Center Architecture
Scaling VDI Architectures
Virtualization and Cloud Computing: A Technological El Niño
The Future of Cloud: Infrastructure as a Platform
Strategic Trifecta: Access Management
From a Network Perspective, What Is VDI, Really?
F5 Friday: A Single Namespace to Rule Them All

IP::addr and IPv6
Did you know that all addresses internal to tmm are kept in IPv6 format? If you've written external monitors, I'm guessing you knew this. In external monitors, for IPv4 networks the IPv6 "header" is removed with the line:

    IP=`echo $1 | sed 's/::ffff://'`

IPv4 addresses are stored in what's called "IPv4-mapped" format. An IPv4-mapped address has its first 80 bits set to zero and the next 16 set to one, followed by the 32 bits of the IPv4 address. The prefix looks like this:

    0000:0000:0000:0000:0000:ffff:

(abbreviated as ::ffff:, which looks strikingly similar – ok, identical – to the pattern stripped above). Notation of the IPv4 section of the IPv4-mapped address varies between implementations, from ::ffff:192.168.1.1 to ::ffff:c0a8:c8c8, but only the latter notation (in hex) is supported. If you need the decimal version, you can extract it like so:

    % puts $x
    ::ffff:c0a8:c8c8
    % if { [string range $x 0 6] == "::ffff:" } {
        scan [string range $x 7 end] "%2x%2x:%2x%2x" ip1 ip2 ip3 ip4
        set ipv4addr "$ip1.$ip2.$ip3.$ip4"
      }
    192.168.200.200

Address Comparisons
The text format is not what controls whether the IP::addr command (nor the class command) does an IPv4 or IPv6 comparison. Whether or not the IP address is IPv4-mapped is what controls the comparison. The text format merely controls how the text is translated into the internal IPv6 format (i.e., whether it becomes an IPv4-mapped address or not). Normally this is not an issue; however, if you are trying to compare an IPv6 address against an IPv4 address, then you really need to understand this mapping business. Also, it is not recommended to use 0.0.0.0/0.0.0.0 for testing whether something is IPv4 versus IPv6, as that is not really a valid IP address – using the 0.0.0.0 mask (technically the same as /0) is a loophole, and ultimately what you are doing is loading the equivalent form of an IPv4-mapped mask.
Rather, you should just use the following to test whether it is an IPv4-mapped address:

    if { [IP::addr $IP1 equals ::ffff:0000:0000/96] } {
        log local0. "Yep, that's an IPv4 address"
    }

These notes are covered in the IP::addr wiki entry. Any updates to the command and/or supporting notes will exist there, so keep the links handy.

Related Articles
F5 Friday: 'IPv4 and IPv6 Can Coexist' or 'How to eat your cake ...
Service Provider Series: Managing the ipv6 Migration
IPv6 and the End of the World
No More IPv4. You do have your IPv6 plan running now, right ...
Question about IPv6 - BIGIP - DevCentral - F5 DevCentral ...
Insert IPv6 address into header - DevCentral - F5 DevCentral ...
Business Case for IPv6 - DevCentral - F5 DevCentral > Community ...
We're sorry. The IPv4 address you are trying to reach has been ...
Don MacVittie - F5 BIG-IP IPv6 Gateway Module
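One last aside for readers who want to poke at IPv4-mapped addresses off-box: Python's standard ipaddress module models the same ::ffff:-plus-32-bits form discussed above (shown purely for illustration; iRules themselves are Tcl, and this is not BIG-IP code):

```python
import ipaddress

# The same IPv4-mapped form: 80 zero bits, 16 one bits, then the IPv4 bits.
mapped = ipaddress.IPv6Address("::ffff:c0a8:c8c8")
print(mapped.ipv4_mapped)  # -> 192.168.200.200

# The dotted-decimal and hex notations parse to the identical address.
same = ipaddress.IPv6Address("::ffff:192.168.200.200")
print(same == mapped)      # -> True
```

Handy for double-checking what a given text notation actually maps to before you write a comparison against it.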