11.4
8 Topics

Enabling PFS

Hi everyone, I've been trying to get PFS enabled on my LTM (ver 11.4.1) and am running into a blocker. I've tried various cipher string options with no luck so far. I've also opened a ticket with F5 Support, but they just point me to various DevCentral discussions that don't have the detail I need. So my question is: which cipher options do I need to add or remove to enable PFS on a client SSL profile? Or is there another way to get PFS going that I'm missing? Thanks!
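
One approach that has worked on 11.x is to prefer ECDHE/DHE key exchange in the client SSL profile's cipher string. A minimal sketch from tmsh, assuming a profile named clientssl_pfs (the profile name and the exact cipher string are illustrative, and which ciphers are actually available depends on the TMOS build and hotfix level):

    # Prefer PFS (ECDHE/DHE) key exchange on the client SSL profile
    tmsh modify ltm profile client-ssl clientssl_pfs ciphers 'ECDHE+AES:DHE+AES:!SSLv3:!RC4'
    # Confirm what that string actually expands to on this build
    tmm --clientciphers 'ECDHE+AES:DHE+AES:!SSLv3:!RC4'

If clients still negotiate a non-PFS cipher after this, capturing the handshake (ssldump, or the browser's connection details) will show which cipher was actually selected.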

F5 101 - BIG-IP Virtual Edition Version

I'm starting my journey with F5 and I need to confirm the version I should be deploying to accompany the 101 exam. According to the latest guide this is 11.4. Is this correct? I cannot find 11.4 as an option to download via the partner portal, only 11.5. Is 11.5 suitable as a base for the 101 exam?

BIG-IP : upgrade fails to transfer configuration

Running BIG-IP 11.4.0 Build 2384.0 Final. I want to upgrade to 11.4.1 while retaining the current configuration. I downloaded BIGIP-11.4.1.608.0.iso, uploaded it to the BIG-IP, and installed it to volume HD1.2 (HD1.1 has 11.4.0). Under System > Software Management : Boot Locations I select HD1.2, set Install Configuration to Yes, set Source Volume = "HD1.1:11.4.0", and click Activate. However, on reboot this message is displayed: "The configuration has not yet loaded. If this message persists, it may indicate a configuration problem."
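
For what it's worth, a rough command-line equivalent of those steps, which can make it easier to see where the configuration copy is failing (a sketch only; the volume names follow the ones above):

    # Install the new image to the inactive volume and watch progress
    tmsh install sys software image BIGIP-11.4.1.608.0.iso volume HD1.2
    tmsh show sys software status
    # Copy the running configuration from the active volume to HD1.2, then boot into it
    cpcfg --source=HD1.1 HD1.2
    tmsh reboot volume HD1.2

If the message persists after the copy, /var/log/ltm on the new volume usually indicates which configuration object failed to load.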

Problem with iRule upgrading to 11.4.

I'm trying to upgrade to 11.4 but am at a bit of a loss as to how to replace the HTTP_CLASS_SELECTED functionality we currently have in one of our iRules. The iRule aborts a session when it encounters a user agent listed in the 'UserAgentBlacklist' data group, but it then checks to see whether an ASM class is assigned, and if so, it disables ASM. Without that last part, we were getting error messages written to the log. Any idea how to rewrite this iRule for 11.4?

    when HTTP_REQUEST {
        set abort_trans 0
        if { [class match -- [string tolower [HTTP::header "User-Agent"]] contains AVUserAgentBlacklist] } {
            set abort_trans 1
        }
    }
    when HTTP_CLASS_SELECTED {
        if { [HTTP::class asm] == 1 } {
            if { $abort_trans == 1 } {
                ASM::disable
                drop
            }
        } else {
            if { $abort_trans == 1 } {
                drop
            }
        }
    }
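
One possible direction (a sketch, not a confirmed fix): since HTTP classes are gone in 11.4, the blacklist check and the ASM handling can collapse into a single HTTP_REQUEST event. This assumes ASM is attached to the virtual server in a way that still lets ASM::disable run there; the data group name is taken from the original rule.

    when HTTP_REQUEST {
        if { [class match -- [string tolower [HTTP::header "User-Agent"]] contains AVUserAgentBlacklist] } {
            # Turn ASM off for this request before dropping it, to avoid the
            # log errors seen when dropping a connection ASM is still processing
            ASM::disable
            drop
        }
    }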

local traffic policy http-header insert action

Hi! BIG-IP 11.4 introduces a new feature called Local Traffic Policies. Could you please help with this question: is it possible to use iRule commands inside local traffic policies? I want to use logic like the example below to insert a specific header whose value is the client IP address.

    policy_rule_1 {
        actions {
            0 {
                http-header insert name My-Header-Client-IP value [IP::client_addr]
            }
        }
        conditions { none }
    }
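
If the policy engine turns out not to evaluate Tcl expressions such as [IP::client_addr], a fallback sketch is to keep just this piece in an iRule attached to the same virtual server (the header name is taken from the policy above):

    when HTTP_REQUEST {
        # Insert the client IP into the request before it is passed to the pool
        HTTP::header insert My-Header-Client-IP [IP::client_addr]
    }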

BIG-IP : multi-boot multi-version : each on separate file-system ?

This actually might be a bit of a newbie Unix question. My device has two volumes, each with a different version of BIG-IP:

    HD1.1 BIG-IP 11.4.0 Build 2384.0 Final
    HD1.2 BIG-IP 11.4.1 Build 608.0 Final

Via the browser admin (System > Software Management > Boot Locations) I can boot into one volume or the other. Is it true that HD1.1 and HD1.2 are on totally separate file systems? So that, once booted, an SSH session into the BIG-IP only has access to the file system of the volume it booted from?
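
A quick way to look at the layout from the shell, once logged in (a sketch; the exact output varies by platform):

    # List boot locations and show which one is active
    tmsh show sys software status
    switchboot -l
    # Show which logical volume the running root file system is mounted from
    df -h /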

Generating SHA2 Algorithm certificates on 11.4

I am trying to generate SHA-2 certificates on my F5, but they are being generated with SHA-1. Is there a hotfix or setting that needs to be applied to get this algorithm on my boxes? I would prefer to generate them from the F5 instead of OpenSSL. Thanks.
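
If it turns out the 11.4 GUI only signs with SHA-1, one workaround (admittedly OpenSSL, but run on the BIG-IP itself) is to build the SHA-256 CSR from the shell and then import the resulting key and certificate; a sketch, with illustrative file names:

    # Generate a 2048-bit key and a SHA-256 signed CSR on the box
    openssl req -new -newkey rsa:2048 -nodes -sha256 \
        -keyout /var/tmp/www_example_com.key -out /var/tmp/www_example_com.csr
    # After the CA returns the certificate, import both objects
    tmsh install sys crypto key www_example_com.key from-local-file /var/tmp/www_example_com.key
    tmsh install sys crypto cert www_example_com.crt from-local-file /var/tmp/www_example_com.crt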

Load Aware Fabrics

#cloud Heterogeneous infrastructure fabrics are appealing, but watch out for the gotchas.

One of the "rules" of application delivery (and infrastructure in general) has been that when scaling out such technologies, all components must be equal. That started with basic redundancy (deploying two of everything to avoid a single point of failure in the data path) and has remained true until recently. Today, fabrics can be comprised of heterogeneous components. Beefy, physical hardware can be easily paired with virtualized or cloud-hosted components. This is good news for organizations seeking the means to periodically scale out infrastructure without oversubscribing the rest of the year, leaving resources idle.

Except when it's not so good: when something goes wrong and there's suddenly not enough capacity to handle the load because of the disparity in component capacity. We (as in the industry) used to never, ever, ever suggest running active-active infrastructure components when the load on each component was greater than 50%. The math easily shows why: if two components each run at 60% of capacity and one fails, the survivor is suddenly asked to carry 120% of its own capacity, and requests start queuing or being dropped.

It's important to note that this isn't just a disaster (failure) scenario. The same is true for maintenance, upgrades, and so on. This is why emerging fabric-based models should be active-active-N. That "N" is critically important as a source of resources designed to ensure that the "not so good" scenario is covered.

This fundamental axiom of architecting reliable anything - always match capacity with demand - is the basis for understanding the importance of load-aware failover and distribution in fabric-based architectures. In most HA (high availability) scenarios the network architect carefully determines the order of precedence and failover. These are pre-determined: there's a primary and a secondary (and a tertiary, and so on). That's it. It doesn't matter if the secondary is already near or at capacity, or that it's a virtualized element with limited capacity instead of a more capable piece of hardware. It is what it is. And that "is" could be disastrous to availability.

If that secondary isn't able to handle the load, users are going to be very angry, because either responsiveness will plummet to the point the app might as well be unavailable, or it will be completely unavailable. In either case, it's not meeting whatever SLA has been brokered between IT and the business owner of that application.

That's why it's vitally important, as we move toward fabric-based architectures, that failover and redundancy get more intelligent, and that the algorithms used to distribute traffic across the fabric get very, very intelligent. Both must become load aware and able to dynamically determine what to do in the event of a failure. The fabric itself ought to be aware not just of how much capacity each individual component can handle, but of how much it is currently handling, so that if a failure occurs or performance is degrading it can determine dynamically which component (or components, if need be) can take over more load. In the future, that intelligence might also enable the fabric to spin up more resources if it recognizes there simply aren't enough.

As we continue to architect "smarter" networks, we need to re-evaluate existing technology and figure out how it needs to evolve, too, to fit into this new, more dynamic and efficiency-driven world.
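
As a concrete illustration of what load-aware distribution can look like on BIG-IP today (a sketch; the pool name, addresses, and ratios are illustrative and assume the larger component merits roughly four times the share of a small virtual instance):

    # Weight members by relative capacity, then pick the least-loaded among them
    tmsh create ltm pool app_pool \
        load-balancing-mode ratio-least-connections-member \
        members add { 10.0.0.1:80 { ratio 4 } 10.0.0.2:80 { ratio 1 } }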

It's probably true that failover technologies and load balancing algorithms aren't particularly exciting to most people, but they're a necessary and critical function of networks and infrastructure designed to ensure high availability in the event of (what many would call inevitable) failure. So as network and application service technologies evolve and transform, we've got to consider how to adapt foundational technologies like failover models, so we don't lose the stability necessary to continue evolving the network.