F5 BGP Peering in Active/Standby Cluster
Hi team, I am trying to achieve dynamic routing on F5 BIG-IP devices with Cisco ACI in front, with the following design requirements. The F5 is configured as the default gateway for a pool of web servers, and the virtual IP subnet for the VIPs is hosted on the F5 itself. BGP runs over a separate VLAN used to peer with the Cisco ACI fabric via eBGP: the peering is established from a self IP on the BGP VLAN, and routes are advertised through that peering. In "sh ip route" on the imish console I can see the virtual subnet a.a.a.a and the pool member subnet b.b.b.b as directly connected routes. Both a.a.a.a and b.b.b.b are consistently advertised to the Cisco ACI peers from the active and the standby F5 devices, each from its locally hosted BGP peer. I injected a /32 route for the VIP subnet a.a.a.a by advertising it via BGP; it shows up as a kernel route and works well in the clustering scenario because it is only advertised from the active box. The problem is the pool member subnet b.b.b.b, which is advertised as a connected route from both the active and the standby devices to the Cisco ACI peers, so traffic ends up taking both paths simultaneously. I have a few questions about this design: Is there a way to suppress routes from the standby device being advertised to the upstream routers? Is there a way to adjust the metrics so that directly connected routes are advertised with a low metric from the standby device? I tried configuring the pool member subnet as a wildcard forwarding IP and injecting RHI for b.b.b.b as a TMM route, but it was not honoured as a TMM route because it is a directly connected route. Any feasible way to stop route advertisement from the standby would be greatly appreciated.
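A minimal imish (ZebOS) sketch of the kind of outbound policy being asked about here: the AS numbers, peer address, and route-map name are placeholders, and the policy would have to exist only on the standby unit (or be toggled on failover), which config sync does not do for you.

```
router bgp 65010
 neighbor 10.10.10.1 remote-as 65000
 ! apply an outbound policy toward the ACI peer
 neighbor 10.10.10.1 route-map DEPREF-OUT out
!
route-map DEPREF-OUT permit 10
 ! make this unit's paths less attractive by prepending the local AS
 ! (or use "set metric 200" instead to raise the MED on the connected routes)
 set as-path prepend 65010 65010 65010
```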
Checkpoint Web Smartconsole behind reverse proxy.

Does anyone have any experience trying (and hopefully succeeding) to put a Check Point (CP) Provider-1 based web SmartConsole behind a reverse proxy? The thing is that CP uses local IP addresses to identify one of a selection of management module instances, and it uses WebTransport/WebSockets to connect from these management modules back to the browser for displaying firewall policies, log data, and so on. That all seems fairly OK, but they don't anchor it to the connection ID, so the raw IPs (of what they call the domain blade/instance) get passed to the browser. We would prefer to NAT/hide/re-IP the server (domain) side IPs and not have the internal server/domain IPs sent along to the browser. Part of the conversation, with some wrapper text from me, from the server to the client follows:
***
We wish to access various customer domains using the /smartconsole web interface. The access has to be behind a reverse proxy (an F5 vIP), and after the initial logon using the CMA IP behind a vIP (so the address the browser sees is a public service one) you get a screen where the domain is listed; after selecting continue you are redirected separately to the CMA IP in an internal JSON/JavaScript message, breaking the attempt to have the CMA behind a reverse proxy.
***
{"data":{"loginToDomain":{"transportOtt":"107ad894-253d-4638-aa31-1c3e7d23172a","transportUrl":"https://100.64.20.29:443/smartconsole/transport","__typename":"LoginToDomainResponse"}}}
***
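One approach sometimes tried for this kind of embedded-IP problem is rewriting the internal address in the response payload with a stream profile and an iRule. A minimal sketch, assuming a stream profile is attached to the virtual server and the JSON transits uncompressed; the public hostname is a placeholder, and whether the subsequent WebSocket transport survives the rewrite would still need testing:

```tcl
when HTTP_REQUEST {
    # don't rewrite request traffic, and avoid compressed responses we can't edit
    HTTP::header remove Accept-Encoding
    STREAM::disable
}
when HTTP_RESPONSE {
    # swap the internal CMA address in JSON payloads for the published name
    if { [HTTP::header value Content-Type] contains "json" } {
        STREAM::expression {@https://100.64.20.29:443@https://smartconsole.example.com@}
        STREAM::enable
    }
}
```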
Kerberos SSO without APM?

Hi, I have a question regarding single sign-on with Kerberos. I have a pair of virtual BIG-IPs on a VIPRION system running active/standby. The systems are connected one-armed and therefore use SNAT. I am load balancing several servers using the LTM... quite easy, no problems. Now the application guys would like to use our Active Directory for single sign-on to these servers. This works fine if you address one of the pool servers directly, but not through the virtual server. I think the problem is the SNAT: when crossing the LTM, the source address of the packet is changed, so when the Kerberos ticket arrives at the server, the IP inside the ticket differs from the source IP. Is that right, or is there another reason? Would running the LTM in two-armed mode without SNAT solve my problem? Or is the only way to buy an APM license and let the BIG-IP talk to Active Directory? Unfortunately I have only little knowledge of SSO/Kerberos/AD, but I hope I could make myself clear. Thanks in advance. Regards, Thorsten
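For what it's worth, turning SNAT off on the virtual server is a one-line change in tmsh; a sketch with a hypothetical virtual server name. Note that the pool members would then need to route their return traffic back through the BIG-IP (e.g., by using a self IP as their gateway), which is effectively the two-armed setup mentioned above.

```
tmsh modify ltm virtual vs_kerberos_app source-address-translation { type none }
```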
Is it possible to connect to VPN from a Windows client command line without user interaction?

I am trying to get our F5 11.4 APM set up to work with remote Windows clients that need to connect to and then disconnect from the VPN without any user interaction. To briefly summarize, this is for a remote password self-service application: the client machines run a command-line connect such as f5fpc -start /c "C:\Program Files\F5 VPN\client.f5c" /t /q so the users don't get any prompts or pop-ups, the client.f5c file has to provide the username, password, and server to connect to, the password self-service app then does its thing, and finally f5fpc -stop is issued at the end to close the VPN session. This needs to happen in the background, since it uses straight username and password authentication that we don't want the users to see. I keep getting authentication failures, and when I check client.f5c I see that it has not saved my password. I went back and checked my F5 settings, and sure enough the "save password on exit" setting is set to false. Even if I edit the file and save it, it just ignores the password I put in there and removes it afterwards. I guess I would need to set the connectivity profile to allow password caching and save it to disk, but can I make that change, download the package with those options, and then change it back, since we generally don't want passwords to be cached? How would I need to configure the access policy for this to work? I tried with and without a logon page, but I don't seem to be getting a username and/or password passed through from the client; none of the session variables show the values from the Edge Client config file. Is there a way to do this without a logon page so it is completely transparent and silent to the users?
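As a sketch of the intended flow in a batch wrapper: the switches are the ones quoted above, and the assumption that f5fpc is on the PATH (or that you change into its install directory first) is mine.

```bat
@echo off
rem connect silently using the pre-built connection profile (no prompts)
f5fpc -start /c "C:\Program Files\F5 VPN\client.f5c" /t /q

rem ... run the password self-service application here ...

rem tear the VPN session down when finished
f5fpc -stop
```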
Cookie persistence and source address fallback

I wonder what the result of such a setup will be:
- LB method: Round Robin
- Default Persistence Profile: Cookie (Cookie Insert)
- Fallback Persistence Profile: Source Address
- Source IP the same for all requests (SNATed)

My assumption is:
1. A first new TCP connection is established, no cookie present.
2. Fallback persistence is consulted, but no persistence record (PR) is found.
3. No persistence is applied because none exists, so the connection is directed to the first member.

What will happen then? Will a persistence record for the source IP be created pointing to the first server? In the HTTP response a cookie is inserted pointing to the first pool member. Then a second connection from the same IP comes in; assuming the PR was created and did not time out, load balancing will be ignored and the connection will be directed to the first server, and in the HTTP response again a cookie pointing to the first server will be inserted. Then all returning connections (with cookies set) will be directed to the first server, so load balancing will in fact not be used, except when there is a long enough period of inactivity between connections for the PR to expire. But will a new connection then be sent to the second server according to round robin, or not necessarily? Is the above correct? Piotr
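A sketch of the configuration described above in tmsh (the virtual server name is hypothetical), plus the command for watching the source-address records appear and expire as connections arrive, which is the easiest way to verify the assumed sequence:

```
tmsh modify ltm virtual vs_piotr persist replace-all-with { cookie } fallback-persistence source_addr
tmsh show ltm persistence persist-records
```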
One Connect not keeping connection open on HTTP 204 No Content

We have an application that returns an 'HTTP 204 No Content' response to 99% of all requests. These connections are kept open and reused on the client side of the F5. The problem is that the load balancer closes these connections on the server side right after the HTTP 204 response is received from the server. When we send an HTTP 200, the connection is kept open and reused (normal OneConnect operation). Is there an iRule we can apply to the VIP to keep the connection open even when the server returns an 'HTTP 204 No Content'? Thanks
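A minimal sketch of one thing worth trying, on the assumption that the server-side close is triggered by the 204 response carrying no Content-Length header; since a 204 has no body, an explicit zero-length header should be harmless:

```tcl
when HTTP_RESPONSE {
    if { [HTTP::status] == 204 && ![HTTP::header exists Content-Length] } {
        # advertise an explicit zero-length body so the server-side
        # connection can be parked and reused by OneConnect
        HTTP::header insert Content-Length 0
    }
}
```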
5 Years Later: OpenAJAX Who?

Five years ago the OpenAjax Alliance was founded with the intention of providing interoperability between what was quickly becoming a morass of AJAX-based libraries and APIs. Where is it today, and why has it failed to achieve more prominence?

I stumbled recently over a nearly five year old article I wrote in 2006 for Network Computing on the OpenAjax initiative. Remember, AJAX and Web 2.0 were just coming of age then, and mentions of Web 2.0 or AJAX were much like those of “cloud” today. You couldn’t turn around without hearing someone promoting their solution by associating it with Web 2.0 or AJAX. After reading the opening paragraph I remembered clearly writing the article and being skeptical, even then, of what impact such an alliance would have on the industry. Being a developer by trade I’m well aware of how impactful “standards” and “specifications” really are in the real world, but the problem – interoperability across a growing field of JavaScript libraries – seemed real and imminent at the time, so there was a need for someone to address it before it got completely out of hand.

With the OpenAjax Alliance comes the possibility for a unified language, as well as a set of APIs, on which developers could easily implement dynamic Web applications. A unified toolkit would offer consistency in a market that has myriad Ajax-based technologies in play, providing the enterprise with a broader pool of developers able to offer long term support for applications and a stable base on which to build applications. As is the case with many fledgling technologies, one toolkit will become the standard—whether through a standards body or by de facto adoption—and Dojo is one of the favored entrants in the race to become that standard. -- AJAX-based Dojo Toolkit, Network Computing, Oct 2006

The goal was simple: interoperability. The way in which the alliance went about achieving that goal, however, may have something to do with its lackluster performance lo these past five years and its descent into obscurity.

5 YEAR ACCOMPLISHMENTS of the OPENAJAX ALLIANCE

The OpenAjax Alliance members have not been idle. They have published several very complete and well-defined specifications, including one “industry standard”: OpenAjax Metadata.

OpenAjax Hub: The OpenAjax Hub is a set of standard JavaScript functionality defined by the OpenAjax Alliance that addresses key interoperability and security issues that arise when multiple Ajax libraries and/or components are used within the same web page. (OpenAjax Hub 2.0 Specification)

OpenAjax Metadata: OpenAjax Metadata represents a set of industry-standard metadata defined by the OpenAjax Alliance that enhances interoperability across Ajax toolkits and Ajax products. (OpenAjax Metadata 1.0 Specification) OpenAjax Metadata defines Ajax industry standards for an XML format that describes the JavaScript APIs and widgets found within Ajax toolkits. (OpenAjax Alliance Recent News)

It is interesting to see XML called out as the format of choice in the OpenAjax Metadata (OAM) specification, given the recent rise to ascendancy of JSON as developers’ preferred format for APIs. Granted, when the alliance was formed XML was all the rage and it was believed it would be the dominant format for quite some time, given the popularity of similar technological models such as SOA, but still – the reliance on XML while the plurality of developers race to JSON may provide some insight into why OpenAjax has received very little notice since its inception.
Ignoring the XML factor (which undoubtedly is a fairly impactful one) there is still the matter of how the alliance chose to address run-time interoperability with OpenAjax Hub (OAH) – a hub. A publish-subscribe hub, to be more precise, in which OAH mediates for various toolkits on the same page. Don summed it up nicely during a discussion on the topic: it’s page-level integration.

This is a very different approach to the problem than it first appeared the alliance would take. The article on the alliance and its intended purpose five years ago clearly indicates where I thought this was going – and where it should go: an industry standard model and/or set of APIs to which other toolkit developers would design and write, such that the interface (the method calls) would be unified across all toolkits while the implementation would remain whatever the toolkit designers desired. I was clearly under the influence of SOA and its decouple-everything premise. Come to think of it, I still am, because interoperability assumes such a model – always has, likely always will. Even in the network, at the IP layer, we have standardized interfaces with vendor implementations decoupled and completely different at the code base. An Ethernet header is always in a specified format, and it is that standardized interface that makes the Net go over, under, around and through the various routers and switches and components that make up the Internets with alacrity. Routing problems today are caused by human error in configuration or by failure – never by incompatibility in form or function.

Neither specification has really taken that direction. OAM – as previously noted – standardizes on XML and is primarily used to describe APIs and components; it isn’t an API or model itself. The Alliance wiki describes the specification: “The primary target consumers of OpenAjax Metadata 1.0 are software products, particularly Web page developer tools targeting Ajax developers.” Very few software products have implemented support for OAM. IBM, a key player in the Alliance, leverages the OpenAjax Hub for secure mashup development and also implements OAM in several of its products, including Rational Application Developer (RAD) and IBM Mashup Center. Eclipse also includes support for OAM, as does Adobe Dreamweaver CS4. The IDE working group has developed an open source set of tools based on OAM, but what appears to be missing is adoption of OAM by the producers of favored toolkits such as jQuery, Prototype and MooTools. Doing so would certainly make development of AJAX-based applications within development environments much simpler and more consistent, but it does not appear to be gaining widespread support or mindshare despite IBM’s efforts.

The focus of the OpenAjax interoperability efforts appears to be on a hub / integration method of interoperability, one that is certainly not in line with reality. While developers may at times combine JavaScript libraries to build the rich, interactive interfaces demanded by consumers of a Web 2.0 application, this is the exception and not the rule, and the pub/sub basis of OpenAjax, which implements a secondary event-driven framework, seems overkill. Conflicts between libraries, performance issues with load times dragged down by the inclusion of multiple files, and simplicity tend to drive developers to a single library when possible (which is most of the time).
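For reference, the page-level pub/sub model the Hub defines looks roughly like this (a sketch against the Hub 1.0-style API; the topic name and payload are made up):

```javascript
// a widget built on one toolkit subscribes to a topic through the hub...
OpenAjax.hub.subscribe("myapp.cart.updated", function (topic, data) {
    console.log(topic + " -> " + data.items + " items");
});

// ...and a component built on a different toolkit publishes to it,
// with the hub mediating between the two on the same page
OpenAjax.hub.publish("myapp.cart.updated", { items: 3 });
```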
It appears, simply, that the OpenAJAX Alliance – driven perhaps by active members for whom solutions providing integration and hub-based interoperability are typical (IBM, BEA (now Oracle), Microsoft and other enterprise heavyweights) – has chosen a target in another field; one on which developers today are just not playing. It appears OpenAjax tried to bring an enterprise application integration (EAI) solution to a problem that didn’t – and likely won’t ever – exist. So it’s no surprise to discover that references to and activity from OpenAjax have been nearly zero since 2009. Given the statistics showing the rise of jQuery – both as a percentage of site usage and of developer usage – to the top of the JavaScript library heap, it appears that at least the prediction that “one toolkit will become the standard—whether through a standards body or by de facto adoption” was accurate. Of course, since that’s always the way it works in technology, it was kind of a sure bet, wasn’t it?

WHY INFRASTRUCTURE SERVICE PROVIDERS and VENDORS CARE ABOUT DEVELOPER STANDARDS

You might notice in the list of members of the OpenAJAX Alliance several infrastructure vendors – folks who produce application delivery controllers, switches and routers, and security-focused solutions. This is not uncommon, nor should it seem odd to the casual observer. All data flows, ultimately, through the network, and thus every component that might need to act in some way upon that data needs to be aware of and knowledgeable regarding the methods used by developers to perform such data exchanges. In the age of hyper-scalability and über security, it behooves infrastructure vendors – and increasingly cloud computing providers that offer infrastructure services – to be very aware of the methods and toolkits being used by developers to build applications. Applying security policies to JSON-encoded data, for example, requires very different techniques and skills than would be the case for XML-formatted data. AJAX-based applications, a.k.a. Web 2.0, require different scalability patterns to achieve maximum performance and utilization of resources than traditional form-based HTML applications. The type of content as well as the usage patterns of applications can dramatically impact the application delivery policies necessary to achieve the operational and business objectives for that application.

As developers standardize through selection and implementation of toolkits, vendors and providers can begin to focus solutions specifically on those choices. Templates and policies geared toward optimizing and accelerating jQuery, for example, are possible and probable. Being able to provide pre-developed and tested security profiles specifically for jQuery reduces the time to deploy such applications in a production environment by eliminating the test-and-tweak cycle that occurs when applications are tossed over the wall to operations by developers. For example, the jQuery.ajax() documentation states:

By default, Ajax requests are sent using the GET HTTP method. If the POST method is required, the method can be specified by setting a value for the type option. This option affects how the contents of the data option are sent to the server. POST data will always be transmitted to the server using UTF-8 charset, per the W3C XMLHTTPRequest standard. The data option can contain either a query string of the form key1=value1&key2=value2, or a map of the form {key1: 'value1', key2: 'value2'}.
If the latter form is used, the data is converted into a query string using jQuery.param() before it is sent. This processing can be circumvented by setting processData to false. The processing might be undesirable if you wish to send an XML object to the server; in this case, change the contentType option from application/x-www-form-urlencoded to a more appropriate MIME type.

Web application firewalls that may be configured to detect exploitation of such data – attempts at SQL injection, for example – must be able to parse this data in order to make a determination regarding the legitimacy of the input. Similarly, application delivery controllers and load balancing services configured to perform application-layer switching based on data values or the submission URI will also need to be able to parse and act upon that data. That requires an understanding of how jQuery formats its data and what to expect, such that it can be parsed, interpreted and processed. By understanding jQuery – and other developer toolkits and standards used to exchange data – infrastructure service providers and vendors can more readily provide security and delivery policies tailored to those formats natively, which greatly reduces the impact of intermediate processing on performance while ensuring the secure, healthy delivery of applications.
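To make the formats from that jQuery documentation excerpt concrete, a quick sketch of the three submission styles it describes (the URL and data values are placeholders):

```javascript
// query-string form: sent to the server as-is
$.ajax({ url: "/save", type: "POST", data: "key1=value1&key2=value2" });

// map form: serialized to a query string via jQuery.param() before sending
$.ajax({ url: "/save", type: "POST", data: { key1: "value1", key2: "value2" } });

// raw XML: skip the serialization and declare an appropriate MIME type
$.ajax({
    url: "/save",
    type: "POST",
    data: "<order><id>1</id></order>",
    processData: false,
    contentType: "text/xml"
});
```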
F5 Content Switching

I am coming from a NetScaler world, where there is an element in the UI for content switching... I know that with F5 you can use iRules to select a pool, but the problem I find with that is that in the NetScaler world you content-switch to a virtual server, which has the advantage of carrying all of the virtual-server-specific policies/authentication/WAF etc. tied to it. Is it possible to use iRules to direct traffic to another virtual server on the F5, while retaining the same external IP for the client, instead of selecting a pool? (I.e., I don't want to do a simple redirect from one URL or IP to another; it needs to be transparent, like NetScaler does it.)
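For reference, iRules do have a command for exactly this kind of hand-off: the virtual command sends the connection to another (typically internal) virtual server rather than to a pool, so the target's own profiles and policies still apply and the client-facing IP is unchanged. A minimal sketch with hypothetical virtual server names:

```tcl
when HTTP_REQUEST {
    # hand the connection to a different virtual server based on the Host header;
    # the external VIP the client sees stays the same, only the internal target changes
    if { [HTTP::host] equals "app1.example.com" } {
        virtual vs_app1_internal
    } else {
        virtual vs_default_internal
    }
}
```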