# Managing ZoneRunner Resource Records with Bigsuds
Over the last several years, there have been questions, internal and external, on how to manage ZoneRunner (the GUI tool in F5 DNS that allows you to manage DNS zones and records) resources via the REST interface. But that's a no-can-do with iControl REST: it doesn't have that functionality. It was brought to my attention by one of our solutions engineers that a customer is using some methods in the SOAP interface that allow you to do just that...which was news to me! The things you learn... In this article, I'll highlight a few of the methods available to you and work through a sample domain using the python module bigsuds, which utilizes the suds SOAP library for communication duties with the BIG-IP iControl SOAP interface.

## Test Domain & Procedure

For demonstration purposes, I'll create a domain in the external view, dctest1.local, with the following attributes, mirroring nearly identically one I created in the GUI:

- Type: master
- Zone Name: dctest1.local.
- Zone File Name: db.external.dctest1.local.
- Options: allow-update from localhost
- TTL: 500
- SOA: ns1.dctest1.local.
- Email: hostmaster.ns1.dctest1.local.
- Serial: 2021092201
- Refresh: 10800
- Retry: 3600
- Expire: 604800
- Negative TTL: 60

I'll also add a couple of type A records to that domain:

- name: mail.dctest1.local., address: 10.0.2.25, TTL: 86400
- name: www.dctest1.local., address: 10.0.2.80, TTL: 3600

After adding the records, I'll update one of them, changing the IP and the TTL:

- name: mail.dctest1.local., address: 10.0.2.110, TTL: 900

Then I'll delete the other one:

- name: www.dctest1.local., address: 10.0.2.80, TTL: 3600

And finally, I'll delete the zone:

- name: dctest1.local.

## ZoneRunner Methods

All the methods can be found on Clouddocs in the ZoneRunner, Zone, and ResourceRecord method pages. The specific methods we'll use in our highlight reel are:

- Management.ResourceRecord.add_a
- Management.ResourceRecord.delete_a
- Management.ResourceRecord.get_rrs
- Management.ResourceRecord.update_a
- Management.Zone.add_zone_text
- Management.Zone.get_zone_v2
- Management.Zone.zone_exist

With each method, there is a data structure that the interface expects; each link above provides the details, but let's look at an example with the add_a method. The method requires three parameters: view_zones, a_records, and sync_ptrs. The sync_ptrs boolean is just a True/False value in a list. The reason the list ( [] ) is there for all the parameters is that you can send a single request to update more than one zone, and add more than one record within each zone if desired. The view_zones parameter takes a list of ViewZone structures (a view_name and a zone_name), and a_records takes a nested list of ARecord structures (a domain_name, an ip_address, and a ttl); both structures are detailed on Clouddocs. Now that we have an idea of what the methods require, let's take a look at some code!

## Methods In Action

First, I import bigsuds and initialize the BIG-IP. The arguments in bigsuds are ordered: host, username, and password. If the default admin/admin credentials are used, they can be omitted, as is shown here.

```python
import bigsuds

b = bigsuds.BIGIP(hostname='ltm3.test.local')
```

Next, I format the ViewZone data in a native python dictionary, and then I check for the existence of that zone.

```python
zone_view = {'view_name': 'external',
             'zone_name': 'dctest1.local.'}

b.Management.Zone.zone_exist([zone_view])
# [0]
```

Note that the return value, which should be a list of booleans, is a list with a 0. I'm guessing that's either suds or the bigsuds implementation doing that, but it's important to note if you're checking for a boolean False. It's also necessary to set the booleans as 0 or 1 when sending requests to BIG-IP with bigsuds.
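If you'd rather not sprinkle 0/1 handling everywhere, a tiny wrapper helps. This is a minimal sketch of my own (the helper is not part of bigsuds):

```python
def to_bool(value):
    """Coerce a suds-style 0/1 response (or a list of them) to real booleans."""
    if isinstance(value, list):
        return [to_bool(v) for v in value]
    return bool(value)

to_bool(b.Management.Zone.zone_exist([zone_view]))
# [False]
```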
Now I'll create the zone, since it does not yet exist. From the add_zone_text method description on Clouddocs, note that I need to supply, in separate parameters, the zone info, the appropriate zone records, and the boolean for whether or not to sync reverse records.

```python
zone_add_info = {'view_name': 'external',
                 'zone_name': 'dctest1.local.',
                 'zone_type': 'MASTER',
                 'zone_file': 'db.external.dctest1.local.',
                 'option_seq': ['allow-update { localhost;};']}

zone_add_records = 'dctest1.local. 500 IN SOA ns1.dctest1.local. hostmaster.ns1.dctest1.local. 2021092201 10800 3600 604800 60;\n' \
                   'dctest1.local. 3600 IN NS ns1.dctest1.local.;\n' \
                   'ns1.dctest1.local. 3600 IN A 10.0.2.1;'

b.Management.Zone.add_zone_text([zone_add_info], [[zone_add_records]], [0])

b.Management.Zone.zone_exist([zone_view])
# [1]
```

Note that these strings require a detailed understanding of DNS record formatting; the individual fields are not parameters that can be set like in the ZoneRunner GUI. But I am confident there is an abundance of modules in the python ecosystem that manage DNS formatting and could simplify the data structuring.
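For instance, here's a minimal sketch of my own using the third-party dnspython package (an assumption: `pip install dnspython`; any library that models zones and records would do) to build the same zone text without hand-formatting the strings:

```python
import dns.name
import dns.rdata
import dns.rdataclass
import dns.rdatatype
import dns.zone

zone = dns.zone.Zone(dns.name.from_text('dctest1.local.'))

# SOA at the zone apex, TTL 500.
soa = zone.find_rdataset('@', dns.rdatatype.SOA, create=True)
soa.add(dns.rdata.from_text(dns.rdataclass.IN, dns.rdatatype.SOA,
                            'ns1.dctest1.local. hostmaster.ns1.dctest1.local. '
                            '2021092201 10800 3600 604800 60'), ttl=500)

# NS record for the zone and the A record for the name server.
ns = zone.find_rdataset('@', dns.rdatatype.NS, create=True)
ns.add(dns.rdata.from_text(dns.rdataclass.IN, dns.rdatatype.NS,
                           'ns1.dctest1.local.'), ttl=3600)

a = zone.find_rdataset('ns1', dns.rdatatype.A, create=True)
a.add(dns.rdata.from_text(dns.rdataclass.IN, dns.rdatatype.A,
                          '10.0.2.1'), ttl=3600)

# Render standard zone-file text with absolute names.
zone_add_records = zone.to_text(relativize=False)
```

I haven't tested whether add_zone_text accepts this exact rendering, so treat it as a starting point, but it beats hand-assembling SOA strings.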
After creating the zone, another check to see if the zone exists results in a true condition. Huzzah! Now I'll check the zone info and the existing records for that zone.

```python
zone = b.Management.Zone.get_zone_v2([zone_view])
for k, v in zone[0].items():
    print(f'{k}: {v}')
# view_name: external
# zone_name: dctest1.local.
# zone_type: MASTER
# zone_file: "db.external.dctest1.local."
# option_seq: ['allow-update { localhost;};']

rrs = b.Management.ResourceRecord.get_rrs([zone_view])
for rr in rrs[0]:
    print(rr)
# dctest1.local. 500 IN SOA ns1.dctest1.local. hostmaster.ns1.dctest1.local. 2021092201 10800 3600 604800 60
# dctest1.local. 3600 IN NS ns1.dctest1.local.
# ns1.dctest1.local. 3600 IN A 10.0.2.1
```

Everything checks out! Next I'll create the A records for the mail and www services. I'm going to add a filter so only the mail/www records are printed, to cut down on the lines, but know that the other records are still there going forward.

```python
a1 = {'domain_name': 'mail.dctest1.local.', 'ip_address': '10.0.2.25', 'ttl': 86400}
a2 = {'domain_name': 'www.dctest1.local.', 'ip_address': '10.0.2.80', 'ttl': 3600}

b.Management.ResourceRecord.add_a(view_zones=[zone_view],
                                  a_records=[[a1, a2]],
                                  sync_ptrs=[0])

rrs = b.Management.ResourceRecord.get_rrs([zone_view])
for rr in rrs[0]:
    if any(item in rr for item in ['mail', 'www']):
        print(rr)
# mail.dctest1.local. 86400 IN A 10.0.2.25
# www.dctest1.local. 3600 IN A 10.0.2.80
```

Here you can see that I'm adding two records to the zone specified and not creating the reverse records (skipped here for brevity, but in production you likely would). Now I'll update the mail record's address and TTL, defining the new record first.

```python
a1_update = {'domain_name': 'mail.dctest1.local.', 'ip_address': '10.0.2.110', 'ttl': 900}

b.Management.ResourceRecord.update_a([zone_view], [[a1]], [[a1_update]], [0])

rrs = b.Management.ResourceRecord.get_rrs([zone_view])
for rr in rrs[0]:
    if any(item in rr for item in ['mail', 'www']):
        print(rr)
# mail.dctest1.local. 900 IN A 10.0.2.110
# www.dctest1.local. 3600 IN A 10.0.2.80
```

You can see that the address and TTL updated as expected. Note that with the update_<record type> methods, you need to provide both the old and the new records, not just the new. Let's get destructive and delete the www record!

```python
b.Management.ResourceRecord.delete_a([zone_view], [[a2]], [0])

rrs = b.Management.ResourceRecord.get_rrs([zone_view])
for rr in rrs[0]:
    if any(item in rr for item in ['mail', 'www']):
        print(rr)
# mail.dctest1.local. 900 IN A 10.0.2.110
```

And your web service is now unreachable via DNS. Congratulations! But there's more damage we can do: it's time to delete the whole zone.

```python
b.Management.Zone.delete_zone([zone_view])

b.Management.Zone.zone_exist([zone_view])
# [0]
```

And that's a wrap! As I said, it's been years since I have spent time with the iControl SOAP interface. It's nice to know that even though most of what we do now is done through REST, imperatively or declaratively, some functionality missing from that interface is still alive and kicking via SOAP. H/T to Scott Huddy for the nudge to investigate this. Questions? Drop me a comment below. Happy coding! A gist of these samples is available on GitHub.
# JSON versus XML: Your Choice Matters More Than You Think

Should the enterprise standardize on JSON or XML as its lingua franca for Web 2.0 integration? Or should it use both, as best fits the application? The decision impacts more than just integration; it resounds across the entire infrastructure and affects everything from security to performance to availability of those applications.

One of the things a developer may or may not have control over when building enterprise applications is the format of the data used to communicate (integrate) with other applications. Increasingly, services external to the enterprise are very Web 2.0 in that they provide HTTP-based APIs for integration that exchange data in one of a couple of standard formats: XML and JSON. While RSS and ATOM are also seen as options in APIs, these are generally used only when the data being presented is frequently updated and of a "listing" style nature. XML and JSON are used to deliver more complex structures that do not fit well into the paradigm described by RSS- and ATOM-formatted information. Increasingly, libraries or toolkits are used to build interactive Web 2.0 style applications (XAJAX, SAJAX, Dojo, Prototype, script.aculo.us), and these, too, generally default to XML or JSON, though other formats are often supported as well.

So as you're building out that Web 2.0 style application and thinking about the API you're going to offer to make it easier for partners, customers, or other departments to handle integration with their Web 2.0 style applications, or even thinking about the way in which data will be exchanged with the client (browser), you need to think carefully about the choice you're making. There are pros and cons to both JSON and XML, and the choice has implications outside the confines of application development in your organization. The debate on which is "best" or "optimal" is far from over, and it's likely to eclipse, for developers anyway, the religious-style wars over the choice of browser. Even mainstream technology coverage is taking an interest in the subject. A recent piece from C|NET on "NoSQL and the future of cloud databases" says "Mapping object data to JSON, a JavaScript data interchange format, is far less complex. The 'schemaless' nature of many of these products is an excellent fit with agile development methodologies." Indeed, schemaless data formats are certainly more flexible, but that flexibility has a price that may need to be paid by the rest of the infrastructure.
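To make the trade-off concrete, here is a minimal sketch of my own (standard library only; the record itself is invented for illustration) showing the same data serialized both ways:

```python
import json
import xml.etree.ElementTree as ET

order = {'order_id': 1138, 'customer': 'Acme', 'total': 99.95}

# JSON: compact, schemaless, and a near-direct mapping to native structures.
print(json.dumps(order))
# {"order_id": 1138, "customer": "Acme", "total": 99.95}

# XML: more verbose, but amenable to schema (XSD) validation and namespaces.
root = ET.Element('order')
for key, value in order.items():
    ET.SubElement(root, key).text = str(value)
print(ET.tostring(root, encoding='unicode'))
# <order><order_id>1138</order_id><customer>Acme</customer><total>99.95</total></order>
```

The JSON form falls out of the native dictionary almost for free, which is precisely the "far less complex" mapping the C|NET piece describes; the XML form costs more code and bytes but buys you schema enforcement the schemaless format can't offer.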
# Magic Virtualization-Fairy Dust and the New Network

*The virtualization fairy won't create APIs out of thin air, but a visit from her may kick-start a necessary (re)evaluation of the role of the API in the new network.*

The way some people talk about the "virtualization of the network" and how it's necessary for cloud computing, automation, and creating a flexible infrastructure, you'd think the transformation from physical form factor to virtual form factor was a magical one that conferred not only the ability to scale on demand but the APIs as well. There are actually two misconceptions here that need correcting, and you know me: I'm going to correct them.

First is the erroneous belief that in order to fit into a dynamic data center, a network infrastructure component must be virtualized. The thought here seems to revolve around a belief that only by becoming a virtual network appliance (VNA) can a hardware component suddenly be imbued with the control plane, the API, required to be automated and orchestrated in a dynamic, on-demand way. In other words, to have its capabilities delivered as a service. Not true at all. Many network infrastructure components have been control-plane enabled for years, since 2001 or so, in fact. These control planes are standards-based, almost always leveraging HTTP and SOAP or POX (Plain Old XML), and provide the means by which these components have been integrated into third-party management applications for many, many years. That management API, the control plane, is what confers flexibility, programmability, and integration with the rest of the infrastructure and management systems that ultimately enable a dynamic data center.

Second is the belief that the transformation from physical to virtual somehow births an API that did not exist before. That's simply not true. While moving from one form factor to another inherently allows management at the container level, i.e. of the virtual machine, it does not magically confer the ability to manage what's running inside the container, i.e. the actual networking component. That requires an API of some kind, whether that's SOAP or REST or whatever. If the API didn't exist before "virtualization," there is no guarantee it will suddenly exist after the process of virtualization is complete.

Sometimes the process of virtualization seems to result in an API. That's not because of the process; it's because (a) the organization realizes that to be a part of the new infrastructure an API is going to be required, and (b) as long as they're mucking around in the code base to virtualize the solution, it's a really good time to add an API. It's more a matter of "well, we're in here anyway...let's just do it" than anything else. This isn't a "cause -> effect" chain but more a coincidence, albeit perhaps a logical and financially smart one. If the API existed before the virtualization fairy showed up, well, it's almost certainly still going to be there after it's virtualized.
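To put a face on the argument, here's an entirely hypothetical sketch (the host, path, and payload are invented; real devices expose their own SOAP or REST schemas): orchestration talks to the management API, which neither knows nor cares whether the data plane underneath is hardware or a VM.

```python
import requests

# Hypothetical REST control plane for a network device. Everything below the
# hostname is illustrative only, not any vendor's actual API.
DEVICE = 'https://adc.example.com/mgmt'

resp = requests.post(
    f'{DEVICE}/pools',
    json={'name': 'web_pool', 'members': ['10.0.2.80:80', '10.0.2.81:80']},
    auth=('admin', 'admin'),
    verify=False,  # lab only; use proper certificates in production
)
resp.raise_for_status()

# The same call automates a physical appliance or a virtual appliance alike:
# the API, not the form factor, is what enables orchestration.
```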
# Impact of Load Balancing on SOAPy and RESTful Applications

*A load balancing algorithm can make or break your application's performance and availability.*

It is a (wrong) belief that "users" of cloud computing, and before that "users" of corporate data center infrastructure, don't need to understand any of that infrastructure. Caution: proceed with infrastructure ignorance at the (very real) risk of your application's performance and availability. Think I'm kidding? Stefan's SOA & Enterprise Architecture Blog has a detailed and very explanatory post on Load Balancing Strategies for SOA Infrastructures that may change your mind.

This post grew, apparently, out of some (perceived) bad behavior on the part of a load balancer in a SOA infrastructure. Specifically, the load balancer configuration was overwhelming the very services it was supposed to be load balancing. Before we completely blame the load balancer, Stefan goes on to explain that the root of the problem lay in the load balancing algorithm used to distribute requests across the services. Specifically, the load balancer was configured to use a static round-robin algorithm and to apply source-IP-address-based affinity (persistence) while doing so. The result was that one instance of the service was constantly sent requests while the others remained idle and available. Stefan explains how the load balancing algorithm was changed to utilize a dynamic ratio algorithm that takes into consideration the state of each service (CPU and memory available), and how the server affinity requirement was removed.

The problem wasn't the load balancer, per se. The load balancer was acting exactly as it was configured to act. The problem lay deeper: in understanding the interaction between the network, the application network, and the services themselves. Services, particularly stateless services as offered by SOA and REST-based APIs today, do not generally require persistence. In cases where they do require persistence, that persistence needs to be based on application-layer information, such as an API key or user (usually available in a cookie).

But this problem isn't unique to SOA. Consider, if you will, the effect that such an unaware distribution might have on any one of the popular social networking sites offering RESTful APIs for integration. Imagine that all Twitter API requests ended up distributed to one server in Twitter's infrastructure. It would fall over quickly, no doubt about that, because the requests are distributed without any consideration for current load; almost, one could say, blindly. Stefan points this out as he continues to examine the effect of load balancing algorithms on his SOA infrastructure: "Secondly, the static round-robin algorithm does not take in effect, which state each cluster node has. So, for example if one cluster node is heavily under load, because it processes some complex orders, and this results in 100% cpu load, then the load balancer will not recognize this but route lots of other requests to this node causing overload and saturation."

Load balancing algorithms that do not take into account the current state of the server and application, i.e. that are not context-aware, are not appropriate for today's dynamic application architectures. Such algorithms are static, brittle, and blind when it comes to distributing load efficiently, and they will ultimately result in an uneven request load that is likely to drive an application to downtime.
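To see the difference, here's a toy sketch of my own (not Stefan's configuration, and greatly simplified relative to any real ADC's dynamic ratio implementation) contrasting a blind round-robin pick with a pick that weighs current load:

```python
import itertools

servers = [
    {'name': 'node1', 'cpu_used': 0.95},  # grinding through a complex order
    {'name': 'node2', 'cpu_used': 0.20},
    {'name': 'node3', 'cpu_used': 0.15},
]

# Static round robin: blind rotation with no notion of server state.
rotation = itertools.cycle(servers)
def pick_round_robin():
    return next(rotation)

# Dynamic ratio (simplified): favor the node with the most headroom, measured
# here as free CPU. A real ADC would also weigh memory, connections, health.
def pick_dynamic():
    return max(servers, key=lambda s: 1.0 - s['cpu_used'])

print(pick_round_robin()['name'])  # node1 -- even though it is saturated
print(pick_dynamic()['name'])      # node3 -- the node with the most headroom
```

Add source-IP affinity on top of the round-robin version and the picture gets worse still: every request from behind the same NAT pins to whichever node it hit first, which is exactly how one instance ends up overwhelmed while the others sit idle.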
## THE APPLICATION SHOULD BE A PART OF THE ALGORITHM

It is imperative in a distributed application architecture like SOA or REST that the application network infrastructure, i.e. the load balancer, be able to take into consideration the current load on any given server before distributing a request. If one node in the (pool|farm|cluster) is processing a complex order that consumes most of the CPU resources available, the load balancer should not continue to send it requests. This requires that the load balancer, the application delivery controller, be aware of the application and its environment, as well as the network and the user. It must be able to make a decision, in real time, about where to direct any given request based on all the variables available. That includes CPU resources, what the request is, and even who the user/application is.

For example, Twitter uses a system of inbound rate limiting on API calls to help manage the load on its infrastructure. Part of that equation could be the calling application. HTTP as a transport protocol contains a surprisingly rich array of information in its headers that can be parsed and inspected and made a part of the load balancing equation in any environment. This is particularly useful to sites like Twitter, where multiple "applications" (clients) are making use of the API. Twitter can easily require the use of a custom HTTP header that includes the application name and utilize that as part of its decision-making process.

Like RESTful APIs, SOAP envelopes are full of application specifics that provide data to the load balancer, if it's context-aware, that can be utilized to determine how best to distribute a request. The name of the operation being invoked, for example, can be used to load balance not only at the service level, but at the operation level. That granularity can be important when operations vary in their consumption of resources. This application-layer information, in conjunction with the current load and connections on the server, provides a wealth of information about how best, i.e. most efficiently, to distribute any given request. But if the folks in charge of configuring the load balancer aren't aware of the impact of algorithms on the application and its infrastructure, you can end up in a situation much like that described in Stefan's blog on the subject.
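Here is a toy sketch of what that kind of context-aware selection might look like (the X-Application-Name header, the pool mapping, and the addresses are all invented for illustration; real ADCs express this in their own policy languages):

```python
# Hypothetical context-aware selection: route on an application-identifying
# HTTP header, then pick the least-loaded node within the chosen pool.
POOLS = {
    'mobile-client': ['10.0.3.10', '10.0.3.11'],
    'partner-feed':  ['10.0.3.20'],
}
DEFAULT_POOL = ['10.0.3.30', '10.0.3.31']

def choose_node(headers, load_by_node):
    pool = POOLS.get(headers.get('X-Application-Name', ''), DEFAULT_POOL)
    # Within the pool, prefer the node with the lowest current load.
    return min(pool, key=lambda node: load_by_node.get(node, 0.0))

node = choose_node({'X-Application-Name': 'mobile-client'},
                   {'10.0.3.10': 0.80, '10.0.3.11': 0.25})
print(node)  # 10.0.3.11
```

The same shape works for SOAP: substitute the operation name pulled from the envelope for the header, and you're load balancing at the operation level rather than the service level.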
## CLOUD WILL MAKE THIS SITUATION WORSE

Cloud computing won't solve this problem and, in fact, will probably make it worse. The belief that the infrastructure should be "hidden" from the user (that's you) means that configuration options, like the load balancing algorithm, aren't available to you as a user/deployer of cloud-based applications. Even though load balancing is going to be used to scale your application, you have no clue about, or control over, how that's going to occur. That's why it's important to ask questions of your provider on this subject. You need to know what algorithm is being used and how requests are distributed, so you can determine how that's going to impact your application and its performance once it's deployed. You can't, or shouldn't, assume the load balancing provided is going to magically distribute requests perfectly across your scaled application, because it wasn't configured with your application in mind.

If you deploy an application, particularly a SOA or RESTful one, you may find that with scalability comes poor performance or even unavailable applications, because of the configuration of that infrastructure you "aren't supposed to worry about." Applications are not islands; they aren't deployed stand-alone, even though the virtualization of applications is making it seem like that's the case. The delivery of applications requires collaboration between a growing number of components in the data center, and load balancing is one of the key components that can make or break your application's performance and availability.

Related reading:

- Five questions you need to ask about load balancing and the cloud
- Dr. Dobb's Journal: Coding in the Cloud
- Cloud Computing: Vertical Scalability is Still Your Problem
- Server Virtualization versus Server Virtualization
- SOA & Web 2.0: The Connection Management Challenge
- The Impact of the Network on AJAX
- Have a can of Duh! It's on me
- Intro to Load Balancing for Developers – The Algorithms
- Not All Virtual Servers are Created Equal
# Cloud Computing and Networking Vendors: The Third Option

Alistair Croll has a great post on GIGAOM discussing how networking vendors will need to change in order to support a cloud computing infrastructure. He outlines two options for networking vendors that will keep them relevant in a cloud computing environment. In option number one he postulates that virtual appliances are the way to go, that the "pendulum swings back to software." Option number two revolves around sales strategy, and he suggests that networking vendors will need to sell to the providers of the cloud. That makes sense to me: if you want to be a part of the cloud computing infrastructure, then you should probably sell to the people building those infrastructures. Option #2 is not really a question of technology; Alistair is somewhat mixing his metaphors, as it were, with these two options, so I'm going to focus on the technology because, honestly, I couldn't sell water to a man dying of thirst.

The goal of the virtual appliance (option #1) is to provide networking (routing, firewalls, load balancing) to customers of the cloud in a way that (a) is easy for cloud computing providers to provision and de-provision on demand and (b) segregates management and configuration on a customer-by-customer basis. When Customer A wants a load balancer, for example, the cloud computing provider needs to provision the resource and associated management to the customer in an automated way. Hence Alistair's choice of a virtual appliance, as it is well established that within the right framework, with the right tools, a virtual image of any application can be easily provisioned and de-provisioned. While the virtual appliance is certainly one way this can be accomplished, there is another option, one that doesn't introduce the overhead of virtualization into the equation and maintains the performance gains made in hardware in the networking world.

## Option #3: A service-enabled device with virtualized management

The service-enabled networking device presents its management and configuration interfaces as a Web services API, either via REST or SOAP. It can be easily integrated into the provisioning workflow automation systems that will be required to truly make cloud computing viable for a large number of customers. Using standards-based interfaces, the service-enabled networking device can be provisioned, configured, monitored, and managed remotely through integration with custom or packaged solutions, and customers can be given remote access via a management application to customize, manage, and configure their piece of the networking device.

This requires the second half of this option: virtualized management. The management of the service-enabled networking device must be virtualized such that customers are segregated; that is, configuration changes made by Customer A to its piece of the device will not affect Customer B. The service-enabled API must support this kind of virtualization, such that groups and individual users can be assigned a particular domain to which they belong, with defined rights within that domain and no other. The virtualized management must be consistent across the GUI, CLI, and service-enabled interfaces to provide the most value and to ensure segregation between customers. The cloud computing environment must be scalable on demand, and it must support high volume (capacity) without sacrificing performance.
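As a sketch of the idea (every endpoint, field, and partition name here is hypothetical; no particular vendor's API is implied), provisioning against such a device with per-customer management domains might look like this:

```python
import requests

# Hypothetical service-enabled device API with multi-tenant partitions.
DEVICE = 'https://adc.provider.example/api'

session = requests.Session()
session.auth = ('provisioning-svc', 'secret')  # placeholder credentials

# Create a management domain scoped to one customer; changes made inside it
# cannot affect any other tenant on the shared device.
session.post(f'{DEVICE}/partitions', json={'name': 'customer-a'})

# Grant the customer's admin rights only within that domain.
session.post(f'{DEVICE}/users', json={
    'name': 'customer-a-admin',
    'partition': 'customer-a',
    'role': 'manager',
})

# Provision a load balancing virtual server inside the partition on the
# customer's behalf, as a workflow automation system would.
session.post(f'{DEVICE}/partitions/customer-a/virtual-servers', json={
    'name': 'app1-vip',
    'destination': '203.0.113.10:443',
})
```

The point is that the same three calls work from a provider's orchestration system or, subject to the domain's rights, from the customer's own tooling, whether the data plane underneath is shared hardware or anything else.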
Automation and integration have always, always degraded performance in the software world, and it is rarely as "on-demand" as the solutions claim to be. While a virtual appliance is not necessarily a bad idea, there are still unanswered questions regarding the impact of virtualization on the performance of networking devices. The loss of hardware acceleration for compute-intensive tasks like XML processing, compression, SSL operations, bulk encryption, and simple packet processing at wire speed might unnecessarily increase the hardware horsepower needed to deploy such solutions, and they are still unlikely to come close to the performance and capacity available in hardware solutions. For highly performant cloud computing infrastructures, option #3 is likely the better choice, because it offers flexibility, integration capabilities, virtualized management, and automated (on-demand) provisioning without sacrificing the hard-won benefits of hardware processing in the network.