soap
29 Topics

HTTP Post SOAP XML monitor with data
I need to set up an HTTP POST monitor that makes a SOAP XML call, sends some data, and lets me handle the result. The test with curl works 100%, but when I configure the HTTP monitor, or test with "echo -ne", the request with the data is not forwarded at all. I'm using version 14.1.2.3.

1) The successful test via curl:

curl -X POST "http://10.10.10.10:9080/aaa/services/ARService?server=mlt3ho0700&webService=MonitorarServico" -H 'Content-Type: text/xml; charset=UTF-8' -H 'SOAPAction: urn:MonitorarServico/monitorarServico' -d '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:urn="urn:MonitorarServico"><soapenv:Body><urn:monitorarServico><urn:tipoOperacao>monitorarServico</urn:tipoOperacao><urn:nomeServidor>mlt3ho0740</urn:nomeServidor><urn:portaAplicacao>9080</urn:portaAplicacao><urn:nomeUsuario>TEST</urn:nomeUsuario></urn:monitorarServico></soapenv:Body></soapenv:Envelope>'

Answer OK:

<?xml version="1.0" encoding="UTF-8"?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><soapenv:Body><ns0:monitorarServicoResponse xmlns:ns0="urn:MonitorarServico" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <ns0:codRetorno>0</ns0:codRetorno> <ns0:msgRetorno>UP</ns0:msgRetorno> </ns0:monitorarServicoResponse></soapenv:Body></soapenv:Envelope>

2) The test that mirrors the HTTP monitor configuration, using echo -ne and nc:

(echo -ne "POST http://10.10.10.10:9080/arsys/services/ARService?server=mlt3ho0700&webService=MonitorarService \r\n HTTP/1.1\r\nContent-Type: text/xml;charset=utf-8\r\nSOAPAction: urn:MonitorarServico/monitorarServico\r\n\r\n<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\"xmlns:urn=\"urn:MonitorarServico\"><soapenv:Body><urn:monitorarServico><urn:tipoOperacao>monitorarServico</urn:tipoOperacao><urn:nomeServidor>mlt3ho0740</urn:nomeServidor><urn:portaAplicacao>9080</urn:portaAplicacao><urn:nomeUsuario>TEST</urn:nomeUsuario></urn:monitorarServico></soapenv:Body></soapenv:Envelope>\r\n"; cat) | nc 10.80.41.92 9080

Answer NOT OK:

<?xml version="1.0" encoding="utf-8"?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><soapenv:Body><soapenv:Fault><faultcode>soapenv:Server.userException</faultcode><faultstring>org.xml.sax.SAXParseException; Premature end of file.</faultstring><detail><ns1:hostname xmlns:ns1="http://xml.apache.org/axis/">mlt3ho0740</ns1:hostname></detail></soapenv:Fault></soapenv:Body></soapenv:Envelope>
Ncat: Broken pipe.

Has anyone needed to do something along these lines who can help me? I tried the same test using JSON and hit the same problem; in that case I used the BIG-IP itself as the example target.
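A likely culprit is the framing of the hand-built request: the request line in the echo test has a stray space and CRLF in front of "HTTP/1.1", and no Host or Content-Length header is sent, so the server reads an empty body (hence the "Premature end of file" fault). Before pasting anything into the monitor's Send String, it can help to prove out the exact request with a short Python sketch like the one below. The host, port, URI and envelope mirror the curl test above; adjust them to whatever the monitor actually targets, and treat this as a diagnostic aid rather than the monitor configuration itself.

import socket

HOST, PORT = "10.10.10.10", 9080
URI = "/aaa/services/ARService?server=mlt3ho0700&webService=MonitorarServico"

body = (
    '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" '
    'xmlns:urn="urn:MonitorarServico"><soapenv:Body><urn:monitorarServico>'
    '<urn:tipoOperacao>monitorarServico</urn:tipoOperacao>'
    '<urn:nomeServidor>mlt3ho0740</urn:nomeServidor>'
    '<urn:portaAplicacao>9080</urn:portaAplicacao>'
    '<urn:nomeUsuario>TEST</urn:nomeUsuario>'
    '</urn:monitorarServico></soapenv:Body></soapenv:Envelope>'
)

# Request line and headers are separated by CRLF, Content-Length matches the
# byte count of the body, and a blank line precedes the body.
request = (
    f"POST {URI} HTTP/1.1\r\n"
    f"Host: {HOST}:{PORT}\r\n"
    "Content-Type: text/xml; charset=UTF-8\r\n"
    "SOAPAction: urn:MonitorarServico/monitorarServico\r\n"
    f"Content-Length: {len(body.encode('utf-8'))}\r\n"
    "Connection: close\r\n"
    "\r\n"
) + body

with socket.create_connection((HOST, PORT), timeout=5) as s:
    s.sendall(request.encode("utf-8"))
    response = b""
    while chunk := s.recv(4096):
        response += chunk

print(response.decode("utf-8", errors="replace"))

If this returns the same "UP" payload as curl, the same request (expressed with literal \r\n escapes, including the Content-Length line) should translate into the monitor Send String, and the Receive String can simply match a stable substring of the healthy response such as msgRetorno>UP.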
Help with SOAP Monitor

I am attempting to use the built-in SOAP monitor on an LTM with 10.2.4. I have made several attempts, but no success yet. Any help or advice would be much appreciated!

One of the biggest issues I have is: how do I validate that the customer has given me a legitimate POST request and that I'm getting back the result they say I should? They claim to have verified it using SoapUI and tell me that this request should work, but so far all of my pool members fail this monitor. Admittedly, I have very little SOAP knowledge, so I'm having trouble deconstructing the SOAP POST request the customer has provided into what I need to put in the fields of the monitor. Can someone help me identify which components of this request need to be included in my SOAP monitor fields, and what goes where?
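Before mapping anything into monitor fields, it can help to replay the customer's request from any host that can reach a pool member and confirm the advertised response actually comes back. Below is a minimal Python sketch using the requests library; the endpoint, SOAPAction and envelope are placeholders standing in for whatever the customer supplied, since the original request is not reproduced in the post.

import requests

# Placeholder values: substitute the endpoint, SOAPAction and envelope
# exactly as they appear in the customer's SoapUI project.
POOL_MEMBER = "http://10.0.0.21:8080/ExampleService/ExampleEndpoint"
SOAP_ACTION = '"urn:Example/exampleOperation"'

envelope = """<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <!-- customer-supplied operation and parameters go here -->
  </soapenv:Body>
</soapenv:Envelope>"""

resp = requests.post(
    POOL_MEMBER,
    data=envelope.encode("utf-8"),
    headers={
        "Content-Type": "text/xml; charset=UTF-8",
        "SOAPAction": SOAP_ACTION,
    },
    timeout=10,
)

print(resp.status_code)
print(resp.text[:500])

Whatever string reliably appears in a healthy response (and never in a fault) is a good candidate for the expected return value; the operation name, namespace and parameter values from the envelope are what typically map onto the SOAP monitor's fields, and if that mapping proves awkward, a plain HTTP monitor with a hand-built send string (as in the first topic above) is a common fallback.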
SOAP HTTP Monitor - HTTP Error 400. The request has an invalid header name.

Hello all, I am new to SOAP testing. I am using an HTTP monitor to do a SOAP test, but I keep receiving error 400 from the server. A third-party SOAP client using the same statement gets a successful response, so I believe the problem is how I am constructing the request. I have read http://support.f5.com/kb/en-us/solutions/public/2000/100/sol2167.html but still don't understand why this is failing.

This is what I am using in the Send String portion of the monitor:

POST /CurrencyConvertor.asmx HTTP/1.1\r\nAccept-Encoding: gzip,deflate\r\nContent-Type: text/xml;charset=UTF-8\r\nSOAPAction: \"http://www.webserviceX.NET/ConversionRate\"\r\nContent-Length: 345\r\nHost: www.webservicex.com\r\nConnection: Keep-Alive\r\nUser-Agent: Apache-HttpClient/4.1.1 (java 1.5)\r\n\r\r\n\r\n\r\n\r\nEUR\r\nAFA\r\n\r\n\r

The Wireshark capture from my test shows:

POST /CurrencyConvertor.asmx HTTP/1.1
Accept-Encoding: gzip,deflate
Content-Type: text/xml;charset=UTF-8
SOAPAction: "http://www.webserviceX.NET/ConversionRate"
Content-Length: 345
Host: www.webservicex.com
Connection: Keep-Alive

EUR AFA

HTTP/1.1 400 Bad Request
Content-Type: text/html; charset=us-ascii
Server: Microsoft-HTTPAPI/2.0
Date: Fri, 04 Oct 2013 23:04:50 GMT
Connection: close
Content-Length: 339

Bad Request
Bad Request - Invalid Header
HTTP Error 400. The request has an invalid header name.

The Wireshark capture from the successful connection using the third-party client shows:

POST /CurrencyConvertor.asmx HTTP/1.1
Accept-Encoding: gzip,deflate
Content-Type: text/xml;charset=UTF-8
SOAPAction: "http://www.webserviceX.NET/ConversionRate"
Content-Length: 345
Host: www.webservicex.com
Connection: Keep-Alive
User-Agent: Apache-HttpClient/4.1.1 (java 1.5)

EUR AFA

HTTP/1.1 200 OK
Cache-Control: private, max-age=0
Content-Type: text/xml; charset=utf-8
Content-Encoding: gzip
Vary: Accept-Encoding
Server: Microsoft-IIS/7.0
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Fri, 04 Oct 2013 22:24:10 GMT
Content-Length: 311

0

I'd really appreciate any advice, many thanks. G

FYI, I am using BIG-IP VE version 10.1.0.
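Two things stand out in the failing Send String: the sequence \r\n\r\r\n after the User-Agent header injects a bare carriage return where the server expects either another header or the blank line (note the User-Agent line never shows up in the failing capture), and Content-Length: 345 was copied from the other client, so it is only valid if the monitor sends byte-for-byte the same body. The hypothetical helper below is not an F5 tool, just a local sanity check: it interprets the escape sequences literally, the way the monitor will, and flags both problems before the string ever reaches the server.

import re

def check_send_string(send_string: str) -> None:
    """Interpret the \\r\\n escape sequences literally and sanity-check framing."""
    raw = send_string.replace("\\r", "\r").replace("\\n", "\n")
    # A bare CR that is not part of a CRLF pair will corrupt a header line.
    if re.search(r"\r(?!\n)", raw):
        print("Found a carriage return that is not followed by a newline.")
    head, sep, body = raw.partition("\r\n\r\n")
    if not sep:
        print("No blank line (\\r\\n\\r\\n) separates the headers from the body.")
        return
    declared = re.search(r"Content-Length:\s*(\d+)", head, re.I)
    actual = len(body.encode("utf-8"))
    if declared and int(declared.group(1)) != actual:
        print(f"Content-Length says {declared.group(1)}, but the body is {actual} bytes.")

# Abbreviated stand-in for the Send String quoted above.
check_send_string(
    "POST /CurrencyConvertor.asmx HTTP/1.1\\r\\nHost: www.webservicex.com\\r\\n"
    "Content-Length: 345\\r\\n\\r\\r\\n<soapenv:Envelope/>"
)

In this case the likely fix is to drop the stray \r after the final header and recompute Content-Length for the body the monitor actually sends, then re-verify with a packet capture as above.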
Over the last several years, there have been questions internal and external on how to manage ZoneRunner (the GUI tool in F5 DNS that allows you to manage DNS zones and records) resources via the REST interface. But that's a no can do with the iControl REST--it doesn't have that functionality. It was brought to my attention by one of our solutions engineers that a customer is using some methods in the SOAP interface that allows you to do just that...which was news to me! The things you learn... In this article, I'll highlight a few of the methods available to you and work on a sample domain in the python module bigsuds that utilizes the suds SOAP library for communication duties with the BIG-IP iControl SOAP interface. Test Domain & Procedure For demonstration purposes, I'll create a domain in the external view, dctest1.local, with the following attributes that mirrors nearly identically one I created in the GUI: Type: master Zone Name: dctest1.local. Zone File Name: db.external.dctest1.local. Options: allow-update from localhost TTL: 500 SOA: ns1.dctest1.local. Email: hostmaster.ns1.dctest1.local. Serial: 2021092201 Refresh: 10800 Retry: 3600 Expire: 604800 Negative TTL: 60 I'll also add a couple type A records to that domain: name: mail.dctest1.local., address: 10.0.2.25, TTL: 86400 name: www.dctest1.local., address: 10.0.2.80, TTL: 3600 After adding the records, I'll update one of them, changing the IP and the TTL: name: mail.dctest1.local., address: 10.0.2.110, ttl: 900 Then I'll delete the other one: name: www.dctest1.local., address: 10.0.2.80, TTL: 3600 And finally, I'll delete the zone: name: dctest1.local. ZoneRunner Methods All the methods can be found on Clouddocs in the ZoneRunner, Zone, and ResourceRecord method pages. The specific methods we'll use in our highlight real are: Management.ResourceRecord.add_a Management.ResourceRecord.delete_a Management.ResourceRecord.get_rrs Management.ResourceRecord.update_a Management.Zone.add_zone_text Management.Zone.get_zone_v2 Management.Zone.zone_exist With each method, there is a data structure that the interface expects. Each link above provides the details, but let's look at an example with the add_a method. The method requires three parameters, view_zones, a_records, and sync_ptrs, which the image of the table shows below. The boolean is just a True/False value in a list. The reason the list ( [] ) is there for all the attributes is because you can send a single request to update more than one zone, and addmore than one record within each zone if desired. The data structure for view_zones and a_records is in the following two images. Now that we have an idea of what the methods require, let's take a look at some code! Methods In Action First, I import bigsuds and initialize the BIG-IP. The arguments are ordered in bigsuds for host, username, and password. If the default “admin/admin” is used, they are assumed, as is shown here. import bigsuds b = bigsuds.BIGIP(hostname='ltm3.test.local') Next, I need to format the ViewZone data in a native python dictionary, and then I check for the existence of that zone. zone_view = {'view_name': 'external', 'zone_name': 'dctest1.local.' } b.Management.Zone.zone_exist([zone_view]) # [0] Note that the return value, which should be a list of booleans, is a list with a 0. I’m guessing that’s either suds or the bigsuds implementation doing that, but it’s important to note if you’re checking for a boolean False. 
It's also necessary to set the booleans as 0 or 1 when sending requests to BIG-IP with bigsuds. Now I will create the zone, since it does not yet exist. From the add_zone_text method description on Clouddocs, note that I need to supply, in separate parameters, the zone info, the appropriate zone records, and the boolean controlling whether to sync reverse records.

zone_add_info = {'view_name': 'external',
                 'zone_name': 'dctest1.local.',
                 'zone_type': 'MASTER',
                 'zone_file': 'db.external.dctest1.local.',
                 'option_seq': ['allow-update { localhost;};']}
zone_add_records = 'dctest1.local. 500 IN SOA ns1.dctest1.local. hostmaster.ns1.dctest1.local. 2021092201 10800 3600 604800 60;\n' \
                   'dctest1.local. 3600 IN NS ns1.dctest1.local.;\n' \
                   'ns1.dctest1.local. 3600 IN A 10.0.2.1;'
b.Management.Zone.add_zone_text([zone_add_info], [[zone_add_records]], [0])
b.Management.Zone.zone_exist([zone_view])
# [1]

Note that the strings here require a detailed understanding of DNS record formatting; the individual fields are not parameters that can be set like in the ZoneRunner GUI. But I am confident there is an abundance of modules in the python ecosystem that manage DNS formatting and could simplify the data structuring. After creating the zone, another check to see if the zone exists returns a true condition. Huzzah! Now I'll check the zone info and the existing records for that zone.

zone = b.Management.Zone.get_zone_v2([zone_view])
for k, v in zone[0].items():
    print(f'{k}: {v}')
# view_name: external
# zone_name: dctest1.local.
# zone_type: MASTER
# zone_file: "db.external.dctest1.local."
# option_seq: ['allow-update { localhost;};']

rrs = b.Management.ResourceRecord.get_rrs([zone_view])
for rr in rrs[0]:
    print(rr)
# dctest1.local. 500 IN SOA ns1.dctest1.local. hostmaster.ns1.dctest1.local. 2021092201 10800 3600 604800 60
# dctest1.local. 3600 IN NS ns1.dctest1.local.
# ns1.dctest1.local. 3600 IN A 10.0.2.1

Everything checks out! Next I'll create the A records for the mail and www services. I'm going to add a filter so only the mail/www records are printed, to cut down on the lines, but know that the others are still there going forward.

a1 = {'domain_name': 'mail.dctest1.local.', 'ip_address': '10.0.2.25', 'ttl': 86400}
a2 = {'domain_name': 'www.dctest1.local.', 'ip_address': '10.0.2.80', 'ttl': 3600}
b.Management.ResourceRecord.add_a(view_zones=[zone_view], a_records=[[a1, a2]], sync_ptrs=[0])
rrs = b.Management.ResourceRecord.get_rrs([zone_view])
for rr in rrs[0]:
    if any(item in rr for item in ['mail', 'www']):
        print(rr)
# mail.dctest1.local. 86400 IN A 10.0.2.25
# www.dctest1.local. 3600 IN A 10.0.2.80

Here you can see that I'm adding two records to the zone specified and not creating the reverse records (not included for brevity, but in production they likely would be). Now I'll update the mail address and TTL.

a1_update = {'domain_name': 'mail.dctest1.local.', 'ip_address': '10.0.2.110', 'ttl': 900}
b.Management.ResourceRecord.update_a([zone_view], [[a1]], [[a1_update]], [0])
rrs = b.Management.ResourceRecord.get_rrs([zone_view])
for rr in rrs[0]:
    if any(item in rr for item in ['mail', 'www']):
        print(rr)
# mail.dctest1.local. 900 IN A 10.0.2.110
# www.dctest1.local. 3600 IN A 10.0.2.80

You can see that the address and TTL updated as expected. Note that with the update_/N/ methods, you need to provide both the old and the new record, not just the new one. Let's get destructive and delete the www record!

b.Management.ResourceRecord.delete_a([zone_view], [[a2]], [0])
rrs = b.Management.ResourceRecord.get_rrs([zone_view])
for rr in rrs[0]:
    if any(item in rr for item in ['mail', 'www']):
        print(rr)
# mail.dctest1.local. 900 IN A 10.0.2.110
And your web service is now unreachable via DNS. Congratulations! But there's more damage we can do: it's time to delete the whole zone.

b.Management.Zone.delete_zone([zone_view])
b.Management.Zone.zone_exist([zone_view])
# [0]

And that's a wrap! As I said, it's been years since I have spent time with the iControl SOAP interface. It's nice to know that even though most of what we do is done through REST, imperatively or declaratively, some missing functionality in that interface is still alive and kicking via SOAP. H/T to Scott Huddy for the nudge to investigate this. Questions? Drop me a comment below. Happy coding! A gist of these samples is available on GitHub.
SOAP HTTPS redirects not working (Postman / SoapUI)

Hello, I am using the Postman application, as well as SoapUI, to test some SOAP requests to an application that is behind our F5 WAF. When I send SOAP HTTPS POST requests, the WAF handles the request perfectly and all tests pass. However, when I send these requests over HTTP, the tests do not succeed and I get an HTTP 500 error. To be clear, I have the default F5 iRule attached to the virtual server to redirect HTTP requests to HTTPS, and it does work: if I make a request to the site through the browser over HTTP, it gets sent to HTTPS.

As another troubleshooting side note, I have seen old threads that mention the Postman Interceptor Chrome extension being necessary for some API testing. I have installed it and turned it on, and I still get the same issue.

My next step was turning on HTTP Analytics logging and looking at some of these requests to see if I could spot a difference between where we force HTTPS and where we leave it as HTTP. From what I can tell, every HTTP 500 response shows that it was a GET request... which is wrong, because the tests are configured as an HTTP POST. So to me it seems like the WAF is redirecting the HTTP POST to an HTTPS GET, which is why we get the 500 response code. Does this sound like anything someone has seen before? Any insight as to why this is occurring is appreciated.
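That diagnosis matches how most HTTP clients treat a redirect: the default _sys_https_redirect iRule answers the plain-HTTP POST with a 302 to the HTTPS URL, and the client re-issues the request as a GET with no body, which the application then rejects with a 500. The sketch below reproduces the behavior with the Python requests library against a placeholder URL; it is a way to see the method change in the redirect history, not an F5-side fix.

import requests

# Placeholder endpoint: substitute the virtual server's HTTP URL.
URL = "http://soap.example.com/Service.asmx"

envelope = (
    '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">'
    "<soapenv:Body><!-- operation here --></soapenv:Body></soapenv:Envelope>"
)

resp = requests.post(
    URL,
    data=envelope,
    headers={"Content-Type": "text/xml; charset=UTF-8"},
    allow_redirects=True,
    timeout=10,
)

# Walk the redirect chain: the original POST gets a 302, and the follow-up
# request to the HTTPS location is sent as a GET without the SOAP body.
for hop in resp.history:
    print(hop.request.method, hop.status_code, hop.headers.get("Location"))
print("final:", resp.request.method, resp.status_code)

The practical remedy is to point Postman/SoapUI directly at the HTTPS URL, or, if clients genuinely must start on HTTP, to issue a 307/308 redirect instead, since those status codes preserve the method and body.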
Impact of Load Balancing on SOAPy and RESTful Applications

A load balancing algorithm can make or break your application's performance and availability.

It is a (wrong) belief that "users" of cloud computing, and before that "users" of corporate data center infrastructure, didn't need to understand any of that infrastructure. Caution: proceed with infrastructure ignorance at the (very real) risk of your application's performance and availability. Think I'm kidding? Stefan's SOA & Enterprise Architecture Blog has a detailed and very explanatory post on Load Balancing Strategies for SOA Infrastructures that may change your mind.

This post grew, apparently, out of some (perceived) bad behavior on the part of a load balancer in a SOA infrastructure. Specifically, the load balancer configuration was overwhelming the very services it was supposed to be load balancing. Before we completely blame the load balancer, Stefan goes on to explain that the root of the problem lay in the load balancing algorithm used to distribute requests across the services. Specifically, the load balancer was configured to use a static round robin algorithm and to apply source IP address-based affinity (persistence) while doing so. The result is that one instance of the service was constantly sent requests while the others remained idle and available. Stefan explains how the load balancing algorithm was changed to utilize a dynamic ratio algorithm that takes into consideration the state of each service (CPU and memory available) and removed the server affinity requirement.

The problem wasn't the load balancer, per se. The load balancer was acting exactly as it was configured to act. The problem lay deeper: in understanding the interaction between the network, the application network, and the services themselves. Services, particularly stateless services as offered by SOA and REST-based APIs today, do not generally require persistence. In cases where they do require persistence, that persistence needs to be based on application-layer information, such as an API key or user (usually available in a cookie).

But this problem isn't unique to SOA. Consider, if you will, the effect that such an unaware distribution might have on any one of the popular social networking sites offering RESTful APIs for integration. Imagine that all Twitter API requests ended up distributed to one server in Twitter's infrastructure. It would fall over quickly, no doubt about that, because the requests are distributed without any consideration for current load and almost, one could say, blindly. Stefan points this out as he continues to examine the effect of load balancing algorithms on his SOA infrastructure:

"Secondly, the static round-robin algorithm does not take in effect, which state each cluster node has. So, for example if one cluster node is heavily under load, because it processes some complex orders, and this results in 100% cpu load, then the load balancer will not recognize this but route lots of other requests to this node causing overload and saturation."

Load balancing algorithms that do not take into account the current state of the server and application, i.e. they are not context-aware, are not appropriate for today's dynamic application architectures. Such algorithms are static, brittle, and blind when it comes to distributing load efficiently, and will ultimately result in an uneven request load that is likely to drive an application to downtime.
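To make the failure mode concrete, here is a small self-contained Python simulation (not tied to any F5 API) comparing source-IP-affinity round robin with a simple least-load choice. With only a handful of client addresses and uneven request costs, the affinity scheme piles work onto whichever nodes those few addresses hash to, while the load-aware policy keeps the nodes roughly even, which is the same effect Stefan describes.

import random

NODES = ["node1", "node2", "node3", "node4"]

def simulate(requests, pick):
    load = {n: 0 for n in NODES}
    for src, cost in requests:
        load[pick(src, load)] += cost
    return load

def affinity(src, load):
    # Static round robin with source-IP affinity: the source address alone
    # decides the node, regardless of how busy that node already is.
    return NODES[hash(src) % len(NODES)]

def least_load(src, load):
    # Crude stand-in for a dynamic, context-aware policy: send the request
    # to the node with the least accumulated work.
    return min(load, key=load.get)

# A few client IPs (e.g. requests arriving through an upstream proxy) and
# per-request costs that vary the way real service calls do.
random.seed(1)
reqs = [(random.choice(["10.1.1.5", "10.1.1.6", "10.1.1.7"]),
         random.choice([1, 1, 1, 20])) for _ in range(3000)]

print("affinity  :", simulate(reqs, affinity))
print("least-load:", simulate(reqs, least_load))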
THE APPLICATION SHOULD BE A PART OF THE ALGORITHM

It is imperative in a distributed application architecture like SOA or REST that the application network infrastructure, i.e. the load balancer, be able to take into consideration the current load on any given server before distributing a request. If one node in the (pool|farm|cluster) is processing a complex order that consumes most of the CPU resources available, the load balancer should not continue to send it requests. This requires that the load balancer, the application delivery controller, be aware of the application, its environment, as well as the network and the user. It must be able to make a decision, in real time, about where to direct any given request based on all the variables available. That includes CPU resources, what the request is, and even who the user/application is.

For example, Twitter uses a system of inbound rate limiting on API calls to help manage the load on its infrastructure. Part of that equation could be the calling application. HTTP as a transport protocol contains a somewhat surprisingly rich array of information in its headers that can be parsed and inspected and made a part of the load balancing equation in any environment. This is particularly useful to sites like Twitter where multiple "applications" (clients) are making use of the API. Twitter can easily require the use of a custom HTTP header that includes the application name and utilize that as part of its decision-making processes.

Like RESTful APIs, SOAP envelopes are full of application specifics that provide data to the load balancer, if it's context-aware, that can be utilized to determine how best to distribute a request. The name of the operation being invoked, for example, can be used to load balance not only at the service level, but at the operation level. That granularity can be important when operations vary in their consumption of resources. This application-layer information, in conjunction with current load and connections on the server, provides a wealth of information as to how best, i.e. most efficiently, to distribute any given request. But if the folks in charge of configuring the load balancer aren't aware of the impact of algorithms on the application and its infrastructure, you can end up in a situation much like that described in Stefan's blog on the subject.

CLOUD WILL MAKE THIS SITUATION WORSE

Cloud computing won't solve this problem and, in fact, it will probably make it worse. The belief that the infrastructure should be "hidden" from the user (that's you) means that configuration options, like the load balancing algorithm, aren't available to you as a user/deployer of cloud-based applications. Even though load balancing is going to be used to scale your application, you have no clue or control over how that's going to occur. That's why it's important that you ask questions of your provider on this subject. You need to know what algorithm is being used and how requests are distributed so you can determine how that's going to impact your application and its performance once it's deployed. You can't, or shouldn't, assume that the load balancing provided is going to magically distribute requests perfectly across your scaled application, because it wasn't configured with your application in mind.
If you deploy an application – particularly a SOA or RESTful one – you may find that with scalability comes poor performance or even unavailable applications because of the configuration of that infrastructure you "aren't supposed to worry about." Applications are not islands; they aren't deployed stand-alone even though the virtualization of applications is making it seem like that's the case. The delivery of applications requires collaboration between a growing number of components in the data center, and load balancing is one of the key components that can make or break your application's performance and availability.

Related reading:
Five questions you need to ask about load balancing and the cloud
Dr. Dobb's Journal: Coding in the Cloud
Cloud Computing: Vertical Scalability is Still Your Problem
Server Virtualization versus Server Virtualization
SOA & Web 2.0: The Connection Management Challenge
The Impact of the Network on AJAX
Have a can of Duh! It's on me
Intro to Load Balancing for Developers – The Algorithms
Not All Virtual Servers are Created Equal
Use APM HTTP Auth to send a SOAP Message for OTP

Hello, I recently read this article on implementing an OTP solution in our APM via SMS: https://devcentral.f5.com/articles/one-time-passwords-via-an-sms-gateway-with-big-ip-access-policy-manager

In that article the OTP is sent to the client via the SMS provider's HTTP API, using an HTTP AAA server to communicate with the provider. I would like to implement the same solution using a SOAP API, but I'm not sure what to populate the fields of the AAA HTTP server with. I have successfully implemented this using an e-mail iRule to the SMS provider and it works well, but we would like to encrypt the message via HTTPS, and the provider only supports SOAP. I was hoping that I could just paste the following output (from soapclient.com) into the "Hidden Form Parameters/Values" field, but it doesn't seem to be that simple. Any suggestions or other F5 doco I could reference?

1234567890987654321Test MessageMessage

The WSDL for the SMS Toolkit methods is located at: http://xml.redcoal.com/soapserver.dll/wsdl/ISoapServer

Thanks for your help, -Mike
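One way to work out what the HTTP AAA server has to send is to first get a working SOAP call outside of APM and then transplant the exact headers and body. The Python sketch below posts a hand-built envelope over HTTPS; the endpoint, operation and element names are placeholders, not taken from the redcoal documentation, and the real ones have to come from the ISoapServer WSDL linked above. Treat it as a way to capture a known-good request, not as an APM configuration.

import requests

# Placeholders: the real endpoint, operation and element names come from
# the provider's WSDL; nothing here is quoted from their documentation.
ENDPOINT = "https://sms.example.com/soapserver.dll/soap/ISoapServer"
SOAP_ACTION = "urn:ISoapServer#SendSMS"

envelope = """<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:sms="urn:ISoapServer">
  <soapenv:Body>
    <sms:SendSMS>
      <sms:ApiKey>YOUR-KEY</sms:ApiKey>
      <sms:Destination>61400000000</sms:Destination>
      <sms:Message>Your one-time password is 123456</sms:Message>
    </sms:SendSMS>
  </soapenv:Body>
</soapenv:Envelope>"""

resp = requests.post(
    ENDPOINT,
    data=envelope.encode("utf-8"),
    headers={
        "Content-Type": "text/xml; charset=UTF-8",
        "SOAPAction": SOAP_ACTION,
    },
    timeout=10,
)
print(resp.status_code)
print(resp.text)

Once the call works here, the Content-Type and SOAPAction headers plus the envelope (with the OTP substituted from an APM session variable) are what the HTTP AAA object, or alternatively an HTTPS sideband call from an iRule, needs to reproduce; whether the AAA form-parameter fields can carry a raw XML body is exactly the open question in this post, so this is groundwork rather than the answer.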
Event log soap[22458]

Hello, I am trying to understand a log message on our F5 BIG-IP 13.1.1.4. Under System -> Logs -> Local Traffic, I have several entries like:

LogLevel: info  Service: soap[22458]  Event: src=127.0.0.1, user=

Note that there is nothing after "user=". :) Can anyone explain what this means and whether it is possible to filter these entries out? Best regards.
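These entries appear to be access logging from the iControl SOAP service (the soap daemon); src=127.0.0.1 with an empty user typically points at something on the box itself calling the SOAP interface. Before filtering them out of syslog, it can be useful to see how frequent they are and where they come from. The read-only Python sketch below simply tallies them from /var/log/ltm; it is a diagnostic, not a filtering configuration, and the actual syslog include filter is best taken from the AskF5 documentation for your version.

from collections import Counter
import re

# Read-only diagnostic: tally the soap[...] access entries in the LTM log.
pattern = re.compile(r"\bsoap\[\d+\].*?src=([\d.]+), user=(\S*)")

counts = Counter()
with open("/var/log/ltm") as log:
    for line in log:
        m = pattern.search(line)
        if m:
            src, user = m.group(1), m.group(2) or "<empty>"
            counts[(src, user)] += 1

for (src, user), n in counts.most_common():
    print(f"{n:6d}  src={src}  user={user}")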
SOAP service call from non-browser client with APM policy

We are configuring our F5 APM to authenticate SOAP service calls. We created a virtual server and added an APM policy, and when we access the service from any browser it works fine. Now my application team wants to call the service from a .NET client. They are able to create a proxy for the service, but when they try to consume it they receive the error below:

"The content type text/html; charset=utf-8 of the response message does not match the content type of the binding (text/xml; charset=utf-8). If using a custom encoder, be sure that the IsContentTypeSupported method is implemented properly. The first 1024 bytes of the response were: 'BIG-IP logout page ...'"

When we remove the APM policy from the VS, everything works fine. The message says the response was HTML, and that HTML appears to be the BIG-IP logout page. I am not able to understand why it is returning a BIG-IP logout page.

The APM policy is simple:

Start ---> Certificate check ---> Allow; fallback ---> Deny

Am I missing anything here? Any suggestion is highly appreciated.
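A browser succeeds because it can follow the access-policy flow (redirects, the session cookie, the logon and logout pages); a bare .NET web-service proxy usually cannot, so it ends up holding an APM HTML page instead of a SOAP response, which is exactly the content-type mismatch in the error. A quick way to confirm what is coming back is to make the same call outside the generated proxy and inspect the response. The Python sketch below uses a client certificate; the URL and certificate paths are placeholders, and the clientless-mode header is an assumption about APM behavior that should be verified against the documentation for your version.

import requests

# Placeholders: substitute the virtual server URL and the client cert/key
# that the certificate check in the access policy expects.
URL = "https://soap.example.com/Service.svc"
CLIENT_CERT = ("/path/to/client.crt", "/path/to/client.key")

envelope = (
    '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">'
    "<soapenv:Body><!-- operation here --></soapenv:Body></soapenv:Envelope>"
)

resp = requests.post(
    URL,
    data=envelope,
    cert=CLIENT_CERT,
    headers={
        "Content-Type": "text/xml; charset=UTF-8",
        # Assumption to verify: asking APM for clientless processing so a
        # non-browser client is not handed the interactive logon/logout pages.
        "clientless-mode": "1",
    },
    timeout=10,
)

print(resp.status_code, resp.headers.get("Content-Type"))
if "text/html" in resp.headers.get("Content-Type", ""):
    print("Got an APM HTML page instead of SOAP:")
print(resp.text[:300])

If the HTML page still comes back, the usual adjustments are on the policy side (for example a dedicated virtual server or branch for machine clients), and the .NET client must at minimum present its certificate during the TLS handshake and retain the APM session cookie (MRHSession) between calls.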
SOAP Request Formation for iControl

Hello, I am trying to get the list of virtual servers under all the partitions on the LTM using a SOAP request to urn:iControl:LocalLB/VirtualServer (the envelope pasted in the original post was stripped of its XML tags), but I am only getting the virtual servers under the Common partition. Could someone please let me know how to get the list of virtual servers for all partitions?
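The iControl SOAP interface scopes queries to the active folder, which defaults to /Common, so LocalLB.VirtualServer.get_list only returns that partition's virtual servers. Setting the active folder to / and enabling recursive querying on the session widens the scope to every partition. Below is a minimal sketch using bigsuds (the same module as in the ZoneRunner article above); the hostname and credentials are placeholders, and the same two System.Session calls can be issued from any SOAP client, within the same authenticated session, before calling get_list.

import bigsuds

# Placeholder host and credentials.
b = bigsuds.BIGIP(hostname='ltm.example.com', username='admin', password='admin')

# Scope the session to the root folder and make queries recursive so that
# objects in every partition are returned, not just /Common.
b.System.Session.set_active_folder('/')
b.System.Session.set_recursive_query_state('STATE_ENABLED')

virtuals = b.LocalLB.VirtualServer.get_list()
for vs in virtuals:
    print(vs)   # names come back fully qualified, e.g. /PartitionA/vs_app1

If the raw SOAP envelope route is preferred, the same effect comes from invoking System.Session::set_active_folder and System.Session::set_recursive_query_state over the same authenticated connection before the LocalLB.VirtualServer::get_list call.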