curl
Let's Encrypt with Cloudflare DNS and F5 REST API
Hi all,

This is a follow-up to the now very old Let's Encrypt on a Big-IP article. It has served me, and others, well, but it is tied to a specific environment and doesn't scale well. I had been circling the problem for some time but couldn't find the courage (aka time) to get started. However, after my DNS provider was acquired and shut down, I finally took the plunge and moved my domains to a provider with an API, and that gave me the opportunity to build a more nimble solution.

To keep things simple I chose Cloudflare, as the community around it is enormous and it is easy to find examples and tools. That said, I don't think choosing another provider with an open API is such a big deal. After playing around with different tools I realized I didn't need them, as it ended up being much easier to just use curl. So if another provider's API bears even a passing resemblance, converting the scripts to fit shouldn't be much work.

There may be finer and more advanced solutions out there, but my goal was a solution with as few dependencies as possible, and if I could do it with only Bash and curl it would be perfect. And that is what I ended up with 😎 Just put the 5 files in the same directory, adjust the config to your environment, and BAM, you're good to go!! 😻 And if you need to run it somewhere else, just copy the directory over and continue like nothing changed. That is what I call portability 😁

Find all the details here: Let's Encrypt with Cloudflare DNS and F5 REST API. Please just drop me a line if you have any questions or feedback or find any bugs.
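For readers new to the DNS-01 approach: the core Cloudflare interaction is publishing the ACME validation token as a TXT record, which plain curl can do against the Cloudflare v4 API. The snippet below is only a rough sketch of such a call, not the author's actual script; the zone ID, API token, domain, and record content are placeholders.

# Publish the ACME DNS-01 challenge as a TXT record (illustrative values only)
ZONE_ID="your-cloudflare-zone-id"
CF_API_TOKEN="your-scoped-api-token"
curl -s -X POST "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records" \
     -H "Authorization: Bearer ${CF_API_TOKEN}" \
     -H "Content-Type: application/json" \
     --data '{"type":"TXT","name":"_acme-challenge.example.com","content":"<acme-validation-token>","ttl":120}'

After Let's Encrypt validates the challenge, the record can be removed with the same URL plus the record ID returned in the response, using -X DELETE.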
iControlREST and Curl to save and download ASM policies

Hi, I want to be able to save/export ASM policies on the F5 and then download them, using iControl REST and curl. I am able to save UCS files with the POST shown below:

curl -v -sk -u admin:admin https://myF5/mgmt/tm/sys/ucs -H 'Content-Type: application/json' -X POST -d '{"command":"save","name":"blah.ucs"}' | jq

However, if I try to do something similar for ASM I get errors. Below is what I was trying:

curl -v -sk -u admin:admin https://myF5/mgmt/tm/asm/policies/fn9GoMrandomGvoN2dD -H 'Content-Type: application/json' -X POST -d '{"command":"save","name":"as_test.xml"}' | jq

The error I get is:

{
  "code": 400,
  "message": "Could not parse/validate the Policy 'Security Policy /Common/as_test'. Unknown field 'command'",
  "originalRequestBody": "{\"command\":\"save\",\"name\":\"as_test.xml\"",
  "referer": "x.x.x.x",
  "restOperationId": 59083,
  "kind": ":resterrorresponse"
}

Thank you
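As the error suggests, the ASM policy object does not accept a tmsh-style "command":"save" body; policy export goes through an export task instead (the same endpoints appear in the "API Calls to F5 limited to 1024 KB download" topic below). A sketch of that flow, reusing the host, credentials, and policy ID from the question, and omitting the poll for task completion you would normally add:

# 1. Create an export task that writes the policy to an XML file on the box
curl -sk -u admin:admin https://myF5/mgmt/tm/asm/tasks/export-policy \
     -H 'Content-Type: application/json' -X POST \
     -d '{"filename":"as_test.xml","policyReference":{"link":"https://localhost/mgmt/tm/asm/policies/fn9GoMrandomGvoN2dD"}}'

# 2. Once the task completes, download the exported file
curl -sk -u admin:admin -X GET \
     https://myF5/mgmt/tm/asm/file-transfer/downloads/as_test.xml -o as_test.xml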
create an external monitor with curl to all nodes with different host names

Hi, I would like help with the following scenario. We have a pool that consists of 10 servers, and I need a monitor to check for the existence of favicon.ico on each of them. The catch: I need to use individual host names. I can do this by creating a member-specific monitor for each member. The following works fine as an HTTPS member-specific monitor:

Send string: GET /favicon.ico HTTP/1.1\r\nHost: server1.domain.com\r\nConnection: close\r\n\r\n
Receive string: 200 ok

But we would really like a single monitor for the whole pool, so I tried a few external (curl-based) monitors, but nothing seems to work. When I test curl against the server IP (i.e. curl -k https://x.x.x.x/favicon.ico) I don't get a "200 ok" response; instead I get a long binary sequence, which I believe is the .ico file itself. I tried using sections of this binary as the RECV parameter value, but that didn't work. I tried using "200 ok" for RECV and the pool still went down. If I only set the URI to favicon.ico without a RECV parameter, the pool is green, but shutting down a server has no effect and the member remains green. I have also tried a script that alternates between host names, like this:

case "$Node" in
  "1.2.3.4") HOST="host1.domain.com" ;;
  "5.6.7.8") HOST="host2.domain.com" ;;

But it didn't change anything. Could anybody help with this issue?

Thanks, Vered
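For reference, a single external monitor can carry the per-host logic itself. The sketch below is only illustrative (the hostname mapping and the lack of PID-file housekeeping are assumptions, not a finished EAV); it leans on curl's -f flag so that any HTTP error makes curl exit non-zero, which avoids having to grep the binary .ico body for a receive string:

#!/bin/bash
# $1 = node IP (IPv6 notation), $2 = port; both are supplied by LTM to every external monitor
NODE=`echo ${1} | sed 's/::ffff://'`   # strip the IPv4-in-IPv6 prefix
PORT=${2}

# Map each node IP to the host name it should be probed with (example values)
case "$NODE" in
  "1.2.3.4") HOST="host1.domain.com" ;;
  "5.6.7.8") HOST="host2.domain.com" ;;
  *)         HOST="default.domain.com" ;;
esac

# --resolve forces the connection to the node IP while sending the right Host header and SNI;
# -f makes curl return a non-zero exit code on 4xx/5xx, so no receive string is needed
if curl -fsk --resolve ${HOST}:${PORT}:${NODE} "https://${HOST}:${PORT}/favicon.ico" > /dev/null 2>&1
then
    echo "UP"    # any output to stdout marks the member up; silence marks it down
fi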
API Calls to F5 limited to 1024 KB download

Hi, I am interacting with the F5 API to download ASM policies so I can automate backups. The process works fine, however policies larger than 1024 KB are cut off at exactly 1024 KB. Initially I suspected a default limit on the curl request, but I have not been able to find information on how to increase it. Is this a limitation of the F5 API or of curl? wget is not an option, as it is not natively supported on the F5 virtual appliance. My script lives on the appliance, downloads the relevant policies, and then pushes them to an SMB share. The only issue is that ASM policies larger than 1024 KB are being cut off at 1024 KB.

The API calls are as per the documentation here: http://cdn.f5.com/websites/devcentral.f5.com/downloads/icontrol-rest-api-user-guide-13-0-0.pdf, specifically:

GET https://x.x.x.x/mgmt/tm/asm/policies
POST https://x.x.x.x/mgmt/tm/asm/tasks/export-policy
GET https://x.x.x.x/mgmt/tm/asm/file-transfer/downloads/$asmPolicy

Excluding the processing in my script, the API calls I make are shown below. I expect the issue resides in the download API call. Is there a switch I can add to increase this limit?

curl -ku 'username:password' -X GET https://x.x.x.x/mgmt/tm/asm/policies | jq '.items[] | "pol_name:" + .name + ";api_id:" + .id' >> $wdir/asmDetails.txt

curl -ku 'username:password' -X POST https://x.x.x.x/mgmt/tm/asm/tasks/export-policy -H 'Content-Type: application/json' -d '{"filename":"'$asmPolicy'","policyReference":{"link":"https://localhost/mgmt/tm/asm/policies/'$asmIDs'"}}'

curl -ku 'username:password' -X GET https://x.x.x.x/mgmt/tm/asm/file-transfer/downloads/$asmPolicy > $wdir/asmBackup/"$folderName"/$number-$asmPolicy-$hostname-"$dateStamp".xml

Thanks
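The 1024 KB cut-off is consistent with the iControl REST file-transfer endpoints serving files in chunks, so a single plain GET only returns the first chunk; the UCS download script in the next topic works around the same behaviour by iterating Content-Range requests. A minimal sketch of that idea against the ASM download URL (the byte ranges and total size below are illustrative; in practice they come from the Content-Range response header and you loop until the whole file is fetched):

# Fetch the export in explicit ranges instead of one GET, then reassemble
curl -sku 'username:password' -H 'Content-Range: 0-1048575/2097152' \
     -X GET https://x.x.x.x/mgmt/tm/asm/file-transfer/downloads/$asmPolicy -o part_1
curl -sku 'username:password' -H 'Content-Range: 1048576-2097151/2097152' \
     -X GET https://x.x.x.x/mgmt/tm/asm/file-transfer/downloads/$asmPolicy -o part_2
cat part_1 part_2 > $asmPolicy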
Download a BIG-IP UCS archive with "curl".

Problem this snippet solves:
Download a BIG-IP UCS archive using the program "curl" and verify the output file's signature. Tested on 13.1.1.

How to use this snippet:
Edit the code to input the hostname of your F5 UI, admin credentials, source UCS file name (defaults to config.ucs), and the output file name.

Code:

#!/bin/bash
#
# Download a UCS archive (across a stable network) with curl.
#
#-------------------------------------------------------------------------
F5_HOST='myhost.example.com'
CREDENTIALS='admin:admin'
FINAL_FILE='/tmp/config.ucs'
ARCHIVE_NAME_ON_SERVER='config.ucs'
DEBUG=''
#-------------------------------------------------------------------------
#
# Get the md5 checksum for the archive.
#
#-------------------------------------------------------------------------
ARCHIVE_CHECKSUM=$(curl -sku $CREDENTIALS -X POST -H "Content-type: application/json" \
    -d "{\"command\":\"run\", \"utilCmdArgs\": \"-c '/usr/bin/md5sum /var/local/ucs/$ARCHIVE_NAME_ON_SERVER'\"}" \
    https://$F5_HOST/mgmt/tm/util/bash | awk -F':' '{print $NF}' | awk -F'"' '{ print $2 }' | awk '{print $1}')
[ -z "$ARCHIVE_CHECKSUM" ] && echo "Failed to get archive signature. Aborting." && exit 1
[ ! -z "$DEBUG" ] && echo "Archive checksum: $ARCHIVE_CHECKSUM"
#-------------------------------------------------------------------------
#
# Find out the size of the archive and the size of the data packet.
#
#-------------------------------------------------------------------------
Content_Range=$(curl -I -kv -u $CREDENTIALS -H 'Content-Type: application/json' -X GET "https://$F5_HOST/mgmt/shared/file-transfer/ucs-downloads/$ARCHIVE_NAME_ON_SERVER" 2>/dev/null | grep "Content-Range: " | cut -d ' ' -f 2)
FIRST_CONTENT_RANGE=$(echo -n $Content_Range | cut -d '/' -f 1 | tr -d '\r')
[ ! -z "$DEBUG" ] && echo -n "FIRST_CONTENT_RANGE: "
[ ! -z "$DEBUG" ] && echo $FIRST_CONTENT_RANGE
NUMBER_OF_LAST_BYTE=$(echo -n $FIRST_CONTENT_RANGE | cut -d '-' -f 2)
[ ! -z "$DEBUG" ] && echo -n "NUMBER_OF_LAST_BYTE: "
[ ! -z "$DEBUG" ] && echo $NUMBER_OF_LAST_BYTE
INITIAL_CONTENT_LENGTH=$NUMBER_OF_LAST_BYTE
CONTENT_LENGTH=$(($NUMBER_OF_LAST_BYTE+1))
[ ! -z "$DEBUG" ] && echo -n "CONTENT_LENGTH: "
[ ! -z "$DEBUG" ] && echo $CONTENT_LENGTH
DFILE_SIZE=$(echo -n $Content_Range | cut -d '/' -f 2 | tr -d '\r' )
[ ! -z "$DEBUG" ] && echo -n "DFILE_SIZE: "
[ ! -z "$DEBUG" ] && echo $DFILE_SIZE
LAST_END_BYTE=$((DFILE_SIZE-1))
CUMULATIVE_NO=0
[ ! -z "$DEBUG" ] && echo "CUMULATIVE_NO: $CUMULATIVE_NO"
SEQ=0
LAST=0
#-------------------------------------------------------------------------
#
# Clean up: Remove the previous output file.
#
#-------------------------------------------------------------------------
/bin/rm $FINAL_FILE 2>/dev/null
#-------------------------------------------------------------------------
#
# Get the archive file.
#
#-------------------------------------------------------------------------
while true
do
    if [ $LAST -gt 0 ]; then
        [ ! -z "$DEBUG" ] && echo 'End of run reached.'
        break
    fi
    if [ $SEQ -eq 0 ]; then
        NEXT_RANGE=$FIRST_CONTENT_RANGE
        CUMULATIVE_NO=$NUMBER_OF_LAST_BYTE
        CONTENT_LENGTH=$INITIAL_CONTENT_LENGTH
    else
        START_BYTE=$(($CUMULATIVE_NO+1))
        END_BYTE=$(($START_BYTE + $CONTENT_LENGTH))
        if [ $END_BYTE -gt $LAST_END_BYTE ]; then
            [ ! -z "$DEBUG" ] && echo "END_BYTE greater than LAST_END_BYTE: $END_BYTE:$LAST_END_BYTE"
            LAST=1
            let END_BYTE=$LAST_END_BYTE
            [ ! -z "$DEBUG" ] && echo "Getting the last data packet."
        fi
        NEXT_RANGE="${START_BYTE}-${END_BYTE}"
        CUMULATIVE_NO=$END_BYTE
    fi
    [ ! -z "$DEBUG" ] && echo "NEXT_RANGE: $NEXT_RANGE"
    let SEQ+=1
    [ ! -z "$DEBUG" ] && echo "SEQ: $SEQ"
    OUTPUT_FILE_NAME="/tmp/$$_downloaded_ucs_archive_file_part_$SEQ";
    curl -H "Content-Range: ${NEXT_RANGE}/${DFILE_SIZE}" -s -k -u $CREDENTIALS -H 'Content-Type: application/json' -X GET "https://$F5_HOST/mgmt/shared/file-transfer/ucs-downloads/$ARCHIVE_NAME_ON_SERVER" -o $OUTPUT_FILE_NAME
    cat $OUTPUT_FILE_NAME >> $FINAL_FILE
    /bin/rm $OUTPUT_FILE_NAME
    [ ! -z "$DEBUG" ] && echo "End of loop $SEQ"
done
#-------------------------------------------------------------------------
#
# Verify downloaded file.
#
#-------------------------------------------------------------------------
FINAL_FILE_CHECKSUM=$(/usr/bin/md5sum $FINAL_FILE | awk '{print $1}')
if [ "$FINAL_FILE_CHECKSUM" == "$ARCHIVE_CHECKSUM" ]; then
    echo "Download completed and verified."
else
    echo "Downloaded file has incorrect checksum."
    exit 1
fi
# END --------------------------------------------------------------------

Tested this on version: 13.0
confused! not sure what is difference between these commands

What is the difference between curl, openssl, and wget, and when should each be used? All three are basically used to test SSL on a server or VIP. Is there any difference in the ciphers they use? I read that they use different SSL libraries; is that true?
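As a quick side-by-side of how the three tools are typically pointed at a virtual server for an SSL check (the addresses and hostname are placeholders, and all three commands are standard usage rather than anything specific to this thread):

# openssl talks TLS directly and prints the negotiated protocol, cipher, and certificate chain
openssl s_client -connect 10.0.0.10:443 -servername www.example.com

# curl performs a full HTTPS request; -v shows the TLS handshake, -k skips certificate validation
curl -vk --resolve www.example.com:443:10.0.0.10 https://www.example.com/ -o /dev/null

# wget also fetches over HTTPS, using whatever TLS library it was built against
wget --no-check-certificate -O /dev/null https://10.0.0.10/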
External Health monitor scripts

Hello DevCentral friends,

I'm having an issue with external monitor scripts, and I wonder if any of you can help. I'm trying to create a script to monitor my service at the application layer. In BIG-IP LTM I add the following to my external monitor:

ltm monitor external eav_test_monitor {
    defaults-from external
    destination *:*
    interval 5
    run /Common/Trails
    time-until-up 0
    timeout 16
    user-defined HOST sitefoint.net
    user-defined URI /v/1/siteservice.svc
    user-defined RECV siteService Service
}

I have around 40 different services (pool names), all using the same back-end server IPs (10.X.X.60, 10.X.X.61 and 10.X.X.62). When I applied my external monitor to the siteinfo.net service, it also showed up on the other services (all 40 instances).

The script below is applied to the external monitor in BIG-IP, but when the external health monitor is applied to the pool it doesn't work: the pool goes down, and the logs show "eav failed" and services down due to the external monitor. Any idea what is wrong with the script below, or what the problem might be? I have tried with no RECV string set as well.

#!/bin/sh
#
# (c) Copyright 1996-2007 F5 Networks, Inc.
#
# This software is confidential and may contain trade secrets that are the
# property of F5 Networks, Inc. No part of the software may be disclosed
# to other parties without the express written consent of F5 Networks, Inc.
# It is against the law to copy the software. No part of the software may
# be reproduced, transmitted, or distributed in any form or by any means,
# electronic or mechanical, including photocopying, recording, or information
# storage and retrieval systems, for any purpose without the express written
# permission of F5 Networks, Inc. Our services are only available for legal
# users of the program, for instance in the event that we extend our services
# by offering the updating of files via the Internet.
#
# @(#) $Id: http_monitor_cURL+GET,v 1.0 2007/06/28 16:10:15 deb Exp $
# (based on sample_monitor,v 1.3 2005/02/04 18:47:17 saxon)
#
# these arguments supplied automatically for all external monitors:
# $1 = IP (IPv6 notation. IPv4 addresses are passed in the form
#      ::ffff:w.x.y.z
#      where "w.x.y.z" is the IPv4 address)
# $2 = port (decimal, host byte order)
#
# Additional command line arguments ($3 and higher) may be specified in the monitor template
# This example does not expect any additional command line arguments
#
# Name/Value pairs may also be specified in the monitor template
# This example expects the following Name/Value pairs:
#    URI  = the URI to request from the server
#    RECV = the expected response (not case sensitive)
#    HOST = the host name of the SNI-enabled site
#
# remove IPv6/IPv4 compatibility prefix (LTM passes addresses in IPv6 format)
#IP=`echo ${1} | sed 's/::ffff://'`
NODE=`echo ${1} | sed 's/::ffff://'`
if [[ $NODE =~ ^[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}$ ]]; then
    NODE=${NODE}
else
    NODE=[${NODE}]
fi
PORT=${2}

PIDFILE="/var/run/`basename ${0}`.${HOST}_${PORT}_${NODE}.pid"
# kill of the last instance of this monitor if hung and log current pid
if [ -f $PIDFILE ]
then
    echo "EAV exceeded runtime needed to kill ${HOST}_${PORT}_${NODE}" | logger -p local0.error
    kill -9 `cat $PIDFILE` > /dev/null 2>&1
fi
echo "$$" > $PIDFILE

# send request & check for expected response
#curl -fNsk https://${IP}:${PORT}${URI} | grep -i "${RECV}" 2>&1 > /dev/null
curl -fNsk --resolve $HOST:$PORT:$NODE https://$HOST$URI | grep -i "${RECV}" > /dev/null 2>&1

# mark node UP if expected response was received
if [ $? -eq 0 ]
then
    rm -f $PIDFILE
    echo "UP"
else
    rm -f $PIDFILE
fi

exit
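When debugging an EAV like this, it can help to run it by hand on the BIG-IP, since LTM passes the user-defined name/value pairs to the script as environment variables and the node address and port as arguments. A rough sketch of such a manual test (the script path and node values below are examples, not taken from the post); the script should print "UP" for a healthy member and nothing otherwise:

# Run the monitor script manually with the same inputs LTM would supply
HOST=sitefoint.net URI=/v/1/siteservice.svc RECV="siteService Service" \
    /config/monitors/Trails ::ffff:10.0.0.60 443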
which REST API is available for invoking using curl "force offline of node members" and checking the "current connections" for the node member.

I need REST API endpoints that I can invoke with curl to force a node member offline and to check the current connections for that member.
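As a sketch of one common way to do both with iControl REST (the pool name, member, and management address are placeholders; the "Forced Offline" state in the GUI is commonly set via the user-down/user-disabled combination shown below):

# Force a pool member offline
curl -sku admin:admin -X PATCH \
     https://<mgmt-ip>/mgmt/tm/ltm/pool/~Common~my_pool/members/~Common~10.1.1.10:80 \
     -H 'Content-Type: application/json' \
     -d '{"state":"user-down","session":"user-disabled"}'

# Check the member's statistics; current connections appear as serverside.curConns
curl -sku admin:admin \
     https://<mgmt-ip>/mgmt/tm/ltm/pool/~Common~my_pool/members/~Common~10.1.1.10:80/stats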
Recurrent Curl to a Virtual Server Fails on the Same Subnet

On my network, recurrent curl tests to a virtual server (10.184.1.12) fail only when the source IP is on the same subnet (e.g. 10.184.1.78). When the same recurrent curl tests are run from any other subnet (e.g. 10.243.2.3 or 10.123.34.5) against the virtual server (10.184.1.12), they never fail. Are there any leads as to what could cause this?
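For what it is worth, one simple way to make the "recurrent curl tests" reproducible when comparing the two source subnets is to loop curl with its built-in write-out variables, so failures and slowdowns stand out in the output (the address is the one from the question; the interval is arbitrary):

# Repeat the request and log status code plus total time for each attempt
while true; do
    curl -sk -o /dev/null -w "%{http_code} %{time_total}\n" https://10.184.1.12/
    sleep 5
done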
Virtual server details get a 404 with CURL

Hi, I am trying to get the virtual server details with a curl call:

curl -svku "admin:admin" https://0.0.0.0/mgmt/tm/ltm/virtual/virtualtest

And I keep getting the following error:

{"code":404,"message":"01020036:3: The requested Virtual Server (/Common/virtualtest) was not found.","errorStack":[],"apiError":3}

What am I missing? The virtual server does exist. I am using F5 version 12.1.2. Thank you.
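Two things worth checking in a case like this, shown as illustrative commands (the management address and object name come from the question; the partition is an assumption): list the virtual servers to confirm the exact object name, and address the object by its full path using the tilde notation iControl REST uses for partitioned objects:

# List virtual server names and partitions to confirm the exact object name
curl -sku "admin:admin" "https://0.0.0.0/mgmt/tm/ltm/virtual?\$select=name,partition,fullPath"

# Reference the object by full path (e.g. /Common/virtualtest becomes ~Common~virtualtest)
curl -sku "admin:admin" https://0.0.0.0/mgmt/tm/ltm/virtual/~Common~virtualtest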