Let's Encrypt on a Big-IP
Problem this snippet solves:
It is now possible to use Let's Encrypt certificates on the BIG-IP and keep them maintained (renewed) there.
Code:
http://wiki.lnxgeek.org/doku.php/howtos:let_s_encrypt_-_how_to_issue_certificates_from_a_bigip
My upload.sh (the output of cat upload.sh):
```
BIGIP_USERNAME="xxx"
BIGIP_PASSWORD="xxx"
BIGIP_DEVICE="xxx"
CURL="/usr/bin/curl"
LOGFILE=${LOGFILE:-'/home/xxx/dehydrated-bigip-deploy-traffic-certificate.log'}
DATE='date +%m/%d/%Y:%H:%M:%S'

log() {
  echo `$DATE`" $*" >> $LOGFILE
}

uploadFile() {
  log "uploadFile()[Upload File]: ${1}"

  if [ ! -r ${1} ] ; then
    return 1
  fi

  declare -i CHUNK_SIZE
  declare -i FILESIZE
  declare -i TMP_FILESIZE
  declare -i BYTES_START
  declare -i BYTES_END

  FILENAME=`basename ${1}`
  CHUNK_SIZE=$((512 * 1024))
  FILESIZE=`stat -L -c%s ${1}`
  TMP_FILESIZE=0
  BYTES_START=0
  TMP_FILE=`mktemp`

  if [ ${FILESIZE} -le ${CHUNK_SIZE} ] ; then
    OUT=$(/bin/bash -c "${CURL} -s --insecure -X POST --data-binary '@${1}' --user '${BIGIP_USERNAME}:${BIGIP_PASSWORD}' -H 'Content-Type: application/octet-stream' -H 'Content-Range: ${BYTES_START}-$((${FILESIZE} - 1))/${FILESIZE}' 'https://${BIGIP_DEVICE}/mgmt/shared/file-transfer/uploads/${2}'")
    log "${CURL} -s --insecure -X POST --data-binary '@${1}' --user '${BIGIP_USERNAME}:${BIGIP_PASSWORD}' -H 'Content-Type: application/octet-stream' -H 'Content-Range: ${BYTES_START}-$((${FILESIZE} - 1))/${FILESIZE}' 'https://${BIGIP_DEVICE}/mgmt/shared/file-transfer/uploads/${2}'"
  else
    TMP_FILE=`mktemp`
    while [ ${BYTES_START} -le ${FILESIZE} ] ; do
      echo -n '' > ${TMP_FILE}
      dd if="${1}" skip=${BYTES_START} bs=${CHUNK_SIZE} count=1 of="${TMP_FILE}"
      TMP_FILESIZE=`stat -L -c%s ${TMP_FILE}`

      if [ $((${BYTES_START} + ${CHUNK_SIZE})) -gt ${TMP_FILESIZE} ] ; then
        BYTES_END=${FILESIZE}
      else
        BYTES_END=$((${BYTES_START} + ${TMP_FILESIZE}))
      fi

      OUT=$(/bin/bash -c "${CURL} -s --insecure -X POST --data-binary '@${TMP_FILE}' --user '${BIGIP_USERNAME}:${BIGIP_PASSWORD}' -H 'Content-Type: application/octet-stream' -H 'Content-Range: ${BYTES_START}-$((${BYTES_END} - 1))/${FILESIZE}' 'https://${BIGIP_DEVICE}/mgmt/shared/file-transfer/uploads/${2}'")
      log "${CURL} -s --insecure -X POST --data-binary '@${TMP_FILE}' --user '${BIGIP_USERNAME}:${BIGIP_PASSWORD}' -H 'Content-Type: application/octet-stream' -H 'Content-Range: ${BYTES_START}-$((${BYTES_END} - 1))/${FILESIZE}' 'https://${BIGIP_DEVICE}/mgmt/shared/file-transfer/uploads/${2}'"

      BYTES_START=${BYTES_END}
    done
  fi

  if [ "${TMP_FILE}x" != "x" ] && test -e "${TMP_FILE}" ; then
    rm -f "${TMP_FILE}"
  fi

  # Overwrite the old records list with the new one.
  OUT=$(restCall "POST" "/mgmt/shared/file-transfer/uploads/~${BIGIP_PARTITION}~${1}" "{ \"records\": ${TT} }")
  log "uploadFile()[Upload results]: `echo $OUT | python -mjson.tool`"

  return 0
}

OUT=$(uploadFile "file.txt" "file.txt")
echo $OUT
```
So in essence it is just the upload function called with some static parameters.
- Colin_Stubbs (Nimbostratus)
lnxgeek... so it turns out dd's skip counts blocks of ibs bytes, not bytes themselves; ibs is 512 by default, so passing a byte offset seeks way past the end of the file, and here it bombs after skip=1025.
```
[blah@box ~]$ dd if=file.txt skip=1024 count=1 of=/tmp/tmp.AvKIkdviwr
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.000184931 s, 2.8 MB/s
[blah@box ~]$ dd if=file.txt skip=1025 count=1 of=/tmp/tmp.AvKIkdviwr
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000173094 s, 0.0 kB/s
[blah@box ~]$ dd if=file.txt skip=1026 count=1 of=/tmp/tmp.AvKIkdviwr
dd: ‘file.txt’: cannot skip to specified offset
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000192339 s, 0.0 kB/s
[blah@box ~]$
```
Basically it looks like I need to explicitly set ibs to the chunk size and skip N chunks when necessary:
```
[root@c01 ~] dd if=file.txt ibs=524288 skip=1 count=1 of=/tmp/tmp.AvKIkdviwr
0+1 records in
1+0 records out
512 bytes (512 B) copied, 0.000317561 s, 1.6 MB/s
[root@c01 ~]
```
I will double-check dd behaviour and commit a fix to the GitHub repo shortly.
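A minimal sketch of what that fix could look like (an illustration reusing the variable names from the upload.sh above, not necessarily the exact commit): count in whole chunks, let dd's ibs equal the chunk size, and derive the byte range from what dd actually wrote.

```
#!/bin/bash
# Sketch of a corrected chunk loop: dd's skip counts ibs-sized blocks,
# so iterate over chunk indexes instead of byte offsets. FILE is a placeholder.
FILE="file.txt"
CHUNK_SIZE=$((512 * 1024))
FILESIZE=$(stat -L -c%s "${FILE}")
TMP_FILE=$(mktemp)
CHUNK=0
BYTES_START=0

while [ ${BYTES_START} -lt ${FILESIZE} ] ; do
  # ibs=CHUNK_SIZE makes skip=${CHUNK} mean "skip ${CHUNK} whole chunks"
  dd if="${FILE}" ibs=${CHUNK_SIZE} skip=${CHUNK} count=1 of="${TMP_FILE}" 2>/dev/null
  TMP_FILESIZE=$(stat -L -c%s "${TMP_FILE}")
  BYTES_END=$((BYTES_START + TMP_FILESIZE))

  # POST ${TMP_FILE} here with:
  #   Content-Range: ${BYTES_START}-$((BYTES_END - 1))/${FILESIZE}

  BYTES_START=${BYTES_END}
  CHUNK=$((CHUNK + 1))
done
rm -f "${TMP_FILE}"
```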
- Colin_Stubbs (Nimbostratus)
Hi @lnxgeek ... errr, which exact function did you pull out? And from what file? It actually looks like your upload.sh is erroring on a dd command, not anything else.
In the deploy hooks you'll find uploadFile(), which has the logic to split the file into appropriately sized chunks and upload them one at a time using POST calls to the iControl REST API. iControl REST puts them all back together for us as a file in /var/config/rest/downloads/
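For reference, a single-request upload of a small file with plain curl looks roughly like this (hostname and credentials are placeholders); the Content-Range header carries the zero-based byte range plus the total size, which is how iControl REST knows where each chunk belongs:

```
# Upload a small file via the iControl REST file-transfer endpoint
# (bigip.example.com and admin:password are placeholders)
SIZE=$(stat -L -c%s cert.pem)
curl -sk -u 'admin:password' -X POST \
  -H 'Content-Type: application/octet-stream' \
  -H "Content-Range: 0-$((SIZE - 1))/${SIZE}" \
  --data-binary @cert.pem \
  "https://bigip.example.com/mgmt/shared/file-transfer/uploads/cert.pem"
# The file then appears on the BIG-IP as /var/config/rest/downloads/cert.pem
```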
Thanks @Stanislas Piron. However, some points for you and everyone else... using icall instead of cron, and using /shared, are BIGIP-specific things... dehydrated and my dehydrated-bigip hook are NOT intended for installation on a BIGIP system. In fact, installing them there is, in my opinion, kinda dumb.
You can and should be running dehydrated from another system, one which has an appropriate backup schedule and which can deploy certs/keys to all appropriate BIGIPs, as well as re-deploy those certs/keys if you have to replace a BIGIP system, e.g. an RMA for physical appliances or storage corruption totally wasting your VE.
If you use a single BIGIP and it fails, and you haven't backed up the Let's Encrypt account details/key as well as your dehydrated config, those will be lost. A UCS archive won't cover this by default, as it won't include anything that's not part of the BIGIP/TMOS config.
I'll take your suggestions on board though, and consider making icall usage an option for scheduling, to support persistence across upgrades. Note that dehydrated's CERTDIR variable is what should be used to control where certs get placed on the file system.
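For example, that's a one-line setting in dehydrated's config file (location varies by install, often /etc/dehydrated/config; the path shown is just an illustration):

```
# dehydrated config: where issued certs/keys are written
CERTDIR="/var/lib/dehydrated/certs"
```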
Colin, I just did a quick test of the upload function.
If I try to upload a file larger than the chunk size, this happens:
```
xxx@gestioip:~$ dd if=/dev/zero of=file.txt count=512 bs=1025
512+0 records in
512+0 records out
524800 bytes (525 kB) copied, 0.0018946 s, 277 MB/s
xxx@gestioip:~$ ./upload.sh
1+0 records in
1+0 records out
524288 bytes (524 kB) copied, 0.00108948 s, 481 MB/s
dd: ‘file.txt’: cannot skip to specified offset
```
"upload.sh" is just the upload function with a call to it inside.
- Stanislas_Piro2 (Cumulonimbus)
Hi Colin,
Great work. I have some suggestions:
- move all files from /etc/ or /var/www/dehydrated to a subfolder of /shared/
- replace cron with icall like lnxgeek did (a rough sketch follows after this list).
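For anyone who does want the on-box approach, a rough tmsh sketch of the iCall scheduling (script name, body, and the one-day interval are all illustrative, and assume dehydrated lives under /shared/dehydrated):

```
# From the tmsh shell: an iCall script that runs dehydrated, plus a periodic handler
create sys icall script dehydrated_renew definition { catch { exec /shared/dehydrated/dehydrated --cron } }
create sys icall handler periodic dehydrated_renew_handler script dehydrated_renew interval 86400
```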
Awesome work Colin!
I'm in the process of dissecting your work so I can understand what you have created. I'm especially grateful for the upload function; it has been too complex for me to fix :-)
Please keep me posted on your updates!
- Colin_Stubbs (Nimbostratus)
For anyone interested, I have created a series of dehydrated hooks to address multiple Let's Encrypt/ACME scenarios against F5 BIGIP. Options are available to use HTTP-01 or DNS-01 based validation, and to obtain/deploy/redeploy traffic OR management interface certs.
Get 'em here: https://github.com/colin-stubbs/dehydrated-bigip
I'm keen for feedback too. Let me know if something doesn't work or you think it could be improved.
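Roughly, you wire a hook in when running dehydrated; the hook path below is hypothetical, so check the repo for the actual file names:

```
# Run dehydrated in cron mode with a deploy hook from the repo (path hypothetical)
./dehydrated --cron --domain www.example.com \
  --hook /opt/dehydrated-bigip/hooks/deploy-traffic-certificate
```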
- Adam_McKay_3593 (Nimbostratus)
Not 100% relevant to the topic (but I figured it wasn't worth a topic of its own): if you want to create and update Let's Encrypt certificates on an F5 automatically, without running scripts directly on the appliance itself, this project on GitHub uses the Python f5-sdk and acme.sh to run either standalone or in a Docker container.
https://github.com/farces/acme-f5-deploy/
In this case you'd need to use the DNS API for verification, as it won't have access to the hosting web server to provision the well-known URI. The list of supported DNS providers is growing, and if your provider has no API (or you're not willing to give your API key to the script) you can use an 'alias' DNS zone on a supported provider (Cloudflare is free, for example) for the purpose of validation only.
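With acme.sh, that alias setup looks roughly like this (Cloudflare as the supported provider; all domain names and the token value are placeholders): CNAME the challenge record into the validation zone once, then issue with --challenge-alias.

```
# One-time DNS setup at your real provider:
#   _acme-challenge.example.com  CNAME  _acme-challenge.validation.example.net
export CF_Token="..."   # API token for the Cloudflare-hosted validation zone
acme.sh --issue -d example.com --dns dns_cf --challenge-alias validation.example.net
```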
It doesn't touch any VServers: it'll only create a certificate and certificate chain, plus a single Client SSL profile, once (and only if it doesn't already exist), ready to be customized and applied to a VServer.
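For reference, the objects it provisions are roughly what you'd create by hand with tmsh (object names are illustrative; newer TMOS versions prefer the cert-key-chain syntax over the flat cert/key/chain shown here):

```
# Roughly equivalent manual provisioning (names illustrative)
tmsh create ltm profile client-ssl example.com_clientssl \
  cert example.com.crt key example.com.key chain letsencrypt-chain.crt
```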
It has worked well for us, so I thought I'd put it somewhere other people could use it (and revise it, as needed!).
- Matteo_Marzilia (Nimbostratus)
I think I'm already pointing to the v1 endpoint; as suggested by Leon, I've uncommented the CA line in the config file. I'm not skilled in Linux scripting, so any suggestions would be appreciated ;)
Meanwhile I'll look at the script with some colleagues. I'll keep you updated.
It is the dehydrated script which generates the challenges, so that's the culprit. I haven't had a domain with a dash in it; that's why I've never come across this.
What you can try is to take the latest version and point it at the v1 endpoint. It would seem that the v2 endpoint requires some modification to the BIGIP setup.
If it isn't fixed there, we need to take a look at the bash script itself.
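For reference, the endpoint is the CA line in dehydrated's config; something like this pins it to v1 (both URLs are the standard Let's Encrypt directory URLs):

```
# dehydrated config: ACME endpoint
CA="https://acme-v01.api.letsencrypt.org/directory"
# CA="https://acme-v02.api.letsencrypt.org/directory"   # v2 (ACMEv2) endpoint
```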