Let's Encrypt on a Big-IP
Problem this snippet solves:
It is now possible to issue Let's Encrypt certificates and maintain them on the BIG-IP.
Code:
http://wiki.lnxgeek.org/doku.php/howtos:let_s_encrypt_-_how_to_issue_certificates_from_a_bigip
Awesome work Colin!
I'm in the process of dissecting your work so I can understand what you have created. I'm especially grateful for the upload function; it has been too complex for me to get right myself :-)
Please keep me posted on your updates!
- Stanislas_Piro2 (Cumulonimbus)
Hi Colin,
Great work. I have some suggestions:
- move all files from /etc/ or /var/www/dehydrated to a subfolder of /shared/
- replace cron with icall like lnxgeek did.
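For the icall suggestion, scheduling on TMOS could look something along these lines (untested sketch; the script name, wrapper path and 24-hour interval are all illustrative, not from lnxgeek's setup):

```shell
# Untested sketch: run a dehydrated renewal wrapper daily via icall instead of cron.
# Script name, wrapper path and interval are made-up placeholders.
tmsh create sys icall script dehydrated_renew definition { exec /bin/bash /shared/dehydrated/renew.sh }
tmsh create sys icall handler periodic dehydrated_renew_handler script dehydrated_renew interval 86400
```

Unlike a hand-edited crontab, icall objects are part of the TMOS config, so they survive upgrades and land in UCS archives.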
Colin, I just did a quick test of the upload function.
If I try to upload a file larger than the chunk size this happens:
xxx@gestioip:~$ dd if=/dev/zero of=file.txt count=512 bs=1025
512+0 records in
512+0 records out
524800 bytes (525 kB) copied, 0.0018946 s, 277 MB/s
xxx@gestioip:~$ ./upload.sh
1+0 records in
1+0 records out
524288 bytes (524 kB) copied, 0.00108948 s, 481 MB/s
dd: ‘file.txt’: cannot skip to specified offset
"upload.sh" is just the upload function with a call to it inside.
- Colin_Stubbs (Nimbostratus)
Hi @lnxgeek ... errr, which exact function did you pull out? And from what file? It actually looks like your upload.sh is erroring on a dd command, not anything else.
In the deploy hooks you'll find uploadFile(), which has the logic to split the file into appropriately sized chunks and upload them one at a time via POST calls to the iControl REST API. iControl REST reassembles them for us as a file in /var/config/rest/downloads/
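The chunking convention uploadFile() relies on can be sketched standalone. This toy script (illustrative only, not the actual hook) builds a test file the same size as the one above and prints the Content-Range header each chunked POST would carry:

```shell
#!/bin/bash
# Illustrative sketch, not the actual deploy hook: print the Content-Range
# header each chunked POST to /mgmt/shared/file-transfer/uploads/ would carry.
# iControl REST expects "start-end/total", where "end" is an INCLUSIVE offset.

FILE=$(mktemp)
dd if=/dev/zero of="${FILE}" bs=1025 count=512 2>/dev/null   # 524800 bytes, as in the test above

CHUNK_SIZE=$((512 * 1024))
FILESIZE=$(stat -L -c%s "${FILE}")

START=0
while [ ${START} -lt ${FILESIZE} ] ; do
  END=$((START + CHUNK_SIZE))
  if [ ${END} -gt ${FILESIZE} ] ; then
    END=${FILESIZE}
  fi
  # Inclusive range: last byte of the chunk, not one past it.
  HDR="Content-Range: ${START}-$((END - 1))/${FILESIZE}"
  echo "${HDR}"
  START=${END}
done

rm -f "${FILE}"
```

For a 524800-byte file this yields two headers, `0-524287/524800` and `524288-524799/524800`; the real function attaches each one to a curl POST of the matching chunk.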
Thanks @Stanislas Piron. However, some points for you and everyone else... Using icall instead of cron, and using /shared are BIGIP specific things... dehydrated and my dehydrated-bigip hook are NOT intended for installation on a BIGIP system. In fact, installing them there is, in my opinion, kinda dumb.
You can and should be running dehydrated from another system, which has an appropriate backup schedule, and which can deploy certs/keys to all appropriate BIG-IPs, as well as re-deploy those certs/keys if you have to replace a BIG-IP system, e.g. RMA for physical appliances or storage corruption totally wasting your VE.
If you use a single BIG-IP and it fails, and you haven't backed up the Let's Encrypt account details/key as well as your dehydrated config, those will be lost. A UCS archive won't help by default, as it won't include anything that's not part of the BIG-IP/TMOS config.
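The state worth backing up off-box can be captured with a plain archive. A minimal sketch (all paths here are made up for illustration; use your real dehydrated base directory):

```shell
#!/bin/bash
# Illustrative sketch: archive the dehydrated state that a UCS backup would
# NOT capture - the ACME account key, config and issued certs. The directory
# layout below is a stand-in created just for this demo.

BASEDIR=$(mktemp -d)          # stand-in for your real dehydrated base directory
mkdir -p "${BASEDIR}/accounts" "${BASEDIR}/certs"
echo "fake-account-key" > "${BASEDIR}/accounts/account_key.pem"

ARCHIVE="${BASEDIR}.tar.gz"
tar -czf "${ARCHIVE}" -C "${BASEDIR}" .
ENTRIES=$(tar -tzf "${ARCHIVE}" | wc -l)
echo "archived ${ENTRIES} entries to ${ARCHIVE}"

rm -rf "${BASEDIR}" "${ARCHIVE}"
```

Dropping an archive like this into the same backup rotation as the rest of the host means losing a BIG-IP never means losing the ACME account.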
I'll take your suggestions on board though, and consider making icall usage an option for scheduling to support persistence across upgrades. Dehydrated's CERTDIR variable is what should be used to control where certs get placed on the file system though.
- Colin_Stubbs (Nimbostratus)
lnxgeek... so it turns out dd's skip= is counted in input blocks (ibs), which defaults to 512 bytes, so on a 524800-byte file it bombs once the skip goes past the end of the file (skip=1026 and beyond).
[blah@box ~]$ dd if=file.txt skip=1024 count=1 of=/tmp/tmp.AvKIkdviwr
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.000184931 s, 2.8 MB/s
[blah@box ~]$ dd if=file.txt skip=1025 count=1 of=/tmp/tmp.AvKIkdviwr
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000173094 s, 0.0 kB/s
[blah@box ~]$ dd if=file.txt skip=1026 count=1 of=/tmp/tmp.AvKIkdviwr
dd: ‘file.txt’: cannot skip to specified offset
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000192339 s, 0.0 kB/s
[blah@box ~]$
Basically it looks like I need to explicitly set ibs to the chunk size and skip N chunks when necessary:
[root@c01 ~] dd if=file.txt ibs=524288 skip=1 count=1 of=/tmp/tmp.AvKIkdviwr
0+1 records in
1+0 records out
512 bytes (512 B) copied, 0.000317561 s, 1.6 MB/s
[root@c01 ~]
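The block-counted skip behaviour above is easy to reproduce end to end. A self-contained sketch (test file size matches the earlier example; nothing here is from the actual hook):

```shell
#!/bin/bash
# Illustrative sketch: dd's skip= counts input blocks (ibs), not bytes.
# With the default ibs of 512, large skip values run past EOF; with ibs set
# to the chunk size, skip=N cleanly skips N whole chunks.

FILE=$(mktemp) ; CHUNK=$(mktemp)
dd if=/dev/zero of="${FILE}" bs=1025 count=512 2>/dev/null   # 524800-byte test file

# Skip one 512 KiB chunk; the remainder is 524800 - 524288 = 512 bytes.
dd if="${FILE}" ibs=$((512 * 1024)) skip=1 count=1 of="${CHUNK}" 2>/dev/null
REMAINDER=$(stat -L -c%s "${CHUNK}")
echo "bytes left after skipping one chunk: ${REMAINDER}"

rm -f "${FILE}" "${CHUNK}"
```

With ibs set to the chunk size, skip=1 lands exactly on the second chunk boundary, which is what the fixed upload loop needs.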
I will double check dd behaviour and commit a fix to the GitHub repo shortly.
My upload.sh:

$ cat upload.sh
BIGIP_USERNAME="xxx"
BIGIP_PASSWORD="xxx"
BIGIP_DEVICE="xxx"
CURL="/usr/bin/curl"
LOGFILE=${LOGFILE:-'/home/xxx/dehydrated-bigip-deploy-traffic-certificate.log'}
DATE='date +%m/%d/%Y:%H:%M:%S'

log() {
  echo `$DATE`" $*" >> $LOGFILE
}

uploadFile() {
  log "uploadFile()[Upload File]: ${1}"

  if [ ! -r ${1} ] ; then
    return 1
  fi

  declare -i CHUNK_SIZE
  declare -i FILESIZE
  declare -i TMP_FILESIZE
  declare -i BYTES_START
  declare -i BYTES_END

  FILENAME=`basename ${1}`
  CHUNK_SIZE=$((512 * 1024))
  FILESIZE=`stat -L -c%s ${1}`
  TMP_FILESIZE=0
  BYTES_START=0
  TMP_FILE=`mktemp`

  if [ ${FILESIZE} -le ${CHUNK_SIZE} ] ; then
    OUT=$(/bin/bash -c "${CURL} -s --insecure -X POST --data-binary '@${1}' --user '${BIGIP_USERNAME}:${BIGIP_PASSWORD}' -H 'Content-Type: application/octet-stream' -H 'Content-Range: ${BYTES_START}-$((${FILESIZE} - 1))/${FILESIZE}' 'https://${BIGIP_DEVICE}/mgmt/shared/file-transfer/uploads/${2}'")
    log "${CURL} -s --insecure -X POST --data-binary '@${1}' --user '${BIGIP_USERNAME}:${BIGIP_PASSWORD}' -H 'Content-Type: application/octet-stream' -H 'Content-Range: ${BYTES_START}-$((${FILESIZE} - 1))/${FILESIZE}' 'https://${BIGIP_DEVICE}/mgmt/shared/file-transfer/uploads/${2}'"
  else
    TMP_FILE=`mktemp`
    while [ ${BYTES_START} -le ${FILESIZE} ] ; do
      echo -n '' > ${TMP_FILE}
      dd if="${1}" skip=${BYTES_START} bs=${CHUNK_SIZE} count=1 of="${TMP_FILE}"
      TMP_FILESIZE=`stat -L -c%s ${TMP_FILE}`
      if [ $((${BYTES_START} + ${CHUNK_SIZE})) -gt ${TMP_FILESIZE} ] ; then
        BYTES_END=${FILESIZE}
      else
        BYTES_END=$((${BYTES_START} + ${TMP_FILESIZE}))
      fi
      OUT=$(/bin/bash -c "${CURL} -s --insecure -X POST --data-binary '@${TMP_FILE}' --user '${BIGIP_USERNAME}:${BIGIP_PASSWORD}' -H 'Content-Type: application/octet-stream' -H 'Content-Range: ${BYTES_START}-$((${BYTES_END} - 1))/${FILESIZE}' 'https://${BIGIP_DEVICE}/mgmt/shared/file-transfer/uploads/${2}'")
      log "${CURL} -s --insecure -X POST --data-binary '@${TMP_FILE}' --user '${BIGIP_USERNAME}:${BIGIP_PASSWORD}' -H 'Content-Type: application/octet-stream' -H 'Content-Range: ${BYTES_START}-$((${BYTES_END} - 1))/${FILESIZE}' 'https://${BIGIP_DEVICE}/mgmt/shared/file-transfer/uploads/${2}'"
      BYTES_START=${BYTES_END}
    done
  fi

  if [ "${TMP_FILE}x" != "x" ] && test -e "${TMP_FILE}" ; then
    rm -f "${TMP_FILE}"
  fi

  # Overwrite the old records list with the new one.
  OUT=$(restCall "POST" "/mgmt/shared/file-transfer/uploads/~${BIGIP_PARTITION}~${1}" "{ \"records\": ${TT} }")
  log "uploadFile()[Upload results]: `echo $OUT | python -mjson.tool`"

  return 0
}

OUT=$(uploadFile "file.txt" "file.txt")
echo $OUT
So in essence it is just the upload function called with some static parameters.
Hi everyone!
I've just updated my dehydrated script to the latest version (release 0.6.2) along with the updated "config" and "hook.sh" file - everything seems to play perfectly.
Both the "config" and "hook.sh" files have some new settings in them, but I just moved the configuration over from the old files and put it into the same places in the new ones.
I didn't make any changes to the version of ACME so I'm running it with v2.
Hi Colin,
I just re-tested your fine upload function and the skip-chunks problem has been fixed. However, you make one subtraction too many when you run the curl command.
When you iterate over the file you have this calculation for the end range number: BYTES_END=$((${BYTES_START} + ${TMP_FILESIZE} - 1))
but then when you fire off the curl command you do it again:
OUT=$(/bin/bash -c "${CURL} -s --insecure -X POST --data-binary '@${TMP_FILE}' --user '${BIGIP_USERNAME}:${BIGIP_PASSWORD}' -H 'Content-Type: application/octet-stream' -H 'Content-Range: ${BYTES_START}-$((${BYTES_END} - 1))/${FILESIZE}' 'https://${BIGIP_DEVICE}/mgmt/shared/file-transfer/uploads/${2}'")
This causes the rest API to go mad:
[SEVERE][18286][17 Jul 2018 14:25:29 UTC][8100/shared/file-transfer/uploads FileTransferWorker] Transfer failed for /var/config/rest/downloads/file.txt with java.lang.IllegalStateException: Chunk byte count 499713 in Content-Range header different from received buffer length 499712
    at com.f5.rest.common.RestFileReceiver.writeFileChunk(RestFileReceiver.java:350)
    at com.f5.rest.common.RestFileReceiver.handleFileChunkWrite(RestFileReceiver.java:286)
    at com.f5.rest.common.RestFileReceiver$1.run(RestFileReceiver.java:222)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:473)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
If I change it to this instead it works:
OUT=$(/bin/bash -c "${CURL} -s --insecure -X POST --data-binary '@${TMP_FILE}' --user '${BIGIP_USERNAME}:${BIGIP_PASSWORD}' -H 'Content-Type: application/octet-stream' -H 'Content-Range: ${BYTES_START}-$((${BYTES_END}))/${FILESIZE}' 'https://${BIGIP_DEVICE}/mgmt/shared/file-transfer/uploads/${2}'")
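The server-side check in that stack trace is simply end - start + 1 versus bytes received, because the end offset in Content-Range is inclusive. A tiny sketch (chunk size taken from the buffer length in the log) shows why subtracting 1 twice trips it:

```shell
#!/bin/bash
# Illustrative sketch: an inclusive Content-Range "start-end/total" must
# satisfy (end - start + 1) == bytes actually POSTed. If BYTES_END already
# has the -1 applied, subtracting 1 again in the header is off by one.

BYTES_START=0
CHUNK_BYTES=499712                              # buffer length from the log
BYTES_END=$((BYTES_START + CHUNK_BYTES - 1))    # inclusive end offset

CORRECT=$((BYTES_END - BYTES_START + 1))            # header uses ${BYTES_END}
DOUBLED=$(( (BYTES_END - 1) - BYTES_START + 1 ))    # header uses $((BYTES_END - 1))

echo "header with \${BYTES_END} claims ${CORRECT} bytes"
echo "header with \$((BYTES_END - 1)) claims ${DOUBLED} bytes"
```

The doubled subtraction makes the header describe one byte fewer than the chunk actually contains, so the REST worker's consistency check rejects the upload.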
- Colin_Stubbs (Nimbostratus)
Thanks @lnxgeek... you've prompted me to finally get around to committing what I had locally. New version in a new file here, with a couple of other minor changes to it: https://github.com/colin-stubbs/dehydrated-bigip/blob/master/dehydrated-bigip-common
You're welcome Colin :-)
It is a fantastic project you have put together here; I'm just trying to help out.