Hey ChatGPT, can you write iRules?
Updated Apr 13, 2023
Version 2.0
JRahm - you might be right about challenge 3. Since the solution ChatGPT provided wasn't obviously rubbish, I didn't do a test run with this answer. I'll put it to the test and post an update if required.
Adding the -- to prevent command injection is a valid point. As LiefZimmerman said, this will save you a limb, and it separates the experts from the AI 🙂
Anyone interested in the details of command injection should read K15650046: Tcl code injection security exposure and/or the Black Hat 2019 whitepaper.
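For anyone following along, here's a minimal sketch of what the -- does in practice (a hypothetical iRule for illustration, not from the article): in Tcl, -- marks the end of options, so an attacker-controlled value beginning with "-" is always treated as data rather than being parsed as an option to the command.

```tcl
when HTTP_REQUEST {
    # Without --, a URI crafted to start with "-" could be parsed as an
    # option to switch instead of as the string being matched.
    # With --, user-supplied input is always treated as plain data.
    switch -glob -- [HTTP::uri] {
        "/admin*" { HTTP::respond 403 content "Forbidden" }
        default   { pool web_pool }
    }
}
```

The same habit applies to other option-taking commands that see user input, e.g. `string match -- $pattern $value`. Brace-quoting expressions (`expr { ... }`) is the companion defense against the double-substitution issues covered in K15650046.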
Thanks for pointing both out!