Attempting the new path to BIG-IP Certified Administrator
A couple weeks ago I had kjsalchow on for an episode of DevCentral Connects, which you can watch at your pleasure here: I had reached out to Ken and HeidiSchreifels after one of our MVPs made a comment on this new path toward certification. I missed the memo (Heidi's article here), but this was big news and I knew we needed to have a conversation with the community. During our chat, Ken mentioned that all five beta tests required to earn your BIG-IP Certified Administrator would be available on-site at AppWorld (as would the recertification test, available to ANYONE who has previously held the cert) and that he needed more people to start at #5 and work backwards. And so I did. I had time Tuesday morning before "The Hub" opening party kicked things off, so I knocked out all five beta tests. Here are my thoughts about the experience.

I went in completely blind. I did not study (but the blueprint is here for you), and I did not do the prep work to get my device ready for testing. I did pre-register for tests #3 - #5. When I arrived at the room, the cert team did a great job helping me get the tools set up on my laptop. The test environment downloads a secure browsing session, and there are some known issues with company laptops that lock things down, so you might be best suited to test from a personal laptop. From my discussions with them, to my knowledge tablets are not supported.

The Certiverse delivery was great, a strong improvement from what I recall of the previous versions. Being able to see the questions and the diagrams and reference back and forth made it far easier to assess the challenges. I always try to use the flagging system for review, and that worked great. As this was a beta, I took extra time to provide feedback. For the betas, the question counts ranged from (I think) 39 to 58 across the five tests, with an hour to complete each. For production tests, I believe that will be 30/30. None of them concerned me on time.

I really liked the breakdown in the new format. This allows you to progress through the material when studying without having to keep it all upstairs for one test.

BIG-IP Administration Install, Initial Configuration, and Upgrade
BIG-IP Administration Data Plane Concepts
BIG-IP Administration Data Plane Configuration
BIG-IP Administration Control Plane Administration
BIG-IP Administration Support and Troubleshooting

I felt pretty good about the analysis questions; that stuff is pretty cemented in my brain. I work mostly with the BIG-IP APIs now, so I'm less solid on specific tmsh commands or tmui click paths. I put myself in the cone of shame on a few questions because I filmed lightboard lessons for them, but I wasn't confident in the right answer. All that said, I have no idea if I passed them, but I think I hit minimally viable candidate on four of them? As they were betas, there were some questions that probably need to be removed, and some questions might need to be refined a little. This is where the always fascinating psychometrics come into play. But for the most part, I thought they were a good summary of the knowledge one should have for basic administration.

I got the first three tests completed quickly enough to take the other two. Registering for them on-site and jumping into them was painless. The cert team is the bomb-diggity. They're so helpful, friendly, encouraging, and super eager to make everyone successful. It's always a pleasure to cross paths with them! The downside of betas is they are not scored immediately, so I have to wait.
Jason does not like waiting... How about you, community? Anyone else take the betas for the refreshed BIG-IP Certified Administrator (or the recertifying exam) and want to share your experience?

AI Friday LIVE w/ Steve Wilson - Vibe Coding, Agentic AI Security And More
Welcome to AI Friday! In this episode, we dive into the latest developments in Generative AI Security, discussing the implications and challenges of this emerging technology. Join Aubrey from DevCentral and the OWASP GenAI Security Project, along with an expert panel including Byron, Ken, Lori, and special guest Steve Wilson, as they explore the complexities of AI in the news and the evolving landscape of AI security. We also take a closer look at the fascinating topic of vibe coding, its impact on software development, and the transformative potential of AI-assisted coding practices. Whether you're a developer, security professional, or an AI enthusiast, this episode is packed with insights and expert opinions that you won't want to miss. Don't forget to like, subscribe, and join the conversation!

Topics:
Agentic Risk vs. Reward
OWASP GenAI Security Project
DeepSeek and Inference Performance
Vibe Coding Roundtable

Annnnd we may have shared some related, mostly wholesome memes.

AI Friday Live: NVIDIA GTC - Disney Robots, Blackwell, Groot, Newton And More!
Join us for our first live episode of AI Friday, where we delve into the latest and most exciting developments in the world of artificial intelligence. This week, we discuss the major announcements from NVIDIA's GTC conference, including insights on their new Blackwell chips and the fascinating advancements in autonomous vehicles. Our very own Buu Lam from DevCentral shares his firsthand experience and key takeaways from the event. In addition, we explore the groundbreaking world of robotics, highlighting how Disney's cute robots, powered by NVIDIA and Google DeepMind, are set to revolutionize the industry. We also tease our upcoming episode featuring Steve Wilson, where we will discuss the OWASP Gen AI Security Project and its implications for large language model applications. Don't miss out on this engaging and informative discussion! Would you like some fish with all those new chips?

DGX Spark / Station
Robotics: Groot and Newton
Nemotron Models

Exploring Quantum Computing, AI, Networking and Cryptography at AppWorld 2025
Join us as we dive into the exciting world of quantum computing and AI at AppWorld 2025, held at the fabulous Fontainebleau in Las Vegas. I get to host a fascinating discussion with Daniela Pontes and Brett Wolmarans bwolmarans from F5. We explore the latest advancements in AI, the impact of quantum computing on cybersecurity, and what the future holds for post-quantum cryptography. Discover how F5 is leveraging AI to optimize and secure applications, and learn about the recent release of the AI gateway. Daniela delves into the looming threat quantum computing poses to current cryptography standards, explaining the importance of transitioning to quantum-resistant algorithms, and even touches a little on quantum networking. Stay tuned for info on how industries like finance and healthcare are preparing for a quantum future. Don't miss this episode full of expert knowledge and cutting-edge technology!

Happy 20th Birthday, BIG-IP TMOS!
I wasn’t in the waiting room with the F5 family, ears and eyes perked for the release announcement of BIG-IP version 9.0. I was a customer back in 2004, working on a government contract at Scott AFB, Illinois. I shared ownership of the F5 infrastructure, pairs of BIG-IPs running version 4.5 on Dell PowerEdge 2250 servers, with one other guy. But maybe a month or two before the official first release of TMOS, my F5 account manager dropped off some shiny new hardware. And it was legit purpose-built and snazzy, not some garage-style hacked Frankenstein of COTS parts like the earlier stuff. And you wonder why we chose Dell servers!

Anyway, I was a hard-core network engineer at this time, with very little exposure to anything above layer four, and even there, my understanding was limited to ports and ACLs and maybe a little high-level clarity around transport protocols. But application protocols? Nah. No idea. So with this new hardware and an entirely new full-proxy architecture (what’s a proxy, again?) I was overwhelmed. And honestly, I was frustrated with it for the first few days because I didn’t know what I didn’t know, and so I struggled to figure out what to do with it, even to replicate my half-proxy configuration in the “new way”. But I’m a curious person. Given enough time and caffeine, I can usually get to the bottom of a problem, at least well enough to arrive at a workable solution. And so I did. My typical approach to anything is to make it work, make it work better, make it work reliably better, then finally make it work reliably and more performantly better. And the beauty here with this new TMOS system is that I was armed with a treasure trove of new toys. The short list I dug into during my beta trial, which lasted for a couple of weeks:

The concept of a profile. When you support a few applications, this is no big deal. When you support hundreds, being able to macro configuration snippets within your application and across applications was revolutionary. Not just for the final solution, but also for setting up and executing your test plans.

iRules. Yes, technically they existed in 4.x, but they were very limited in scope. With TMOS, F5 introduced the Tcl-based, F5-extended live-traffic scripting environment that unleashed tremendous power and flexibility for network and application teams. I dabbled with this, and thought I understood exactly how useful this was. More on this a little later.

A host operating system. I was a router, switch, and firewall guy. Nothing I worked on had this capability. I mean, a Linux system built into my networking device? YES!!! Two things I never knew I always needed during my trial: 1) tcpdump ON BOX. Seriously, mind blown; and 2) perl scripting against config and SNMP. Yeah, I know, I laugh about perl now. But 20 years ago, it was the cat's pajamas.

A fortunate job change

Shortly after my trial was over, I interviewed for and accepted a job offer from a major rental car company that was looking to hire an engineer to redesign their application load balancing infrastructure and select the next gear purchase for the effort. We evaluated Cisco, Nortel/Alteon, Radware, and F5 on my recommendation. With our team’s resident architect we drafted the rubric with which we’d evaluate all the products, and while there were some layer two performance issues at some packet sizes that were arguably less than real-world, the BIG-IP blew away the competitors across the board, particularly in configurability and instrumentation.
Tcpdump on box was such a game-changer for us. Did we have issues with TMOS version 9? For sure. My first year with TMOS was also TMOS's first year. Bugs are going to happen with any release, but with a brand new thing they're guaranteed. But F5 support was awesome, and we worked through all the issues in due time. Anyway, I want to share three wins in my first year with TMOS.

Win #1

Our first production rollout was in the internet space, on BIG-IP version 9.0.5. That’s right, a .0 release. TMOS was a brand new baby, and we had great confidence throughout our testing. During our maintenance, once we flipped over the BIG-IPs, our rental transaction monitors all turned red and the scripted rental transaction time had increased by 50%! Not good. “What is this F5 stuff? Send it back!!” But it was new, and we knew we had a gem here. We took packet captures on box, of course, then rolled back and took more packet captures, this time through taps because our old stuff didn’t have tcpdump on box. This is where Jason started to really learn about the implications of both a full proxy architecture and the TCP protocol. It turns out our application servers had a highly-tuned TCP stack on them specific to the characteristics of the rental application. We didn’t know this, of course. But since we implemented a proxy that terminates clients at the BIG-IP and starts a new session to the servers, all those customizations for WAN traffic were lost. Once we built a TCP profile specifically for the rental application servers and tested it under WAN emulation, we not only reached parity with the prior performance but beat it by 10%. Huzzah! Go BIG-IP custom protocol stack configuration!

Win #2

For the next internal project, I had to rearchitect the terminal server farm. We had over 700 servers in two datacenters supporting over 60,000 thin clients around the world for rental terminals. Any failures meant paper tickets and unhappy staff and customers. One thing that was problematic with the existing solution is that sometimes clients would detach and upon reconnect would connect directly to the server, which skewed the load balancer's view of the world and frequently overloaded some servers to the point that all sessions on that server would hang until metrics (but usually angry staff) alerted us. Remember my iRules comment earlier on differentiators? Well, iRules architect David Hansen happened to be a community hero and was very helpful to me in the DevCentral forums and really opened my eyes to the art of the possible with iRules. He was able to take the RDP session token that was being returned by the client, read it, translate it from its Microsoft encoding format, and then forward the session on to the correct server in the backend so that all sessions continued to be accounted for in our load balancing tier. This was formative for me as a technologist and as a member of the DevCentral community.

Win #3

2004-2005 was the era before security patching was as visible a responsibility as it is today, but even then we had a process and concerns when there were obstacles. We had an internal application that had a plugin for the web tier that managed all the sessions to the app tier, and this plugin was no longer supported. We were almost a year behind on system and application patches because we had no replacement for this. Enter, again, iRules. I was able to rebuild the logic of the plugin in an iRule that IIRC wasn't more than 30 lines.
So the benefits ended up not only being a solution to that problem, but the ability to remove that web tier altogether, saving on equipment, power, and complexity costs. And that was just the beginning... TMOS was mature upon arrival, but it got better every year. iControl added REST-based API access; clustered multi-processing introduced tremendous performance gains; TMOS got virtualized, and all the home-lab technologists shouted with joy; a plugin architecture allowed for product modules like ASM and APM; solutions that began as iRules, like AFM and SSLO, became products. It’s crazy how much innovation has taken place on this platform!

The introduction of TMOS didn’t just introduce me to applications and programmability. It did that and I’m grateful, but it did so much more. It unlocked in me that fanboy level that fans of sports teams, video game platforms, Taylor Swift, etc., experience. It helped me build an online community at DevCentral, long before I was an employee. Happy 20th Birthday, TMOS! We celebrate and salute you!

Enhance your GenAI chatbot with the power of Agentic RAG and F5 platform
Agentic RAG (Retrieval-Augmented Generation) enhances the capabilities of a GenAI chatbot by integrating dynamic knowledge retrieval into its conversational abilities, making it more context-aware and accurate. In this demo, I will demonstrate an autonomous decision-making GenAI chatbot utilizing Agentic RAG. I will explore what Agentic RAG is and why it's crucial in today's AI landscape. I will also discuss how organizations can leverage GPUaaS (GPU as a Service) or AI Factory providers to accelerate their AI strategy. The F5 platform provides robust security features that protect sensitive data while ensuring high availability and performance. It also optimizes the chatbot by streamlining traffic management and reducing latency, ensuring smooth interactions even during high demand. This integration ensures the GenAI chatbot is not only smart but also reliable and secure for enterprise use.
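To make the "agentic" part of the pattern concrete, here is a minimal, self-contained Python sketch: the bot first decides whether a question needs retrieval, and only then pulls grounding text before answering. The knowledge base, keyword retriever, and answer step below are stand-ins invented purely for illustration; they are not the vector store, model, or F5 components used in the actual demo.

# Toy sketch of an agentic RAG decision loop. The "knowledge base" and
# keyword retriever stand in for a real vector store and LLM call.
KNOWLEDGE_BASE = {
    "ai gateway": "F5 AI Gateway inspects and routes traffic between clients and LLM backends.",
    "waf policy": "A WAF policy defines the protections applied to an application.",
}

def needs_retrieval(query: str) -> bool:
    """Agentic step: the bot decides whether to consult the knowledge base at all."""
    return any(topic in query.lower() for topic in KNOWLEDGE_BASE)

def retrieve(query: str) -> list[str]:
    """Naive keyword lookup standing in for a vector-store similarity search."""
    return [text for topic, text in KNOWLEDGE_BASE.items() if topic in query.lower()]

def answer(query: str) -> str:
    if needs_retrieval(query):
        context = " ".join(retrieve(query))
        # A real chatbot would pass `context` to the model as grounding text.
        return f"Based on the docs: {context}"
    return "Answering from the model's general knowledge (no retrieval needed)."

if __name__ == "__main__":
    print(answer("What does the AI Gateway do?"))
    print(answer("Tell me a joke."))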
How I did it - "Remote Logging with the F5 XC Global Log Receiver and Elastic"

Welcome to configuring remote logging to Elastic, where we take a look at the F5 Distributed Cloud's global log receiver service and how you can easily send event log data from the F5 Distributed Cloud services platform to the Elastic Stack.

Security Automation with F5 BIG-IP and Event Driven Ansible
Updated (September 19th 2023)

INTRODUCTION TO EVENT DRIVEN SECURITY:

Event Driven Security is one of the projects I have been working on for the last year or so. The idea of creating automated security that can react similarly to how I would react in situations is fascinating to me, and then comes the BIG Question.... "Can I code it?"

Originally our solution utilized the ELK stack (Elasticsearch, Logstash, Kibana), where Elasticsearch was my logging and monitoring tool, Kibana was the frontend GUI for helping me visualize and set up my watchers for my webhook triggers, and Logstash was an intermediary to receive my webhooks and help me execute Ansible-related code. While using Logstash, if the Ansible code was simple it had no issues; however, when things got more complex (i.e., taking payloads from Elastic and feeding them through Logstash to my playbooks), I would sometimes get intermittent results. Some of this could be my lack of knowledge of the software, but for me it needed to be simple! As I wanted to do more complex things with Event Driven Security, I needed a product that could keep up with those needs. And luckily, in October 2022 that product was announced: "Event Driven Ansible". It meant I didn't need Logstash anymore; I could call Ansible-related code directly, and it even took in JSON-based webhooks to trigger the code, so I was already halfway there!

CODE FOR EVENT DRIVEN SECURITY:

So now that I have set up the preface, let's get down to the good stuff! I have set up a GitHub repository for the code I have been testing with, https://github.com/f5devcentral/f5-bd-ansible-eda-demo, which is free for all to use, so please feel free to take/fork/expand!!! There are some cool things worth noting in the code, specifically the transformation of the watcher output into something usable in playbooks. This code takes every hit the watcher's filter finds, copies the source IP from each hit into a comma-separated list, and then sends the list as a variable within the webhook along with the message that triggers the code. Here is the watcher code mentioned above that transforms and sends the payload. See the full code in the GitHub repo. (Github Repo --> elastic --> watch_blocked_ips.json)

"actions": {
  "logstash_exec": {
    "transform": {
      "script": {
        "source": """
          def hits = ctx.payload.hits.hits;
          def transform = '';
          for (hit in hits) {
            transform += hit._source.src_ip;
            transform += ', '
          }
          return transform;
        """,
        "lang": "painless"
      }
    },
    "webhook": {
      "scheme": "http",
      "host": "10.1.1.12",
      "port": 5000,
      "method": "post",
      "path": "/endpoint",
      "params": {},
      "headers": {},
      "body": """{
        "message": "Ansible Please Block Some IPs",
        "payload": "{{ctx.payload._value}}"
      }"""
    }
  }
}

In the Ansible rulebook, the big thing to note is that from the pre-GA code (which was all CLI, ansible-rulebook based) to the GA version (the EDA GUI), rulebooks are now set up to call Ansible Automation Platform (AAP) templates. In the code below you can see that it's looking for an existing template "Block IPs" in the organization "Default" to be able to run correctly.
(Github Repo --> rulebooks --> webhook-block-ips.yaml)

---
- name: Listen for events on a webhook
  hosts: all

  ## Define our source for events
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000

  ## Define the conditions we are looking for
  rules:
    - name: Block IPs
      condition: event.payload.message == "Ansible Please Block Some IPs"
      action:
        run_job_template:
          name: "Block IPs"
          organization: "Default"

This shows my template setup in Ansible Automation Platform 2.4.x. There is one CRITICAL piece of information I wanted to share about using EDA GA with AAP 2.4: within the template you MUST tick the "Prompt on launch" checkbox in the variables section. This allows the payload from EDA (given to it by Elastic) to pass on to the playbook.

In the playbook you can see how we extract the payload from the event using the ansible_eda variable. This allows us to pull in the event that was sent from Elastic to Event Driven Ansible and then on to the Ansible Automation Platform template, and to narrow down the specific fields we need (Message and Payload). From there we create an array from that payload so we can pass it along to our F5 code to start adding blocked IPs to the WAF policy.

(Github Repo --> playbooks --> block-ips.yaml)

---
- name: ASM Policy Update with Blocked IPs
  hosts: lb
  connection: local
  gather_facts: false
  vars:
    Blocked_IPs_Events: "{{ ansible_eda.event.payload }}"
    F5_VIP_Name: VS_WEB
    F5_VIP_Port: "80"
    F5_Admin_Port: "443"
    ASM_Policy_Name: "WAF-POLICY"
    ASM_Policy_Directory: "/tmp/f5/"
    ASM_Policy_File: "WAF-POLICY.xml"
  tasks:
    - name: Setup provider
      ansible.builtin.set_fact:
        provider:
          server: "{{ ansible_host }}"
          user: "{{ ansible_user }}"
          password: "{{ ansible_password }}"
          server_port: "{{ F5_Admin_Port }}"
          validate_certs: "no"
    - name: Blocked IP Events From EDA
      debug:
        msg: "{{ Blocked_IPs_Events.payload }}"
    - name: Create Array from BlockedIPs
      ansible.builtin.set_fact:
        Blocked_IPs: "{{ Blocked_IPs_Events.payload.split(', ') }}"
      when: Blocked_IPs_Events is defined
    - name: Remove Last Object from Array which is empty array object
      ansible.builtin.set_fact:
        Blocked_IPs: "{{ Blocked_IPs[:-1] }}"
      when: Blocked_IPs_Events is defined
  ...

All of this combined creates a well-oiled setup that looks like the diagram below. With the code and the flows set up, we can now create proactive, event-based security! Here is the flow of the code that is in the GitHub repo when executed:

1) The F5 BIG-IP pushes all the monitoring logs to Elastic.
2) Elastic takes all that data and stores it while evaluating it against a watcher with its filters and criteria.
3) The watcher finds something that matches its criteria and sends the webhook with the payload to Event Driven Ansible.
4) Event Driven Ansible's rulebook triggers, calls a template within Ansible Automation Platform, and sends along the payload given to it from Elastic.
5) Ansible Automation Platform's template executes a playbook to secure the F5 BIG-IP using the payload given to it from EDA (originally from Elastic).

In the end we go full circle, starting from the F5 BIG-IP and ending at the F5 BIG-IP!

Full Demonstration Video:

Check out our full demonstration video we recently posted (Sept 13th 2023), available on-demand via https://www.f5.com/company/events/webinars/f5-and-red-hat-3-part-demo-series. This page does require a registration, and you can check out our 3 part series.
The one related to this lab is "Event-Driven Automation and Security with F5 and Red Hat Ansible".

Proactive Security with F5 & Event Driven Ansible Video Demo

LINKS TO CODE: https://github.com/f5devcentral/f5-bd-ansible-eda-demo
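One closing note, not part of the repo walkthrough above: while wiring all of this together, it can be handy to simulate the Elastic watcher so the rulebook and the AAP template can be exercised on demand instead of waiting for real traffic to trip the watch. The minimal Python sketch below posts the same JSON shape the watcher sends (a message plus a comma-separated payload) to the webhook source defined in the rulebook. The 10.1.1.12:5000/endpoint address comes from the watcher example above, and the IP list is made-up test data, so adjust both for your own lab.

# Simulate the Elastic watcher's webhook so the EDA rulebook and AAP template
# can be tested without waiting for real events. Host, port, and path match the
# watcher and rulebook shown earlier; the IP list is fabricated test data.
import json
import urllib.request

body = {
    "message": "Ansible Please Block Some IPs",   # must match the rulebook condition
    "payload": "203.0.113.10, 203.0.113.11, ",    # CSV format the playbook splits on ', '
}

req = urllib.request.Request(
    "http://10.1.1.12:5000/endpoint",
    data=json.dumps(body).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)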
DevCentral RSA Trip Planning Guide

Well.. It's RSA season and, for me, this will be SUPER exciting, as it's my first one! The Moscone Center in San Francisco, California, will once again host the RSA Conference on April 24th through April 27th, 2023. There are quite a few things to look forward to this year, but the one that I'm most excited about so far is the theme, "Stronger Together." Through time, we've had a very bilateral community in security. Whether it's the Spy vs. Spy influenced White Hat / Black Hat concept from my early days or the Team Red / Blue of today, it feels clear that we need some unity... or better yet, community. At DevCentral, everything's about community, so I'm most curious to see how community is reflected in RSA's message. As a community member, though, I'd like to share the things I'm most excited about there.

Where to find F5?

F5 will be all over the place at this show! Our booth on the expo floor is N5435 and, of course, PSilva will be scheming a Find The Booth Video, I'd imagine! For us DevCentral crew, the booth is like a home base. We frequent it between shoots to check the action and to meet with industry experts. On Tuesday at 8:30am, F5 Labs' Sander_Vinberg will be presenting "The Evolution of CVEs, Vulnerability Management and Hybrid Architectures," with Ben Edwards from The Cyentia Institute. Knowing Ben a bit, this should be a VERY cool talk and one to highlight if you're into community-working-together types of themes... oh! You are! Well.. you might see me in the audience snapping pics like Peter Parker. Friday has a couple of sessions you'll want to catch! The first is at 9:30 with Angel Grant, with "Metaspace Race, Securing Minors in the Metaverse - from the Start" and, after lunch, get your CTF on with warburtr0n and Malcolm Heath at "Learn the FUNdamentals of API Security," at 1pm!

Keynotes / Speakers:

Aside from the F5 presence, I'm very excited to see Tanya Janca, who's got 3 sessions! She's doing a Birds of a Feather on Thursday at 1pm, "Creating a Great DevSecOps Culture," and a lab, "Adding SAST to CI/CD, Without Losing Any Friends," Wednesday at 1:15pm, but the one I'm REALLY thrilled to see is her keynote on Thursday at 9:40 at the South Stage called "DevSecOps Worst Practices." I'm also looking forward to seeing Alyssa Miller. I just went to see her at Security B-Sides Rochester, NY, but this RSA event is a CISO panel, "CISO Legal Risks and Liabilities," on Wednesday at 1:15 PM, Moscone West 2001.

Networking Events:

I'm not sure what to expect with these as a new attendee. There are a couple of events focused around beer, if that's your thing, and I hear the pub crawl is semi-legendary. The Sandbox looks to offer a wide array of experiences.. also with beer. There is also a Women's Networking Reception, sponsored by Cisco, on Tuesday night at South 303. Personally, I'd like to hit up the "Inclusive Security Welcome & Networking Breakfast" on Wednesday at South 305. To me, that sounds a bit different and, as someone who's always been passionate about the benefits of workplace diversity, it seems like a fit. Also, I think it is only appropriate that I try to attend the "RSAC Loyalty Plus & First-Timer Reception" on Sunday, from 5-7:30, but I think that's up to the airline gods.

I Hope To See You There!

Like, Comment.. Let us know if you're going, as we'd LOVE to connect with you community members! We want to know what YOU are excited about with RSA this year, as well.
If there's anything you're missing out on but would like to see covered, let us know that, as well, so we can try to get coverage for you!

Revolutionize F5 BIG-IP Deployment Automation with HashiCorp's No-Code Ready Terraform Modules
Introduction

In organizations today, application infrastructure deployment involves teams such as platform teams, ops teams, and dev teams all working together to ensure consistency and compliance. This is no easy task: teams and expertise are siloed, and deploying application infrastructure is time-consuming. Platform teams typically address this challenge with automation and by enabling their ops teams and developers with self-service infrastructure, which abstracts most steps of deployment. HashiCorp Terraform No-Code Provisioning enables self-service of BIG-IP infrastructure, as it allows platform teams to create and maintain a library of pre-built Terraform modules that can be used by ops teams and developers to deploy multi-cloud BIG-IP infrastructure and services for their applications. This helps ensure consistency and reduces the amount of time organizations need to set up and configure infrastructure. Taking this one step further, Terraform no-code modules enable infrastructure teams to streamline automation by combining CI/CD pipelines, any custom scripts, and other automation tools in the deployment chain, allowing developers and operations teams to deploy F5 application services and infrastructure anywhere with a few clicks from the Terraform Cloud GUI, all while maintaining compliance.

What is No-Code?

No-code provisioning in Terraform Cloud lets users deploy infrastructure resources without writing Terraform configuration. This lets organizations provide a self-service model to developers with limited infrastructure knowledge and a way to deploy the resources they need.

It allows individuals with limited Terraform coding experience or knowledge to provision infrastructure with Terraform.
It can accelerate the development process by eliminating the need for coding and testing.
It can reduce the reliance on scarce technical resources or expertise.
It can improve the flexibility and agility of BIG-IP deployment.

How to set up Terraform Cloud for the BIG-IP No-Code Module?

You need the following:
Terraform Cloud account
AWS account
Terraform Cloud variable set configured with your AWS credentials

Fork the example GitHub repository https://github.com/f5businessdevelopment/terraform-aws-bigip-nocode

Then, clone your forked repository, replacing USER with your username:

git clone https://github.com/USER/terraform-aws-bigip-nocode-1

Navigate to the terraform-aws-bigip-nocode-1 repository directory. Make sure you have variables defined as shown below:

variable "prefix" {
  description = "provide some prefix for deployment"
}

variable "region" {
  description = "AWS region you can define example is us-west-2 "
}

variable "allow_from" {
  description = "IP Address/Network to allow traffic from your machine (i.e. 192.0.2.11/32)"
}

These variable definitions expose the parameters used when deploying the BIG-IP instance; you can add or remove any parameters you need to expose.

How to Publish the No-Code Ready Module?

First, create a tag for your module. Tags are required to create a release on the GitHub repository, and Terraform Cloud will use this tag to register the module.

git tag 1.0.0
git push --tags

Once your release is ready on the GitHub repository, navigate to Terraform Cloud at https://app.terraform.io

Click Registry 🡪 Publish 🡪 Module

On the Add Module screen, select GitHub in the Connect to VCS option. Browse through your repositories and select the repository as shown below.
Confirm the selection as shown below. Click on "Add Module to no-code provision allow list" and then hit Publish as shown below. It will take a couple of seconds to publish the module; once done, you will see the screen below. Now you are ready to use the module: to deploy the BIG-IP instance, click on the Provision workspace tab.

How to use the BIG-IP No-Code Terraform Module?

Once we have the no-code module published on Terraform Cloud, we are ready to use it.

Log in to Terraform Cloud at https://app.terraform.io and choose an organization.
Click on Registry 🡪 Module (bigip-1nic-nocode) as shown.
Click on the Provision workspace button as shown below. Workspaces in Terraform Cloud separate infrastructure configurations to help provide a multi-tenant environment.
Provide the 3 parameters below as shown. You can give any name as a prefix; this helps provide further multi-tenancy if multiple people are provisioning.
Provide a workspace name; you can give any name. And finally hit the "Create Workspace" button to deploy the BIG-IP instance.

The BIG-IP instance will be ready, with the management IP address and password provided for you.

Conclusion:

Finally, the infrastructure team can help ensure the security and compliance of the infrastructure deployed using Terraform No-Code Provisioning by implementing security best practices, controls, and monitoring.

Reference Video