Getting Started with BIG-IP Next: Fundamentals
In the first article in this series, I introduced BIG-IP Next at the 50,000-foot (or meter, for the saner parts of the world...) level. In this article, I will get closer to the brass tacks of tackling some technical tasks, but still hover over the trenches so I can lay a little more groundwork on the components of BIG-IP Next: the Central Manager and the instances.

Central Manager

The Central Manager is the brains of the operation, and aptly named, since it is the centralized location where most management tasks regarding BIG-IP Next instances will coalesce. Gone are the days of logging into BIG-IP devices. It won't be supported! Also gone are the days of creating a node to create a pool, creating some profiles and iRules and SNAT pools, and then slapping all that together on a virtual server. That's not to say that some shared objects won't exist--they will, or at least they can. In classic BIG-IP, the virtual server was the "top dog" from an object perspective, unless you've already used iApps or AS3 declarations, in which case those options are similar to what we have with BIG-IP Next, where the application service wears the crown. Everything about that application service is defined within its context, including multiple virtual servers where necessary. That will be done in the GUI via application templates, or via the API with AS3 directly or via FAST templates. The included http application template in the Central Manager GUI allows for a lot of checkbox functionality, but accessing some of the functionality you may be used to will require additional or edited templates.

Beyond managing the instances and the application services, you'll also be able to manage your security policies, manage attack and bot signature security service updates, and monitor and report on deployed policies. And of course, you'll be able to manage users and perform maintenance on the Central Manager system itself.
There is no license required for Central Manager; you can download it now and get started with your discovery as soon as you're ready! I have it installed on my iMac in VMware Fusion currently, and I'll be writing articles in the next couple of weeks on installation for Fusion and ESXi.

Instances

Whereas Central Manager is the brain of the BIG-IP Next operation, the instances are the brawn. They can take the form of a tenant on F5 VELOS or rSeries hardware, a KVM or VMware Virtual Edition for private clouds, or, coming soon, a Virtual Edition on select public clouds. (Note: Instances can also take the form of CNFs in headless Kubernetes deployments, but that won't be addressed in this series.)

Onboarding instances is not as complex a process as setting up classic BIG-IP, because day one operations are not intermingled with day two and beyond. You define the CPU, memory, disk, and network resources you need depending on what modules you're licensing for use, and fire it up. Once that candle is lit, you run through a few onboarding steps with either a Postman collection or an onboarding script that walks through those steps for you. That's it for setup on the instances; the rest of the process is managed on Central Manager. Limited access will be available on instances for troubleshooting through a sidecar proxy, but even that is configured and managed through Central Manager.

Instances are licensed. Make sure to check with your account team; you might already be entitled to BIG-IP Next licensing, but a conversion transaction will be necessary. For lab discovery, you can generate a trial license on MyF5 to get started! I'll cover installation on KVM, Fusion, and ESXi in the next couple of weeks. Leon Seng has already written up installing a BIG-IP Next instance on Proxmox!

"Next" Up

Alrighty then! Enough talk, Jason, let's do something!
I hear you, I hear you... Starting next week, I'll be releasing incremental steps into the installation, onboarding, licensing, upgrading, backup/restore, etc., of both the Central Manager and the instances. Here's the general workflow I'll follow: ignore the platform; I'll step through all the supported versions I have access to and keep placeholders to circle back as more platforms are supported. I hope to see you all at AppWorld, but if not, don't be a stranger here on DevCentral; reach out any time!

Getting Started with BIG-IP Next: Configuring Instance High Availability
With BIG-IP classic, there are a lot of design choices to make and steps on both systems to arrive at an HA pair. With BIG-IP Next, this is simplified quite a bit. Once configured, the highly available pair is treated by Central Manager as a single entity. There might be alternative options in the future, but as of version 20.1, HA for instances is active/standby only. In this article, I'll walk you through the steps to configure HA for instances in the Central Manager GUI.

Background and Prep Work

I set up two HA systems in my preparation for this article. The first had dedicated interfaces for the management interface, the external and internal traffic interfaces, and the HA interface, so when configuring the virtual machine, I made sure each system had four NICs. For the second, I merged all the non-management interfaces on a single NIC and used vlan tagging, so those systems had two NICs.

The IP addressing scheme in my lab is shown below. First the four-NIC system:

    4-NIC System                next-4nic-a       next-4nic-b       floating
    mgmt                        172.16.2.152/24   172.16.2.153/24   172.16.2.151/24
    cntrlplane ha (vlan 245)    10.10.245.1/30    10.10.245.2/30    NA
    dataplane ha (int 1.3)      10.0.5.1/30       10.0.5.2/30       NA
    dataplane ext (int 1.1)     10.0.2.152/24     10.0.2.153/24     10.0.2.151/24
    dataplane int (int 1.2)     10.0.3.152/24     10.0.3.153/24     10.0.3.151/24

And now the two-NIC system:

    2-NIC System                next-2nic-a       next-2nic-b       floating
    mgmt                        172.16.2.162/24   172.16.2.163/24   172.16.2.161/24
    cntrlplane ha (vlan 245)    10.10.245.5/30    10.10.245.6/30    NA
    dataplane ha (vlan 50)      10.0.5.5/30       10.0.5.6/30       NA
    dataplane ext (vlan 30)     10.0.2.162/24     10.0.2.163/24     10.0.2.161/24
    dataplane int (vlan 40)     10.0.3.162/24     10.0.3.163/24     10.0.3.161/24

Beyond the self IP addresses for your traffic interfaces, you'll need additional IP addresses for the floating address, the control-plane HA sub-interfaces (which are created for you), and the data-plane HA interfaces.
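Before deploying, it can be worth sanity-checking an addressing plan like the one above. Here's a minimal Python sketch (the helper function and the plan layout are my own, not anything Central Manager provides) that verifies each node pair and floating address share a subnet and that no address is reused, using the 2-NIC lab values:

```python
import ipaddress

# Lab addressing from the 2-NIC system above; adjust for your environment.
# Each entry is (node-a, node-b, floating); HA networks have no floating IP.
plan = {
    "cp-ha": ("10.10.245.5/30", "10.10.245.6/30", None),
    "dp-ha": ("10.0.5.5/30", "10.0.5.6/30", None),
    "ext":   ("10.0.2.162/24", "10.0.2.163/24", "10.0.2.161/24"),
    "int":   ("10.0.3.162/24", "10.0.3.163/24", "10.0.3.161/24"),
}

def check_plan(plan):
    """Return a list of problems: each network's addresses must share a
    subnet, and no address may appear twice anywhere in the plan."""
    problems, seen = [], set()
    for name, (a, b, floating) in plan.items():
        ifaces = [ipaddress.ip_interface(x) for x in (a, b, floating) if x]
        nets = {i.network for i in ifaces}
        if len(nets) != 1:
            problems.append(f"{name}: addresses span multiple subnets {nets}")
        for i in ifaces:
            if i.ip in seen:
                problems.append(f"{name}: duplicate address {i.ip}")
            seen.add(i.ip)
    return problems

print(check_plan(plan))  # an empty list means the plan is consistent
```

Nothing fancy, but it catches the classic copy-paste mistake of pairing a 10.0.2.x address with a 10.0.3.x address before Central Manager does.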
Before proceeding, make sure you have a plan for network segmentation and addressing similar to the above, that you've installed two like instances, and that one (and only one) of them is licensed.

Configuration

This walkthrough is for the 2-NIC system shown above, but the steps are mostly the same either way. First, log in to Central Manager and click Manage Instances. Click the standalone mode indicator for the system you want to be active initially in your HA pair; for me, that's next-2nic-a. (You can also just click on the system name and then select HA in the menu, but this saves a click.) In the pop-up dialog, select Enable HA. Read the notes below it to make sure your systems are ready to be paired.

On this screen, a list of available standalone systems will populate. Click the down arrow and select your second system, next-2nic-b in my case. Then click Next. At the next prompt, you'll need to create two vlans, one for the control plane and one for the data plane. The control plane mechanics are taken care of for you, and you don't need to plan connectivity other than to select an available vlan that won't conflict with anything else in your system. For the data plane, you need to have a dedicated vlan and/or interface set aside. Click Create VLAN for the control plane, then name and tag your vlan. In my case, I used cp-ha as my vlan name and tag 245. Click Done. Now click Create VLAN for the data plane. Because I'm tagging all networks on the 2-NIC system, my only interface is 1.1, so I named my data plane vlan dp-ha, set the tag to 50, selected interface 1.1, and clicked Done. Now that both HA VLANs have been created, click Next.

On this screen, you'll name your HA pair system. This will need to be unique among your HA pairs, so plan accordingly. I named mine next-ha-1, but that's generic and unlikely to be helpful in your environment. Then set your HA management IP; this is how Central Manager will connect to the HA pair. You can enable auto-failback if desired, but I left that unchecked.
For the HA Nodes Addresses, I referenced my addressing table posted at the top of this article and filled those in as appropriate. When you get those filled out, click Next.

Now you'll be presented with a list of your traffic VLANs. On my system, I have v102-ext and v103-int for my external and internal networks. First, I clicked v102-ext. On this screen, you'll need to add a couple of rows so you can populate the active node IP, the standby node IP, and the floating IP. The order doesn't matter, but I ordered them as shown, and again referenced my addressing table. Once populated, click Save. That will return you to the VLAN list, where you'll notice that v102-ext now has a green checkbox where the yellow warning was. Now click into your other traffic VLAN (v103-int in my case) if applicable to your environment, or skip this next step. This is a repeat of the external traffic network, this time for the internal traffic network. I referenced my address table one more time and filled the details out as appropriate, then clicked Save. Make sure that you have green checkboxes on the traffic VLANs, then click Next.

Review the summary of the HA settings you've configured, and if everything looks right, click Deploy to HA. On the "are you sure?" dialog where you're prompted to confirm your deployment, click Yes, Deploy. You'll then see messaging at the top of the HA configuration page for the instance indicating that HA is being created. Also note that the Mode on this page still indicates standalone during creation. Once the deployment is complete, you'll see the mode has changed to HA and the details for your active and standby nodes are provided. Also present here is the Enable automatic failover option, which is enabled by default. This applies to software upgrades: if left enabled, the standby unit will be upgraded first, a failover will be executed, and then the remaining system will be upgraded.
If in your HA configuration you specified auto-failback, then after the second system is upgraded, another failover will be executed to complete the process. And finally, as seen in the list of instances, there are now three instead of four, with next-ha-1 taking the place of next-2nic-a and next-2nic-b from where we started.

Huzzah! You now have a functioning BIG-IP Next HA pair. After we conclude the "Getting Started" series, we'll start to look at the benefits of automation around all the tasks we've covered so far, including HA. The click-ops capabilities are nice to have, but I think you'll find the ability to automate all this from a script or something like an Ansible playbook will really start to drive home the API-first aspects of Next.

Getting Started with BIG-IP Next: Licensing Instances in Central Manager
This article assumes that the license was not applied during the initial instance setup, and covers only the GUI process. For the API process or for disconnected mode, please reference the instructions for licensing on Clouddocs.

Download the JSON Web Token from MyF5

I don't have a paid license, so I'm going to use my trial license available at MyF5. Your mileage may vary here. Go to My Products & Plans, then Trials, and in the My Trials listing (assuming you've requested/received one), click BIG-IP Next. Click Downloads and Licenses (note, however, the helpful list of resources down in Guides and References). You can just copy your JSON web token, but I chose to download it.

Install the Token

Log in to Central Manager and click Manage Instances. Click on your new unlicensed instance. In the left-hand menu at the bottom, click License, then click Activate License. We already downloaded our token, so after reviewing the information, click Next. Note that I made sure my Central Manager has access to the licensing server, and the steps covered in this article assume the same. If you've managed classic BIG-IP licenses, copying and pasting dossiers to get licenses should be a well-understood process. On this screen, paste your token into the box, give it a name, and click Activate. After a brief interrogation of the licensing server, you should now have a healthy, licensed BIG-IP Next instance!

Resources

How to: Manage BIG-IP Next instance licenses

Getting Started with BIG-IP Next: Upgrading Central Manager
Upgrades are one of the major improvements in moving from BIG-IP classic to Next. Whereas there is no direct analog for Central Manager in BIG-IP classic, the improvements over the BIG-IP/BIG-IQ upgrade experience will be noticeable. Simplification is the goal, and after my first Central Manager upgrade experience, I'd say that bar has been reached. In this article, I'll walk you through performing an upgrade to a standalone Central Manager. When HA for Central Manager is released, I'll update this article with those details.

The installation steps on Clouddocs (links in the resources at the end of this article) note that you should upgrade your instances before Central Manager, so keep that in mind as you build out your procedure sets for BIG-IP Next operations. For production, I'd also recommend taking a backup of Central Manager (I'll do a walkthrough of that process in the coming weeks), but for discovery on my BIG-IP Next journey, I'll skip that step and nuke/pave if I have an issue.

The first step in the upgrade process is to download the BIG-IP Next Central Manager upgrade package. After you have the upgrade package, log in to your Central Manager. Click the tic-tac-toe board in the upper left, then in the dropdown menu that appears, select the System option. There's only one option here currently, and that's the Upgrade button. Go ahead and click it. There will be a couple of notes on the new window about resources, and information on the inability to perform tasks during the upgrade. Go ahead and click Next. If you didn't grab the package yet, the link to do so is included on this menu page. I selected the upload file option, selected the package from my downloads, and uploaded the file. You'll get the "green means go" checkbox when it's ready, at which point you can click the Upgrade button. On the "Are you sure?" alert dialog, go ahead and click Yes, Upgrade. At this point, the upgrade will begin.
On my upgrade, the interface was grayed out and I could not interact with Central Manager, so my session timed out. I had trouble getting back in for several minutes, but when I did, I was presented with an alert dialog confirming the upgrade; you can click Close there. And with that, you can see the new version of code. Congratulations on your first upgrade of Central Manager!

Resources

Upgrade BIG-IP Next Central Manager

Prepare BIG-IP Central Manager for Automation
This guide describes the process of setting up F5 BIG-IP Central Manager (CM) via Postman to manage BIG-IP instances with automation templates. It is essential to note that this information is specific to the current version of CM/BIG-IP Next (v20) and may change in the future.

Introduction

Beginning with BIG-IP version 20, F5 has implemented significant changes in managing the new BIG-IP OS, now referred to as BIG-IP Next. BIG-IP Next leverages a modern, highly scalable software architecture to support vast, dynamic application service deployment. This new iteration adopts an API-first approach to management, offering enhanced automation capabilities and improved scalability for service expansion. Learn more about BIG-IP Next here.

BIG-IP Next Central Manager (also known as BIG-IP CM) represents the next-generation management suite for the new BIG-IP OS across hardware and software instances. It provides simplified lifecycle and configuration management across F5 BIG-IP Next fleets. There are two primary methods for managing BIG-IP Next instances via Central Manager: through a web browser-based portal, or via API-based templates. Notably, BIG-IP Next no longer supports individual management through the CLI (tmsh).

Before managing Central Manager via Postman, it is highly recommended to start with essential tasks such as managing licenses and deploying a BIG-IP Next instance via the Central Manager Web GUI. Detailed instructions for adding and managing BIG-IP Next instances and configurations can be found in this KB library: https://community.f5.com/kb/technicalarticles/prepare-big-ip-central-manager-for-automation/327785

Getting Started with API-Based Management

In addition to the web-based portal, BIG-IP CM provides APIs for orchestration, facilitating instance and configuration management via REST API. Authentication to the API requires a token for access and control.
To interact with BIG-IP CM, clients must utilize token-based authentication instead of basic authentication. By default, BIG-IP CM rejects API requests made without a proper token value. To obtain an access token, we send a request to the API login URL with the pre-set administrative username/password (the combination can be changed via the Web GUI). To get an access token, use a POST request to the following URL:

    POST https://<big-ip_next_cm_mgmt_ip>/api/login

Include the following syntax in the request body:

    {
        "username": "admin",
        "password": "Welcome123!"
    }

Upon successful authentication, the response body will contain an access token. This token can be utilized in future API calls to manage CM configuration and settings. Let's try injecting the access token from the preceding response and using it as the bearer token of a request to get the current config, sending a GET request without a body to the following URL:

    GET https://<big-ip-cm-hostname>/api/v1/spaces/default/appsvcs/blueprints

Now let's automate token refresh in Postman and store the access token in a variable, so requests can always use the latest access token. Within the "Tests" section in Postman, add the following script:

    pm.test("Login status code is 200", function () {
        pm.response.to.have.status(200);
    });
    var resp = pm.response.json();
    pm.globals.set("bigip_next_cm_token", resp.access_token);
    pm.environment.set("bigip_next_rf_token", resp.refresh_token);

The above script stores the access token in a global variable named "bigip_next_cm_token" whenever Postman sends a successful login request with a 200 response code. To include the stored access token variable in future requests, you can simply use {{bigip_next_cm_token}} as the bearer token value for API requests, or reference it as an environment variable.
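If you'd rather script the login outside of Postman, the same flow can be sketched in Python using only the standard library. The /api/login endpoint and the access_token/refresh_token response fields come from the examples above; the function names are my own, and certificate verification is disabled here strictly for lab use against a self-signed Central Manager cert:

```python
import json
import ssl
import urllib.request

def get_token(cm_host, username, password):
    """POST to /api/login and return the (access_token, refresh_token) pair."""
    req = urllib.request.Request(
        f"https://{cm_host}/api/login",
        data=json.dumps({"username": username, "password": password}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Lab-only: skip verification of the self-signed Central Manager cert.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with urllib.request.urlopen(req, context=ctx) as resp:
        body = json.load(resp)
    return body["access_token"], body["refresh_token"]

def bearer_headers(access_token):
    """Build the header dict that plays the role of {{bigip_next_cm_token}}."""
    return {"Authorization": f"Bearer {access_token}"}
```

From there, a GET to /api/v1/spaces/default/appsvcs/blueprints with bearer_headers(token) attached mirrors the test request shown earlier.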
This approach ensures that the token will be automatically attached to each request without requiring manual intervention to get and set the token value.

Now let's try creating a sample app via Postman using the access token as the bearer. Create the application service by sending a POST to the /api/v1/spaces/default/appsvcs endpoint:

    POST https://<big-ip_next_cm_mgmt_ip>/api/v1/spaces/default/appsvcs

Following is an example of an application service template as the API body:

    {
        "name": "HelloWorld",
        "set_name": "Examples",
        "template_name": "http",
        "parameters": {
            "pools": [
                {
                    "loadBalancingMode": "round-robin",
                    "loadBalancingRatio": 10,
                    "monitorType": ["http"],
                    "servicePort": 80,
                    "application_name": "App3",
                    "poolName": "pool1"
                },
                {
                    "loadBalancingMode": "round-robin",
                    "loadBalancingRatio": 10,
                    "monitorType": ["https"],
                    "servicePort": 443,
                    "application_name": "App3",
                    "poolName": "pool2"
                }
            ],
            "virtuals": [
                {
                    "FastL4_idleTimeout": 600,
                    "FastL4_looseClose": true,
                    "FastL4_looseInitialization": true,
                    "FastL4_resetOnTimeout": true,
                    "FastL4_tcpCloseTimeout": 43200,
                    "FastL4_tcpHandshakeTimeout": 43200,
                    "TCP_idle_timeout": 60,
                    "UDP_idle_timeout": 60,
                    "accessAdditionalConfigurations": " ",
                    "enable_FastL4": false,
                    "enable_HTTP2_Profile": true,
                    "enable_TCP_Profile": false,
                    "enable_TLS_Client": false,
                    "enable_TLS_Server": true,
                    "enable_UDP_Profile": false,
                    "enable_snat": true,
                    "snat_addresses": [],
                    "snat_automap": true,
                    "enable_WAF": true,
                    "enable_Access": false,
                    "enable_iRules": false,
                    "virtualPort": 80,
                    "pool": "pool1",
                    "virtualName": "vs1",
                    "certificatesEnum": "test11",
                    "WAFPolicyName": "test1"
                },
                {
                    "FastL4_idleTimeout": 600,
                    "FastL4_looseClose": true,
                    "FastL4_looseInitialization": true,
                    "FastL4_resetOnTimeout": true,
                    "FastL4_tcpCloseTimeout": 43200,
                    "FastL4_tcpHandshakeTimeout": 43200,
                    "TCP_idle_timeout": 60,
                    "UDP_idle_timeout": 60,
                    "accessAdditionalConfigurations": " ",
                    "enable_FastL4": false,
                    "enable_HTTP2_Profile": true,
                    "enable_TCP_Profile": false,
                    "enable_TLS_Client": false,
                    "enable_TLS_Server": true,
                    "enable_UDP_Profile": false,
                    "enable_snat": true,
                    "snat_addresses": [],
                    "snat_automap": true,
                    "enable_WAF": true,
                    "enable_Access": false,
                    "enable_iRules": false,
                    "virtualPort": 80,
                    "pool": "pool2",
                    "virtualName": "vs2",
                    "certificatesEnum": "test12",
                    "WAFPolicyName": "test2"
                }
            ],
            "application_name": "App3",
            "application_description": "TestApp"
        }
    }

You can further verify the application service status via the BIG-IP Central Manager Web GUI.

Getting Started with BIG-IP Next: Backing Up and Restoring Central Manager
Backing up BIG-IP Next instances is possible in the Central Manager GUI. Backing up Central Manager, however, requires you to break out those mad CLI skilz of yours. And take a backup you shall! You can snapshot your Central Manager virtual machine and restore that as well, but if you want a system-level backup rather than a device-level one in the event things go south, this is currently your only option, as high availability for Central Manager, though coming soon to a release near you, is not yet available. As there are no screenshots required, most of this is already covered in the Clouddocs how-to on this topic, but in this article, I'll walk through the process by executing the steps and sharing the output.

Creating the Central Manager Backup

Log in to the Central Manager CLI by SSHing to your FQDN or IP address. If you configured external storage when you set up Central Manager, you can do a full backup, which includes all the analytics from Central Manager and your instances. If you only have local storage, you'll need to do a partial backup. The command to perform the backup and the restore is /opt/cm-bundle/cm. You use the backup subcommand for a backup operation and, as you can probably guess, the restore subcommand for a restore operation. I don't have external storage in my lab, so I ran a partial backup.

    admin@cm1:~$ /opt/cm-bundle/cm backup
    2024-03-09T00:04:15+00:00 Executing /opt/cm-bundle/cm backup
    Encryption password:
    Reenter encryption password:
    2024-03-09T00:04:21+00:00 info: Backing up Vault...
    Created vault backup: /tmp/vault-backup.tgz
    tar: removing leading '/' from member names
    var/run/vault-init/
    var/run/vault-init/linkerd.csr
    var/run/vault-init/linkerd.crt
    var/run/vault-init/vault-client-intermediate-ca.csr
    var/run/vault-init/vault-client-intermediate-ca.crt
    var/run/vault-init/unsealkeys
    var/run/vault-init/ca.crt
    var/run/vault-init/ingress-intermediate-ca.crt
    var/run/vault-init/unsealkeys.sha256
    var/run/vault-init/linkerd-ca.crt
    var/run/vault-init/ingress-intermediate-ca.csr
    var/run/vault-init/linkerd-webhook.csr
    var/run/vault-init/linkerd-webhook.crt
    2024-03-09T00:04:22+00:00 info: Vault backup successful!
    2024-03-09T00:04:22+00:00 info: Backing up PostgreSQL...
    2024-03-09T00:04:23+00:00 info: PostgreSQL backup successful!
    2024-03-09T00:04:23+00:00 info: Performing Prometheus backup...
    2024-03-09T00:04:55+00:00 info: Creating Prometheus database snapshot...
    2024-03-09T00:05:09+00:00 info: Verifying the Prometheus database snapshot...
    2024-03-09T00:05:09+00:00 info: Successfully created Prometheus database snapshot 20240309T000505Z-4c5c8cab103961be
    2024-03-09T00:05:09+00:00 info: Copying Prometheus snapshot locally...
    2024-03-09T00:05:22+00:00 info: Cleanup the Prometheus snapshot in the pod
    2024-03-09T00:05:28+00:00 info: Prometheus backup succeeded!
    2024-03-09T00:05:28+00:00 info: Performing Elasticsearch backup...
    2024-03-09T00:05:28+00:00 info: Creating Elasticsearch snapshot [elasticsearch-snapshot]...
    2024-03-09T00:05:28+00:00 info: Elasticsearch backup succeeded!
    2024-03-09T00:05:28+00:00 info: Backing up SQLite...
    2024-03-09T00:05:29+00:00 info: SQLite backup successful!
    2024-03-09T00:05:29+00:00 info: Creating backup bundle backup.20240309-000421.tgz...
    2024-03-09T00:08:26+00:00 info: Encrypting backup bundle...
    2024-03-09T00:08:40+00:00 info: Backup bundle created at /opt/cm-backup/backup.20240309-000421.tgz.enc

Restoring the Central Manager Backup

Sometime after my backup, suppose AubreyKingF5 logged in to my Central Manager and deleted user jrahm and my backup-test certificate (BAD Aubrey!). Maybe he deleted all the resources. Here's the restore execution on my Central Manager instance. Note the immediate ask for that backup password. Seriously, vault those passwords, don't lose them!

    admin@cm1:~$ /opt/cm-bundle/cm restore /opt/cm-backup/backup.20240309-000421.tgz.enc
    2024-03-09T00:12:40+00:00 Executing /opt/cm-bundle/cm restore /opt/cm-backup/backup.20240309-000421.tgz.enc
    2024-03-09T00:12:40+00:00 info: Restoring from backup file /opt/cm-backup/backup.20240309-000421.tgz.enc...
    Enter decryption password:
    2024-03-09T00:12:43+00:00 info: Decrypting backup file...
    2024-03-09T00:12:46+00:00 info: Checking available disk space...
    2024-03-09T00:13:55+00:00 info: Extracting backup to /opt/cm-backup...
    2024-03-09T00:14:35+00:00 info: Validating backup contains all required components
    2024-03-09T00:14:35+00:00 info: Restoring Vault...
    var/run/vault-init/
    var/run/vault-init/linkerd.csr
    var/run/vault-init/linkerd.crt
    var/run/vault-init/vault-client-intermediate-ca.csr
    var/run/vault-init/vault-client-intermediate-ca.crt
    var/run/vault-init/unsealkeys
    var/run/vault-init/ca.crt
    var/run/vault-init/ingress-intermediate-ca.crt
    var/run/vault-init/unsealkeys.sha256
    var/run/vault-init/linkerd-ca.crt
    var/run/vault-init/ingress-intermediate-ca.csr
    var/run/vault-init/linkerd-webhook.csr
    var/run/vault-init/linkerd-webhook.crt
    Vault restored using /tmp/vault-backup.tgz
    2024-03-09T00:14:47+00:00 info: Vault data has been successfully restored.
    2024-03-09T00:14:47+00:00 info: Renewing all certificates.
    Manually triggered issuance of Certificate default/mbiq-ingress-nginx-root-cert
    Manually triggered issuance of Certificate default/mbiq-ado-vault-server-cert
    Manually triggered issuance of Certificate default/mbiq-ado-vault-client-cert
    Manually triggered issuance of Certificate default/gateway-feature-ingress-cert
    Manually triggered issuance of Certificate default/central-manager-ui-ingress-cert
    Manually triggered issuance of Certificate default/mbiq-apm-vault-client-cert
    Manually triggered issuance of Certificate default/mbiq-certificate-vault-client-cert
    Manually triggered issuance of Certificate default/mbiq-gateway-vault-client-cert
    Manually triggered issuance of Certificate default/mbiq-sslo-vault-client-cert
    Manually triggered issuance of Certificate default/mbiq-system-vault-client-cert
    Manually triggered issuance of Certificate default/mbiq-ingress-nginx-admission
    Manually triggered issuance of Certificate default/mbiq-instance-vault-client-cert
    Manually triggered issuance of Certificate default/mbiq-journeys-vault-client-cert
    Manually triggered issuance of Certificate default/mbiq-llm-vault-client-cert
    Manually triggered issuance of Certificate default/mbiq-qkview-vault-client-cert
    Manually triggered issuance of Certificate default/mbiq-upgrade-manager-vault-client-cert
    Manually triggered issuance of Certificate default/node-exporter-server-cert
    2024-03-09T00:14:50+00:00 info: Waiting for certificates to be renewed.
    2024-03-09T00:14:50+00:00 info: Certificate mbiq-ingress-nginx-root-cert renewed.
    2024-03-09T00:14:50+00:00 info: Certificate mbiq-ado-vault-server-cert renewed.
    2024-03-09T00:14:51+00:00 info: Certificate mbiq-ado-vault-client-cert renewed.
    2024-03-09T00:14:56+00:00 info: Certificate gateway-feature-ingress-cert renewed.
    2024-03-09T00:15:01+00:00 info: Certificate central-manager-ui-ingress-cert renewed.
    2024-03-09T00:15:02+00:00 info: Certificate mbiq-apm-vault-client-cert renewed.
    2024-03-09T00:15:02+00:00 info: Certificate mbiq-certificate-vault-client-cert renewed.
    2024-03-09T00:15:02+00:00 info: Certificate mbiq-gateway-vault-client-cert renewed.
    2024-03-09T00:15:02+00:00 info: Certificate mbiq-sslo-vault-client-cert renewed.
    2024-03-09T00:15:02+00:00 info: Certificate mbiq-system-vault-client-cert renewed.
    2024-03-09T00:15:03+00:00 info: Certificate mbiq-ingress-nginx-admission renewed.
    2024-03-09T00:15:03+00:00 info: Certificate mbiq-instance-vault-client-cert renewed.
    2024-03-09T00:15:03+00:00 info: Certificate mbiq-journeys-vault-client-cert renewed.
    2024-03-09T00:15:03+00:00 info: Certificate mbiq-llm-vault-client-cert renewed.
    2024-03-09T00:15:03+00:00 info: Certificate mbiq-qkview-vault-client-cert renewed.
    2024-03-09T00:15:09+00:00 info: Certificate mbiq-upgrade-manager-vault-client-cert renewed.
    2024-03-09T00:15:09+00:00 info: Certificate node-exporter-server-cert renewed.
    2024-03-09T00:15:09+00:00 info: Successfully renewed all certificates.
    2024-03-09T00:15:09+00:00 info: Restoring PostgreSQL database...
    2024-03-09T00:15:12+00:00 info: Restarting init jobs.
    W0309 00:16:07.005788 2472134 warnings.go:70] path /(mgmt/shared/.*) cannot be used with pathType Prefix
    2024-03-09T00:17:03+00:00 info: Successfully restarted init jobs.
    2024-03-09T00:17:05+00:00 info: PostgreSQL database has been successfully restored.
    2024-03-09T00:17:05+00:00 info: Restarting mbiq-sslo-feature...
    2024-03-09T00:17:09+00:00 info: mbiq-sslo-feature has restarted.
    2024-03-09T00:17:09+00:00 info: Restarting mbiq-qkview-feature...
    2024-03-09T00:17:13+00:00 info: mbiq-qkview-feature has restarted.
    2024-03-09T00:17:13+00:00 info: Restarting mbiq-device-feature...
    2024-03-09T00:17:17+00:00 info: mbiq-device-feature has restarted.
    2024-03-09T00:17:17+00:00 info: Restarting mbiq-certificate-feature...
    2024-03-09T00:17:20+00:00 info: mbiq-certificate-feature has restarted.
    2024-03-09T00:17:20+00:00 info: Restarting mbiq-gateway-feature...
    2024-03-09T00:17:24+00:00 info: mbiq-gateway-feature has restarted.
    2024-03-09T00:17:24+00:00 info: Restarting mbiq-proxy-service...
    2024-03-09T00:17:28+00:00 info: mbiq-proxy-service has restarted.
    2024-03-09T00:17:28+00:00 info: Restarting mbiq-system-feature...
    2024-03-09T00:17:35+00:00 info: mbiq-system-feature has restarted.
    2024-03-09T00:17:35+00:00 info: Restarting mbiq-apm-feature...
    2024-03-09T00:17:46+00:00 info: mbiq-apm-feature has restarted.
    2024-03-09T00:17:46+00:00 info: Restarting mbiq-upgrade-manager-feature...
    2024-03-09T00:17:49+00:00 info: mbiq-upgrade-manager-feature has restarted.
    2024-03-09T00:17:49+00:00 info: Restoring Prometheus...
    2024-03-09T00:17:50+00:00 info: Deleting the current Prometheus data...
    2024-03-09T00:17:50+00:00 info: Copying Prometheus data from backup...
    2024-03-09T00:18:11+00:00 info: Prometheus data has been successfully restored. It may take a few minutes for Prometheus to be available.
    2024-03-09T00:18:11+00:00 warning: Only restoring log indexes of ES
    2024-03-09T00:18:11+00:00 info: Restoring Elasticsearch...
    {"acknowledged":true,"persistent":{"action":{"destructive_requires_name":"false"}},"transient":{}}
    2024-03-09T00:18:11+00:00 info: Closing all indices...
    2024-03-09T00:18:12+00:00 info: Deleting all indices...
    {"acknowledged":true}
    {"acknowledged":true,"persistent":{"action":{"destructive_requires_name":"true"}},"transient":{}}
    2024-03-09T00:18:12+00:00 info: Elasticsearch data has been successfully restored.
    2024-03-09T00:18:12+00:00 info: Restoring SQLite database...
    2024-03-09T00:18:14+00:00 info: Restarting LLM POD
    2024-03-09T00:18:18+00:00 info: SQLite database has been successfully restored.
    2024-03-09T00:18:18+00:00 info: Migrating old apps to new schema...
    2024-03-09T00:18:18+00:00 info: Waiting for migration job to finish...
    2024-03-09T00:18:23+00:00 info: Migration job succeeded
    2024-03-09T00:18:23+00:00 info: Post-restore updates started...
    2024-03-09T00:18:25+00:00 info: Post-restore updates completed successfully
    2024-03-09T00:18:25+00:00 info: Restore completed successfully.

In this quick video, you can see the evidence of his alleged shenanigans removing those resources, then me restoring the backup and validating that the resources were indeed restored. And that's a wrap! Get your backups going and your processes documented.
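If you're scripting these backups, note that the bundle names shown above encode their creation time, which makes rotation and restore selection straightforward. Here's a small sketch that assumes only the backup.YYYYMMDD-HHMMSS.tgz.enc naming and the /opt/cm-bundle/cm restore invocation from this article; the helper functions are my own:

```python
import datetime
import re

# Bundle names look like: /opt/cm-backup/backup.20240309-000421.tgz.enc
BUNDLE_RE = re.compile(r"backup\.(\d{8}-\d{6})\.tgz\.enc$")

def bundle_timestamp(path):
    """Extract the creation time encoded in a backup bundle's file name."""
    m = BUNDLE_RE.search(path)
    if not m:
        raise ValueError(f"not a backup bundle: {path}")
    return datetime.datetime.strptime(m.group(1), "%Y%m%d-%H%M%S")

def newest_bundle(paths):
    """Pick the most recent bundle from a directory listing."""
    return max(paths, key=bundle_timestamp)

def restore_command(path):
    """Argument list for the documented restore invocation (the encryption
    password is still prompted for interactively by the cm tool)."""
    return ["/opt/cm-bundle/cm", "restore", path]
```

Paired with a cron job running /opt/cm-bundle/cm backup and an scp of /opt/cm-backup off-box, this gives you a simple restore-the-latest workflow while Central Manager HA is still on the roadmap.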