Knowledge sharing: High CPU/Memory/Swap investigation/troubleshooting
High CPU and memory troubleshooting on F5 systems can be tricky because the cause isn’t always obvious. I’ve seen cases where a single REST-API script caused dozens of orphaned tmsh sessions, which slowly drove the CPU up until the box became almost unusable. Configuring proper session timeouts and auditing automated scripts solved it.
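As a rough sketch of that kind of audit (the `ps -eo pid,etimes,comm` column layout and the one-hour threshold are illustrative assumptions, not F5 specifics), something like this can flag long-lived tmsh processes left behind by automation:

```python
# Flag long-running tmsh processes that may be orphaned REST-API sessions.
# SAMPLE_PS is made-up sample output of `ps -eo pid,etimes,comm`; on a real
# box you would feed in the live command output instead.
SAMPLE_PS = """\
  PID ELAPSED COMM
 1012      35 tmsh
 2234    7421 tmsh
 3401     120 sshd
 4510   86400 tmsh
"""

def find_stale_tmsh(ps_output: str, max_age_seconds: int = 3600) -> list[int]:
    """Return PIDs of tmsh processes older than max_age_seconds."""
    stale = []
    for line in ps_output.splitlines()[1:]:  # skip the header row
        pid, elapsed, comm = line.split()
        if comm == "tmsh" and int(elapsed) > max_age_seconds:
            stale.append(int(pid))
    return stale

print(find_stale_tmsh(SAMPLE_PS))  # → [2234, 4510]
```

Running this on a schedule and alerting when the count grows gives you early warning before the accumulated sessions drag the CPU down.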
One thing that helps in these scenarios is approaching the issue the same way you'd approach an investigation: gather evidence, identify patterns, and rule out false leads before taking action. Applying that methodical mindset to system diagnostics often leads to faster, more reliable resolutions.
In practice, keeping detailed logs of when spikes occur, what processes were active, and whether external triggers (monitoring scripts, cron jobs, or API calls) align with the events can reveal hidden causes. Pairing this with vendor knowledge base articles and bug trackers ensures you’re not chasing a ghost issue that has already been documented.
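The correlation step above can be sketched as a small script. All timestamps and trigger names here are invented sample data, and the two-minute matching window is an arbitrary assumption you would tune to your environment:

```python
# Correlate CPU-spike timestamps with known external triggers (cron jobs,
# monitoring scripts, API calls). Sample data only, for illustration.
from datetime import datetime, timedelta

spikes = [
    datetime(2024, 5, 1, 2, 0, 30),
    datetime(2024, 5, 1, 14, 15, 10),
]
triggers = {
    "nightly-backup cron": datetime(2024, 5, 1, 2, 0, 0),
    "monitoring poll": datetime(2024, 5, 1, 9, 0, 0),
}

def correlate(spikes, triggers, window=timedelta(minutes=2)):
    """Map each spike to any trigger that fired within `window` before it."""
    matches = {}
    for spike in spikes:
        for name, fired in triggers.items():
            if timedelta(0) <= spike - fired <= window:
                matches.setdefault(spike, []).append(name)
    return matches

print(correlate(spikes, triggers))
# → {datetime(2024, 5, 1, 2, 0, 30): ['nightly-backup cron']}
```

A spike that consistently matches a trigger points you at that script or job first; a spike with no match tells you to keep looking at internal causes instead.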