HomeRouter, JokeRFC, Copilot, AttackToAI - March 25th - 31st, 2023 - F5 SIRT - This Week in Security

Editor's introduction 

This week's editor is Koichi. Not a day goes by these days that we don't hear about AI. This week I chose the following topics: the Japanese Metropolitan Police Department's announcement about home routers, Joke RFCs, Microsoft Security Copilot, and adversarial AI attacks.

We in F5 SIRT invest a lot of time in understanding the frequently changing behavior of bad actors. Bad actors are a threat to your business, your reputation, and your livelihood. That's why we take the security of your business seriously. When you're under attack, we'll work quickly to effectively mitigate attacks and vulnerabilities and get you back up and running. So the next time you face a security emergency, please contact F5 SIRT.

Japan's Metropolitan Police Department calls on all citizens to monitor home router configuration changes

Japan's Metropolitan Police Department has issued an announcement titled “Alert on unauthorised use of home routers”.

It states: “In cooperation with several related manufacturers, we advise checking regularly for any unrecognised configuration changes on home routers, in addition to the conventional countermeasures, such as changing simple default IDs and passwords, always using the latest firmware, and considering replacing routers that are no longer supported.” So they recommend that all citizens check their home router's configuration. I understand the need, but the majority of people will likely require tech support to do this.

Announcement (Japanese)

Joke RFCs

April Fools' Day has come again this year.
RFCs (Requests for Comments) are IETF documents published to share a wide range of information on the standardisation and operation of technologies used on the Internet. We engineers often refer to RFCs.
On 1 April, special RFCs, the Joke RFCs, were published. Joke RFCs are, quite literally, published as jokes, with nonsense content written in the same format as regular RFCs. While a Joke RFC is never intended to be implemented on an actual network, it is often cited as a piece of Internet community humour.
This year, three Joke RFCs were published: RFC 9401, RFC 9402, and RFC 9405.
Among them, RFC 9401 adds a death flag (DTH) to the TCP header. A death flag is a meme used in novels and movies, as the dictionary here says. The RFC says, for example, that a death flag should not be sent early in a session, and that it should not be sent with a FIN; the latter is only exceptionally allowed if the recipient is a fictional master of martial arts.
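Just for fun, here is a minimal Python sketch of what setting such a flag might look like. The bit position chosen for DTH is an assumption made purely for illustration; as the RFC itself is a joke, none of this belongs on a real network.

```python
# Tongue-in-cheek sketch of RFC 9401's DTH (Death) flag. The bit
# position for DTH is an assumption made for illustration only;
# do not implement this on a real network.
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20
DTH = 0x80  # hypothetical: borrow a reserved bit for "Death"

def build_tcp_flags(*flags: int) -> int:
    """Combine individual flag bits into a TCP flags byte."""
    value = 0
    for flag in flags:
        value |= flag
    return value

flags = build_tcp_flags(SYN, DTH)

# Per the RFC's (joking) rules, DTH must not be sent with FIN.
assert not (flags & DTH and flags & FIN), "DTH MUST NOT accompany FIN"
print(f"TCP flags byte: 0b{flags:08b}")  # -> 0b10000010
```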

The list of joke RFCs is here.

RFC 9401

April Fools' Day Request for Comments

Security Copilot

ChatGPT, which uses GPT-3, is well known these days, and Copilot, which uses the same technology to give AI code suggestions while coding, is also increasing engineers' productivity. Microsoft has now announced Microsoft Security Copilot, which applies the Copilot approach to security analysis.

According to Microsoft, Security Copilot is based on GPT-4, which is even more powerful than GPT-3. Through an interactive AI, Security Copilot correlates and summarises attack-related data in real time, prioritises security incidents, and supports security professionals (mostly CSIRT members) by advising them on the best course of action to rapidly remediate a wide variety of threats. At this stage it is in a preview phase.

Introducing Microsoft Security Copilot: Empowering defenders at the speed of AI

Adversarial AI attack

I regularly read research papers on arxiv.org to keep up with current technology topics. In particular, I read papers about AI, and about attacks against AI.

Among the papers published during this period (3/25-3/31), I found three that discuss attacks on Deep Neural Networks (DNNs). Before introducing them, I will briefly explain what AI attacks are.
AI technologies that classify something basically extract features from the data and create a boundary that determines how the data is classified. If the data crosses this decision boundary and is classified differently than it should be, the AI has 'made a mistake'. A toy sketch of such a boundary crossing follows below.
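To make that concrete, here is a toy sketch of a linear decision boundary and a point being nudged across it. The classifier, labels, and numbers are all invented for illustration.

```python
# Toy illustration of a decision boundary: a linear classifier
# w.x + b = 0 separates two classes; a tiny nudge pushes a point
# across the boundary and flips the prediction. All values here
# are made up for illustration.
import numpy as np

w, b = np.array([1.0, -1.0]), 0.0            # boundary: x0 - x1 = 0

def classify(x: np.ndarray) -> str:
    return "panda" if w @ x + b > 0 else "gibbon"

x = np.array([0.51, 0.50])                   # barely on the "panda" side
print(classify(x))                           # -> panda

x_adv = x + np.array([-0.02, 0.02])          # small, targeted perturbation
print(classify(x_adv))                       # -> gibbon
```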

Input data that is deliberately crafted, through small changes barely noticeable to humans, so that the AI 'makes a mistake' in this way is called an Adversarial Example. Attack methods that use these Adversarial Examples to trick AI are sometimes referred to as Adversarial AI Attacks. A well-known example of an Adversarial AI attack is the one used in this paper.

An attacker applies small perturbations to the image of a panda so that the image is recognised by the AI, with high confidence, as a gibbon.
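The perturbation in that example was generated with the Fast Gradient Sign Method (FGSM) described in the paper. Below is a minimal FGSM sketch, assuming a pretrained PyTorch classifier; the model, input tensor, and epsilon value are placeholders. With a small enough epsilon, the change is invisible to a human, yet the model's prediction flips.

```python
# Minimal FGSM sketch (Goodfellow et al.), assuming a pretrained
# PyTorch classifier and a batched input image tensor scaled to [0, 1].
import torch
import torch.nn.functional as F

def fgsm(model: torch.nn.Module, image: torch.Tensor,
         label: torch.Tensor, epsilon: float = 0.007) -> torch.Tensor:
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, scaled by epsilon.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.detach().clamp(0.0, 1.0)
```

The three papers from this week are listed below.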

AdvCheck: Characterizing Adversarial Examples via Local Gradient Checking

Improving the Transferability of Adversarial Samples by Path-Augmented Method

Generating Adversarial Samples in Mini-Batches May Be Detrimental To Adversarial Robustness

