
Prepare for AI-enabled cyber attacks


MIT Technology Review Insights, in partnership with AI cybersecurity firm Darktrace, surveyed more than 300 C-level executives, directors, and managers around the world to understand how they are addressing the cyber threats they face – and how they can use AI to fight back against attackers who wield it against them.

60% of respondents say that human-driven responses to cyberattacks cannot keep up with automated attacks. As businesses gear up for greater challenges, more sophisticated technologies become critical. In fact, an overwhelming majority of respondents – 96% – say they have already started protecting themselves against AI-powered attacks, with some turning to AI-enabled defenses.

Offensive AI cyber attacks are daunting, and the technology is fast and smart. Consider deepfakes, one type of weaponized AI: fabricated images or videos depicting scenes or people that were never present, or never even existed.

In January 2020, the FBI warned that deepfake technology had already reached the point where artificial personas could be created that could pass biometric tests. At the rate at which AI neural networks are evolving, an FBI official said at the time, national security could be undermined by high-definition fake videos that mimic public figures so convincingly that they appear to say whatever words the videos’ creators put into their manipulated mouths.

This is just one example of the technology being used for nefarious purposes. At some point, AI could carry out cyberattacks autonomously, disguising its operations and blending into regular network activity. The technology is available to anyone, including threat actors.

Offensive AI risks and developments in the cyberthreat landscape are redefining corporate security, as humans already struggle to keep up with advanced attacks. In particular, respondents said that email and phishing attacks are the ones they fear most. Almost three-quarters said email threats are the most worrying: 40% of respondents call email and phishing attacks “very worrisome”, while 34% say they are “somewhat worrisome”. No wonder, as 94% of detected malware is still delivered via email. The traditional methods of stopping email-borne threats rely on historical indicators – namely, previously seen attacks – as well as on the recipient’s ability to spot the signs, both of which can be circumvented by sophisticated phishing attacks.

When offensive AI is added to the mix, “fake email” becomes almost indistinguishable from genuine messages from trusted contacts.

How attackers exploit the headlines

The coronavirus pandemic offered cybercriminals a lucrative opportunity. Email attackers, in particular, followed a long-established pattern: use the headlines of the day – along with the fear, uncertainty, greed, and curiosity they arouse – to lure victims into so-called “fearware” attacks. With employees working remotely, without the security protocols of the office in place, organizations have seen successful phishing attempts soar. Max Heinemeyer, director of threat hunting at Darktrace, notes that his team saw phishing emails evolve immediately as the pandemic took hold. “We’ve seen a lot of emails saying things like, ‘Click here to see who is infected in your area,’” he says. When offices and universities reopened last year, new scams surfaced in lockstep. Emails offered “cheap or free Covid-19 cleaning programs and tests,” says Heinemeyer.

There was also an increase in ransomware that coincided with the rise of remote and hybrid work environments. “The bad guys know that everyone now has to work remotely. If you get hit now and can’t give your employees remote access, it’s game over,” he says. “Maybe a year ago people could still come into the office and work more offline, but it hurts a lot more now. And we see that the criminals have started to take advantage of that.”

What is the common theme? Change, rapid change, and – in the case of the global shift to working from home – complexity. This exposes the problem with traditional cybersecurity, which relies on signature-based approaches: static defenses cannot adapt well to change. These approaches extrapolate from yesterday’s attacks to determine what tomorrow’s will look like. “How can you predict tomorrow’s phishing wave? It just doesn’t work,” says Heinemeyer.
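
To make that limitation concrete, here is a minimal sketch in Python of the kind of static, signature-based filtering described above. The indicator strings and sample messages are hypothetical, invented purely for illustration; they do not represent any vendor’s actual product or rules.

# Minimal sketch of static, signature-based email filtering.
# Indicator strings and messages are hypothetical examples.
KNOWN_SIGNATURES = [
    "verify your account",        # indicators drawn from previously seen attacks
    "invoice attached",
    "your password will expire",
]

def flag_by_signature(email_body: str) -> bool:
    """Flag an email only if it contains a previously seen indicator."""
    body = email_body.lower()
    return any(signature in body for signature in KNOWN_SIGNATURES)

# A lure built around today's headlines shares no string with past attacks,
# so a purely historical filter lets it straight through.
print(flag_by_signature("Reminder: your password will expire today"))       # True
print(flag_by_signature("Click here to see who is infected in your area"))  # False

Because such a filter can only recognize what it has already seen, every novel wave of phishing opens a window during which it is effectively invisible to this kind of defense.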

Download the full report.

This content was created by Insights, the custom content arm of MIT Technology Review. It was not authored by the editorial staff of MIT Technology Review.
