Unlocking the Front Door: Phishing Emails Remain a Top Cyber Threat Despite MFA


It is easier to use a key on the front door than to force an exploit through the rear window. And that key is remarkably easy to obtain, almost just by asking for it, through attacks on users’ email mailboxes.

Abnormal’s email threat analysis for H1 2024 notes that email attacks increased by almost 50% from H2 2023 to H1 2024 (from 139 to 208 attacks per thousand mailboxes).

The analysis (PDF) is based on Abnormal’s own telemetry, drawn from around 2,400 customers across the globe and all industry sectors. The firm analyzes the threats it catches to determine the type of attack, then normalizes the results to a per-thousand-mailboxes metric.
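The normalization and growth arithmetic behind the report’s headline figure can be sketched as follows. This is a hypothetical illustration, not Abnormal’s methodology; only the 139 and 208 figures come from the report.

```python
def attacks_per_thousand(attack_count: int, mailbox_count: int) -> float:
    """Normalize a raw attack count to attacks per 1,000 mailboxes."""
    return attack_count / mailbox_count * 1000

def percent_change(old: float, new: float) -> float:
    """Percentage growth from one period to the next."""
    return (new - old) / old * 100

# Using the report's normalized figures for H2 2023 and H1 2024:
growth = percent_change(139, 208)
print(f"{growth:.1f}% increase")  # ~49.6%, i.e. "almost 50%"
```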

SecurityWeek spoke with Mike Britton, CISO at Abnormal Security, to understand what the ‘human behavior security’ firm has learned about current social engineering and phishing attacks.

The first question is why doesn’t MFA, which is a primary security recommendation, prevent successful phishing? “There are known attacks against MFA,” said Britton: “the MFA fatigue attack, some session attacks, and some MitM attacks. But I think the biggest problem is that very few organizations, especially organizations of any size or scale, have consistently applied MFA 100% of the time on 100% of accounts.”

It should be a minimum bar, but it’s not a silver bullet. “It doesn’t stop all attacks,” he continued. “It doesn’t stop a social engineering attack. It doesn’t stop a fake invoice attack. It is pretty effective against credential phishing in most situations, but not 100%.”

This explains why email attacks remain popular. But it doesn’t necessarily explain the dramatic growth. A big concern today is the likely effect of gen-AI on phishing and social engineering attacks. In theory, it will improve the sophistication and increase the scale of attacks – but although the report discusses AI, it doesn’t attribute the increase in attacks to any current criminal use of AI.

“We know the criminals are using AI,” said Britton. “We have several tools we use on the backend that will say, hey, this looks like it’s AI generated. But that’s not important to the customer – what is important is that we recognize it as an attack.”


He believes the growth owes more to a shift in criminal strategy than to the adoption of AI. “They’re using freely available tools that the enterprise is using as well. They’ve become cleverer at attacking Microsoft customers with Microsoft; attacking Google customers with Google.”

This is the file-sharing attack, which has increased by 350%, year over year. “Threat actors leverage popular platforms and plausible pretexts to impersonate trusted contacts and trick employees into disclosing private information or installing malware. A complex and escalating threat, file-sharing phishing attacks increased by 350% year-over-year, with financial organizations and built environment firms being the most targeted,” notes the report.

“It’s successful because that’s the way the human brain works,” adds Britton. “If I see well-known links like Google Docs and things like that, my brain says, ‘that’s legitimate, because I know Google’. But if I see some weird xyz domain I’ve never seen before, I’m more likely to balk.”

Rather than adopting AI, attackers have also shifted focus to SaaS. “It’s just a fundamental flaw in SaaS to begin with. It’s not a criticism of SaaS providers, but they want people to freely and quickly look at their solutions – so they offer free trials and things like that. There’s nothing to stop an attacker from signing up for a free trial, leveraging it for a short period for an attack, and continuing to rinse and repeat on that type of attack.”

AI might not currently be used to scale attacks, but its potential for more targeted attacks such as BEC and VEC is clear. The former have increased by 51%, and the latter by 41%. The ability to add AI-generated voice to reinforce such attacks is possible but remains rare.

“We’ve had a couple of situations where this has happened, but it’s not common,” said Britton. “It’s not necessary. Criminals are already getting good ROI without AI. Only when more people get better at detecting and preventing these attacks will the attackers need to pivot to different methods. We’re not likely to see any great increase in the use of deepfakes until the criminals actually need them.”

It is the ironic problem with security: our own success is what forces the criminals to get better.

Abnormal Security earlier this month announced that it had raised $250 million in a Series D funding round at a valuation of $5.1 billion.

Related: Stolen Credentials Have Turned SaaS Apps Into Attackers’ Playgrounds

Related: Ex-Employee’s Admin Credentials Used in US Gov Agency Hack

Related: Donald Trump’s Campaign Says Its Emails Were Hacked

Related: Microsoft Says Russian Gov Hackers Stole Source Code After Spying on Executive Emails

