OWASP Beefs Up GenAI Security Guidance Amid Growing Deepfakes


Deepfakes and other generative-AI attacks are becoming less rare, and signs point to a coming onslaught: AI-generated text is already becoming more common in email, and security firms are developing ways to detect messages likely not written by humans. Human-written email has declined to about 88% of all messages, while text attributed to large language models (LLMs) now accounts for about 12%, up from around 7% in late 2022, according to one analysis.

To help organizations develop stronger defenses against AI-based attacks, the Top 10 for LLM Applications & Generative AI group within the Open Worldwide Application Security Project (OWASP) released a trio of guidance documents for security organizations on October 31. To its previously released AI cybersecurity and governance checklist, the group added a guide for preparing for deepfake events, a framework to create AI security centers of excellence, and a curated database on AI security solutions.

While the previous Top 10 guide is useful for companies building models and creating their own AI services and products, the new guidance is aimed at the users of AI technology, says Scott Clinton, co-project lead at OWASP.

Those companies “want to be able to do AI safely with as much guidance as possible — they’re going to do it anyway, because it’s a competitive differentiator for the business,” he says. “If their competitors are doing it, [then] they need to find a way to do it, do it better … so security can’t be a blocker, it can’t be a barrier to that.”


One Security Vendor’s Job Candidate Deepfake Attack

In an example of the kinds of real-world attacks that are now happening, one job candidate at security vendor Exabeam had passed all the initial vetting and moved onto the final interview round — that’s when Jodi Maas, GRC team lead at the company, recognized that something was wrong.

While the human resources group had flagged the initial interview for a new senior security analyst as “somewhat scripted,” the actual interview started with normal greetings. Yet it quickly became apparent that some form of digital trickery was in use. Background artifacts appeared, the female interviewee’s mouth did not match the audio, and she hardly moved or expressed emotion, says Maas, who runs application security and governance, risk, and compliance within Exabeam’s security operations center (SOC).

“It was very odd — just no smile, there was no personality at all, and we knew right away that it was not a fit, but we continued the interview, because [the experience] was very interesting,” she says.


After the interview, Maas approached Exabeam’s CISO, Kevin Kirkwood, and they concluded it had been a deepfake, based on similar video examples. The experience shook them enough that they decided the company needed better procedures to catch GenAI-based attacks, and they embarked on meetings with security staff and an internal presentation to employees.

“The fact that it got past our HR group was interesting … they passed them through because they had answered all the questions correctly,” Kirkwood says.

After the deepfake interview, Exabeam’s Kirkwood and Maas started revamping their processes, following up with their HR group, for example, to let them know to expect more such attacks in the future. For now, the company advises its employees to treat video calls with suspicion (half-jokingly, Kirkwood asked me to turn on my video midway through our interview as proof of humanness; I did).

“You’re going to see this more often now, and you know these are the things you can check for, and these are the things that you will see in a deepfake,” Kirkwood says.

Technical Anti-Deepfake Solutions Are Needed

Deepfake incidents are capturing the imagination — and fear — of IT professionals, with about half (48%) very concerned over deepfakes at present, and 74% believing deepfakes will pose a significant future threat, according to a survey conducted by email security firm Ironscales.


The trajectory of deepfakes is easy to predict: even if they are not good enough to fool most people today, they will be in the future, says Eyal Benishti, founder and CEO of Ironscales. That means human training will likely only go so far. AI videos are getting eerily realistic, and a fully digital twin of another person controlled in real time by an attacker — a true “sock puppet” — is likely not far behind.

“Companies want to try and figure out how they get ready for deepfakes,” he says. “They are realizing that this type of communication cannot be fully trusted moving forward, which … will take people some time to realize and adjust.”

In the future, once the telltale artifacts are gone, better defenses will be necessary, Exabeam’s Kirkwood says.

“Worst case scenario: the technology gets so good that you’re playing a tennis match — you know, the detection gets better, the deepfake gets better, the detection gets better, and so on,” he says. “I’m waiting for the technology pieces to catch up, so I can actually plug it into my SIEM and flag the elements associated with deepfakes.”

OWASP’s Clinton agrees. Rather than focus on training humans to detect suspect video chats, companies should build infrastructure for authenticating that a chat participant is both human and an employee, create processes around financial transactions, and develop an incident-response plan, he says.

“Training people on how to identify deepfakes — that’s not really practical, because it’s all subjective,” Clinton says. “I think there have to be more un-subjective approaches, and so we went through and came up with some tangible steps that you can use, which are combinations of technologies and process to really focus on a few areas.”
