Top 5 Most Dangerous Cyber Threats in 2024


RSA CONFERENCE 2024 – San Francisco – Only five months into 2024, and the year has already been a busy one for cybersecurity practitioners, with multi-year supply chain attacks, nation-state actors exploiting vulnerabilities in network gateways and edge devices, and a string of ransomware incidents against large healthcare entities. What’s ahead for the rest of the year?

At last week’s RSA Conference, Ed Skoudis, the president of the SANS Technology Institute, convened his annual panel of SANS Institute instructors and fellows to dig into the topics that should be top-of-mind for cyber defenders for the remaining months of the year.

“This is my favorite panel of the year because we get to hear from real experts about what’s really going on in the wild and what we can do about it to make our organizations more safe and secure,” Skoudis told attendees.

Unsurprisingly, artificial intelligence (AI) was a recurring theme for almost all the threats identified by the panel. Here are the top 5 threats flagged by SANS experts that enterprises should be worried about in the remaining months of 2024.

Security Impact of Technical Debt

The security cracks left behind by technical debt may not sound like a pressing new threat, but according to Dr. Johannes Ullrich, dean of research for the SANS Technology Institute, the enterprise software stack is at an inflection point for cascading problems. What’s more, “It affects more and more not only just our enterprise applications, but also our security stack,” he said.

Technical debt is the accumulation of work in software engineering or system design that’s left undone or put off until ‘tomorrow’ for the sake of getting a minimum viable product up and running today. The debt may be accrued intentionally to optimize for speed or cost reasons, or it could build up unintentionally due to immature software engineering practices. Either way, it tends to raise a ton of cybersecurity risks as the debt grows.

And according to Ullrich, the rising accrual of technical debt combined with the growing complexity of the software supply chain is increasing the profile of this threat vector.

“Even as a developer myself, it is very easy to say, ‘Hey, this new library doesn’t really have any new features and doesn’t fix any security vulnerabilities, so I’m not going to apply that update,’” he says. “The problem is that five years from now, after you skip 10 to 15 different incremental updates, then the big security vulnerability hits that library and now you have to work through all of these little quirks that have added up over the years so you can fix it.”
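
Ullrich’s example of quietly skipped updates is easy to make concrete. The following is a minimal sketch, not anything shown at the panel: it assumes a hypothetical requirements.txt of pinned versions plus a hand-maintained map of known-current releases (the package names and versions below are made up for illustration), and flags every dependency that has silently fallen behind.

```python
# Minimal sketch: flag pinned dependencies that lag behind a known-current
# release, as a rough proxy for accumulating dependency debt. The file path,
# package names, and versions here are hypothetical placeholders.

# Hand-maintained map of the latest releases you know about (assumption).
current_versions = {
    "requests": (2, 32, 0),
    "cryptography": (42, 0, 5),
    "urllib3": (2, 2, 1),
}

def parse_pin(line: str):
    """Parse a 'package==X.Y.Z' pin; return (name, version tuple) or None."""
    line = line.strip()
    if not line or line.startswith("#") or "==" not in line:
        return None
    name, _, version = line.partition("==")
    parts = tuple(int(p) for p in version.strip().split(".") if p.isdigit())
    return name.strip().lower(), parts

def report_debt(requirements_path: str = "requirements.txt"):
    """Print every pinned package that trails the known-current release."""
    with open(requirements_path) as fh:
        for raw in fh:
            parsed = parse_pin(raw)
            if not parsed:
                continue
            name, pinned = parsed
            latest = current_versions.get(name)
            if latest and pinned < latest:
                print(f"{name}: pinned at {'.'.join(map(str, pinned))}, "
                      f"{'.'.join(map(str, latest))} is available")

if __name__ == "__main__":
    report_debt()
```

In practice, a dependency-audit step in the build pipeline does this continuously; the point is simply that each skipped increment is measurable long before the emergency patch arrives.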

Synthetic Identity in the AI Age

Proving identity, both when new credentials are issued and at each authentication, has been a decades-long struggle for the security industry. That struggle will only be amplified in the AI age, explained Ullrich.

Fake videos and fake audio are being used to impersonate people, Ullrich said, and they will foil many of the biometric authentication methods that have gained steam over the last decade. “The game changer today is not the quality of these impersonations, the game changer is cost. It has become cheap to do this, whereas in the past it was pretty expensive. Now it is a couple dollars versus tens of thousands to create those fakes,” he said.

AI-generated synthetic media is upending many of the innovations that security vendors have made to reduce friction at registration and identity verification, as well as at sign-on. For example, a website called onlyfakes.com not only creates fake IDs, but also pictures that look like the photo you’d take of a driver’s license or similar ID to verify your identity.

“It has a background like a carpet or a piece of wood that looks like someone took that picture in their home,” Ullrich explained. “And this has already been used to impersonate established identity online with some financial assistance.”

In the near term, this is going to put pressure on security practitioners and vendors to keep rethinking how the industry does risk-based identity verification, he warns.
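
One way to picture that rethinking is to treat any single proof, even a convincing ID photo, as just one signal among several. The sketch below is a hypothetical illustration rather than anything Ullrich described: the signal names, weights, and thresholds are assumptions, and the point is simply that a forged document image on its own should not be enough to clear verification.

```python
# Hypothetical sketch of risk-based identity verification: combine several
# independent signals into a score and escalate when the score is low.
# Signal names, weights, and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class VerificationSignals:
    document_check_passed: bool    # template/security-feature check on the ID image
    liveness_check_passed: bool    # active liveness challenge, not a static photo
    device_reputation: float       # 0.0 (suspicious) .. 1.0 (trusted)
    behavioral_consistency: float  # 0.0 .. 1.0, consistency with prior behavior

def risk_score(s: VerificationSignals) -> float:
    """Weighted combination of signals; higher means more likely legitimate."""
    score = 0.3 if s.document_check_passed else 0.0
    score += 0.3 if s.liveness_check_passed else 0.0
    score += 0.2 * s.device_reputation
    score += 0.2 * s.behavioral_consistency
    return score

def decide(s: VerificationSignals) -> str:
    """Map the score to an action; thresholds are illustrative only."""
    score = risk_score(s)
    if score >= 0.8:
        return "allow"
    if score >= 0.5:
        return "step-up"  # e.g., require another factor or manual review
    return "deny"

# A convincing fake ID photo alone scores 0.3 + 0.0 + 0.04 + 0.02 = 0.36.
print(decide(VerificationSignals(True, False, 0.2, 0.1)))  # -> "deny"
```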

Sextortion

The third top threat was a bit of a shocker compared to some of the other, more enterprise-focused issues that SANS usually tackles, but it is a serious one warranting attention from the industry, said Heather Mahalik Barnhart, a SANS faculty fellow and senior director of community engagement for Cellebrite.

“Each year I come up here and I try to just kind of wreck your brain in a different way and that’s probably what this topic is going to do because it’s an edgy threat that nobody wants to admit exists,” she explained. “It’s sextortion and it is running wild and it is getting to be out of control.”

According to Barnhart, criminals are increasingly extorting online denizens with sexual pictures or videos, threatening to release the material unless the victim does what the attackers demand. And in the era of highly convincing AI-generated images, those pictures or videos don’t even need to be real to do the damage.

“The reality is it could be you. It could be me, it could be your children, it could be one of your coworkers,” she says, explaining that sextortion is increasingly linked to teenage suicides, especially among boys aged 10 to 14.

And it isn’t just a personal threat. It could pose an existential risk to the enterprise as well.

“Now think about this. If it’s one of your coworkers, what does that mean for your organization? If an image is posted, whether it’s AI or not, and they’re saying, I’m going to put this on LinkedIn, it’s going on your company’s website, it’s going to get blasted out on YouTube. It’s going everywhere unless you provide me X,” she says. “People are going to consider providing X, and that’s a scary thing.”

That X could be money, or it could be cooperation in providing company access. The point is that, as salacious and personal as it seems, sextortion is very relevant in an enterprise context. Barnhart says there are no easy answers for this growing threat, but the first steps in combating it should be rooted in awareness and training.

“Why don’t we train on extortion? It doesn’t have to be sexual in nature, but extortion in general,” she says, explaining that just as enterprises spend a ton of money on anti-phishing training, they should also consider investing in anti-extortion training.

GenAI Election Threats

As we move into the back half of 2024, the US election will increasingly take the spotlight in online communities, social media, and news coverage. And as it dominates the headlines and the comment threads, fake media manipulation and other GenAI-generated election threats will be ever present across all of the major platforms, warns Terrence Williams, a SANS instructor and security engineer for AWS.

“You can thank 2024 for giving us the blessing of GenAI plus an election,” Williams wryly joked. “You know how well we handle those things, so we need to understand what we’re coming up against right now.”

The threat of deepfake media is no longer a future threat; it is a here-and-now problem, warned Williams. GenAI can now be used to generate custom campaign materials and to power automated robocall operations for legitimate reasons, but also not just to mislead voters but to feed them outlandish fabrications. That is only going to further erode already tenuous trust in the election process. The answer is going to be collaboration among security researchers, academia, technology players, and political stakeholders.

“Right now we have our academia looking to research, seeing how they could develop something that is detecting if an image or a video is AI generated,” he said. “Hopefully that is going to be available in time for tech companies who are doing the innovation to ensure that when they roll out their new solutions, there’s going to be some type of safety net that we can rely on as citizens.”

He says that it’s up to public-private interests to make sure average voters understand that we’re in an age of “trust but verify,” meaning that people need to be meticulous about the sources and the fact checking that they use. At the same time, innovators and politicians must also be held accountable for building safeguards into the technology and the processes that disseminate information, he said.

Offensive AI as Threat Multiplier

Finally, the fifth top threat was offensive AI as a multiplier of existing cyber threats, which was unpacked by Stephen Sims, a SANS fellow and longtime offensive security researcher. Offensive AI is the use of AI and automation by the bad guys to more quickly identify vulnerabilities and targets and to automate the generation of exploits and attack campaigns. As GenAI grows more sophisticated, even the most non-technical cyber attackers now have a flexible arsenal of tools at their fingertips to quickly get malicious campaigns up and running.

“The speed at which we can now discover vulnerabilities and weaponize them is extremely fast and it’s getting faster,” Sims said.

He said his offensive security research has shown that GenAI makes it dead simple to automate patch “diffing,” that is, identifying the changes between patched and unpatched versions of a binary in order to reverse engineer exactly where a security fix was made and then exploit the flaw on systems that haven’t yet been patched. Similarly, GenAI can speed up weaponization of those findings, and combined with automation tools it is easy to do all of the “technical stuff that would normally take us so long to do manually,” Sims said.
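
To see why diffing is such a force multiplier, it helps to look at what a diff actually exposes. The sketch below is a deliberately simplified, text-level stand-in; real patch diffing works on compiled binaries with disassembly tooling, and the C snippets and file names here are hypothetical. Even so, it shows how a diff points straight at the lines a security fix touched, which is the starting point for reasoning backwards to the underlying bug.

```python
# Simplified illustration of patch "diffing": compare two versions of code to
# localize what a security fix changed. The C snippets and file names are
# hypothetical stand-ins for pre- and post-patch versions of a binary.

import difflib

unpatched = """\
int read_record(char *dst, const char *src, int len) {
    memcpy(dst, src, len);          /* no bounds check */
    return len;
}
"""

patched = """\
int read_record(char *dst, const char *src, int len) {
    if (len > MAX_RECORD) return -1; /* fix: reject oversized input */
    memcpy(dst, src, len);
    return len;
}
"""

# The unified diff immediately highlights the added bounds check, which is
# exactly the signal an attacker (or defender) uses to find the original flaw.
for line in difflib.unified_diff(
    unpatched.splitlines(), patched.splitlines(),
    fromfile="vuln.c (pre-patch)", tofile="vuln.c (post-patch)", lineterm="",
):
    print(line)
```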

“So, if you’re a defender, this is the big takeaway: (you have to think about) how you’re going to be able to defend against the speed that we’re up against and the automation and the intelligence that’s just going to get better and better,” he said. “We can do the same style of automation on the defensive side.”
