Noma Security Raises $32 Million to Safeguard Gen-AI Applications

Share This Post

Tel Aviv, Israel based Noma Security has emerged from stealth mode with $32 million in Series A funding led by Ballistic Ventures. 

The new funding follows previously undisclosed seed funding led by Glilot Capital Partners, with participation from Cyber Club London. Dozens of angel investors have also supported Noma’s growth.

Noma provides a platform to protect the data and lifecycle of emerging gen-AI applications, which introduces new threats not covered by existing security controls. “We’re already seeing organizations compromised by misconfigured data pipelines and vulnerable and malicious open source models,” explains Niv Braun, co-founder and CEO of Noma. “It’s only a matter of time before we see AI’s equivalent of SolarWinds or Log4Shell. There’s an urgent need for a new security solution that holistically covers the Data & AI Lifecycle.”

Braun explained the issues to SecurityWeek. There are many different security controls that protect the development and runtime of traditional classic software applications; “But when we look at the Data and AI Lifecycle, it’s truly a different process. To build a working model, you need to train it on data. You need to collect that data and prepare it – you need to make sure that all the data is in the right format and that the different data sets can be correlated.”

This is the data preparation performed by data engineers. Next you have the modeling performed by data scientists. “They define the different configurations and deep learning parameters that eventually become a machine learning model or a gen AI model. But when we look at the process, it includes different risks and vulnerabilities – like code that is never scanned because data scientists work differently than software developers, and their code is stored in different places.”

This is just part of the new risk coming from AI development, and especially with the new emphasis on gen-AI as opposed to machine learning (which is relatively well understood). For gen-AI, many firms download existing models from Hugging Face. These models have not been scanned by the existing classic application security tools because they are a different technology with different objects, and the classic tools don’t know how to scan them. This introduces another new supply chain risk, similar in some ways to the OSS supply chain risk but requiring a different solution.

Braun added a further new risk – the statistical rather than deterministic nature of a gen-AI application. “Classic software is what we call deterministic,” he said. “If you input an x in classic software, you know what it’s going to respond – it’s going to be y or z. AI models are different; they’re statistical. You can go to a model like ChatGPT, and input x three times and you get three different responses. It has options and you cannot absolutely predict which option it will give for its response.”

Because of this, he continued, you have completely new risks. “The new risks called prompt injection and jailbreaking can use crafted inputs to manipulate the statistical reasoning of the model to return data or do other stuff it was never meant to.”

Advertisement. Scroll to continue reading.

There are tools that exist and can help different parts of the AI development lifecycle. But Noma offers a single platform that helps secure the entire process. “If you speak to security guys today,” continued Braun, “you’ll find they need four different tools from four different vendors. The single Noma platform provides complete end to end security for the new Data & AI lifecycle.”

The Noma website elaborates on this: “The Noma platform extends all the way to production, delivering real-time monitoring, blocking, sensitive data masking, and alerting to defend against AI adversarial attacks and data leakage and enforce safety guardrails aligned with your organizational and app-specific policies.”

Noma, headquartered in Herzliya, Tel Aviv, Israel, was founded in 2023 by Niv Braun (CEO) and Alon Tron (CTO). Both are former members of the IDF’s 8200 intelligence unit.

Related: Researchers Bypass ChatGPT Safeguards Using Hexadecimal Encoding and Emojis

Related: ‘Deceptive Delight’ Jailbreak Tricks Gen-AI by Embedding Unsafe Topics in Benign Narratives

Related: Microsoft Details ‘Skeleton Key’ AI Jailbreak Technique

Related: New Scoring System Helps Secure the Open Source AI Model Supply Chain

This post was originally published on this site

More Articles

Article

Navigating SEC Regulations In Cybersecurity And Incident Response

Free video resource for cybersecurity professionals. As 2024 approaches, we all know how vital it is to keep up to date with regulatory changes that affect our work. We get it – it’s a lot to juggle, especially when you’re in the trenches working on an investigation, handling, and responding to incidents.