New Scoring System Helps Secure the Open Source AI Model Supply Chain


Artificial intelligence models from Hugging Face can contain hidden problems similar to those found in open source software downloaded from repositories such as GitHub.

Endor Labs has long been focused on securing the software supply chain. Until now, that focus has largely been on open source software (OSS). Now the firm sees a new software supply chain threat with similar issues and risks: the open source AI models hosted on, and available from, Hugging Face.

Like OSS, the use of AI is becoming ubiquitous; but, as in the early days of OSS, our knowledge of the security of AI models is limited. “In the case of OSS,” the firm notes, “every software package can bring dozens of indirect or ‘transitive’ dependencies, which is where most vulnerabilities reside. Similarly, Hugging Face offers a vast repository of open source, ready-made AI models, and developers focused on creating differentiated features can use the best of these to speed their own work.”

But, it adds, the risks are also similar. “Pre-trained AI models from Hugging Face can harbor serious vulnerabilities, such as malicious code in files shipped with the model or hidden within model ‘weights’.”

AI models from Hugging Face can suffer from a problem similar to the OSS dependency problem. “AI models are typically derived from other models,” writes George Apostolopoulos, founding engineer at Endor Labs, in an associated blog. “For example, models available on Hugging Face, such as those based on the open source LLaMA models from Meta, serve as foundational models. Developers can then create new models by refining these base models to suit their specific needs, creating a model lineage.”

He continues, “This process means that while there is a concept of dependency, it is more about building upon a pre-existing model rather than importing components from multiple models. Yet, if the original model has a risk, models that are derived from it can inherit that risk.”

Just as unwary users of OSS can import hidden vulnerabilities, so unwary users of open source AI models can import hidden risks. With Endor’s proclaimed mission to create secure software supply chains, it is natural that the company should turn its attention to open source AI. It has done so with the release of a new product it calls Endor Scores for AI Models.

Apostolopoulos explained the process to SecurityWeek. “As we’re doing with open source, we do similar things with AI. We scan the models; we scan the source code. Based on what we find there, we have developed a scoring system that gives you an indication of how safe or unsafe any model is. Right now, we compute scores in security, in activity, in popularity and quality.”
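Endor Labs has not published the formula behind these scores, but a minimal, purely hypothetical sketch in Python of how per-dimension signals might be rolled up could look something like the following. Every metric name, weight, and threshold here is invented for illustration; this is not Endor Labs’ actual methodology.

from dataclasses import dataclass

@dataclass
class ModelSignals:
    # Hypothetical inputs gathered about a Hugging Face model;
    # these are NOT Endor Labs' actual metrics or formula.
    security_findings: int      # suspicious files or weight anomalies detected
    days_since_update: int      # proxy for development activity
    downloads_last_month: int   # proxy for popularity
    has_model_card: bool        # crude proxy for documentation quality

def score_model(s: ModelSignals) -> dict:
    """Roll the raw signals up into illustrative 0-10 scores per dimension."""
    security = max(0.0, 10.0 - 3.0 * s.security_findings)
    activity = 10.0 if s.days_since_update <= 30 else max(0.0, 10.0 - s.days_since_update / 30.0)
    popularity = min(10.0, (s.downloads_last_month ** 0.5) / 10.0)
    quality = 8.0 if s.has_model_card else 3.0
    return {"security": security, "activity": activity,
            "popularity": popularity, "quality": quality}

# Example: a recently updated, widely downloaded model with no security findings.
print(score_model(ModelSignals(0, 12, 40_000, True)))

The point of such a breakdown is that a single number hides too much: a model can be popular and actively maintained yet still fail the security dimension, and vice versa.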


The idea is to capture information on almost everything relevant to trust in the model. “How active is the development, how often it is used by other people; that is, downloaded. Our security scans check for potential security issues including within the weights, and whether any supplied example code contains anything malicious – including pointers to other code either within Hugging Face or in external potentially malicious sites.”
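As an illustration of the kind of check such a scan might perform on model files, the short sketch below inspects a raw pickle file (the legacy serialization format used by many PyTorch weights) for opcodes that can execute arbitrary code when the file is loaded. It is a deliberately crude example, not Endor Labs’ scanner: newer zip-based checkpoints would need unpacking first, and a real tool would compare the imported callables against an allowlist rather than simply flag opcode names, since legitimate checkpoints also use some of these opcodes to rebuild tensors.

import pickletools

# Pickle opcodes that can import and invoke arbitrary callables at load time.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def risky_pickle_opcodes(path: str) -> set:
    """Return the set of potentially dangerous opcodes found in a pickle file."""
    found = set()
    with open(path, "rb") as f:
        for opcode, arg, pos in pickletools.genops(f):
            if opcode.name in SUSPICIOUS_OPCODES:
                found.add(opcode.name)
    return found

# Usage (the file name is illustrative only):
# hits = risky_pickle_opcodes("pytorch_model.bin")
# if hits:
#     print("Potentially dangerous pickle opcodes:", sorted(hits))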

One area where open source AI problems differ from OSS issues is that he doesn’t believe accidental but fixable vulnerabilities are the primary concern. “I think the main risk we’re talking about here is malicious models, that are specifically crafted to compromise your environment, or to affect the outcomes and cause reputational damage. That’s the main risk here. So, an effective program to evaluate open source AI models is primarily to identify the ones that have low reputation. They’re the ones most likely to be compromised or malicious by design to produce toxic outcomes.”

But it remains a difficult subject. One example of hidden issues in open source models is the danger of importing regulation failures. This is an ongoing problem, since governments are still struggling with how to regulate AI. The current flagship regulation is the EU AI Act. However, new and separate research from LatticeFlow, using its own LLM checker to measure the conformance of the big LLM models (such as OpenAI’s GPT-3.5 Turbo, Meta’s Llama 2 13B Chat, Mistral’s Mixtral 8x7B Instruct, Anthropic’s Claude 3 Opus, and more), is not reassuring. Scores range from 0 (complete disaster) to 1 (complete success); but according to LatticeFlow, none of these LLMs are compliant with the AI Act.

If the big tech firms cannot get compliance right, how can we expect independent AI model developers to succeed, especially since many if not most start from Meta’s Llama? There is no current solution to this problem. AI is still in its wild west stage, and nobody knows how regulations will evolve. Kevin Robertson, COO of Acumen Cyber, comments on LatticeFlow’s conclusions: “This is a great example of what happens when regulation lags technological innovation.” AI is moving so fast that regulations will continue to lag for some time.

None of this solves the compliance problem (because there is currently no solution), but it does make the use of something like Endor Scores more important. The Endor rating gives users a solid position to start from: we can’t tell you about compliance, but this model is otherwise trustworthy and less likely to be unethical.

Hugging Face provides some information on how data sets are collected: “So you can make an educated guess if this is a reliable or a good data set to use, or a data set that may expose you to some legal risk,” Apostolopoulos told SecurityWeek. How the model scores in overall security and trust under Endor Scores tests will further help you decide whether to trust, and how much to trust, any specific open source AI model today.

Nevertheless, Apostolopoulos finished with one piece of advice. “You can use tools to help gauge your level of trust: but in the end, while you may trust, you must verify.”

Related: Secrets Exposed in Hugging Face Hack

Related: AI Models in Cybersecurity: From Misuse to Abuse

Related: AI Weights: Securing the Heart and Soft Underbelly of Artificial Intelligence

Related: Software Supply Chain Startup Endor Labs Scores Massive $70M Series A Round

