Open Source LLM Tool Sniffs Out Python Zero-Days

Researchers at Protect AI released Vulnhuntr, a free, open source tool that can find zero-day vulnerabilities in Python codebases using Anthropic’s Claude AI model.

The tool, available on GitHub, provides detailed analysis of the code, proof-of-concept exploits for the vulnerabilities identified, and confidence ratings for each flaw, Protect AI said in its announcement.

Vulnhuntr breaks the codebase into smaller chunks rather than feeding the entire file to the LLM at once. By analyzing the code in a loop, the tool can map the application's flow from user input to server output. This lets the LLM focus on specific sections of the codebase, which the research team said helps reduce both false positives and false negatives.
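To make the chunk-and-loop idea concrete, here is a rough sketch (not Vulnhuntr's actual implementation) of how a Python file might be split into function-sized chunks and each chunk passed to an LLM with a prompt that asks it to trace user input toward dangerous sinks. The function names (chunk_functions, ask_llm, analyze_file) and the prompt wording are illustrative assumptions, and the LLM call is left as a stub to be wired to whatever client you use.

import ast
from pathlib import Path

def chunk_functions(source: str) -> list[str]:
    """Return each top-level function/class in the file as a separate chunk."""
    tree = ast.parse(source)
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            chunks.append(ast.get_source_segment(source, node))
    return [c for c in chunks if c]

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM such as Claude; plug in a real client here."""
    raise NotImplementedError("wire up your LLM client of choice")

def analyze_file(path: str) -> list[str]:
    """Analyze one file chunk by chunk, collecting the model's findings."""
    source = Path(path).read_text()
    findings = []
    for chunk in chunk_functions(source):
        prompt = (
            "Trace any user-controlled input in this code toward dangerous sinks "
            "(file writes, subprocess calls, SQL queries, outbound requests). "
            "Report suspected LFI/SSRF/RCE/SQLi issues with a confidence score.\n\n"
            f"{chunk}"
        )
        findings.append(ask_llm(prompt))
    return findings

Looping over small, focused chunks like this keeps each prompt within the model's effective context and gives it a narrower question to answer, which is the intuition behind the tool's claimed reduction in false positives and negatives.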

The tool currently focuses on the following remotely exploitable vulnerability classes: arbitrary file overwrite (AFO), local file inclusion (LFI), server-side request forgery (SSRF), cross-site scripting (XSS), insecure direct object references (IDOR), SQL injection (SQLi), and remote code execution (RCE).

Vulnhuntr’s team claimed the tool has already discovered more than a dozen zero-day vulnerabilities in popular Python projects on GitHub such as gpt_academic, FastChat, and Ragflow.
