CISA Rolls Out New Guidelines to Mitigate AI Risks to US Critical Infrastructure

The US government’s cybersecurity agency CISA has rolled out a series of guidelines aimed at beefing up the safety and security of critical infrastructure against AI-related threats.

The newly released guidelines categorize AI risks into three significant types: the utilization of AI in attacks on infrastructure, targeted assaults on AI systems themselves, and failures within AI design and implementation that could jeopardize infrastructure operations.

The CISA guidelines advocate a four-part mitigation strategy, the first element of which is a robust organizational culture of AI risk management.

This organizational culture, CISA argues, must emphasize the importance of safety and security outcomes, promote radical transparency, and create structures that prioritize security as a core business directive.

The guidelines also call for a focus on mapping, in which organizations develop a deep understanding of their unique AI usage context and risk profile so they can tailor risk evaluation and mitigation efforts effectively.

The cybersecurity agency, which is housed in the Department of Homeland Security (DHS), is also pushing for the implementation of systems to assess, analyze, and continuously monitor AI risks and their impacts, utilizing repeatable methods and measurable metrics.

The guidelines call on management to act decisively on identified AI risks to enhance safety and security, ensuring that risk management controls are implemented and maintained to optimize the benefits of AI systems while minimizing adverse effects.

Digging a bit deeper, CISA categorizes the threats into three distinct types:

  • Attacks Using AI: The use of AI to enhance, plan, or scale physical attacks on, or cyber compromises of, critical infrastructure.
  • Attacks Targeting AI Systems: Targeted attacks on AI systems supporting critical infrastructure.
  • Failures in AI Design and Implementation: Deficiencies or inadequacies in the planning, structure, implementation, or execution of an AI tool or system leading to malfunctions or other unintended consequences that affect critical infrastructure operations.

“Although these guidelines are broad enough to apply to all 16 critical infrastructure sectors, AI risks are highly contextual. Therefore, critical infrastructure owners and operators should consider these guidelines within their own specific, real-world circumstances,” the agency said.

Related: SecurityWeek AI Risk Summit — June 25-26, Half Moon Bay, CA

Related: Biden, Harris Meet With CEOs About AI Risks

Related: Security Experts Describe AI Technologies They Want to See

Related: First Major Attempts to Regulate AI Face Headwinds From All Sides
