How to Improve the Security of AI-Assisted Software Development


By now, it’s clear that the artificial intelligence (AI) “genie” is out of the bottle – for good. This extends to software development: a GitHub survey shows that 92 percent of U.S.-based developers are already using AI coding tools both in and outside of work. They say these tools help them improve their skills (cited by 57 percent), boost productivity (53 percent), focus on building and creating instead of repetitive tasks (51 percent) and avoid burnout (41 percent).

It’s safe to say that AI-assisted development will only become more of a norm in the near future. Organizations will have to establish policies and best practices to manage it effectively, just as they’ve done with cloud deployments, Bring Your Own Device (BYOD) and other tech-in-the-workplace trends. But such oversight remains a work in progress. Many developers, for example, engage in “shadow AI” by using these tools without the knowledge or approval of their organization’s IT department or management.

Those managers include chief information security officers (CISOs), who are responsible for setting the guardrails so that developers understand which AI tools and practices are acceptable and which aren’t. CISOs need to lead a transition from the uncertainty of shadow AI to a known, controlled and well-managed Bring Your Own AI (BYOAI) environment.

The time for that transition is now, as recent academic and industry research reveals a precarious state: 44 percent of organizations are concerned about risks related to AI-generated code, according to the State of Cloud-Native Security Report 2024 (PDF). Research from Snyk shows that 56 percent of software and security team members say insecure AI suggestions are common. Four in five developers bypass security policies to use AI (i.e., shadow AI), but only one in ten is scanning most of their code, often because scanning adds more cycles to code review and slows overall workflows.

In a Stanford University study, researchers found that a mere 3 percent of developers using an AI assistant wrote secure products, compared with 21 percent of those without AI access. And 36 percent of those with AI access created products vulnerable to SQL injection, compared with 7 percent of those without.
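To make that risk concrete, here is a minimal Python sketch of the flaw pattern the study measured. The function names and schema are illustrative, not drawn from the research; the first version reflects the string-concatenation habit often seen in AI suggestions, while the second shows the standard parameterized fix.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern: user input is spliced directly into the SQL
    # string, so a value like "' OR '1'='1" rewrites the query's logic.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver binds the value separately from
    # the SQL text, so it can never be interpreted as SQL.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```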

A well-conceived and well-executed BYOAI strategy would go a long way toward helping CISOs overcome these challenges as developers leverage AI tools to crank out code at a rapid pace. With close collaboration between security and coding teams, CISOs will no longer stand outside the coding environment with zero awareness of who is using what. They will cultivate a culture in which developers recognize they cannot trust AI blindly, because doing so will lead to a multitude of issues down the road. Many teams already know the cost of “working backwards” to fix poor coding and security decisions that weren’t addressed from the start; AI security awareness should make that lesson even more obvious for developers going forward.

So how do CISOs reach this state? By incorporating the following practices and perspectives:

Establish visibility. The surest way to eliminate shadow AI is to remove AI from the shadows, right? CISOs need to acquire “lay of the land” visibility into which tools developer teams are using, which they aren’t, and why. With this, they will have a solid sense of where the code is coming from and whether any AI involvement is introducing cyber risks.
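One lightweight way to start, sketched below in Python, is to inventory the configuration files that common coding assistants leave behind in source repositories. This is a starting point under stated assumptions, not a complete answer: the marker filenames and the /srv/git path are illustrative and would need tuning to your environment, and it won’t catch tools that leave no repository-level footprint.

```python
from pathlib import Path

# Illustrative markers only: files that often indicate an AI coding
# assistant has been configured in a repository. Tune for your estate.
AI_TOOL_MARKERS = {
    ".cursorrules": "Cursor",
    ".aider.conf.yml": "Aider",
    "CLAUDE.md": "Claude Code",
    ".github/copilot-instructions.md": "GitHub Copilot",
}

def inventory_ai_tools(repo_root: str) -> dict[str, list[str]]:
    """Map each detected tool to the repositories (local clones
    under repo_root) where its marker file was found."""
    found: dict[str, list[str]] = {}
    for repo in Path(repo_root).iterdir():
        if not repo.is_dir():
            continue
        for marker, tool in AI_TOOL_MARKERS.items():
            if (repo / marker).exists():
                found.setdefault(tool, []).append(repo.name)
    return found

if __name__ == "__main__":
    # Hypothetical path to a directory of local repository clones.
    for tool, repos in sorted(inventory_ai_tools("/srv/git").items()):
        print(f"{tool}: {len(repos)} repo(s) - {', '.join(repos)}")
```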


Strike a security/productivity balance. CISOs cannot keep teams from finding their own tools – nor should they. Instead, they must seek a fine balance between productivity and security. They need to be willing to allow relevant AI-related activity within certain boundaries if it helps teams meet production goals with minimal, or at least acceptable, risk.

In other words, as opposed to adopting a “Department of No” mentality, CISOs should approach the creation of guidelines and endorsed processes for their developer teams with a mindset of, “We appreciate that you’re discovering new AI solutions that will enable you to create software more efficiently. We just want to ensure your solutions won’t cause security problems that ultimately hinder productivity. So let’s work on this together.”

Measure it. Again, in the spirit of collaboration, CISOs should work with coding teams to come up with key performance indicators (KPIs) that measure both the productivity and reliability/safety of software. The KPIs should answer the questions, “How much are we producing with AI? How quickly are we doing it? Is the security of our processes getting better, or worse?”

Bear in mind that these are not “security” KPIs. They are “organizational” KPIs and must align with company strategies and goals. In the best of all possible worlds, developers will perceive the KPIs as something that better informs them rather than something that burdens them. They will recognize that KPIs help them reach “more/faster/better” levels while keeping the risk factor in check.
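As a concrete starting point, here is a short Python sketch that derives three such indicators from counts most teams already collect. The metric names and inputs are illustrative assumptions, not an industry standard; the point is that one small dataset can answer the productivity and security questions at the same time.

```python
from dataclasses import dataclass

@dataclass
class SprintStats:
    # Hypothetical inputs a team might pull from its repo hosting
    # and security scanning tools at the end of each sprint.
    merged_prs: int        # total pull requests merged
    ai_assisted_prs: int   # PRs flagged or self-reported as AI-assisted
    scanned_prs: int       # PRs that went through security scanning
    findings: int          # security findings raised on merged code

def kpis(s: SprintStats) -> dict[str, float]:
    # How much are we producing with AI, how much of it is scanned,
    # and is security getting better or worse over time?
    return {
        "ai_assist_rate": s.ai_assisted_prs / s.merged_prs,
        "scan_coverage": s.scanned_prs / s.merged_prs,
        "findings_per_pr": s.findings / s.merged_prs,
    }

print(kpis(SprintStats(merged_prs=40, ai_assisted_prs=22,
                       scanned_prs=31, findings=5)))
```

Tracked sprint over sprint, a rising ai_assist_rate alongside flat or falling findings_per_pr is exactly the “more/faster/better” with risk in check outcome described above.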

Developer teams may be more on board with a “security first” partnership than CISOs anticipate. In fact, developers rank security review at the top of their priority list when deploying AI coding tools, along with code review. They also believe collaboration results in cleaner, more secure code.

That’s why CISOs should move forward quickly with an AI visibility and KPI plan that strikes a “just right” balance between security and productivity. The genie, after all, isn’t going back into the bottle – ever. So it’s critical to ensure the genie brings out our best work without introducing unnecessary risks.

