Can AI be Meaningfully Regulated, or is Regulation a Deceitful Fudge?


Governments are rushing to regulate artificial intelligence. Is meaningful regulation currently possible?

AI is the new wild west of technology. Everybody sees enormous potential (or profit) and huge risks (to both business and society). But few people understand AI, how to use or control it, or where it is going. Yet politicians wish to regulate it.

We cannot deny that AI, currently in the form of generative AI (gen-AI) large language models (LLMs), is here and is here to stay. This is the beginning of a new journey: but are we on a runaway horse that we can neither steer nor control, or can we rein it in through regulation?

Gen-AI is controlled by Big Tech, and Big Tech is driven by profit rather than user benefit. We can see the problems this can cause by looking at Microsoft and OpenAI. Similar problems and pressures will exist within all Big Tech companies heavily invested in developing AI. 

The purpose of this analysis is not to be critical, but to demonstrate the complexities in developing and funding large-scale gen-AI, and by inference (pun intended), the difficulties for regulation.

OpenAI was founded in 2015, describing itself as a non-profit artificial intelligence research company. Sam Altman is one of the co-founders. In 2019, Microsoft invested $1 billion in OpenAI.

At this time, the company was overseen by a board of directors whose role was, in part, to ensure OpenAI operated within its stated intention to develop safe and beneficial AI. One of the directors was Helen Toner, formerly a research affiliate with Oxford University’s Center for the Governance of AI, and subsequently director of strategy at the Center for Security and Emerging Technology (CSET). Her knowledge and experience aligned perfectly with ensuring OpenAI kept to its founding purposes.

This is not the place to discuss the problems that developed between Altman, Toner, and the rest of the board in any detail, but suffice it to say that Altman gave the board no prior notice of his intention to release ChatGPT in November 2022. Toner only learned after the event via Twitter. Toner and the rest of the board concluded that the only way to keep OpenAI to its principles was to remove Altman – which it did on November 17, 2023. 

Helen Toner, director of strategy at CSET.

Less than one week later, on November 22, 2023, he was reinstated. And within two months, Microsoft further invested in OpenAI. (The timeline might suggest, but does not prove, a connection between these two events.) The precise amount, terms and conditions of the investment have not been disclosed, but it is thought the total sum might amount to upwards of $13 billion. It is unrealistic to believe that Microsoft (a for-profit business) exerts no influence on OpenAI (originally born as a non-profit organization). On the day this article was published, it was reported that Microsoft would give up its observer seat on the OpenAI board due to regulatory scrutiny.

Microsoft will need to recoup its investment. This brings in new problems. Everybody assumes that AI will generate huge profits, but nobody really knows yet how this can be achieved. And Microsoft has its own problems. 

In May 2024, Microsoft published a paper titled Responsible AI Transparency Report – How we build, support our customers, and grow. It includes the following statements. Firstly, “In 2016, our Chairman and CEO, Satya Nadella, set us on a clear course to adopt a principled and human-centered approach to our investments in Artificial Intelligence (AI).” This is from the foreword by Brad Smith and Natasha Crampton.

Secondly, in the body of the report, it says, “In this report, we share how we build generative applications responsibly, how we make decisions about releasing our generative applications, how we support our customers as they build their own AI applications, and how we learn and evolve our responsible AI program.”

This report was published after the April 2024 report from the Cyber Safety Review Board (CSRB), which had stated, “Microsoft’s security culture was inadequate and requires an overhaul, particularly in light of the company’s centrality in the technology ecosystem and the level of trust customers place in the company to protect their data and operations.”

Since then, Smith told a US House Homeland Security Committee in June 2024, “We accept responsibility for each and every finding in the CSRB report.” At this time, Microsoft was planning a new gen-AI product called Recall, to be delivered to PCs with the feature enabled by default. This is the product that would capture and store frequent screenshots – an inexpensive and effective way of generating gen-AI training data.

Recall caused a public and professional outcry, and Microsoft was forced to switch to default-off. Many security experts still consider that Recall is an invitation to security disaster, and on June 13, 2024, the firm announced it would delay the roll-out of Recall.

Almost simultaneously, but separately, Microsoft announced, “GPT Builder is being retired. Important: Microsoft will remove the ability to create GPTs starting July 10, 2024, and then remove all GPTs (created by Microsoft and by customers) along with their associated GPT data also starting July 10, 2024, through July 14, 2024.”

This brief and limited history of OpenAI and Microsoft demonstrates two fundamentals: Big Tech has latched on to the potential for AI to increase profits, but has not yet learned how to harness that potential. The danger is that the search for profit will come at the cost of AI users.

Regulation is the go-to method for curbing the effects of out-of-control, runaway horses.

Few people question the need for AI regulation. Rik Ferguson, VP of security intelligence at Forescout, points out, “It is incumbent upon the democratic societies of the world to act in the best interests of their populations, regulating the use cases for AI, ensuring that it is fit for purpose and as free from bias and error as possible.”

Rik Ferguson, VP of Security Intelligence at Forescout.

In May 2024, Toner spoke with Bilawal Sidhu on the TED AI Show podcast. She explained what happened while she was at OpenAI, and outlined some examples of where and why AI must be as ‘fit for purpose and as free from bias and error as possible’.

The talk started with Sidhu introducing the problem. “The OpenAI saga is all about AI board governance and incentives being misaligned among some really smart people. It also shows us why trusting tech companies to govern themselves may not always go beautifully, which is why we need external rules and regulations.”

Toner provided a few specific examples on this need for regulation, including, “If people are using AI to decide who gets a loan, to decide who gets parole, to decide who gets to buy a house, then you need that technology to work well. If that technology is going to be discriminatory, which AI often is, you need to make sure that people have recourse… There are already people who are being directly impacted by algorithmic systems and AI in really serious ways.”

This is the fundamental purpose of regulation – to give innocent victims a means of redress when mistreated by technology. The need for AI regulation is clear. The real question is whether the regulation itself can be fit for purpose.

There are two basic regulation models: monolithic and horizontal, or patchwork and vertical. The first attempts to provide a single overall regulation covering all aspects of the subject for all organizations in all parts of the jurisdiction. The classic example would be GDPR from the European Union.

The patchwork approach is used by federal agencies in the US. Different agencies have responsibility for different verticals and can therefore introduce regulations more relevant to specific organizations. For example, the FCC regulates interstate and international communications, the SEC regulates capital markets and protects investors, and the FTC protects consumers and promotes competition.

Both models have strengths and weaknesses, and both models have their share of failures. GDPR, for example, is often thought to have failed in its primary purpose of protecting personal privacy from abuse by Big Tech.

Ilia Kolochenko, chief architect and CEO at ImmuniWeb, attorney-at-law with Platt Law LLP, and adjunct professor of cybersecurity & cyber law at Capitol Technology University, goes further. “I’d be even more categorical,” he says. “GDPR is foundationally broken. It was created with good and laudable intent, but it has failed in its purpose. Small and medium businesses are wasting their time and resources on pseudo-compliance; it is misused as a method of silencing critics; and large companies and Big Tech are not conforming to it.”

Ilia Kolochenko, chief architect and CEO at ImmuniWeb.

The basic problem for GDPR is that individual users have ineffective recourse to redress – Big Tech can simply throw resources (money and lawyers) at the problem. “They hire the best lawyers to intimidate both the plaintiffs and the authorities, and they send so many documents the plaintiff abandons the complaint or settles for a very modest amount,” continued Kolochenko.

The danger is that the EU’s recent monolithic AI Act will go the same way as GDPR. Kolochenko prefers the US model. He believes the smaller, more agile method of targeted regulations used by US federal agencies can provide better outcomes than the unwieldy and largely static monolithic approach adopted by the EU.

He is not alone in believing that sector-specific regulation would be better than a ‘one-size-fits-all’ approach. In Truly Risk-Based Regulation of Artificial Intelligence, published on June 16, 2024, Martin Ebers (Professor of IT Law at the University of Tartu, Estonia) writes: “Regulators should tailor regulations based on the specific risks associated with different AI applications in various sectors.”

Kolochenko believes the US model provides both risk-based regulation and better support for end users. 

Of course, agency-based regulations are not perfect – consider the current concerns over the SEC disclosure rules – but he believes they can and do rapidly improve. He points to the effect of the LabMD case against the FTC. The FTC sought to require a complete overhaul of LabMD’s data security following breaches in 2005 and 2012. LabMD appealed, and the court ruled that the FTC couldn’t require a complete security overhaul without specifying the exact inadequacies of LabMD’s practices.

“Since then, the FTC has effectively increased its technical and legal teams,” continues Kolochenko. “Now, if you read their settlements, you see training, data minimization, penetration testing, vulnerability scanning, backups, resilience – all kinds of details.” Agency rules are inherently easier to adapt to evolving circumstances than monolithic laws.

“I guess with the SEC we’ll have a similar source of knowledge soon. Honestly, I don’t think it will be a big challenge to define ‘a material cybersecurity incident’. It will be doable. I suspect that defining the requirements for AI rules will be equally doable.” And equally adaptable going forward.

The big difference for Kolochenko is that with the AI Act (and GDPR), the wronged must go to court and prove their case against mega-rich companies; while the US model requires each individual company, large or small, to state effectively under oath and subject to personal legal repercussions: “We have done no wrong.” It’s a question of reversing the onus. Lying to the agency could lead to criminal liability for wire fraud.

AI is already here, and it is moving faster than legislators can legislate. Since retrospective (or retroactive) legislation is disfavored if not disallowed, new regulation is based on the regulators’ assumptions about the future development and use of AI. This explains why the AI Act concentrates on the inference (or use) rather than the creation of gen-AI models – the models already exist, and the data used to train them has already been ‘stolen’.

“I think this is the biggest robbery in the history of humanity,” comments Kolochenko. “What the big gen-AI vendors did was simply scrape everything they could from the internet, without paying anyone anything and without even giving credit.” Arguably, this should have been prevented by the ‘consent’ elements of existing privacy regulation – but it wasn’t. 

Once scraped, the content is converted into tokens and becomes the ‘intelligence’ of the model (the weights, just billions or trillions of numbers). It is effectively impossible to determine who said what, but what was said is jumbled up, mixed and matched, and returned as ‘answers’ to ‘queries’. The AI companies describe this response as ‘original content’. Noam Chomsky describes it as ‘plagiarism’, perhaps on an industrialized scale. Either way, its accuracy is dependent upon the accuracy of existing internet content – which is frequently questionable.
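To make that conversion concrete, below is a minimal Python sketch of a toy word-level tokenizer; the corpus, vocabulary, and function names are invented purely for illustration. Real LLM pipelines use subword tokenizers (such as byte-pair encoding) and vastly larger vocabularies, but the principle is the same: once tokenized and folded into the weights, the scraped text survives only as numbers, not as attributable sentences.

```python
# Toy illustration (not any vendor's actual pipeline): scraped text
# reduced to integer tokens. Real LLMs use subword tokenizers and far
# larger vocabularies; this only shows that original content ends up
# stored as numbers rather than attributable text.

scraped_corpus = [
    "the quick brown fox",
    "the lazy dog sleeps",
]

# Build a vocabulary: every unique word gets an integer ID.
vocab = {}
for document in scraped_corpus:
    for word in document.split():
        vocab.setdefault(word, len(vocab))

def tokenize(text: str) -> list[int]:
    """Map a string to its token IDs (unknown words are skipped here)."""
    return [vocab[w] for w in text.split() if w in vocab]

if __name__ == "__main__":
    print(vocab)                      # {'the': 0, 'quick': 1, 'brown': 2, ...}
    print(tokenize("the brown dog"))  # [0, 2, 5]
```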

The AI Act stresses the need for a lack of bias (impossible), recognition of copyright (unprovable), and ethical use (a subjective concept that cannot be universally defined); it does not question the basic legality of the models.

Sarah Clarke, owner at Infospectives Ltd.

Sarah Clarke, owner at Infospectives, believes AI is simply conforming to a well-worn business process: do it first, do it fast, dominate the market and clear up the bodies afterward. This, incidentally, could help explain Altman’s decision to suddenly release ChatGPT. Other examples could include Uber and perhaps even Amazon, with Clearview being the most direct comparison: scrape images from the internet before anyone else does it, and dominate the US market for law enforcement and national security facial recognition.

This indicates a primary problem for any regulation. Regulations are created by governments, and governments have multiple drivers: please the voters (to get re-elected); protect the economy (which means accept the profit motive of business); and promote innovation (for fear of falling behind adversarial nations). The latter two are not compatible with the first – which means that regulations are invariably a compromise. 

Navigating a meaningful path that maximizes benefit to users while minimizing damage to the economy and innovation in an area that is moving fast and is little understood – even by scientists, never mind politicians – is difficult. Clarke has a novel suggestion: “Maybe we need something similar to the concept of jury service, where users and practitioners who actually understand what the coalface looks like are paid travel and accommodation to meet and talk with the regulators.” At the moment, this space is almost entirely occupied by specialist and highly paid lobbyists and scientists pushing the economy and innovation sides of the argument.

Nevertheless, regardless of the difficulties in developing regulations, they all suffer from one primary problem: they are reactive to threats. The threats exist before the regulation is considered. If AI is a wild horse, this horse has already bolted. Regulation is akin to pedestrians trying to catch and restrain a runaway stallion.

To regulate or not to regulate is a rhetorical question – of course AI must be regulated to minimize current and future harms. The real questions are whether it will be successful (no, it will not), whether it will be partially successful (perhaps, but only so far as the curate’s egg is good), and whether it will introduce new problems for AI-using businesses (from empirical and historical evidence, yes).

The EU AI Act attempts AI governance by harm prevention – it attempts to be strong in preventing harmful products, but is weak in providing AI-harmed individuals with any means of redress. In fact, personal redress will probably need to be via other legislation such as GDPR – and we have already seen how difficult this can prove – or anti-discrimination laws.

It also attempts to be risk-based by defining four levels of AI risk: unacceptable and prohibited (such as subliminal manipulation of behavior and/or beliefs); high risk (such as use in autonomous vehicles, recruitment, and credit scoring); limited risk (such as chatbots and spam filters); and minimal risk (such as recommendation systems used in e-commerce or service delivery).

The difficulty comes from the subjective element in defining both cause and harm, exacerbated by the rapid development of AI products and capabilities. Take chatbots. They might be defined as being of limited risk to the wider economy, but could deliver serious harm to individual companies and/or users. Current studies suggest that chatbots in their current form cannot be secured; and the AI Act provides little redress for any victims.

There is another consideration. Monolithic regulations, by their nature, are difficult to update and amend. But AI is changing (evolving) almost daily. Firstly, what we have now is not what we will have in a couple of years’ time. Secondly, there is the strong possibility of a dot-AI style bubble burst similar to the dot-Com bubble at the turn of the century. Current AI development is enormously expensive but not yet delivering any return on that investment; yet massive investment is pouring in. It is at least questionable whether the market can sustain the current level of excitement it is generating. It is quite possible that a bubble-burst will lead to the failure of some of the existing companies – and if that happens, we have no way of forecasting the ripple effects. At the turn of the century, the effects of the dot-Com bubble spread far beyond the dot-Com companies. 

So, we are regulating now for a market we do not understand and cannot predict. Does that mean we should halt all regulation attempts? Certainly not. Like e-commerce after dot-Com, AI will continue beyond any possible dot-AI bubble – but what emerges will be more market competent, more profitable, and better understood than it is today. We simply need better regulatory mechanics than are common today, able to adapt at the same speed as AI evolves. The highly focused, (relatively) easily adaptable, and naturally risk-based approach of federal agency regulation (which makes vendors prove compliance under threat of criminal liability rather than forcing victims to instigate costly litigation that they cannot afford) is almost certainly the better approach for the complex issues that arise from artificial intelligence.

But however regulation management is constructed, it will always be subject to a variation on project management’s Iron Triangle – protection for the people, maintenance of the economy, and promotion of innovation. Here the choice is between protecting the people and promoting the industry. You cannot focus on one without detriment to the other. Apologists will say it is possible, but sadly it is not.

Related: OpenAI Co-Founder Sutskever Sets up New AI Company Devoted to ‘Safe Superintelligence’

Related: AI’s Future Could be Open-Source or Closed. Tech Giants Are Divided as They Lobby Regulators

Related: Cyber Insights 2024: Artificial Intelligence

Related: UN Adopts Resolution Backing Efforts to Ensure Artificial Intelligence is Safe
