The AI Convention: Lofty Goals, Legal Loopholes, and National Security Caveats


Signed on September 5, 2024, the AI Convention’s official title is the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. 

The Council of Europe (46 member states) overlaps with the membership of the EU (27 member states) but includes additional European nations such as the UK, Ukraine, Albania, Norway, and Georgia. While the EU is a political organization, the Council of Europe focuses on human rights. Its best-known achievement is the European Convention on Human Rights.

The AI Convention treaty (PDF) is not the EU AI Act. The Convention is primarily focused on protecting human rights, democracy, and the rule of law from infringement by artificial intelligence. The intent is laudable, but the treaty suffers from the usual exclusions and exemptions that are necessary to satisfy multiple national signatories.

“The formulation of principles and obligations in this convention is so overbroad and fraught with caveats that it raises serious questions about their legal certainty and effective enforceability,” said Francesca Fanucci, a legal expert at the European Center For Not-For-Profit Law (ECNL).

A prime example of her concern is the provision: “A Party shall not be required to apply this Convention to activities within the lifecycle of artificial intelligence systems related to the protection of its national security interests.” In periods of high geopolitical tension – such as today – almost anything can be defined by any nation as pertinent to national security. The point is made even more plainly elsewhere: “Matters relating to national defense do not fall within the scope of this Convention.”

The Convention also fails to solve the problem common to all such multinational agreements: how do you control the use of technology to protect people without stifling innovation or sacrificing economic competitiveness across different cultures? The Convention is, of course, an agreement rather than a regulation, but the principle remains: different national cultures have different attitudes toward subjects such as security, privacy, and personal freedom.

In this instance, there is a strong difference between how public authorities and private industry must behave. “Each Party shall apply this Convention to the activities within the lifecycle of artificial intelligence systems undertaken by public authorities…”

Private industry, however, is less restrained. “Each Party shall address risks and impacts arising from activities within the lifecycle of artificial intelligence systems by private actors to the extent not covered in subparagraph a [referring to public authorities as above] in a manner conforming with the object and purpose of this Convention.”


Public authorities ‘shall apply’ the Convention, while private industry need only ‘address… in a manner…’ Ultimately, this difference means that the government cannot get you (except via the national security exemption), but private industry can get you so long as it considers the Convention.

Caveats aside, the purpose and intention of the Convention are commendable. It declares that signatories will consider human dignity and individual autonomy, transparency and oversight, accountability and responsibility, equality and non-discrimination, privacy and personal data protection, and reliability and safety. And more, as would be expected of a document designed to protect human rights.

It is better to have such an agreed treaty than not to have one: it is a target for good-faith signatory nations. But ultimately it will not, and cannot, achieve its stated purpose of protecting human rights and democracy against the misuse or abuse of artificial intelligence. You cannot balance human rights against divergent national definitions of security and economic benefit. Good lawyers, whether government or private, will always be able to use the caveats to benefit their employers.

Related: Can AI be Meaningfully Regulated, or is Regulation a Deceitful Fudge?

Related: Tech Industry Leaders Endorse Regulating AI at Rare Summit in Washington

Related: Former OpenAI Employees Lead Push to Protect Whistleblowers Flagging AI Risks

Related: UN Adopts Resolution Backing Efforts to Ensure Artificial Intelligence is Safe

