
Artificial Intelligence – Regulatory Update – The EU’s Approach

In our AI-Mini Series, our commercial team takes a look at recent regulatory developments in the AI space, what the effects of regulation will look like, and the expected timelines for enforcement. Part one of our WABComment series explores the EU's regulatory approach to AI, looking at the EU Artificial Intelligence Act in more detail.

What is Artificial Intelligence?

Generally speaking, AI is a system that is trained on specific data to analyse and detect patterns, which it then uses to detect or replicate those patterns in new data, such as interpreting speech or spotting specific items in text or images. There is not yet one single legal definition of AI, although as we will see below, EU regulators are seeking to set a definition for their purposes.

Regulatory Landscape

The UK and EU have diverged significantly in terms of their outlook on regulating AI.

The EU is in the process of passing specific legislation governing the uses of AI in the form of the 'AI Act', the first regime of its kind, which will undoubtedly be closely watched across the world. Legislation is a risk-conscious and prescriptive approach: it clearly defines the rules under which AI may be used, and sets out uses of AI that will be prohibited, based upon the risks posed to individuals' rights. It is also worth noting the wide reach of the AI Act, which will apply to many entities based outside of the EU.

By contrast, the UK will not be enacting any new AI statute, on the logic that a rapidly changing AI landscape could quickly render comprehensive legislation obsolete. Instead, the UK has declared that it will establish a flexible and pro-innovation framework to be enforced by existing regulatory bodies. Notably, under the UK regime there will be no central classification of high-risk uses of AI, and no uses are prohibited.

The EU approach

The intention of the EU’s AI Act is to have a single set of comprehensive rules, with extra-territorial application. The legal framework will be overseen by a European Artificial Intelligence Board, which will drive the development of European AI standards. EU Member States will then have the opportunity to create their own domestic oversight bodies.

Scope

The EU Regulation will have a wide territorial scope. AI providers who make their systems available in the EU, whose systems affect people in the EU or who have an output in the EU will be bound by the AI Act, irrespective of where their business is based.

Definition

The EU AI Act uses a wide definition of AI. The definition covers software that is developed with one or more specified techniques, including machine learning, logic- and knowledge-based approaches, and statistical approaches, and which can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with.

Effect

The EU AI Act does not seek to ban the use of AI broadly, and limits outright prohibition to uses deemed to pose an 'unacceptable risk' to individuals. Beyond these, the Act applies safeguards dependent upon the level of risk posed by the use of an AI system.

The Act divides AI systems into multiple risk classes based on their proposed uses:

  • Unacceptable-risk systems are prohibited – these are AI systems considered a clear threat to the safety, livelihoods and rights of people (e.g. social scoring, harmful subliminal behavioural manipulation, harmful manipulation of children);
  • High-risk systems are subject to stringent regulatory requirements (e.g. candidate selection in recruitment, medical devices, scoring of exams);
  • Low-risk systems are subject to special transparency requirements, such as notifying users that they are interacting with AI systems (e.g. customer chatbots); and
  • Minimal- or no-risk systems are permitted without restriction, but remain subject to compliance with general laws.
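For readers who think in code, the tiered structure above can be sketched as a simple classification. This is purely an illustration of the article's summary, not the Act's actual text: the tier names, example uses and obligations below are our own shorthand.

```python
# Illustrative sketch only: maps example AI uses from the article to the
# four risk tiers it describes. Not legal advice, and not the Act's wording.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "stringent regulatory requirements"
    LOW = "transparency requirements (e.g. disclose AI interaction)"
    MINIMAL = "no AI-specific restrictions; general law still applies"

# Hypothetical mapping, following the article's examples; the minimal-risk
# example (spam filter) is our own assumption.
EXAMPLE_USES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "recruitment candidate selection": RiskTier.HIGH,
    "customer chatbot": RiskTier.LOW,
    "spam filter": RiskTier.MINIMAL,
}

def treatment(use: str) -> str:
    """Return the regulatory treatment for a known example use."""
    return EXAMPLE_USES[use].value
```

The key design point the Act makes, and which this sketch mirrors, is that obligations attach to the *use* of a system rather than to the underlying technology.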

While we can all no doubt agree that harmful manipulation of children by an AI system should be prohibited, commentators have noted that in practice this definition could capture everyday services such as Instagram and Facebook. The parameters of these uses and risk categories are likely to change as the technological landscape evolves, so it will be interesting to see how the rules are interpreted in practice.

Part 2 of our AI-Mini Series will explore the regulatory landscape for AI within the UK. Follow us on LinkedIn to make sure you don’t miss our latest WABComment Insight Pieces.


Our AI-Mini Series was produced by Richard Wilkin, Ella Paskett and Emily Read. Please reach out to one of the team for more information on AI, or find out more about how we can help you.

Disclaimer: This article is produced for and on behalf of White & Black Limited, which is a limited liability company registered in England and Wales with registered number 06436665. It is authorised and regulated by the Solicitors Regulation Authority. The contents of this article should be viewed as opinion and general guidance, and should not be treated as legal advice.
