
Artificial Intelligence – Regulatory Update – Part 2

In part two of our Mini-Series, our team take a closer look at the UK’s approach to regulating AI.

In our AI Mini-Series, our commercial team take a look at recent regulatory developments in the AI space, what the effects of regulation will look like, and the expected timelines for enforcement.

Part one of our WABComment series covered the EU’s regulatory approach to AI, examining the EU Artificial Intelligence Act in more detail.

What is Artificial Intelligence?

Generally speaking, AI is a system that is trained on specific data to analyse and detect patterns, which it then uses to detect or replicate those patterns in new data, such as interpreting speech or spotting specific items in text or images. There is not yet a single legal definition of AI, although, as we will see below, EU regulators are seeking to set a definition for their purposes.

Regulatory Landscape

The UK and EU have diverged significantly in terms of their outlook on regulating AI.

The EU is in the process of passing specific legislation governing the use of AI in the form of the ‘AI Act’, the first regime of its kind, which will undoubtedly be closely watched across the world. This legislative route is a risk-conscious and prescriptive approach: it clearly defines the rules under which AI may be used and sets out uses of AI that will be prohibited, based upon the risks posed to individuals’ rights. It is also worth noting the wide reach of the AI Act, which will apply to many entities based outside the EU.

By contrast, the UK will not be enacting any new AI statute, on the logic that a rapidly changing AI landscape could quickly render comprehensive legislation obsolete. Instead, the UK has declared that it will establish a flexible and pro-innovation framework to be implemented by existing regulatory bodies. It is also notable that under the UK regime there will be no central classification of high-risk uses of AI, and no uses are prohibited.

The UK approach

The proposals are in early stages at present, but they provide insight as to the UK government’s likely approach to regulating AI.

In September 2021, the UK Government published a ten-year National AI Strategy (the ‘AI Strategy’), setting out its priorities as three central Pillars:

  1. Investing in the long-term needs of the AI ecosystem;
  2. Ensuring AI benefits all sectors and regions; and
  3. Governing AI effectively.

The first step in carrying out this third pillar was to develop a pro-innovation national position on governing and regulating AI, as envisioned in the government white paper “A pro-innovation approach to AI regulation” (the White Paper) published on 29 March 2023.

Regulation Framework

The White Paper sets out the government’s vision of creating a regulatory framework which is “pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative” which aims to:

  • drive growth and prosperity by making responsible innovation easier and reducing regulatory uncertainty;
  • increase public trust in AI by addressing risks and protecting individuals’ fundamental values; and
  • strengthen the UK’s position as a global leader in AI.

The framework envisioned by the White Paper does not intend to set out one universal definition of AI, nor introduce new legislation regulating AI, but instead will introduce guidelines and core principles. This will allow regulating bodies to interpret and build on these core principles and evolve their own definitions that will be tailored to their specific domains or sectors.


While no definition for AI is set by the White Paper, two core characteristics have been identified in the proposals when seeking to identify the systems that will be regulated:

  • “Adaptiveness”, meaning that systems are more likely to count as AI for the purposes of regulation if they operate on the basis of instructions that have not been expressly programmed with human intent, having instead been learnt through training data; and

  • “Autonomy”, meaning that systems which demonstrate a high degree of autonomy, or which automate complex cognitive tasks, will be more likely to be considered AI and so fall within the regulatory scope.

No examples are given, and it will be for individual regulators to apply the characteristics above to specific circumstances when deciding whether a product or service is AI for the purposes of regulation. However, in a previous policy paper, self-driving car control systems and natural language processing were cited as examples of systems that meet the above characteristics.


The White Paper sets out an initial set of five ‘cross-sectoral principles’ that are to guide and inform the responsible development and use of AI in all sectors of the economy. These are:

  1. Safety, security and robustness – AI systems should operate as intended and in a way that is technically secure.
  2. Appropriate transparency and explainability – it should be clear when AI is being utilised, and how it operates.
  3. Fairness – AI systems should not discriminate unfairly or create unfair outcomes.
  4. Accountability and governance – AI use should be subject to appropriate governance and oversight.
  5. Contestability and redress – where appropriate, those affected by AI driven decisions or processes should be able to challenge these.

The White Paper makes it clear that the responsibility for interpreting, implementing and providing guidance on the AI regulations will sit with existing regulators, such as Ofcom, the Financial Conduct Authority, the Solicitors Regulation Authority and the Information Commissioner’s Office. The UK government will be asking regulators to “adopt a proportionate approach … by focusing on the risks that AI poses in a particular context” in order to encourage innovation and avoid unnecessary barriers. It is believed that this approach will promote a proportionate and tailored regulatory response by placing implementation in the hands of authorities who are already familiar with the relevant sector and its operating methods.

Interestingly, given that no changes to legislation have been made, there will not be any statutory duty on regulators to have due regard to the principles, although the White Paper specifies that this may follow in due course “when parliamentary time allows” if required.

Despite the White Paper’s aim to “promote coherence across the regulatory landscape”, because implementation of the regulations sits with multiple regulators simultaneously there is a risk that different regulators will make contradictory decisions, or that gaps will emerge between the remits of different bodies. The government seeks to address this by taking on general duties, including a “Central Risk Function” to identify, assess, prioritise and monitor cross-cutting AI risks that may require government intervention, although the exact structure and process of this new function is unclear at present.

Our AI Mini-Series was produced by Richard Wilkin, Ella Paskett and Emily Read. Please reach out to one of the team for more information on AI, or find out more about how we can help you.

Disclaimer: This article is produced for and on behalf of White & Black Limited, which is a limited liability company registered in England and Wales with registered number 06436665. It is authorised and regulated by the Solicitors Regulation Authority. The contents of this article should be viewed as opinion and general guidance, and should not be treated as legal advice.
