AI Round-up - May 2024

There has been a flurry of AI-related announcements from UK government bodies and regulators recently, the most noteworthy of which are summarised below.

Joint launch of the AI and Digital Hub

The Digital Regulation Cooperation Forum launched the AI and Digital Hub, a new informal, multi-agency advice service to support innovators working on AI or digital products. The DRCF, set up in 2020, brings together the Competition and Markets Authority, the Financial Conduct Authority, the Information Commissioner’s Office and Ofcom to make it easier for them to collaborate on digital regulatory matters. The new service aims to provide consolidated advice from the four regulators on complex regulatory questions that cross their remits.

DSIT guidance published on Responsible AI in Recruitment

The Department for Science, Innovation and Technology published guidance on the responsible use of AI in the HR and recruitment sector. The guidance, which is non-binding, outlines key considerations for HR and recruitment professionals when procuring and deploying AI, such as assurance mechanisms for systems procured from third-party suppliers (for more details, see our insight). The UK approach can be contrasted with that of the EU AI Act, which classifies as high-risk all AI systems used in the recruitment and employment arena (see our insight on the EU AI Act).

Financial Regulators respond to Government White Paper

The FCA published an update on its approach to AI in response to the UK government's recent pro-innovation AI White Paper and its initial guidance for regulators on implementing the UK's AI regulatory principles. The FCA's key objectives are to promote the safe and responsible use of AI in UK financial markets whilst leveraging AI in a way that drives beneficial innovation. Its plans for the next 12 months include close scrutiny of the systems and processes financial services firms have in place to ensure that the FCA’s regulatory expectations are met.

On the same day, the Bank of England and the Prudential Regulation Authority published a letter addressed to the Secretary of State for Science, Innovation and Technology, Michelle Donelan MP, and the Economic Secretary to the Treasury and City Minister, Bim Afolami MP, setting out their updated approach to AI. Like the FCA, the Bank and the PRA take the view that existing regulatory frameworks are appropriate to support AI innovation in ways that will benefit the industry and the wider economy whilst also addressing the risks, in line with the White Paper’s five principles. The PRA also notes that the continued adoption of AI by financial services firms could have financial stability implications, and says that it will undertake deeper analysis of this.

DSIT publishes Responsible AI Toolkit

The Department for Science, Innovation and Technology also published a Responsible AI Toolkit intended to support organisations and practitioners in safely and responsibly developing and deploying AI systems. The Toolkit includes resources and guidance for public sector organisations deploying data-driven technologies, as well as assurance techniques to support the development of responsible AI, and will be updated over time with new resources by the Responsible Technology Adoption Unit.

Consultation launched by the Information Commissioner’s Office

The ICO launched the third chapter of its consultation on generative AI, focused on how the accuracy principle of data protection applies to the outputs of generative AI models and the impact that accurate training data has on those outputs. Launching the consultation, the Information Commissioner, John Edwards, said: “In a world where misinformation is growing, we cannot allow misuse of generative AI to erode trust in the truth. Organisations developing and deploying generative AI must comply with data protection law – including our expectations on accuracy of personal information.” The consultation is aimed at developers of generative AI, with submissions open until 10 May. The previous two chapters of the consultation considered the lawfulness of web scraping to train generative AI models, and how the purpose limitation principle should apply to such models.

CMA updates its AI Foundation Models Report

The Competition and Markets Authority published an update to its initial report on AI Foundation Models. The update outlines seven principles, aimed particularly at foundation models such as those underpinning ChatGPT, which focus on maintaining fair market competition and preventing market suppression and dominance by AI developers. The CMA sees these principles as a "fair warning" to businesses about the types of AI-related conduct it would view as problematic, even though they are not legally binding at this stage.

The rapid pace of AI development, the complex regulatory landscape, and the far-reaching societal and economic impacts of AI make it critical for stakeholders to closely monitor the evolving regulatory announcements and initiatives in this space. 

If you would like to discuss how this affects your business, please get in touch with Tim Wright or Nathan Evans.
