
AI Round-Up – May 2025

With OpenAI CEO Sam Altman appearing to suggest in an April Forbes magazine interview that ChatGPT is now at 1 billion weekly active users, we are pleased to publish our latest monthly round-up of the significant legal, regulatory and technical developments in the field of AI that have caught our attention this month.

UK initiatives

UK launches AI Energy Council

The UK government has launched the AI Energy Council, bringing together leaders from the energy and technology sectors to ensure the nation’s infrastructure can sustainably meet the soaring power demands of AI. Co-chaired by the Technology and Energy Secretaries, the council’s first meeting focused on aligning the UK’s AI ambitions with its clean energy strategy, supporting the responsible scaling of data centres and AI development while promoting economic growth and job creation. Key participants include major utilities, regulators and tech giants such as Microsoft, Google and Amazon, who will advise on boosting energy efficiency, expanding renewable and nuclear power, and accelerating grid access in designated AI Growth Zones - areas with enough capacity to power two million homes.

Cyber Security and Resilience Bill: policy statement laid before Parliament

The UK’s proposed Cyber Security and Resilience Bill directly addresses the growing intersection between AI and national cyber security. While the Bill’s primary focus is to strengthen protections for critical infrastructure, public services and digital supply chains, it also recognises the unique risks posed by AI systems and the need for robust oversight. The legislation is set to expand current regulatory frameworks, bringing managed service providers and high-impact suppliers - including those deploying advanced AI - within its scope. Complemented by the AI Cyber Security Code of Practice - outlining best practices for securing AI systems - and a renewed mandate for the UK’s AI Security Institute to focus on AI-related national security threats, the legislation aims to ensure that as AI adoption accelerates, robust safeguards are in place to protect public services, infrastructure and the wider digital economy.

European initiatives

EU consultation on general-purpose AI guidelines

The European Commission launched a consultation to inform its upcoming guidelines on the AI Act's rules for general-purpose AI models. The guidelines aim to clarify key concepts, such as what constitutes a general-purpose AI model, when it is considered placed on the market, and who is deemed a general-purpose AI provider in different situations. They will also explain how the Code of Practice should reduce the compliance burden for model providers. The deadline for feedback is 22 May 2025.

Commission's preliminary approach to general-purpose AI under the AI Act

The European Commission also published its preliminary interpretative guidance on general-purpose AI models under the AI Act. The Commission has adopted an approach that distinguishes between different tiers of GPAI providers according to model capabilities and potential risks, using a compute threshold of 10^25 FLOPS (floating-point operations) to categorise Tier 1 "systemic risk" GPAI models. This threshold represents the total computational resources used during model training. Models trained using computational resources exceeding this threshold fall under the more stringent Tier 1 requirements, while those below this threshold may be subject to the less intensive Tier 2 obligations, depending on their capabilities and potential risks.
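To make the threshold concrete, the sketch below shows how a model might be classified against the 10^25 FLOPS cut-off. The compute estimate of roughly 6 FLOPs per parameter per training token is a common rule of thumb from the machine-learning literature, not part of the AI Act, and the parameter and token counts are illustrative assumptions only.

```python
# Illustrative sketch only: classifying a GPAI model against the AI Act's
# 10^25 FLOPS systemic-risk compute threshold. The "6 * params * tokens"
# estimate is a widely used rule of thumb, not an AI Act rule.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute threshold

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

def gpai_tier(training_flops: float) -> str:
    """Tier 1 ('systemic risk') above the threshold; otherwise Tier 2
    (actual Tier 2 obligations also depend on capabilities and risks)."""
    return "Tier 1" if training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS else "Tier 2"

# Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens
flops = estimate_training_flops(70e9, 15e12)  # about 6.3e24 FLOPs
print(gpai_tier(flops))  # below 1e25, so Tier 2 under this sketch
```

Under this rough estimate, only very large training runs cross into Tier 1, which is consistent with the Commission reserving the "systemic risk" tier for the most capable models.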

Hamburg declaration promotes responsible AI for sustainable development

The United Nations Development Programme and Germany’s Federal Ministry for Economic Co-operation and Development introduced the Hamburg Declaration on Responsible AI for the Sustainable Development Goals. This initiative aims to align AI development with global sustainability objectives, emphasising ethical considerations and inclusive growth. The declaration is set to be officially adopted at the Hamburg Sustainability Conference in June 2025.

Irish DPC investigates X's Grok AI training data

The Irish Data Protection Commission has launched an investigation into X (formerly Twitter) to assess the legality of its data processing practices related to the training of its Grok Large Language Model. The probe will examine whether X secured and processed user data from the EU and EEA in compliance with the GDPR.

EU AI Continent Action Plan

The European Commission launched its AI Continent Action Plan, setting out an ambitious strategy to position the EU as a global leader in AI by investing in infrastructure, talent and data policy. Centred on five pillars - scaling AI infrastructure with factories and gigafactories; unlocking access to quality data; driving AI adoption across key sectors; developing AI talent; and simplifying regulation - the plan aims to boost innovation, foster digital sovereignty and accelerate the responsible deployment of AI in industries ranging from healthcare to telecoms. Backed by initiatives like the €200 billion InvestAI programme and new regulatory support, the Action Plan represents a decisive step towards transforming Europe’s economy and public services through trustworthy, competitive and inclusive AI.

The United Nations…

UN warns of global surge in AI-powered cyber scams

The United Nations has issued a stark warning about the rapid global rise in cyber scams, many of which are now being powered by generative AI and sophisticated criminal networks. These AI-driven scams are enabling fraudsters to create highly convincing deepfakes and clone voices, and to generate fake documents, making it increasingly difficult for individuals and organisations to distinguish between real and fraudulent interactions.

The UN notes that such operations are expanding beyond their traditional bases, with South East Asian crime groups, for example, now targeting regions as far as Africa, South America and the Pacific, contributing to an interconnected ecosystem of cyber-enabled fraud. In response, the UN is calling for stronger regulatory frameworks, enhanced law enforcement co-operation, and improved public awareness to counter the evolving threat landscape posed by the misuse of generative AI in cybercrime.

Elsewhere…

Trump administration moves to revitalise coal amid AI-driven energy demand

Amid the escalating energy demands of AI technologies, including forecasts of a doubling in data centre electricity demand by 2030, the Trump administration has initiated measures to revitalise the coal industry. President Trump signed executive orders invoking the Defense Production Act to bolster coal production and designating coal as a "critical" mineral. The administration asserts that coal is vital for meeting the nation's growing electricity needs, though this approach faces criticism due to environmental and economic concerns.

New AI acquisition guidance for US federal agencies

The Trump administration released new guidance for federal agencies on acquiring and using AI, aiming to maximise the adoption of American-made AI technologies and streamline procurement processes to foster innovation in government operations. The memoranda, issued in April 2025, replace previous guidance and introduce standardised requirements for evaluating AI performance, managing risks and protecting intellectual property and privacy.

Chinese censorship expands with AI

There have been reports of a leaked dataset which appeared to reveal that China is actively training large language models to automate and scale online censorship, using AI to flag politically sensitive content on topics ranging from satire and government corruption to social issues like pollution and poverty. Unlike traditional censorship methods that rely on human moderators and keyword filters, these AI systems can detect nuanced dissent and context, making state-led information control more efficient and granular. Experts warned that this marks a significant escalation in digital repression, as authoritarian regimes like China can now leverage AI to suppress dissent and shape public opinion on an unprecedented scale.

Cleo AI settles FTC deception charges

Cleo AI has agreed to pay $17 million to settle Federal Trade Commission charges that it misled users by exaggerating cash advance amounts, falsely promising instant access to funds and failing to disclose that its paid subscription was optional. The FTC found that most users received far less than the advertised advances, often faced hidden fees for faster access, and encountered significant obstacles when trying to cancel subscriptions, with some being charged monthly fees despite repeated cancellation attempts. Under the settlement, Cleo must provide refunds, notify affected users, and clearly disclose all terms and conditions, including making subscription cancellation straightforward and obtaining informed consent before charging users in the future.

Business announcements

NVIDIA announces plans to build AI chips in the US

NVIDIA is set to produce its AI supercomputer chips entirely in the US, with Blackwell chip production already underway at a TSMC semiconductor facility. Together with leading manufacturing partners including TSMC, Foxconn, Wistron, Amkor and SPIL, NVIDIA has commissioned more than a million square feet of manufacturing space to build and test NVIDIA Blackwell chips in Arizona and AI supercomputers in Texas. NVIDIA reported that manufacturing will accelerate over the next year, signalling a significant investment in US domestic production capabilities.

OpenAI secures $40 billion funding round

OpenAI reported that it has raised an unprecedented $40 billion in a private funding round led by SoftBank, valuing the company at $300 billion and marking the largest tech fundraising to date. The investment, with an initial $10 billion upfront and the remainder contingent on OpenAI transitioning to a for-profit structure by the end of 2025, also includes major backers like Microsoft and Coatue. The capital will fuel advancements in AI research, scale computational infrastructure, and support the Stargate joint venture with SoftBank and Oracle, aimed at building large-scale AI data centres.

xAI acquires X in $80 billion all-stock deal

Elon Musk’s xAI acquired social media platform X (formerly Twitter) in an all-stock transaction that values xAI at $80 billion and X at $33 billion, creating a new holding company that combines both entities. This merger gives X’s shareholders a stake in xAI’s higher valuation, while xAI gains exclusive access to X’s user base and real-time data - critical assets for training and enhancing its flagship AI chatbot, Grok.

Meta unveils Llama 4

Meta launched Llama 4, its most advanced AI model yet, featuring a powerful Mixture-of-Experts architecture that enables dynamic collaboration between specialised sub-models for superior efficiency and performance. Llama 4 stands out with native multimodality, allowing it to process and reason over both text and images from the ground up, rather than as an add-on, and boasts massive context windows - up to 10 million tokens in the Scout variant - for handling long documents or complex codebases. With enhanced reasoning, coding and multilingual abilities, Llama 4 rivals top-tier AI models and is available as an open model, making advanced AI capabilities more accessible to developers and researchers worldwide.

Virgin Atlantic launches industry-first AI champion apprenticeship

Virgin Atlantic has become the first airline to launch an AI Champion apprenticeship, partnering with education technology provider Cambridge Spark to accelerate digital transformation across the organisation. The programme is designed to empower non-technical professionals from departments such as flight operations, engineering, finance and communications to become advocates for AI adoption, equipping them with practical skills to leverage tools like Microsoft Copilot in their daily work. This initiative aims to boost productivity, efficiency and innovation by embedding AI fluency throughout Virgin Atlantic’s business.

As AI's capabilities continue to expand and its integration into various sectors deepens, these legal, regulatory and technical developments underscore the growing importance of balancing innovation with robust safeguards for public safety, data protection and ethical practices. 

Get in touch with Tim Wright or Nathan Evans if you would like to discuss any of the contents of this article in more detail, or to explore how we can help you on your AI journey. 
