As much of Europe shuts up shop and heads for the beach, we look back on another busy month for the AI sector. Noteworthy events in July included Nvidia’s stock market valuation briefly passing $4 trillion, and Meta’s plans for new AI data centres. One of the data centres, to be called Hyperion, will be nearly the size of Manhattan and will draw 5 gigawatts (GW) of power, whilst the other, Prometheus, will be built in Ohio and will draw at least 1 GW. The names are fitting. In Greek mythology, Prometheus gave fire to humanity and was punished by Zeus, while Hyperion was a Titan eventually defeated and cast into the depths of the underworld. Just as the myths centre on elemental forces and power struggles, the real challenge for these new data centres is securing reliable energy, not vanquishing Greek gods.
At home in the UK…
The UK’s modern industrial strategy
The government updated its Modern Industrial Strategy, a 10-year plan to increase business investment and grow the industries of the future in the UK. The strategy emphasises leveraging AI to drive economic growth, innovation, and productivity across various sectors, and aims to make the UK a global leader in AI by fostering both its development and its adoption within businesses and public services. Key initiatives include a new Sovereign AI Programme to build UK-led AI capabilities and promote their use across various sectors, and a long-term compute strategy to ensure the UK has the necessary infrastructure for AI innovation and scientific discovery.
UK government buddies up with OpenAI
Under a newly announced partnership agreement between the government and OpenAI, the AI lab is set to work with government departments in a bid to boost productivity in UK public services. Whilst the announcement is a statement of intent rather than a comprehensive legal deal, the partnership promises to explore potential routes to deliver the infrastructure priorities laid out in the AI Opportunities Action Plan, recognising the importance of UK sovereign capability in achieving the economic benefits of AI, according to the Secretary of State for Science, Innovation and Technology, Peter Kyle. The initiative will also see OpenAI share technical information with the UK AI Security Institute to deepen knowledge of AI capabilities and security risks, as well as support the government’s mission to use AI to transform taxpayer-funded services.
Global AI audit standard released
The British Standards Institution released BS ISO/IEC 42006:2025 to enhance trust in AI system audits. The standard is the first globally to establish certification criteria for auditors assessing AI management systems, rather than for the systems themselves. It follows BSI’s earlier release of BS ISO/IEC 42001:2023, the world’s first standard for AI management systems. 42006:2025 focuses on those conducting the audits, ensuring they have defined competencies, governance mechanisms, and standardised methodologies for evaluating organisations against 42001.
Across Europe…
EU won’t stop the clock after all
Despite reports to the contrary, the European Commission has said that the AI Act will be implemented according to its original schedule, with the general-purpose AI model obligations commencing this August and the high-risk AI requirements taking effect in 12 months’ time. Despite intensive lobbying from companies such as Google, Meta, Mistral and ASML, and similar concerns raised publicly by the Swedish Prime Minister, Commission spokesperson Thomas Regnier said there would be ‘no stop the clock’, ‘no grace period’, and ‘no pause’, whilst acknowledging industry concerns. The Commission does, however, plan to propose simplification measures for digital rules later this year, in particular reducing reporting obligations for smaller companies.
AI cannot be an inventor… Switzerland in lockstep with other jurisdictions
The Swiss Federal Administrative Court held that artificial intelligence systems cannot be recognised as inventors under Swiss patent law. This ruling, part of the ongoing cases surrounding the DABUS AI system, brings Switzerland in line with most global jurisdictions, such as the United States, United Kingdom, European Union, and Australia, emphasising the essential role of human beings in the patent application process. The Court also provided clarification on the specific conditions under which human users of an AI system may be deemed inventors.
Strong focus on General-Purpose AI
The final draft of the GPAI Code of Practice was published in July. It will now be assessed for adequacy by EU Member States and the Commission. The Code is complemented by Commission guidelines on key concepts relating to general-purpose AI models. The Code has three chapters: Transparency, Copyright, and Safety and Security. Providers of general-purpose AI models can sign the Code by completing the Signatory Form and sending it to the EU AI Office. The AI Office will publish the first list of model providers who have signed the Code on 1 August, although there is no deadline for signatures. AI model providers who voluntarily adopt the Code can demonstrate AI Act compliance whilst reducing their administrative burden and gaining greater legal certainty compared with alternative compliance methods. So far, signatories include OpenAI, Mistral and Anthropic, although Meta has publicly declined to sign.
AI model providers should also note that the AI Act’s rules on GPAI models take effect on 2 August (although the Commission’s enforcement powers don’t kick in until August 2026).
AI Training Data Disclosure Template
The Commission also published a template to help GPAI providers summarise the content used to train their models. The template provides a simple, uniform and effective way for GPAI providers to meet their transparency requirements under the AI Act, including the obligation to make such a summary publicly available. Unlike the voluntary GPAI Code of Practice, however, the disclosure required by the template is mandatory for all model providers operating in the EU.
Parliament study urges EU-wide liability rules for defective AI
A study published by the European Parliament has come out in favour of EU-wide liability rules to address defective AI systems. It highlights the need for a cohesive and unified approach to managing the risks and responsibilities associated with AI technologies across all member states. The report criticises the Commission’s withdrawal of the proposed AI Liability Directive (AILD) and recommends that the Directive be revamped and reintroduced, imposing strict liability for high-risk AI systems.
The United States…
US Senate dumps proposed ban on state AI laws
The US Senate voted to remove a moratorium on states regulating AI systems from Trump’s so-called “big, beautiful bill.” Legislators agreed by an overwhelming margin of 99 to 1 to abandon the controversial proposal during a protracted fight over the omnibus budget bill, which was subsequently signed into law by the President. The moratorium, said to be intended to avoid a patchwork of state AI regulations that could inhibit industry growth, would have required states to refrain from regulating AI and “automated decision systems” if they wished to receive funding for broadband programs. However, President Trump subsequently said: “we need one commonsense, federal standard that supersedes all states, supersedes everybody, so you don’t end up in litigation with 43 states at one time.”
A National AI Action Plan is unveiled
“Winning the Race: America’s AI Action Plan” was released by the White House, with a stated primary mission of creating an environment within the U.S. for AI technologies to “succeed and thrive.” Government officials said that the plan includes a requirement that AI developers ensure their chatbots are “free of ideological bias” to be eligible for federal contracts, and that federal government procurement guidelines will be updated to contract only with large language model developers that “allow free speech and expression to flourish”. Some observers suggested that AI companies might respond with “anti-woke” versions of their chatbots, with fewer safeguards, to land lucrative government business.
And Trump goes All In
Just days after the release of the AI Action Plan, President Trump appeared at the “Winning the AI Race” summit hosted by the All‑In Podcast and the Hill & Valley Forum, where he signed three executive orders: streamlining federal permitting for energy infrastructure to handle AI computing needs; directing the heads of the Department of Commerce and the Department of State to promote the U.S.-made AI tech stack abroad; and removing biased or “woke” AI technologies from government. Speaking at the summit, Trump said his administration aims to enable the U.S. to compete with advanced Chinese AI whilst preventing adoption of a restrictive regulatory regime “like that of the European Union”, with the goal of supporting economic advancement and national security. He also dashed content owners’ hopes of a regulatory licensing regime for copyright works: “you can’t be expected to have a successful AI program when every single article, book or anything else that you’ve read or studied you’re supposed to pay for…”
Business announcements
Oracle announces £1.4bn investment for German AI and Cloud infrastructure expansion
Oracle plans to significantly scale Oracle Cloud Infrastructure (OCI) capacity in the Frankfurt region, supporting both public and private sector organisations with advanced compute power for AI workloads, cloud migration, and sovereign data requirements. The investment will be used to develop and expand data centres, increase computational power, and improve storage capacities. The initiative will bolster Germany's claims to be a leading hub for digital and AI-driven enterprises.
Dutch news publishers band together
A coalition of Dutch news publishers, led by De Telegraaf publisher NDP Nieuwsmedia, announced the launch of a national collective database to facilitate the licensing of their content for AI training purposes. The initiative is designed to serve as a centralised resource for AI developers looking to license Dutch news content, presenting an alternative to potential copyright disputes or individual licensing agreements. The first agreement has been secured with GPT-NL (Generative Pre-trained Transformer – Netherlands), a publicly funded Dutch large language model specifically developed for the Dutch language and operated by a collaboration of Dutch research institutions, universities, and public organisations.
We’ll be taking a break over the remaining summer months but will be back in the autumn. In the meantime, if you’d like to chat about what these developments could mean for your business, feel free to get in touch with Tim Wright and Nathan Evans.