September saw the announcement of a rash of AI-related investments as Trump’s tech bro-heavy entourage came to London, with companies such as Microsoft, Google and Nvidia all pledging to invest billions in the UK as part of a landmark Tech Prosperity Deal. The hype was matched by policy noise on both sides of the Atlantic, with regulators scrambling to keep pace and corporates jostling for first-mover advantage. From billion-dollar funding rounds to fresh compliance headaches, AI remains as much about governance as it is about growth.
Unpacking the US-UK Tech Prosperity Deal
President Trump and Prime Minister Keir Starmer signed the Tech Prosperity Deal on 18 September 2025 at Chequers, the PM’s grace-and-favour country estate. The signing ceremony was one of the highlights of President Trump’s state visit to the UK, attended by senior officials including Michael Kratsios, Director of the White House Office of Science and Technology Policy, and UK Secretaries of State Liz Kendall (Science, Innovation and Technology) and Peter Kyle (Business and Trade). Highlights include over £31 billion in capex pledges from Microsoft, Google, Nvidia, OpenAI, Salesforce, and CoreWeave, focused on data centres, cloud, and AI hardware deployments, together with the establishment of an “AI Growth Zone” in North-East England carrying the ‘promise’ of tens of thousands of new skilled tech jobs and additional private investment in supercomputing, chips, and AI data centre capacity. The Google investment, for example, will support DeepMind’s London operations and fund the opening of the company’s first UK data centre, in Hertfordshire.
The non-binding memorandum of understanding also commits the two countries to coordinating AI standards and evaluation science, including shared safety test suites, red-teaming protocols, and pre-deployment reviews by the UK’s AI Safety Institute and the US National Institute of Standards and Technology (NIST), and seeks to align regulatory approaches to ease transatlantic deployment of, and compliance for, AI systems and technologies.
Other UK news…
DSIT publishes third-party AI assurance roadmap
The UK government published a roadmap towards creating a third-party AI assurance industry. This initiative, led by the Department for Science, Innovation and Technology, sets out ambitions to build an assurance market that ensures AI systems are developed and deployed responsibly. The roadmap identifies key challenges around quality, skills shortages, access to information, and innovation, and commits to actions such as establishing a multi-stakeholder consortium to develop an AI assurance profession, crafting a skills framework, improving information-sharing practices, and launching an AI Assurance Innovation Fund to support novel assurance technologies.
Launch of AI action plan for criminal justice
The UK Ministry of Justice launched an ambitious AI Action Plan to embed AI across courts, prisons, probation, and supporting services in England and Wales. Central to the plan are AI-driven tools such as violence-risk prediction models that assess offenders and enable proactive interventions. The strategy emphasises responsible adoption, with strong ethical oversight, preservation of human judgment, and transparency. Early deployments include AI-assisted transcription and document analysis to reduce administrative burdens, while future pilots will explore more advanced predictive models to improve sentencing and risk assessments.
New AI-driven evidence synthesis project
UK Research and Innovation (UKRI) announced an £11.5 million, five-year AI-driven project called METIUS (Mobilising Evidence Through AI and User-informed Synthesis). Led by Queen’s University Belfast in collaboration with several partners, METIUS aims to revolutionise how scientific evidence is synthesised and delivered to policymakers in the UK and internationally, combining cutting-edge AI with human expertise to dramatically speed up evidence synthesis and improve its relevance and accessibility. The project is intended to help policymakers tackle urgent challenges across education, justice, climate, and international development, among other areas.
In Europe…
Potential AI Act simplification rumbles on
The Danish presidency has asked member states which AI rules, if any, they would like to see included in the Commission’s simplification push, with EU countries invited to put forward their ideas for simplifying the AI Act in the Council’s telecommunications working party. In parallel, member states will address the main issue underlying the ongoing ‘stop the clock’ debate on the AI Act: the delays plaguing the harmonised standards for high-risk systems.
EU publishes list of signatories to the GPAI Code of Practice
The European Union published the list of signatories to its voluntary General-Purpose AI (GPAI) Code of Practice as part of efforts to facilitate compliance with the EU AI Act. The Code provides a framework built on three pillars: transparency, copyright compliance, and safety and security, aimed at managing the systemic risks posed by advanced AI models across Europe. Signatories include Amazon, Microsoft, Google, Anthropic, OpenAI and IBM. Signing the Code enables providers to streamline regulatory compliance, reduce administrative burdens, and gain enforcement predictability, while non-signatories may face stricter regulatory scrutiny and requirements.
Work on guidance for labelling of AI-generated content ramps up
Under the EU AI Act, providers of AI systems that generate synthetic content must label their outputs with machine-readable markers clearly identifying them as AI-generated or manipulated. Labelling may involve cryptographic digital watermarks and embedded metadata, ensuring content authenticity and traceability throughout the lifecycle while preventing unauthorised alterations. Deployers of deepfake content have additional obligations to disclose the synthetic nature of manipulated images, audio, or video to avoid deceiving the public. Preliminary work with AI firms, experts and other stakeholders has recently started, with guidance to follow in due course.
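By way of illustration only, the sketch below shows the simplest form of machine-readable marker: metadata embedded in an image file flagging it as AI-generated. The keys and values are hypothetical, not a regulatory standard, and real-world compliance is likely to rely on provenance standards such as C2PA and robust watermarking, since plain metadata of this kind is easily stripped.

```python
# Minimal sketch: embedding a machine-readable "AI-generated" marker as PNG metadata.
# Illustrative only; the keys and values below are hypothetical, not a standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (256, 256), "white")      # stand-in for generated content
meta = PngInfo()
meta.add_text("ai_generated", "true")            # machine-readable marker
meta.add_text("generator", "example-model-v1")   # hypothetical model identifier
img.save("output.png", pnginfo=meta)

# A downstream platform could then read the marker back:
print(Image.open("output.png").text)             # {'ai_generated': 'true', ...}
```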
Italy enacts the EU’s first national AI law
On 17 September, the Italian Senate approved a comprehensive national AI framework, mandating traceability and human oversight across sectors, requiring parental consent for children under 14 to access AI services, and introducing criminal penalties for harmful deepfakes. The Senate also approved a €1 billion support fund. The new law sees Italy become the first EU member state to enact national legislation fully aligned with the EU AI Act, setting a concrete, enforceable national template and raising compliance expectations for providers operating across Europe.
Switzerland releases open-source multilingual AI model
Apertus is Switzerland’s first large-scale, fully open-source large language model (LLM), developed collaboratively by researchers at EPFL, ETH Zurich, and the Swiss National Supercomputing Centre. Released in September 2025, it offers complete transparency, with its architecture, training data, model weights, and development process publicly accessible under a permissive licence. Trained on 15 trillion tokens across over 1,000 languages, with about 40% non-English data including Swiss German and Romansh, Apertus promotes linguistic diversity and inclusivity. It is designed for use in research, education, and commercial applications, while aiming to strengthen AI expertise and digital sovereignty in Switzerland. The model is available via Swisscom’s AI platform, Hugging Face, and the Public AI network, setting a blueprint for open, sovereign AI systems.
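Because the weights are published on Hugging Face, anyone can experiment with the model using standard open-source tooling. A minimal sketch follows; the repository id shown is illustrative (check the swiss-ai organisation on Hugging Face for the actual checkpoints), and running it requires the transformers library and suitable hardware.

```python
# Minimal sketch: loading an open-weights model from Hugging Face with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "swiss-ai/Apertus-8B-Instruct-2509"   # assumed/illustrative repo id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Multilingual prompt, reflecting the model's coverage of Swiss languages.
prompt = "Explain in one sentence what Romansh is."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```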
The United States…
Anthropic reaches copyright settlement in class action
Anthropic is reported to have reached a provisional settlement in the certified class action brought by a group of authors alleging its models were trained on over 500,000 pirated literary works. The lawsuit, known formally as Bartz v. Anthropic, was initiated by several well-known authors in 2024, claiming Anthropic unlawfully used their books to train its Claude AI model, in some cases downloading them from piracy sites such as LibGen. In June, California district court judge William Alsup ruled that while training AI with books could be considered ‘fair use’, creating a permanent digital library of pirated books was not, leaving Anthropic facing the possibility of billions of dollars (on some estimates, over $1 trillion) in statutory damages, with a trial set for December this year. After mediation, Anthropic and the plaintiffs jointly requested the court to pause proceedings and, in August, submitted a binding term sheet outlining the settlement’s core terms.
Google faces potential scrutiny over secret AI enhancements
Google faced a backlash for secretly using AI to enhance creators’ YouTube Shorts videos without their consent. Creators such as Rick Beato and Rhett Shull noticed visual distortions, including unnatural skin smoothing, sharper details, and warped features, that felt intrusive and misrepresentative of their original work, raising serious transparency and authenticity concerns. Legal repercussions could follow, given potential violations of creator rights, copyright law, and consent requirements around unauthorised content modification.
Musk launches AI-only software company targeting Microsoft
Elon Musk’s AI venture xAI announced the creation of a new purely AI-driven software company called Macrohard, designed to simulate traditional software companies like Microsoft entirely via artificial intelligence. Macrohard, which is closely integrated with xAI’s powerful Colossus 2 supercomputer cluster in Memphis, aims to deploy hundreds of specialised generative AI agents that collaborate to handle a wide range of software development, coding, and management tasks autonomously, potentially transforming how software products are created and maintained. The new company adds to Musk’s growing portfolio of AI and tech ventures.
Volkswagen extends its cloud partnership with AWS
Volkswagen announced the extension of its agreement with Amazon Web Services for a further five years, expanding their joint Digital Production Platform across 43 global factories and reinforcing its foundation for AI-driven manufacturing. By leveraging AWS machine learning and IoT services, Volkswagen is reducing manual workloads, optimising logistics, and improving real-time error detection. The partnership is central to Volkswagen’s strategic shift toward software-defined vehicles, furthering the rapid integration of new electronics architectures and functionalities directly into automotive manufacturing.
Rolling Stone owner sues Google over AI summaries
Penske Media, owner of Rolling Stone, Billboard, and Variety, is reported to have sued Google alleging its AI-generated summaries, known as AI Overviews, unlawfully use its journalistic content without permission and siphon off web traffic. Filed in a federal court in Washington, D.C., the lawsuit claims these summaries reduce clicks to Penske’s sites by showing answers directly on Google’s results pages, hurting advertising and subscription revenues. Penske argues Google abuses its search monopoly to force publishers into accepting AI content use without compensation, causing a revenue drop exceeding one-third. Google denies these claims, stating that AI Overviews enhance the user experience and drive broader content discovery, and promising to defend against what it calls baseless allegations.
Other (mainly quantum) announcements
UCLA and UC Riverside’s quantum breakthrough
Researchers at UCLA and UC Riverside announced the development of a quantum computing system that operates at room temperature by utilising quantum oscillator networks, overcoming the long-standing obstacle of ultra-cold operational requirements. Whereas traditional quantum computers require near-absolute-zero conditions, room-temperature operation promises integration with existing silicon technology, enabling hybrid architectures that combine classical and quantum computing.
Another quantum breakthrough – this time from Quantum Motion
Quantum Motion, a UK start-up spun out of University College London and the University of Oxford, announced the first full-stack quantum computer fabricated using standard silicon CMOS chip technology, the same manufacturing process behind today’s smartphones and AI GPUs. Installed at the National Quantum Computing Centre, the compact system fits within three standard server racks, including the dilution refrigerator and control electronics, and supports popular quantum software frameworks such as Qiskit and Cirq, marking a crucial step toward making quantum computing commercially viable and integrable into existing data centre environments.
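For readers unfamiliar with those frameworks, supporting Qiskit means developers can target the machine with the same circuit code they already write today. The generic sketch below, which runs on a local simulator and has no connection to Quantum Motion’s stack, prepares and measures an entangled Bell state:

```python
# Generic Qiskit example: prepare and measure a two-qubit Bell state.
# Runs on a local simulator; a hardware backend would be swapped in here.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2)
qc.h(0)             # put qubit 0 into superposition
qc.cx(0, 1)         # entangle qubit 1 with qubit 0
qc.measure_all()

counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)       # roughly {'00': ~500, '11': ~500}
```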
Yet another quantum breakthrough – cracking 6-bit crypto key
Various reports suggest that engineer Steve Tippeconnic has successfully cracked a 6-bit elliptic curve cryptographic key using IBM’s 133-qubit quantum computer, a key practical demonstration of quantum attacks on cryptographic systems. While a 6-bit key is far simpler than the 256-bit keys securing Bitcoin and other cryptocurrencies, the demonstration validates theoretical quantum algorithms on real hardware and highlights the accelerating timeline for quantum computing’s potential to threaten current encryption standards. Regulators and governments should take note: there is an urgent need to safeguard digital assets before large-scale quantum threats emerge.
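To put the key size in perspective, a 6-bit key falls instantly to classical brute force, so the significance lies in running a quantum attack end-to-end on real hardware, not in the key length. The toy sketch below, using an illustrative small curve rather than a real cryptographic one, shows why 6 bits is trivial classically while 256 bits is not:

```python
# Toy sketch: why a 6-bit elliptic-curve key is trivial classically.
# Illustrative small curve y^2 = x^3 + 2x + 3 over F_97 (not a real ECC curve).
p, a = 97, 2
G = (3, 6)  # point on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)

def ec_add(P, Q):
    """Add two points on the curve (None represents the point at infinity)."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                      # P + (-P) = infinity
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def scalar_mul(k, P):
    """Compute k*P by repeated addition (fine for a toy example)."""
    R = None
    for _ in range(k):
        R = ec_add(R, P)
    return R

secret = 42                # a "6-bit" private key (< 2**6)
Q = scalar_mul(secret, G)  # the corresponding public key
# Exhaustive classical search over all 6-bit scalars recovers the key in at
# most 63 tries; a 256-bit key has ~2**256 candidates, which is why only a
# quantum (Shor-style) attack is considered a realistic threat to it.
recovered = next(k for k in range(1, 2**6) if scalar_mul(k, G) == Q)
print(recovered)           # 42 (assuming G's order exceeds 64 on this toy curve)
```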
British Quantum Computer debuts in New York AI hub
British startup Oxford Quantum Circuits has installed New York City’s first quantum computer at a leading data centre in Manhattan. The OQC GENESIS system, integrated with Nvidia’s cutting-edge AI superchips and hosted by Digital Realty, will support hybrid quantum-AI workloads for sectors including finance and security. Science Minister Patrick Vallance said that the deployment showcased British innovation on the global stage, and would help drive economic growth and high-skilled jobs, while reinforcing transatlantic technology collaboration.
UN establishes Global Dialogue and Scientific Panel to lead AI governance
Finally, the UN General Assembly formally established two landmark initiatives to guide international AI governance: the Independent International Scientific Panel on AI and the Global Dialogue on AI Governance. The Scientific Panel, composed of 40 multidisciplinary experts serving staggered three-year terms, will provide independent, transparent, and rigorous scientific advice on AI’s opportunities, risks, and impacts to the UN and member states, ensuring policymaking is grounded in cutting-edge research. Meanwhile, the Global Dialogue will convene biennially as a multi-stakeholder forum, co-chaired by representatives from developed and developing countries, to foster international cooperation, discuss AI governance challenges, and align efforts with the Sustainable Development Goals.
If you’d like to chat about what these developments could mean for your business, feel free to get in touch with Tim Wright and Nathan Evans.