June saw a surge of momentum in the AI sector, driven by high-profile gatherings and significant policy developments. London Tech Week 2025 spotlighted the UK’s ambition to become a global AI leader, featuring major announcements from Prime Minister Keir Starmer and NVIDIA CEO Jensen Huang on investment and upskilling initiatives, and underscored the dynamism of the UK’s tech ecosystem, with international start-ups, ground-breaking innovations and robust discussions on AI’s transformative impact across industries. The courts were also busy, with a number of cases on the use of copyrighted material for training AI models and the consequential issues of rights ownership, including Getty Images v Stability AI, which began its long-awaited 18-day hearing in the High Court.
London Tech Week 2025
London Tech Week 2025 was a landmark event for the UK’s tech and AI sectors, drawing over 45,000 attendees from more than 90 countries and featuring a star-studded line-up of speakers, including NVIDIA’s Jensen Huang and Sir Tim Berners-Lee. The week was marked by major policy and investment announcements, notably Prime Minister Keir Starmer’s pledge of £1 billion for AI infrastructure by 2030, aimed at bolstering the UK’s “sovereign AI” ambitions and supporting upskilling initiatives. AI and cybersecurity took centre stage, with discussions highlighting AI’s role as a foundational technology across industries and the importance of building robust digital infrastructure to sustain growth. The Fladgate team took time out to attend the event at London’s Olympia and published commentaries on their key takeaways from the week, which can be found via the links below.
What else was going on in the UK?
UK’s comprehensive AI legislation kicked down the road
The government has postponed AI legislation by at least a year, setting aside immediate regulatory action in favour of a broader bill designed to address both safety concerns and copyright disputes. Technology Secretary Peter Kyle confirmed plans to introduce a “comprehensive” AI bill during the next parliamentary session, with implementation likely delayed until next year, a notable departure from Labour’s initial strategy of swiftly enacting narrow, targeted laws to regulate advanced AI models. The delay means that any new rules or exceptions for AI and copyright, including increased transparency obligations or technical measures for rights reservation, will not be implemented for the time being, amid vocal opposition from the UK’s creative industries.
AI developers and the Data (Use and Access) Bill
Baroness Kidron’s Amendment 49 to the Data (Use and Access) Bill proposed a transparency requirement which would have obliged AI developers to disclose the data used in training their models. Similar transparency requirements already exist across the EU under the AI Act, which will require developers to publish summaries of copyrighted data used for training and to comply with EU copyright law. The Bill finally passed without Amendment 49 after a government concession in the form of a Commons amendment which requires the Secretary of State for Science, Innovation and Technology to publish a full technical report on its copyright and AI proposals within nine months of the Bill receiving Royal Assent, with an interim report to be published within six months.
Getty Images v. Stability AI
The eagerly anticipated trial between Getty Images and Stability AI kicked off in the High Court, marking a pivotal moment in the ongoing debate over copyright and AI. The case centres on allegations that Stability AI unlawfully used millions of copyrighted images from Getty to train its generative AI model, Stable Diffusion, raising fundamental questions about whether AI model training constitutes copyright infringement under UK law. The outcome of this case has potentially far-reaching implications for intellectual property rights and licensing practices in the UK. Judgment is not expected until after the summer break.
UK government launches AI Knowledge Hub
The UK government unveiled its new AI Knowledge Hub, a central resource designed to help public sector teams explore, adopt and apply AI responsibly and effectively. The Hub features a curated Use Case Library showcasing real-world AI applications across government, as well as structured, trusted guidance tailored for decision-makers navigating the evolving AI landscape. Built in response to the AI Opportunities Action Plan, the platform aims to foster knowledge sharing, reduce duplicated effort and accelerate confident, safe AI adoption throughout the public sector.
ICO unveils new AI and biometrics strategy
On 5 June, the Information Commissioner’s Office (ICO) announced its new AI and biometric strategy, outlining plans to develop a fresh code of practice over the next year. Alongside this, the ICO will update its existing guidance on automated decision-making and profiling, and produce a horizon-scanning report focused on agentic AI. These moves reflect the ICO’s commitment to staying ahead of evolving technologies and safeguarding data protection rights in the UK’s rapidly advancing digital landscape. The ICO will publish a detailed blog on the new strategy soon.
FCA launches NVIDIA AI sandbox
The UK Financial Conduct Authority unveiled its Supercharged Sandbox, a secure, supportive environment for financial services firms to experiment with advanced AI technologies. Beginning in October, participating firms will gain access to NVIDIA’s accelerated computing and AI Enterprise software, enhanced datasets and tailored regulatory guidance, empowering them to prototype and refine AI solutions for applications like fraud detection and risk management.
Across the European Union
European Commission may “stop the clock” on AI Act enforcement
Reports suggest the Commission is considering a pause - or “stopping the clock” - on the enforcement of certain provisions of the EU AI Act, as the rollout faces mounting industry pressure, unresolved technical standards and geopolitical tensions. The delay would affect future obligations, including those for high-risk and general-purpose AI systems, while already implemented bans and transparency requirements remain in place. In parallel, the Commission is preparing a broader digital simplification effort, with plans to bundle AI Act amendments and other regulatory streamlining measures into an omnibus package later this year, aiming to clarify rules, reduce burdens and boost the EU’s digital competitiveness.
EU opens public consultation on high-risk AI systems
The European Commission launched a public consultation to gather stakeholder input on the implementation of rules for high-risk AI systems under the EU’s AI Act. This information will inform forthcoming Commission guidelines on the classification of high-risk AI systems and their associated requirements.
In the US
New York passes RAISE Act
New York State passed the Responsible AI Safety and Education (RAISE) Act, making it the first state in the US to impose mandatory transparency and safety standards on developers of high-risk, frontier AI systems such as those built by OpenAI and Anthropic. The law requires major developers to submit detailed risk disclosures and safety protocols to New York’s Attorney General, with enforcement backed by penalties of up to $30 million for non-compliance.
FDA introduces AI tool
The US Food and Drug Administration unveiled Elsa, a generative AI tool engineered to streamline workflows for its employees, including scientific reviewers and investigators. Elsa is designed to help staff quickly access, process and synthesize vast amounts of regulatory and scientific information, supporting faster and more informed decision-making. This innovation aims to enhance efficiency across the agency, ultimately enabling the FDA to keep pace with the rapid evolution of food and drug products while maintaining the highest standards of safety and compliance.
Disney and NBCUniversal sue Midjourney for AI-generated copyright infringement
Disney and NBCUniversal have jointly filed a copyright lawsuit against Midjourney, an AI image generation company, in the US District Court for the Central District of California. The lawsuit accuses Midjourney of training its AI on “countless” copyrighted works from Disney and Universal, enabling users to generate and distribute images (and soon videos) that blatantly replicate iconic characters such as Darth Vader, Elsa from Frozen and Minions without authorisation. The studios describe Midjourney’s actions as a “bottomless pit of plagiarism” and a threat to the foundational incentives of US copyright law, which protect creators and their substantial investments in original content. Disney and Universal are seeking damages and injunctive relief to prevent further use and distribution of their intellectual property by Midjourney.
OpenAI successfully defends AI defamation case
A Georgia court has ruled in favour of OpenAI in the closely watched case of Walters v. OpenAI. The defamation lawsuit was brought by Mark Walters, a prominent US radio host, who alleged that ChatGPT had “hallucinated” and falsely accused him of embezzlement and fraud in response to a journalist’s query. The court found that, given ChatGPT’s clear disclaimers about potential inaccuracies and the context of the journalist’s use, no reasonable reader would have believed the fabricated claims to be true. The judge also ruled that Walters failed to establish defamatory intent or damages, granting summary judgment in OpenAI’s favour and reinforcing the importance of user context and platform warnings in assessing liability for AI-generated content.
Bartz v. Anthropic: judge rules AI training as fair use, but piracy claims advance
A case in the Northern District of California, Bartz v. Anthropic, saw District Judge William Alsup rule that the use of copyrighted books to train generative AI models, such as Anthropic’s Claude, constitutes “exceedingly transformative” fair use, marking the first major federal court precedent on the issue. However, the judge also found that Anthropic’s creation and retention of a “central library” containing millions of pirated books could violate copyright law, allowing plaintiffs to proceed to trial on the piracy claim. This split decision underscores the evolving legal boundaries for AI companies and sets the stage for further scrutiny of how copyrighted materials are sourced and used in the AI industry.
Kadrey v. Meta: another ruling bolsters AI fair use defence
In a similar case, District Judge Vince Chhabria, sitting in the Northern District of California, ruled in favour of Meta in Kadrey v. Meta, rejecting claims by a group of authors, including Richard Kadrey, Sarah Silverman, Ta-Nehisi Coates and Junot Díaz, that the unauthorised use of their copyrighted books to train Meta’s LLaMA AI models constituted copyright infringement. The court found Meta’s conduct protected under the fair use doctrine, despite evidence that the company used millions of pirated books and research papers in its datasets. The plaintiffs have not yet lodged an appeal, although one is widely anticipated, with both the fair use and infringement rulings likely grounds for further litigation.
Business announcements
Meta’s Scale AI play
Meta is reported to be in talks regarding a $14.3 billion investment in Scale AI, securing a 49% stake and bringing Scale’s CEO, Alexandr Wang, on board to lead a new Superintelligence division. The partnership appears aimed at supercharging Meta’s access to high-quality, labelled data - a crucial bottleneck in training advanced AI models - while also poaching top talent from competitors, perhaps signalling Mark Zuckerberg’s determination that Meta catch up with its rivals in the AI space.
Consortium aims to save 1.5 million meals from food waste using AI
A UK trial involving companies such as Nestlé and Google Cloud is deploying AI to detect, track and redistribute surplus food across supply chains. The project forms part of Innovate UK’s BridgeAI initiative, which provided a match-funded £1.9 million grant for cutting-edge projects harnessing AI to drive productivity and innovation, and it seeks to prevent the waste of up to 1.5 million meals, significantly cutting environmental impact by reducing emissions and supporting food equity.
Microsoft announces investment in Swiss AI infrastructure
Microsoft announced a $400 million investment to expand and upgrade its cloud and AI infrastructure in Switzerland, focusing on four data centres near Zurich and Geneva. This move aims to meet surging demand for AI and cloud services from over 50,000 existing customers, particularly in regulated sectors like healthcare, finance and government, ensuring data remains within Swiss borders for compliance and security. The investment also includes partnerships with local innovation parks, support for start-ups and SMEs, and a major skilling initiative to train one million Swiss citizens in AI and digital competencies by 2027.
NVIDIA and Deutsche Telekom launch Europe’s first industrial AI cloud factory
NVIDIA and Deutsche Telekom announced Europe’s first industrial AI cloud factory, to be based in Germany and managed by Deutsche Telekom. This cutting-edge facility will empower regional manufacturers and businesses with advanced AI-driven solutions, including robotics, digital twin technology and sophisticated engineering tools, all accessible via the cloud. “In the era of AI, every manufacturer needs two factories: one for making things and one for creating the intelligence that powers them,” said Jensen Huang, founder and CEO of NVIDIA. “By building Europe’s first industrial AI infrastructure, we’re enabling the region’s leading industrial companies to advance simulation-first, AI-driven manufacturing.”
OpenAI lands major US defence deal
OpenAI secured its first major US government contract - a $200 million agreement with the US Department of Defense - as part of the “OpenAI for Government” programme. By July 2026, OpenAI will develop and deploy advanced AI solutions tailored for defence, administrative and cybersecurity operations.
Fladgate's London Tech Week 2025 Top Takeaways
- Quantum Computing, Deeptech and AI - the dawn of a new computational era
- How tech is tackling our biggest problems in healthcare
- Data privacy & compliance - turning regulation into competitive advantage
- Cybersecurity threats in the AI era - what is changing for businesses and what can be done?
- How can data centres procure green, cheap and reliable power?
- How tech is tackling our biggest problems in climate change
- Decoding the investor mindset
If you’d like to chat about what these developments could mean for your business, feel free to get in touch with Tim Wright and Nathan Evans.