
AI Round-up - December 2025

November saw a much-anticipated High Court judgment in the case of Getty v Stability AI, continued concern about an AI bubble, warnings that unprecedented AI data centre demand is causing a chip shortage and driving up prices, and publication of the European Commission’s Digital Omnibus “simplification package” initiative. November also saw ‘vibe coding’ named word of the year by Collins Dictionary!

Copyright versus AI

Getty Images dealt blow by Stability AI in the UK

The High Court judgment in Getty Images v Stability AI was handed down on 4 November 2025. The case centred on intellectual property rights arising from the training and outputs of the Stable Diffusion generative AI model. Mrs Justice Joanna Smith’s ruling largely favoured Stability AI, dismissing most of Getty’s copyright infringement claims, while Getty achieved only limited success on its trademark claims. However, the ruling was narrowly focused on the specific claims pleaded, the location of training, and the technical nature of the AI model, and Getty will be considering its options, including a potential appeal. A key finding of the court was that the training and development of the model did not take place in the UK, leading to the dismissal of the main copyright claims on territoriality grounds.

OpenAI on the losing side in Germany

On the other hand, the recent ruling by the Regional Court of Munich in GEMA v OpenAI (concerning ChatGPT) offers a distinct contrast, particularly in how European courts are approaching the use of copyright works for AI training. The claim, brought by the collecting society GEMA, alleged copyright infringement through the use of song lyrics to train OpenAI's large language models (LLMs), including those underlying ChatGPT. The court ruled against OpenAI, finding that the "memorisation" of copyright-protected lyrics within the LLM's parameters constituted unauthorised reproduction (a form of fixation), and that the subsequent output of those lyrics by ChatGPT in response to user prompts constituted a further act of reproduction and communication to the public. Further, the Text and Data Mining (TDM) exception under EU copyright law was found not to apply to outputs that reproduce substantial parts of protected works.

Key differences between the cases:

Getty Images v Stability AI (UK)

  • Model Type: Image generation model (Stable Diffusion)
  • Claim Focus: Secondary copyright (infringing article) and trademark infringement.
  • Key Ruling on Model: The model was not an "infringing copy" because it did not store or reproduce the images.
  • Outcome on Copyright: Stability AI prevailed (copyright claims failed largely due to technical/jurisdictional issues).
  • Territorial Focus: Narrowly focused on acts occurring within the UK (training was overseas).

GEMA v OpenAI (Germany)

  • Model Type: Large language model (ChatGPT)
  • Claim Focus: Primary copyright (reproduction, communication to the public).
  • Key Ruling on Model: The model memorised/fixed lyrics in its parameters, constituting an unauthorised reproduction.
  • Outcome on Copyright: GEMA prevailed (OpenAI found liable for infringement).
  • Territorial Focus: Focused on infringement within Germany (European TDM exception context).

Figma faces class-action suit over alleged AI training misuse

Design software company Figma is on the receiving end of a proposed class-action lawsuit in which it is accused of improperly using clients' designs to train its AI models without consent. The complaint, filed in the U.S. District Court for the Northern District of California, alleges that San Francisco-based Figma automatically enrolled users in a program that exploited their data and intellectual property to enhance its generative AI tools. Unlike most AI-related litigation, which focuses on copyright infringement, this case specifically alleges that Figma misappropriated customer trade secrets and illegally accessed their data.

In the European Union…

High-risk AI rules to be delayed

A digital simplification package announced on 19 November saw the European Commission propose a one-year delay to the AI Act’s rules governing high-risk AI systems, pushing implementation into 2027. Announced as part of the Digital Omnibus package, the proposal, which requires approval from EU members and the European Parliament, follows pressure from the US administration, tech companies and lobby groups, along with some EU member states citing delays in technical standards development. The primary reason given for the delay is to link the application of the rules to the availability of harmonised standards, common specifications, and guidance - which have been delayed. The package also proposes targeted amendments across several key pieces of digital legislation - including the GDPR, the ePrivacy Directive, and the Data Act.

Work on the Codes of Practice continues

The Commission initiated work on a code of practice for marking and labelling AI-generated content, starting with a plenary meeting on 5 November 2025. The initiative responds to increasing difficulties in distinguishing between AI-generated and human-created content and aims to reduce risks of misinformation, fraud, impersonation, and consumer deception. The voluntary code is intended to help providers meet AI Act transparency requirements which mandate clear marking of deepfakes and certain AI-generated content.

Commission to codify Legitimate Interest as legal basis for AI model training

The Commission is planning, as part of its Digital Omnibus package, to introduce a specific GDPR amendment that confirms controllers may rely on legitimate interests (under defined safeguards) as a legal basis for processing personal data to train and operate AI models. This is intended to resolve previous legal uncertainty and divergent national interpretations by providing a clear, harmonised ground for AI training across the EU, particularly where large volumes of personal data, including publicly available data, are used. The amendment is also designed to close the gap between the AI Act and the GDPR by clarifying the lawful basis for AI training and by framing additional safeguards, such as transparency obligations, balancing tests, data minimisation requirements, and narrowly framed conditions for using special category data (for example, to detect and correct bias, where synthetic or anonymised data are insufficient).

ChatGPT set to be first DSA-regulated AI service

ChatGPT is being assessed by the Commission to determine if it qualifies as a regulated service under the EU's Digital Services Act (the DSA). The key factor is whether ChatGPT, particularly its search feature, surpasses the DSA's threshold of 45 million average monthly active users in the EU, above which a service is deemed to pose potential “systemic risks”; ChatGPT search has reportedly exceeded that figure, with over 120 million monthly active users reported in the EU. If designated, ChatGPT would likely be classified as a "Very Large Online Platform" (VLOP) or a "Very Large Online Search Engine" (VLOSE) under the DSA, subjecting it to strict oversight obligations including systemic risk assessment, transparency requirements, audit reporting, and data access for researchers.

Denmark looks at granting copyright-like rights to citizens’ likeness

Denmark is set to become one of the first countries globally to grant individuals ‘quasi copyright’ rights over their own likeness, including face, voice, and body, through a proposed amendment to its Copyright Act. This law aims to empower citizens to control and prevent unauthorised AI-generated deepfakes and other digital impersonations by allowing them to issue takedown notices, claim compensation, and hold platforms liable for failing to act. The legislation includes safeguards for freedom of expression, carving out exceptions for satire and parody, and extends protections to all individuals within Denmark’s jurisdiction, not only public figures.

Uber facing fresh legal claim

Uber Technologies is reported to be facing a legal challenge over its algorithmic pay structure. Advocacy group Worker Info Exchange has served a formal notice alleging violations of data protection laws by the ride-hailing company. The non-profit organisation, led by former Uber driver James Farrar, claims the company's dynamic pricing algorithms reduced driver earnings while increasing Uber's commission share. The potential legal proceedings would take place in Amsterdam (the site of Uber's European headquarters).

Deals and announcements…

Starcloud to deploy AI Data Centres in...space

Starcloud successfully launched its first AI data centre satellite, Starcloud-1. The satellite, carrying an NVIDIA H100 GPU – reportedly the most powerful GPU operated in space so far - was launched aboard a SpaceX Falcon 9 rocket and has entered orbit at an altitude of about 325 kilometres. The space-based data centre relies on the vacuum of space for cooling and on solar energy for power, enabling lower electricity costs and zero water use compared with Earth-based centres.

OpenAI transitions to a Public Benefit Corporation

After months of speculation, OpenAI has formalised its shift into OpenAI Group PBC, a for-profit public benefit corporation currently valued at around $500 billion and controlled by the original nonprofit, the OpenAI Foundation. This restructuring ends the “capped-profit” model that limited investor returns to 100x and clarifies ownership stakes, with the nonprofit holding 26% equity, Microsoft 27%, and the remainder owned by employees and investors. The new structure allows OpenAI to raise capital like a traditional company while remaining under nonprofit control, supposedly balancing profit with public benefit and paving the way for future investment and infrastructure growth. OpenAI is now said to be preparing for an IPO next year that could value the company at up to $1 trillion.

Trump blocks sale of Nvidia’s AI chips to China

President Trump has firmly ruled out allowing Nvidia to sell its most advanced Blackwell AI chips to China, citing national security and military concerns. Describing the Blackwell chip as "10 years ahead of every other chip," Trump emphasised that the cutting-edge technology would remain exclusive to the United States. Subsequently, the Trump administration informed federal agencies that it will block Nvidia from selling even scaled-down AI chips to China, including the B30A model designed specifically to comply with earlier export controls. CEO Jensen Huang then confirmed the company's Chinese market share had collapsed to zero.

Nvidia orders 50% more wafers from TSMC amid Blackwell demand

Notwithstanding Nvidia’s China crisis, CEO Huang has confirmed that the chipmaker is experiencing "very strong demand" for its Blackwell platform and has asked Taiwan Semiconductor Manufacturing Company (TSMC) to supply additional wafers to meet surging orders. The increase reportedly pushes TSMC's monthly 3nm output at its Southern Taiwan Science Park facility from the current 100,000-110,000 wafers to 160,000 wafers, with an additional 35,000 wafers per month dedicated to Nvidia products. Separately, Nvidia reported that accelerating global demand for AI drove quarterly revenues up by 62%, reaching $57 billion in the three months to October 2025. Huang underscored that both large-scale AI deployments and record sales of its data centre chips are fuelling unprecedented growth, forecasting even stronger results in the next quarter.

AI creates antibodies de novo

Researchers at the University of Washington announced the successful use of AI to design fully functional antibodies entirely from scratch, achieving unprecedented atomic-level precision. The breakthrough centres on RFdiffusion, a sophisticated generative AI model that has been fine-tuned to create antibodies. Putting aside the potential medical benefits, the breakthrough raises interesting legal questions, such as whether inventions involving minimal human intervention can satisfy the inventive step and non-obviousness criteria required for the grant of patent rights, as well as questions of regulatory compliance, since agencies such as the FDA require transparency and robust validation of AI-derived therapeutics.

SoftBank offloads entire Nvidia stake

SoftBank sold its entire $5.8 billion stake in Nvidia, surprising markets given Nvidia’s central role in the AI boom. The sale is part of SoftBank CEO Masayoshi Son’s strategic capital reallocation to fund his massive AI infrastructure ambitions, including the $500 billion Stargate data centre project as well as a significant financial commitment to OpenAI. The sale locks in substantial returns while enabling SoftBank to bankroll transformative AI ventures beyond chip manufacturing.

Nebius inks $3 billion AI infrastructure deal with Meta

Amsterdam-based Nebius signed a $3 billion, five-year contract with Meta, its second major AI infrastructure partnership with a hyperscaler in under three months, following its $17.4 billion Microsoft deal earlier this year. The contract reflects strong demand for Nebius’s neocloud capacity, which is currently sold out, and supports its aggressive expansion plans across Europe and North America. ‘Neocloud’ refers to specialised, AI-native cloud infrastructure platforms designed specifically to support intensive AI and machine learning workloads.

OpenAI and AWS sign $38 billion multi-year deal

OpenAI signed a seven-year, $38 billion agreement with Amazon Web Services to access hundreds of thousands of state-of-the-art NVIDIA GPUs and tens of millions of CPUs, enabling rapid scaling of its AI workloads. The deal provides immediate access to AWS’s optimised AI infrastructure, with full deployment targeted by the end of 2026 and expansion capabilities extending into 2027 and beyond. The agreement diversifies OpenAI’s cloud footprint beyond Microsoft, reinforcing its compute capacity to advance next-generation AI models.

Project Prometheus – Jeff Bezos’ $6.2 billion start-up

Amazon founder Jeff Bezos launched Project Prometheus, an AI start-up focused on ‘physical AI’ - using advanced artificial intelligence to transform engineering and manufacturing in sectors like aerospace, computing, and vehicles. The company, backed to the tune of $6.2 billion with Bezos as co-CEO, has already recruited top talent from leading AI labs and aims to use machine learning to accelerate prototyping, automate complex tasks, and innovate at the intersection of robotics and industrial design.

If you’d like to chat about what these developments could mean for your business, feel free to get in touch with Tim Wright and Nathan Evans.
