AI Round-up - November 2025

October saw a wave of blockbuster OpenAI deals, culminating in the trillion-dollar Stargate project with Oracle and SoftBank. The spree led many to wonder whether the AI sector’s financial plumbing was becoming as circular as the feedback loops powering generative models themselves, although others pointed to long-established vendor financing practices. By way of example, Nvidia is investing upwards of $100 billion in OpenAI, providing funds that are then substantially reinvested in the purchase or lease of Nvidia’s GPUs and chips. OpenAI also inked a separate multi-year agreement with AMD for six gigawatts of next-generation Instinct MI450 series processors, with a novel commercial term under which 160 million shares of AMD common stock serve as part payment.

These types of arrangements appear, to some, to have all the hallmarks of a technocratic ouroboros. The classical ouroboros - an ancient symbol of a snake eating its own tail - stands for endless cycles, unity, and renewal, as well as the idea that systems can destroy and recreate themselves continuously. In the context of technology or technocracy, the term is increasingly used to describe situations, such as those in AI markets, where investments, outputs, and returns are locked in a recursive loop: stakeholders reinvest, sell, and buy within the same closed network, making these systems seem self-sustaining but also vulnerable to circular logic and a lack of outside validation.

In the UK and Europe…

The Society for Computers and Law holds annual AI conference

The Society for Computers and Law held its annual AI conference on 8 October 2025, at Herbert Smith Freehills Kramer in London, gathering legal professionals, regulators, academics, and industry experts to explore the evolving intersection of AI and law. The event featured sessions and case studies focussing on AI regulation, governance, compliance, commercial applications, and dispute resolution, designed to equip attendees with practical insights on advising businesses of all sizes and sectors.

The keynote speech, titled “Copyright is a Human Right”, was delivered by Baroness Beeban Kidron OBE, who recounted the progress of the Data (Use and Access) Bill through the UK’s legislative process and her Amendment 49, which gained overwhelming support in the House of Lords. The amendment, which would have required AI developers to disclose the categories and sources of copyrighted material used to train their models, was blocked by the government. Kidron went on to discuss a potential claim against the UK government being considered by a group of UK-based creatives on grounds of breach of their human rights, in particular the rights of authors in their literary and artistic works, as enshrined in international law (including the Berne Convention). Kidron said that the prime minister had received, but not yet responded to, the group’s pre-claim letter, and that the government had clearly taken the side of the large US tech firms rather than the UK’s creative industries.

The EU’s €1 billion strategy for AI sovereignty

The European Commission unveiled a comprehensive €1 billion “Apply AI Strategy” to accelerate the adoption of AI and boost the competitiveness of ten strategic sectors, including healthcare, manufacturing, energy, and defence. The Strategy encourages an ‘AI first’ policy (i.e. AI is considered as a potential solution whenever organisations make strategic or policy decisions, taking careful account of the technology’s benefits and risks) and promotes a ‘buy European’ approach, particularly for the public sector, with a focus on open source AI solutions. Crucially, it supports measures and actions to increase the EU’s technological sovereignty by tackling cross-cutting challenges to AI development and adoption.

The AI Act guidelines keep coming

  • High-risk systems

    According to a presentation shared with EU member states, the Commission will split the AI Act guidelines on high-risk AI systems, with the guidelines on classifying AI systems set to be published by 23 February 2026 and separate guidance on high-risk obligations, substantial modifications and the AI value chain expected in the second or third quarter of 2026 - significantly closer to the 2 August 2026 enforcement date.
  • Serious incident reporting

    The Commission’s guidelines and reporting template, published as draft guidance on 26 September 2025, clarify the serious incident reporting obligations under Article 73 of the AI Act. The guidance aims to assist providers in fulfilling their duty to report any serious incidents or malfunctions that directly or indirectly cause death, serious harm to health, significant disruption of critical infrastructure, or fundamental rights violations. The Commission’s consultation on the draft guidance will close on 7 November 2025, with the guidelines becoming effective from 2 August 2026.
  • Transparency

    The Commission’s public consultation to inform the development of guidelines and a voluntary Code of Practice on AI transparency obligations, as set out in Article 50 of the AI Act, closed on 2 October 2025. From 2 August 2026, the Act will require providers and deployers to ensure that users are informed when they are communicating with an AI system (unless it is self-evident), are exposed to emotion recognition or biometric categorisation, or encounter content that has been artificially generated or manipulated, including deepfakes. The initiative aims to provide practical guidance and technical solutions - such as watermarking, metadata tagging, and clear notification practices - for providers and deployers of generative and interactive AI systems, especially regarding the detection and labelling of AI-generated or manipulated content (a short illustrative sketch of metadata tagging appears after this list).
  • Interaction with other EU laws

    The Commission will also publish comprehensive guidance clarifying the interaction between the AI Act and other key EU laws - including the GDPR, the Digital Services Act, the Digital Markets Act, product safety regulations, and copyright law - by the end of this year. The guidance will help companies navigate complex regulatory overlaps by explaining how AI-specific obligations complement, rather than override, existing rules.
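
By way of illustration only, the following minimal Python sketch shows the kind of metadata tagging contemplated by the transparency guidance: embedding simple provenance labels into an image file and reading them back. It uses the Pillow imaging library; the field names and values ("ai-generated", "generator") are assumptions made for this example and do not reflect the AI Act, the forthcoming Code of Practice, or any industry standard.

    # Illustrative sketch only: embed and read provenance labels in a PNG
    # using Pillow. The metadata keys below are invented for this example.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def tag_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
        """Copy an image, adding text chunks labelling it as AI-generated."""
        image = Image.open(src_path)
        metadata = PngInfo()
        metadata.add_text("ai-generated", "true")  # hypothetical label
        metadata.add_text("generator", generator)  # e.g. a model name
        image.save(dst_path, pnginfo=metadata)

    def read_provenance(path: str) -> dict:
        """Return any text metadata embedded in a PNG."""
        return dict(Image.open(path).text)

    # Usage (assumes "input.png" exists):
    # tag_as_ai_generated("input.png", "labelled.png", "example-model-v1")
    # print(read_provenance("labelled.png"))

Real-world labelling schemes (for example, cryptographically signed content credentials) are considerably more robust than plain text chunks, which can be stripped or edited trivially; the sketch is intended only to make the concept concrete.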

AI Act Single Information Platform launched

The Commission unveiled its AI Act Single Information Platform - My AI - as part of its broader AI Act Service Desk initiative. The centralised digital hub provides stakeholders with up-to-date, comprehensive guidance on the AI Act’s provisions, practical compliance tools such as a Compliance Checker, and easy navigation of the legal texts through an AI Act Explorer. It also offers access to expert support via an online query form.

AI hub for scientists planned

The Commission is also preparing to launch a “virtual institute” named RAISE (Resource for AI Science in Europe) to support Europe’s scientists in using AI, according to the EU’s European Strategy for AI in Science. The institute will pool research capabilities across the EU, providing scientists with access to the EU AI Gigafactories. A pilot version will start in November, although RAISE will not be fully operational until 2028.

ICT Supply Chain Security Toolbox nears adoption

The EU’s ICT Supply Chain Toolbox, a voluntary initiative designed to enhance the security and resilience of ICT supply chains across member states, looks set to be adopted soon, perhaps as early as November this year. Building on experience with the earlier 5G Toolbox, the new framework extends beyond telecommunications to cover all digital technologies, including AI models and systems. The Toolbox provides a set of risk assessment criteria and mitigation measures targeting high-risk suppliers, with the aim of guarding against cyber threats, espionage, supply chain disruptions, and dependencies on non-trusted vendors. It also supports alignment with broader EU regulations such as the Cybersecurity Act and the NIS2 Directive, fostering a harmonised approach to securing AI and other critical digital infrastructure, and encourages member states to exclude or phase out high-risk ICT vendors and to embed supply chain security into their procurement processes and operational practices.

Other news…

Apple sued over AI training data

Two neuroscientists, Professors Susana Martinez-Conde and Stephen Macknik, have filed a proposed class action lawsuit against Apple in a California federal court, accusing the company of illegally using thousands of copyrighted books from "shadow libraries" without permission to train its Apple Intelligence AI model. The lawsuit specifically cites their works among the materials Apple allegedly used, including Books3, a large dataset composed of pirated e-books. The case is part of a broader wave of copyright litigation targeting major tech firms such as OpenAI, Microsoft, and Meta for unauthorised use of copyrighted content in AI training; the financial stakes are underlined by prior settlements such as Anthropic's $1.5 billion payout. The professors seek monetary damages and an injunction against Apple's continued misuse of their works.

OpenAI reverses Sora copyright policy after backlash

OpenAI's Sora AI video app quickly gained popularity for generating short AI-created videos featuring copyrighted characters. Responding to a Hollywood-led backlash, OpenAI changed the app's copyright policy from opt-out to opt-in, giving rights holders who opt in granular control over how their characters are used. OpenAI also announced a revenue-sharing model, currently in a testing phase, for copyright holders who allow their characters to appear in user-generated content. Disney and other studios are reported to have already opted out of the app.

AI startups capture record 53% of VC funding

According to PitchBook data reported by TechCrunch, venture capital investment in AI startups is projected to reach a record $192.7 billion globally in 2025, surpassing half of all venture capital funding worldwide and marking a historic concentration in the sector. The surge is dominated by a handful of major AI firms, such as Anthropic and xAI, which have attracted multi-billion-dollar funding rounds. The intense focus on AI appears to have bifurcated the market, creating distinct winners and losers: AI startups capture most VC funding while startups in other sectors struggle to raise capital.

UN General Assembly launches global AI Red Lines Initiative

The 80th UN General Assembly saw the launch of the ‘AI Red Lines’ initiative, a global campaign urging governments to agree binding international "red lines" by the end of 2026 prohibiting AI uses deemed too harmful. Supported by over 200 prominent figures, including Nobel laureates, former heads of state, tech pioneers, and AI researchers, the initiative emphasises the urgent need for clear, enforceable limits on AI to prevent risks such as engineered pandemics, mass disinformation, human rights abuses, and loss of meaningful human control. It aims to create a framework for international cooperation, compliance, and oversight, including a potential treaty backed by technical verification and national enforcement.

AI uncovers forgotten Caravaggio masterpiece

A painting long dismissed by major auction houses and museums as a mere copy has now been authenticated as a genuine work by the celebrated Italian Baroque master Michelangelo Merisi da Caravaggio, thanks to AI analysis. The study, conducted by Swiss firm Art Recognition in collaboration with the University of Liverpool, assigned an 85.7% probability that the painting, known as "The Lute Player", is an original Caravaggio, challenging decades of expert opinion. The AI’s pixel-level analysis of brushstrokes, lighting, and composition uncovered hallmarks consistent with Caravaggio’s style, prompting renewed debate in the art world over traditional connoisseurship versus technological verification.

Scientists develop atom-thin material to cut memory chip power by 90%

Swedish researchers at Chalmers University of Technology have developed an atom-thin magnetic material that combines ferromagnetism and antiferromagnetism within a single two-dimensional crystal, enabling ultra-efficient memory chips that could cut energy consumption by a factor of ten. The design is said to eliminate the need for external magnetic fields and complex multilayer structures, simplifying manufacturing and boosting reliability. Given data processing's growing energy demands - expected to reach nearly 30% of global electricity consumption - the breakthrough holds promise for AI, mobile technology, and advanced computing by enabling faster, smaller, and greener memory devices crucial for future digital infrastructure.

If you’d like to chat about what these developments could mean for your business, feel free to get in touch with Tim Wright and Nathan Evans.
