
AI Round-Up - April 2026

March delivered a decisive shift towards agentic AI at scale, a regulatory recalibration across the EU and UK, and a sharpened focus on copyright, provenance, and platform accountability. Agentic systems and data-centre infrastructure took centre stage in industry news, while legislators and regulators moved to extend near‑term EU AI Act deadlines, tighten disclosure and watermarking expectations, and push platforms on safety, age assurance, and labelling. Meanwhile, the UK creative sector converged on a licensing‑first approach to AI training data.

Industry momentum and the build‑out for agentic AI

Industry announcements in March were dominated by capital formation and systems engineering aimed at serving low‑latency, autonomous agents rather than mere chat interfaces. At NVIDIA’s GTC on 16 March, Jensen Huang described an “inference inflection,” with the company positioning its new Vera Rubin rack‑scale platform and a companion Groq 3 LPX rack to deliver up to 35x higher inference throughput per megawatt and materially greater revenue opportunity for trillion‑parameter models - an architecture explicitly framed to accelerate reinforcement learning and agentic workflows. This hardware pivot was matched in software, with NVIDIA introducing NemoClaw to support the OpenClaw ecosystem in isolated sandboxes for self‑evolving agents, again signalling a move from training‑centric stacks to autonomous software systems. Complementing that direction, Palantir and NVIDIA previewed a “Sovereign” architecture proposition during March, aimed at governments that want to retain control over sensitive data and workloads while adopting state‑of‑the‑art AI stacks, underscoring the rising salience of data residency and procurement sovereignty for public‑sector deployments.

Funding and corporate developments reinforced this theme of autonomy at scale. Defence AI company Shield AI announced a $2 billion financing package, including a $1.5 billion Series G, at a $12.7 billion post‑money valuation, citing US Air Force selection of its Hivemind autonomy as mission software for Collaborative Combat Aircraft and plans to acquire simulation firm Aechelon to integrate with the Pentagon’s Joint Simulation Environment. In the UK ecosystem, former Meta executives Nick Clegg and Sheryl Sandberg joined the board of Nscale, a British AI data‑centre startup, in a move treated as validation of the domestic build‑out thesis. More broadly, note‑taking app Granola became the latest UK unicorn after a March fundraising, indicating continued investor appetite for software products that translate foundation‑model capabilities into consumer utility.

Agentic AI also crept into mainstream consumer discourse via the “OpenClaw” memeplex, with industry commentary noting that autonomous agents that execute tasks without direct prompts are moving from research to adoption; incidents included an OpenClaw agent attempting unauthorised code contributions and publishing attacks on community volunteers - anecdotes that raise governance, liability and brand‑risk questions for companies experimenting with scheduled or tool‑using agents. At the same time, the Competition and Markets Authority (CMA) opened a line of analysis on agentic AI’s consumer impact, emphasising the potential to reduce friction and deliver hyper‑personalised outcomes while warning of manipulation risks, lock‑in, bias, and the need for secure digital identity and interoperability in delegated decision‑making.

Research, product shifts, and the state of capabilities

Advances in model efficiency and multimodal neuroscience were prominent in March. Google Research’s TurboQuant method compresses the key‑value cache for large‑model inference to as low as three bits per value with no accuracy loss in the cited tests, cutting memory use sixfold and delivering up to eightfold faster attention on NVIDIA H100s across popular open‑source models; the significance lies in attacking the memory‑bandwidth bottleneck as context lengths grow, yielding lower serving costs and enabling capable models on smaller clusters or edge devices. In parallel, Meta released TRIBE v2, a trimodal brain‑predictive foundation model trained on more than 1,000 hours of fMRI from 720 subjects, capable of predicting responses to video, audio and language stimuli for individuals it has not seen, with reported two‑ to three‑fold improvements over prior methods and an explicit emphasis on aiding diagnosis and experiment design. Benchmark debates also continued with the ARC Prize’s announcement of ARC‑AGI‑3, highlighting a sustained human‑AI gap on unsaturated agentic intelligence tasks, underscoring that generality in learning remains elusive relative to human performance.
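To make the memory arithmetic behind KV‑cache quantization concrete, the toy sketch below uniformly quantizes a mock key‑value cache slice to three bits per value and compares storage against 16‑bit floats. This is a minimal illustrative scheme only - it is not TurboQuant's actual algorithm, and the array shapes and min/max calibration are assumptions for the example - but it shows why dropping from 16 to 3 bits per value shrinks the cache several‑fold while keeping reconstruction error modest.

```python
import numpy as np

def quantize_dequantize(x: np.ndarray, bits: int) -> np.ndarray:
    """Uniformly quantize each row of x to `bits` bits, then dequantize.

    A toy asymmetric (min/max) scheme: each row is mapped onto
    2**bits - 1 evenly spaced levels between its min and max.
    Real low-bit KV-cache quantizers use far more careful calibration,
    but the storage arithmetic is the same.
    """
    levels = 2 ** bits - 1
    lo = x.min(axis=-1, keepdims=True)
    hi = x.max(axis=-1, keepdims=True)
    scale = (hi - lo) / levels
    q = np.round((x - lo) / scale)   # integer codes in [0, levels]
    return q * scale + lo            # dequantized approximation

rng = np.random.default_rng(0)
kv = rng.standard_normal((8, 128)).astype(np.float32)  # mock KV-cache slice

approx = quantize_dequantize(kv, bits=3)
err = float(np.abs(kv - approx).mean())

# 16-bit floats vs 3-bit codes: just over 5x smaller before accounting
# for the per-row scale/offset overhead that any such scheme also stores.
ratio = 16 / 3
print(f"mean abs error at 3 bits: {err:.3f}")
print(f"raw storage ratio: {ratio:.1f}x")
```

Because the cache is read back on every decoding step, shrinking it attacks exactly the memory‑bandwidth bottleneck the paragraph above describes: fewer bytes moved per token means faster attention and more headroom for long contexts.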

Not all product news pointed to expansion. OpenAI shut down its video app Sora and ended a prospective Disney arrangement, reflecting intensifying scrutiny around synthetic media provenance, IP, and platform trust. Separately, Grammarly terminated its “Expert Review” marketing initiative after author backlash to the use of writers’ names without consent, further illustrating that attempts to associate AI with human authority figures are attracting reputational and possibly legal risk. March also saw a glimpse of retail platform governance for autonomous agents, with a California judge banning Perplexity’s “shopaholic” agent from Amazon’s store pending appeal, while Perplexity argued for consumer freedom to choose agents - an early testbed for how marketplaces will balance consumer agency against platform integrity and merchant harm.

Litigation and global rule‑making

In a closely watched US case pitting defence designations against AI speech, Anthropic secured an injunction after arguing that compelling its model to behave in certain ways constituted unconstitutional compelled speech, reframing a Pentagon dispute as a First Amendment question; the judge’s receptiveness suggests constitutional speech arguments will increasingly surface where governments seek to control model behaviours. Within the UK and EU enforcement landscape, regulators trained attention on age assurance, illegal content, and procedural readiness. Ofcom wrote to major platforms, including Facebook, Instagram, Roblox, Snapchat, TikTok and YouTube, requiring stronger enforcement of minimum‑age rules and broader child‑safety controls by 30 April 2026, coupled with a commitment to report publicly on responses and next steps in May. The Information Commissioner’s Office (ICO) issued an open letter pressing platforms to move beyond self‑declaration by children towards stronger age‑verification measures, and also fined Police Scotland £66,000 for excessive and unfair extraction and disclosure of highly sensitive personal data - failures that included a 39,233‑page device download with vast irrelevant special‑category content.

On the IP front, Britannica pursued claims against OpenAI in the US courts, widening the field of rightsholder actions over training data and model outputs, while the World Intellectual Property Organization (WIPO) launched its Artificial Intelligence Infrastructure Interchange (AIII) to convene experts on attribution standards, watermarking, fingerprinting, rights management, and AI‑enabled IP enforcement infrastructure, with a first annual public meeting set for October.

The EU AI Act and adjacent European rule‑sets: timelines, scope, and enforcement

The EU’s legislative machinery concentrated on adjustments to the AI Act’s implementation and content‑marking expectations in March. Parliamentary committees adopted amendments under the Digital Omnibus aiming to postpone compliance deadlines for certain high‑risk AI systems to 2 December 2027 and, where systems are covered by EU sectoral legislation, to 2 August 2028; they also proposed a ban on AI “nudifiers” that create sexually explicit images without consent and extended the watermarking compliance deadline to 2 November 2026. At the same time, Parliament backed streamlining proposals and a joint position on simplifying the Act, even as an open letter from equality and human‑rights bodies Equinet and ENNHRI warned that moving towards sector‑specific legislation without full impact assessments risks fragmenting protections, undermining legal certainty, and imposing higher long‑term compliance burdens, particularly on SMEs. The European Commission advanced a second draft of a voluntary Code of Practice to support Article 50 transparency obligations, emphasising multi‑layered machine‑readable marking of AI‑generated content through signed metadata and imperceptible watermarking, coupled with effective detection and clear disclosure by deployers in matters of public interest.

Copyright, licensing, and provenance across the creative economy

The UK creative sector’s policy and market posture coalesced around licensing and transparency in March. The House of Lords Communications and Digital Committee urged the Government to make the UK a world‑leading home for licensing‑based AI development rather than weakening copyright law for speculative gains, and also noted the absence of a robust personality right to challenge harmful outputs that imitate distinctive styles or voices. The Department for Science, Innovation and Technology (DSIT), together with DCMS and the IPO, published a report and impact assessment under the Data (Use and Access) Act 2025 examining how copyright works are used in AI development, providing a data‑driven basis for eventual policy decisions. The Society of Authors launched a “Human Authored” logo scheme, aligned with the US Authors Guild, to help readers distinguish human‑created works amid a surge of AI‑generated books and to advocate fair remuneration and an AI regulatory framework to safeguard livelihoods across the UK’s creative industries.

Market mechanisms for lawful access advanced with Publishers Licensing Services (PLS), the Copyright Licensing Agency (CLA) and the Authors’ Licensing and Collecting Society (ALCS) rolling out a collective licensing initiative that enables AI companies to license published works for training and fine‑tuning via a content repository. At the European level, the Parliament’s Legal Affairs Committee adopted proposals mandating transparency and fair remuneration for copyrighted works used by generative systems, and separately, Parliament endorsed recommendations requiring detailed, itemised lists of copyrighted works used and comprehensive records of web‑crawling.

Concluding thoughts

March’s message is clear: agentic AI has moved from roadmap to reality, and the regulatory architecture is recalibrating in response. The EU is buying time on high‑risk compliance whilst doubling down on transparency and watermarking; the UK is tightening platform accountability through Ofcom and the ICO; and the creative sector is coalescing around a licensing‑first model backed by practical infrastructure. For UK organisations, the action points are threefold: lock in provenance controls, ensure readiness to evidence age‑assurance and content‑safety compliance ahead of Ofcom’s 30 April deadline, and recalibrate EU AI Act programmes against the proposed timelines - whilst retaining flexibility should the legislative pace shift.

If you would like to chat about these developments and what they could mean for your business, feel free to get in touch with Tim Wright or another member of our Technology team.