January closed with AI leaders converging on political heavyweights at Davos, while AI and robotics stole the show at CES in Las Vegas, from humanoid bots to edge AI chips. Elsewhere, regulation, enforcement, infrastructure and corporate strategy accelerated, setting the tone for a pivotal year ahead.
Legal and regulatory developments
United Kingdom
Ofcom probes Grok
Ofcom opened a formal investigation into X under the Online Safety Act focusing on Grok‑generated sexualised imagery. The regulator signalled it may act before the inquiry concludes. In parallel, the UK Government confirmed that provisions of the Data (Use and Access) Act will be brought into force to criminalise the creation, or solicitation, of non‑consensual intimate images, including sexual deepfakes. The Technology Secretary also indicated plans to designate the offence as a “priority” under the Online Safety Act, requiring proactive platform measures. Ofcom has also published blog and guidance content on the placement of age gates on pornography sites and consulted on the prominence and accessibility code for connected TV platforms.
ICO’s “Tech Futures” report on agentic AI
The Information Commissioner’s Office published a Tech-Futures assessment of agentic AI, setting out expected technical developments, potential use cases and associated data protection risks. The report flags particularly difficult controller–processor role allocation across multi‑party agentic supply chains, the risk of purpose creep from open‑ended tasking, scaled‑up automated decision‑making, and the practical challenge of honouring data subject rights where agents rely on complex, persistent memory architectures. The ICO plans to consult later this year on updated statutory guidance on automated decision‑making and profiling.
London AI taskforce launched
London Mayor Sadiq Khan launched a London AI and future‑of‑work taskforce to assess the impact of AI on jobs across the capital. The policy thrust is to get ahead of what Khan has described as a potentially “seismic” and “colossal” labour‑market shift in sectors such as finance, professional services and the creative industries, with a particular focus on entry‑level roles. The accompanying narrative stresses both AI’s potential to transform public services and boost productivity, and the risk that, if unmanaged, it could accelerate inequality and concentrate economic power. Findings from the taskforce’s review are expected in the summer.
Free AI training for UK adults announced
The UK government announced a major expansion of its national skills strategy, setting a target to upskill 10 million adults in AI skills and competencies by 2030. New partners appointed to deliver the training include the British Chambers of Commerce, the NHS and techUK. The expanded programme aims to make practical AI training freely accessible to every adult in the country, positioning Britain as a global leader in AI adoption and workforce readiness.
Legal challenge to AI-enhanced live facial recognition technology
A new court case against police use of live facial recognition technology (LFRT) began, challenging the Metropolitan Police’s deployment of AI‑driven biometric identification that scans faces in real time against watchlists. It follows the 2020 Court of Appeal ruling in R (Bridges) v Chief Constable of South Wales Police, which found South Wales Police’s use of LFRT unlawful for breaching privacy rights and the Equality Act 2010. Currently, the UK has no specific legislation governing LFRT, in contrast to the EU AI Act, which generally prohibits such systems in public spaces except for narrow law‑enforcement exceptions.
GDS publishes AI-ready government data guidelines
The UK Government Digital Service released new guidelines to help public sector bodies prepare datasets for AI applications, addressing issues such as siloed data, poor metadata and quality gaps. The framework outlines four pillars - technical optimisation, data/metadata quality, organisational context, and legal/ethical compliance - alongside an action plan and self-assessment checklist to enable responsible AI use. It promotes treating datasets as governed products with clear stewardship, DPIAs and human oversight, with plans for updates based on feedback as AI evolves.
UK call for evidence on AI sandbox closes
The UK Government’s call for evidence on the proposed AI Growth Lab, a cross-economy regulatory sandbox to test AI-enabled products and services in live markets where current rules pose barriers, has closed. The Lab would grant time-limited regulatory modifications under close supervision, targeting sectors such as professional services and manufacturing to accelerate innovation, attract investment and inform permanent reforms.
European Union
First EU AI Act transparency Code draft published
The European Commission released the first draft Code of Practice on transparency for AI-generated content under Articles 50(2) and (4) of the AI Act, developed collaboratively by industry, academia, civil society and Member States via two Working Groups launched in November 2025. The Code requires providers and deployers to mark AI-generated or manipulated content (e.g. deepfakes, synthetic text on public-interest matters) in machine-readable, detectable and interoperable formats to help users identify it. Following further stakeholder feedback, a second draft will be published by March, with the Code expected to be finalised by June. The AI Act rules covering the transparency of AI-generated content will apply from 2 August 2026.
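The draft Code does not prescribe a specific schema, but the "machine-readable, detectable and interoperable" marking requirement can be illustrated with a minimal provenance record attached to generated content. The sketch below is purely hypothetical: the field names loosely follow the IPTC digital-source-type convention, and the overall structure is an assumption rather than anything mandated by the Code (real deployments would more likely adopt an established standard such as C2PA manifests).

```python
import json
import hashlib
from datetime import datetime, timezone

def make_provenance_record(content: bytes, generator: str) -> dict:
    """Build a minimal, machine-readable provenance record for a piece of
    AI-generated content. Illustrative only: the draft Code of Practice does
    not mandate this schema or these field names."""
    return {
        # IPTC uses this value to flag fully AI-generated media
        "digitalSourceType": "trainedAlgorithmicMedia",
        "generator": generator,
        "generatedAt": datetime.now(timezone.utc).isoformat(),
        # A content hash lets a verifier detect whether the marked
        # content was altered after the record was created
        "contentSha256": hashlib.sha256(content).hexdigest(),
    }

record = make_provenance_record(b"example synthetic text", "example-model-v1")
print(json.dumps(record, indent=2))
```

Serialising the record as JSON keeps it both machine-readable and interoperable across platforms; detectability would come from embedding it in, or alongside, the content in a standardised location.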
EU Council amends rules to enable AI gigafactories
The Council of the EU adopted an amendment to the EuroHPC Joint Undertaking regulation, expanding its mandate to develop large-scale AI gigafactories and adding a quantum technologies pillar. The regulation authorises the creation of energy-efficient, massive-compute facilities to support the full AI lifecycle - from training foundation models to inference - for European researchers, startups and industry, with clear funding, procurement and public-private partnership rules. Cyprus’s deputy minister for research called it a boost to Europe’s competitiveness and sovereignty; the regulation was published in the Official Journal on 19 January and entered into force the next day.
EU prepares AI Act guidelines amid standards delays
The European Commission is reported to be drafting contingency guidelines to support compliance with high-risk AI systems under the AI Act, should technical standards from CEN-CENELEC miss their 2027 deadline. These standards detail obligations for high-risk AI, but repeated delays - some pushed to April 2027 - have raised concerns, prompting calls to freeze rules until standards arrive and the Commission’s November proposal to delay high-risk obligations to late 2027 or 2028. The guidelines would act as a temporary bridge, distinct from the AI Act’s formal “common specifications” backup that the Commission could adopt if industry standards fail to materialise.
EU closes consultation on AI Act regulatory sandboxes
The European Commission’s public consultation to establish rules for AI regulatory sandboxes under the AI Act closed on 13 January. These controlled frameworks will let prospective providers develop, train, validate and test innovative AI systems - sometimes in real-world settings - under regulatory supervision to balance innovation with compliance. The Commission will finalise common rules for sandbox setup and operation by national authorities, as required by the Act.
The United States
Musk v OpenAI/Microsoft and “wrongful gains” theory
Elon Musk filed a claim against OpenAI and Microsoft seeking disgorgement of “wrongful gains” of up to $134 billion, contending that his seed funding, recruitment and credibility contributions entitle him to a share of the value accrued after OpenAI’s pivot to a capped‑profit structure. OpenAI has described the claims as baseless and part of a pattern of harassment, and both defendants have moved to exclude the claimant’s expert analysis. While the legal merits will be tested at trial, the case highlights unresolved questions of governance and donor and sponsor entitlements when non‑profits spin out or restructure around commercial partnerships, and it adds another strand to the public narrative war between competing model houses.
Bipartisan US bill targets AI copyright transparency
On 22 January, House Representatives Deborah Ross (D-NC) and Nathaniel Moran (R-TX) introduced H.R. 9720, a bipartisan bill on AI transparency billed as pulling back the curtain on training data usage. The bill seeks to give copyright owners more insight into how generative AI models are trained on their works, addressing ongoing IP concerns amid legal uncertainty, and aims to enhance responsibility and transparency in AI development without halting innovation. Transparency bills of this kind typically stall without broader buy-in, especially with Trump's DOJ task force targeting similar disclosure mandates, though state laws such as California's AB 2013 are proceeding regardless.
Google and Character.AI settle teen suicide lawsuits
Google and Character.AI agreed to settle multiple wrongful-death and injury lawsuits filed by families of teenagers who allegedly self-harmed or died by suicide after engaging with Character.AI chatbots, with terms undisclosed and no admission of liability. The settlements, covering cases from 2024-2025, highlight rising legal risks for conversational AI amid claims of inadequate safety guardrails for vulnerable minors, despite prior content filters.
Meta signs AI licensing deals with major news publishers
Meta has struck multi-year commercial agreements with news publishers including USA Today, People Inc., CNN, Fox News, The Daily Caller, Washington Examiner and Le Monde to license content for its Meta AI chatbot. These deals enable real-time news responses across Facebook, Instagram, WhatsApp and Messenger - with attribution and links back to publisher sites - reversing Meta's prior retreat from news while competing in AI.
Business news from around the globe
Micron breaks ground on $100B New York megafab
Micron Technology has broken ground on a $100 billion semiconductor megafab in Onondaga County, New York - the largest U.S. manufacturing facility of its kind and the biggest private investment in state history. The campus will house up to four fabrication plants producing advanced DRAM (Dynamic Random Access Memory) and HBM (high-bandwidth memory) critical for AI workloads, generating around 50,000 jobs (9,000 direct) over 20+ years amid surging AI-driven demand. HBM, where Micron competes with SK Hynix and Samsung, supports GPU-intensive model training; full output is targeted for the 2040s.
TSMC reports record profits
Taiwan Semiconductor Manufacturing Company (TSMC) posted record profitability in its latest results, driven by surging AI chip demand, and is doubling down on US expansion with additional fabs and advanced packaging facilities in Arizona - bringing total investment to $165 billion. The chip foundry’s management guidance is for robust 2026 revenue growth and raised capex to $52–56 billion (up 27–37% YoY), with 60–80% allocated to advanced processes amid tight capacity.
UK fusion magnet breakthrough accelerates clean AI energy
UK engineers at the STEP fusion programme have successfully tested pioneering "remountable" high-temperature superconducting magnets with plug-and-socket joints, overcoming a key barrier to commercial fusion reactors. STEP (Spherical Tokamak for Energy Production) is the United Kingdom's flagship programme to design, build and operate the world's first prototype fusion power plant by the early 2040s. This innovation, featuring precision cryogenic clamping, enables rapid disassembly and maintenance of massive tokamak magnets under extreme conditions, with the potential to slash costs and reduce facility downtime. Reliable fusion power could deliver the massive, sustainable baseload energy required for hyperscale data centres and continuous model training.
Anthropic launches Claude Cowork
Anthropic released Claude Cowork, a new desktop AI agent that accesses shared folders to read, edit or create files, with optional Chrome integration for browser-based tasks. Designed as a collaborative "coworker," it handles document workflows autonomously while maintaining Claude's safety guardrails, targeting productivity in research, coding and content creation. Separately, multiple outlets report that Anthropic is in late‑stage discussions to raise approximately $10 billion at a valuation around $350 billion, nearly doubling from four months ago.
Games Workshop imposes company-wide generative AI ban
British company Games Workshop has formalised a firm ban on generative AI tools across creative workflows, permitting only limited evaluation by senior management. Games Workshop is perhaps best known for Warhammer 40,000 (sci-fi miniature wargaming) and Warhammer Age of Sigmar (fantasy battles), as well as collectibles such as the Citadel miniature range. The company’s new policy aims to safeguard intellectual property and prioritise "human creators," signalling rising legal/reputational risks for rights-holders who blend AI outputs with their core IP.
Apple and Google ink multiyear Gemini deal
Apple and Google announced a multiyear partnership integrating Google's Gemini models and cloud into Apple's foundation models to enhance Siri and Apple Intelligence features. The deal reflects Apple’s decision to lean on Google’s advanced backbone models rather than solely develop every AI layer internally – as a result, Apple fans hope that Siri will gain deeper personalisation (and utility…) while upholding Apple's on-device processing and Private Cloud Compute privacy standards.
Google DeepMind closes three AI deals in one week
Google DeepMind has acquired Common Sense Machines (2D-to-3D AI models), struck a licensing deal with Hume AI (bringing its CEO and engineers to boost Gemini voice/emotion tech), and partnered with Sakana AI (Transformer pioneers for Japan-focused research) in a rapid spree concluded in late January 2026. These moves target multimodal generation, voice interfaces and regional scientific AI to strengthen Gemini against rivals like OpenAI, blending talent acquisition with strategic licensing to dodge full merger scrutiny.
OpenAI signs $10B Cerebras compute deal
OpenAI agreed to purchase up to 750 megawatts of computing power over three years from Cerebras Systems in a contract valued at over $10 billion. The deal deploys Cerebras' wafer-scale AI chips to accelerate ChatGPT inference and scaling, reducing reliance on Nvidia while diversifying beyond Microsoft Azure. Phased rollout targets 2028 completion, supporting OpenAI's aggressive infrastructure buildout amid surging AI demand.
Nvidia-backed Nscale eyes $2B funding round
UK AI data centre operator Nscale, fresh off $1.5B+ raised in late 2025, is in talks for up to $2 billion in new capital just three months after its prior close. The hyperscaler, backed by Nvidia and tied to OpenAI/Microsoft "Stargate" projects in Norway/UK, needs massive funding for GPU-powered AI factories amid surging compute demand.
If you’d like to chat about what these developments could mean for your business, feel free to get in touch with Tim Wright and Nathan Evans.



