As another month draws to a close, we take a moment to reflect on the dynamic world of AI and the remarkable developments shaping the landscape. May 2025 has been a particularly eventful month, with innovations and insights emerging across industries at a rapid pace. In this bumper edition of Fladgate's AI round-up, we highlight the most compelling stories, trends and breakthroughs that have captured our attention and that promise to influence the future of AI.
What we saw in the UK
House of Lords challenges government over AI copyright transparency
The House of Lords backed amendments to the UK government's AI plans, specifically voting in favour of changes to the Data (Use and Access) Bill that would require AI developers to disclose all copyrighted works used to train their models, giving creators greater visibility and the ability to opt out. The move, led by crossbench peer Lady Kidron and supported by artists and creative organisations, aims to address concerns that current government proposals favouring only summary disclosures and an opt-out model do not sufficiently protect the rights and economic interests of the creative sector. However, the government has resisted these amendments, stripping them out in the House of Commons and signalling its intention to proceed with its preferred approach. This sets the stage for ongoing parliamentary "ping pong" as the Lords consider rephrased amendments and the debate over AI copyright transparency continues.
UK AI Security Institute publishes research agenda addressing advanced AI risks
The UK AI Security Institute released its research agenda, outlining a strengthened focus on addressing the most pressing security risks posed by advanced AI technologies, including threats to national security, cybersecurity and public safety. The agenda prioritises three main areas: identifying and mitigating critical AI security threats such as AI-enabled cyberattacks and misuse for criminal or harmful scientific purposes; advancing robust methods for evaluating AI systems to ensure they remain under human control and are resilient to misuse; and developing technical solutions for AI alignment and oversight. The Institute, which partners with leading industry labs, government agencies and international bodies, aims to provide policymakers with a rigorous scientific evidence base to inform regulation and safeguard the UK as AI capabilities rapidly evolve.
Across the European Union
Commission opens consultation on rules for general-purpose AI models
The European Commission’s targeted consultation – to gather feedback from stakeholders on forthcoming guidelines that will clarify the rules for general-purpose AI (GPAI) models under the EU AI Act – closed on 22 May 2025. The consultation invited input from AI providers, downstream developers, civil society, academia and public authorities to help define key concepts such as what constitutes a GPAI model, provider responsibilities and the requirements for placing such models on the market. These guidelines will explain how the AI Office will support compliance, detail the benefits of adhering to the Code of Practice – including potentially reducing administrative burdens – and address practical issues like estimating training compute and supervision. While the guidelines will not be legally binding, they will clarify how the Commission interprets and intends to enforce GPAI rules, with both the guidelines and the final Code of Practice expected before August 2025, when the relevant AI Act provisions take effect.
EU Commission clarifies AI Act’s relationship with other key laws
The European Commission also provided EU lawmakers with an early look at forthcoming guidelines that clarify how the AI Act will interact with other relevant laws, including product safety laws such as the Medical Devices Regulation, the GDPR, the Digital Markets Act, the Digital Services Act and EU copyright rules. The Commission highlighted the AI Act’s complementarity with product safety requirements, the mutual reinforcement of obligations for platform regulations on general-purpose AI systems, and practical examples for GDPR compliance. The guidelines will also address anticipated challenges around copyright, illustrating how the AI Act will work in tandem with existing regulatory frameworks to ensure comprehensive oversight of AI across sectors.
AI Office launches tender for third-party AI safety support
The EU’s AI Office launched a €9 million tender (called Technical Assistance with AI Safety) seeking third-party contractors to provide technical assistance for monitoring compliance with the EU AI Act, particularly focusing on risk assessment of general-purpose AI models at the Union level as outlined in Articles 89, 92 and 93. The call for tender, divided into six lots, aims to bring in external expertise to support the Office’s oversight activities, ensuring that AI systems meet safety, transparency and accountability standards set by EU regulations. The tender closes on 27 May 2025. A separate tender for the AI Act Service Desk closed on 19 May.
Italy fines AI developer €5m over GDPR breaches
Italy’s Data Protection Authority (Garante) has imposed a €5m fine on US-based Luka Inc., the developer of the AI chatbot Replika, for serious violations of the EU’s General Data Protection Regulation (GDPR). The regulator found that Luka failed to establish a valid legal basis for processing users’ personal data and provided an inadequate privacy policy that lacked transparency and clarity, especially for Italian-speaking users and minors. Critically, Luka did not implement effective age verification, allowing minors to access the chatbot despite claims to the contrary, and failed to assess or mitigate risks to vulnerable users, including children and those seeking emotional support. In addition to the financial penalty, the Garante has ordered Luka Inc. to bring its data processing operations into full compliance and launched a separate probe into the lawfulness of Luka’s AI training practices.
A look at what’s going on in the US
Trump administration overhauls AI chip export controls
The Trump administration rescinded a Biden-era restriction, due to come into effect on 15 May, which would have placed limits on the number of AI chips that could be exported to certain international markets, including the Middle East and the EU, without federal approval. Announcing the rollback, Commerce Undersecretary Jeffrey Kessler said that the Trump administration will work to replace the now-rescinded rule to pursue AI with “trusted foreign countries around the world, while keeping the technology out of the hands of [the US’s] adversaries.” The administration has said a replacement rule is forthcoming but has not yet indicated what it will contain.
US lawmakers intensify scrutiny of DeepSeek
A bipartisan House committee branded Chinese AI company DeepSeek a “profound threat” to US national security, citing evidence of close ties to the Chinese Communist Party, links with state-owned China Mobile, and the alleged siphoning of American user data to Beijing. The committee’s report also raises alarms over DeepSeek’s suspected acquisition of restricted Nvidia GPUs through intermediaries, potentially circumventing US export controls, and its use of US-developed AI model architectures.
DOJ launches antitrust case against Google’s search monopoly and AI power
The US Department of Justice launched a major antitrust case against Google, aiming to dismantle its search monopoly amid concerns that Google could leverage its AI products, such as Gemini, to further entrench its dominance. Prosecutors are seeking sweeping remedies, including forcing Google to end exclusive default search agreements with device manufacturers, requiring it to license search data to competitors, and potentially ordering the divestiture of its Chrome browser. The DOJ argues that Google's integration of AI into its search ecosystem creates a feedback loop that consolidates its market power, and warns that without intervention, Google could extend its monopoly into emerging AI-driven search technologies. In response, Google contends that these measures are excessive, arguing that users choose its services freely and that the proposed remedies would harm product quality and innovation.
Trump directs federal push for AI education
President Trump has issued an executive order instructing federal agencies to make AI education a top priority in primary and secondary schools, calling for increased teacher training, new student courses and the expansion of apprenticeship opportunities. The order also establishes a White House Task Force on AI Education to foster collaboration between the public and private sectors and introduces a “Presidential AI Challenge” aimed at inspiring innovation and engagement in AI learning across the nation’s youth.
Elsewhere…
UAE announces world first with integrated regulatory intelligence ecosystem
The UAE Cabinet has approved the creation of the first ever integrated regulatory intelligence ecosystem, powered by advanced AI, designed to modernise and accelerate lawmaking by up to 70%. This system will unify all federal and local laws, judicial rulings and public services into a dynamic national legislative database, enabling real-time tracking of the impact of laws, proactive legislative updates and alignment with leading international practices. Managed by the newly established Regulatory Intelligence Office, the ecosystem will leverage big data and machine learning to create a responsive legal framework, linking to global policy research centres and allowing legislators to benchmark and adapt UAE laws to global standards while maintaining national values.
Report finds AI disproportionately threatens women’s jobs
A recent report warns that AI-driven automation is likely to have a greater negative impact on women’s employment than men’s, with roles predominantly held by women, such as administrative, clerical and customer service positions, being especially vulnerable to replacement by AI technologies. The findings highlight the urgent need for targeted policy interventions and upskilling initiatives to ensure women are not left behind in the rapidly evolving job market.
IBA global report charts AI’s expanding role in the workplace
The latest annual report from the International Bar Association’s Global Employment Institute spotlights how AI is reshaping the world of work, based on research with lawyers from 53 countries. The report finds that the global labour market underwent significant shifts in 2023 and 2024, driven by changing workplace dynamics and rapid technological advancements. AI adoption has become more widespread, bringing notable gains in efficiency, personalisation and decision-making – but also raising new challenges around job displacement and ethics. The report emphasises that as AI becomes increasingly integral to business operations, employers and regulators must focus on upskilling, ethical considerations and legal frameworks to ensure that technology complements human creativity rather than replacing it.
Business announcements
Google releases 2024 Responsible AI Progress Report
Google published its sixth annual Responsible AI Progress Report highlighting major advancements in how the company governs, measures and manages AI risks throughout its development process. The report describes Google’s new governance structures, expanded bias mitigation strategies, and the deployment of privacy-preserving techniques across Google’s AI products as well as the rollout of AI provenance tools to enhance transparency.
Duolingo shifts to AI-first mode
Duolingo announced a sweeping transition to an "AI-first" business model, with CEO Luis von Ahn revealing that the language-learning platform will gradually phase out contract workers in favour of AI for tasks such as content creation and translation. This shift, detailed in an internal memo and shared publicly, means new hires will only be approved if the work cannot be automated, while AI will also play a growing role in recruitment and performance reviews.
Google DeepMind unveils AlphaEvolve AI agent
Google DeepMind introduced AlphaEvolve, an advanced AI coding agent that leverages the creativity of large language models and an evolutionary framework to autonomously discover, optimise and implement novel algorithms across a range of fields. Google says that unlike standard coding assistants, AlphaEvolve generates multiple solutions to a given problem, rigorously evaluates them, and iteratively refines the most promising ones, resulting in breakthroughs such as more efficient data centre management, improved chip design, and even the discovery of new mathematical solutions.
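For readers curious about the mechanics, the generate-evaluate-refine loop described above can be sketched in a few lines of Python. This is purely illustrative: the function names are our own, the "candidate programs" are reduced to numbers, and the LLM proposal step is replaced by random perturbation – it is not DeepMind's actual implementation, only the shape of an evolutionary search under an automated evaluator.

```python
import random

random.seed(1)  # deterministic for illustration

def propose_variants(parent, n):
    """Stand-in for the LLM step: perturb the current best candidate.
    (AlphaEvolve would instead have a language model rewrite code.)"""
    return [parent + random.uniform(-1, 1) for _ in range(n)]

def evaluate(candidate):
    """Automated evaluator: higher is better.
    Here the 'problem' is simply approximating a target value."""
    target = 3.14159
    return -abs(candidate - target)

def evolve(generations=50, population=8, seed_candidate=0.0):
    """Keep the best-scoring candidate each generation and refine it."""
    best = seed_candidate
    for _ in range(generations):
        candidates = propose_variants(best, population) + [best]
        best = max(candidates, key=evaluate)
    return best

result = evolve()
```

Because the incumbent is always retained in the candidate pool, the score can only improve over generations – the same property that lets systems like AlphaEvolve iteratively refine only the most promising solutions.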
NetApp and NVIDIA join forces to streamline enterprise AI infrastructure
NetApp has partnered with NVIDIA to integrate the NVIDIA AI Data Platform into its AIPod solution, aiming to simplify and accelerate the deployment of agentic AI at scale for enterprises. This collaboration enables businesses to build secure, governed and scalable AI data pipelines for demanding workloads like retrieval-augmented generation and AI inferencing, addressing the growing need for unified infrastructure as AI adoption surges.
OpenAI and Microsoft renegotiate partnership
OpenAI and Microsoft are reported to be renegotiating the terms of their multibillion-dollar partnership to support OpenAI’s transition to a public benefit corporation and pave the way for a potential IPO, while ensuring Microsoft retains long-term access to OpenAI’s advanced AI models and technology beyond their current agreement, which runs through 2030. Central to the talks is the balance between the equity stake Microsoft will hold in the restructured for-profit entity and the extent of its future technology access, with Microsoft reportedly willing to give up some equity in exchange for continued collaboration on new AI developments.
Alibaba’s ZeroSearch lets AI models “google” themselves, slashing training costs
Alibaba unveiled ZeroSearch, a novel AI training method that enables large language models to simulate search engine interactions internally, eliminating the need for costly API calls to commercial search engines like Google and reducing training expenses by up to 88%. Traditionally, training AI models for search tasks required hundreds of thousands of live queries to external search engines, incurring significant costs and introducing unpredictable data quality. ZeroSearch is reported to address this by having AI models generate simulated search results based on their pre-existing knowledge, providing both cost savings and more consistent training data.
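The core substitution reported above – swapping a paid, external search call for documents the model generates from its own knowledge – can be sketched as follows. The function names and the toy knowledge base are hypothetical illustrations, not Alibaba's implementation; in ZeroSearch the "knowledge base" is the language model itself.

```python
# Toy stand-in for the model's pre-existing knowledge.
KNOWLEDGE = {
    "capital of france": "Paris is the capital and largest city of France.",
    "speed of light": "Light travels at about 299,792 km per second.",
}

def live_search(query):
    """The traditional pipeline: a metered call to a commercial search API,
    incurring per-query cost and unpredictable result quality."""
    raise RuntimeError("costly external call - avoided during training")

def simulated_search(query, noisy=False):
    """ZeroSearch-style stand-in: generate a 'retrieved' document from the
    model's own knowledge; optionally degrade it to mimic noisy results."""
    doc = KNOWLEDGE.get(query.lower(), "No relevant document found.")
    return doc[: len(doc) // 2] + "..." if noisy else doc

# The training loop consumes simulated results instead of paying per query.
results = [simulated_search(q) for q in ("Capital of France", "Speed of light")]
```

The `noisy` flag hints at one reported advantage of simulation: the trainer controls result quality, rather than inheriting whatever a live search engine happens to return.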
As AI continues to evolve and shape more aspects of our world, staying on top of the latest legal, regulatory and technical changes is more important than ever. If you’d like to chat about what these developments could mean for your business, feel free to get in touch with Tim Wright and Nathan Evans.