Women in AI (WAI)

AI Sandboxes: Where Innovation Meets Regulation


Regulatory sandboxes have evolved over the past decade, originating from the UK Financial Conduct Authority's post-crisis initiative to foster innovation in the financial sector. These controlled environments allow regulators and innovators to test disruptive technologies such as AI, blockchain, and IoT while identifying and addressing regulatory gaps. Despite the model's global success, the AI-specific sandboxes introduced by the European Commission's AI Act face challenges, including minimal regulatory flexibility and high costs. This article highlights the potential of regulatory sandboxes to help bring new, innovative solutions to market.


This blog is brought to you by Katerina Yordanova, an experienced lawyer and researcher with over a decade of expertise in IT and human rights law. She is currently affiliated with the KU Leuven Centre for IT & IP Law and imec. Her work centers on the regulation of disruptive technologies, with a particular focus on artificial intelligence. Katerina is pursuing a PhD concentrating on regulatory sandboxes for AI, and also contributes as an external consultant to policy lab development at the GATE Institute.


Read below what she writes about this fascinating topic, which she is researching for her doctoral degree.


 

Regulatory sandboxes have been evolving for around a decade. Following the 2008 financial crisis, the UK Financial Conduct Authority (FCA) shifted from its traditionally hands-off approach to actively fostering innovation in the financial sector. The objective was clear but challenging: to support the introduction of innovative products, services, and business models while ensuring a safe and controlled market entry.

The term “sandbox” originates in computer science, where it describes a secure, controlled environment that restricts access, allowing programs to run isolated from the broader system to avoid unintended impacts. Regulatory sandboxes (RS) similarly allow regulators and innovators to collaborate in testing disruptive technologies—such as AI, blockchain, and IoT—while identifying and addressing potential regulatory gaps. This collaboration gives regulators early insights into new technologies and upcoming challenges they might face.
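To make the computing origin of the term concrete, below is a minimal sketch, in Python, of what sandboxing can look like in software: untrusted code runs in a separate child process with hard caps on CPU time and memory, so whatever it does stays isolated from the broader system. The script name untrusted_script.py and the specific limits are illustrative assumptions, and real-world sandboxes (containers, virtual machines, seccomp filters) are considerably more robust.

```python
import resource
import subprocess

def run_sandboxed(script_path: str, timeout_s: int = 5) -> subprocess.CompletedProcess:
    """Run an untrusted Python script in a child process with hard resource caps."""

    def limit_resources():
        # Applied in the child just before exec (POSIX only):
        # cap CPU time at 2 seconds and address space at 256 MB.
        resource.setrlimit(resource.RLIMIT_CPU, (2, 2))
        resource.setrlimit(resource.RLIMIT_AS, (256 * 1024 ** 2, 256 * 1024 ** 2))

    return subprocess.run(
        ["python3", "-I", script_path],  # -I: isolated mode (ignores env vars, user site dir)
        preexec_fn=limit_resources,
        capture_output=True,
        text=True,
        timeout=timeout_s,  # wall-clock kill switch enforced by the parent
    )

if __name__ == "__main__":
    # "untrusted_script.py" is a hypothetical file used for illustration.
    result = run_sandboxed("untrusted_script.py")
    print(result.stdout or result.stderr)
```

The analogy carries over directly: the tested program, like an RS participant's product, gets room to run, but only within limits that the supervising environment, like the regulator, sets and monitors.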

This model has been warmly received, particularly by startups and SMEs, which benefit from both regulatory guidance and flexibility during RS testing. Within five years, over 70 sandboxes were operating across more than 50 jurisdictions worldwide, expanding beyond FinTech into areas such as privacy, energy, and transportation.

Given this success, stakeholders were surprised when the European Commission’s 2020 White Paper on AI did not prioritize RSs as a tool for supporting innovation and SMEs. In response, the Commission incorporated provisions for AI-specific sandboxes in the proposed AI Act. However, this chapter underwent extensive revisions by both the Council of the European Union and the European Parliament, resulting in a final version almost twice the length of the original, though not as concise or coherent as many had hoped.

Firstly, the Commission retained the authority to adopt an implementing act that would outline specific details of the AI RSs, such as eligibility and selection criteria, application procedures, terms and conditions, and more. This could prove problematic, however, as the AI Act requires Member States to establish or participate in at least one AI RS by 2 August 2026. Realistically, this means that some Member States may have less than a year to do so, which may require changes to their existing legislation, for example regarding the division of regulatory competences in states with a federal system.

Secondly, opting for technology-specific rather than technology-neutral RSs would lead to a more complex application and testing process, requiring the participation of multiple regulators and coming at a much higher cost. It may also create a degree of competition between sectors, based on the number of accepted candidates working on AI products and services designed for each specific market.

Thirdly, comparing RSs worldwide, it is clear that EU RSs allow minimal flexibility from existing regulatory rules. This lack of leeway raises legitimate concerns about how attractive AI RSs in the common market might be compared to those in jurisdictions like the UK or Singapore. The limitation stems primarily from the EU's conservative approach to RSs and its multi-level regulatory framework, which prevents national regulators from granting exemptions from rules set by EU law. While Article 59 of the AI Act attempts to introduce an exemption for processing personal data to develop certain AI systems in the public interest within an AI RS, the article's text significantly restricts the scope of applicable cases. Among other requirements, the AI system must be developed by a public authority or another natural or legal person to safeguard a substantial public interest in one of five distinct areas; its necessity must be demonstrated; monitoring mechanisms must be established; appropriate technical and organizational measures must be implemented; and logs of personal data processing must be maintained for the duration of participation in the sandbox. As a result, relying on this provision could be challenging, which may deter potential participants in the RS.

Despite the many issues that are yet to be resolved regarding AI RSs in the EU, the experience of testing AI products and services in other types of RSs around the world demonstrates that the tool does have the potential to facilitate bringing new, innovative solutions to market in a safer manner, benefiting all the stakeholders involved. At the same time, we need to steer clear of the temptation to use RSs as a compliance tool. Their limited scale, high cost, and the regulations surrounding the lawful provision of legal counseling, along with the associated liabilities, suggest there may be better options, including those offered under the umbrella of innovation hubs. The key appears to be finding the balance between compliance and innovation in the quest to solve real problems, rather than an overpriced checkbox exercise.


 

Collaborate with us!

As always, we are grateful to you for taking the time to read our blog post.

If you want to share news relevant to our community and to those who read us from all corners of the worldwide WAI community, and if you have an appropriate background working in the field of AI and law, reach out to Silvia A. Carretta, WAI Chief Legal Officer (via LinkedIn or via e-mail at silvia@womeninai.co), or to Dina Blikshteyn, for the opportunity to be featured in our WAI Legal Insights Blog.


Silvia A. Carretta and Dina Blikshteyn

- Editors
