Technology has always been a constant source of uncertainty, risk, change, and, in many cases, disruption. It requires regulators to make numerous complex decisions.
Over the last two decades, lawmakers and regulators have become increasingly interested in adopting experimental, flexible, temporary, and agile regulatory instruments. In 2018, there were more than fifty statutes in the Netherlands allowing regulators to adopt experimental regulations. Experimentalist governance is regarded as an important theory of transnational governance that allows a broadly agreed set of framework goals to be implemented in a diverse and multilevel context.
The choice of approach to AI regulation has been one of the significant challenges regulators face. Two dominant approaches have emerged: fault-based liability, championed by scholars in the USA, and strict liability, favored in the EU across the Atlantic.
Recognizing the importance of AI regulation, a paper published by Cambridge University Press calls for a sandbox approach to balance the dual interests of regulation and innovation. AI regulators could borrow from the financial technology (FinTech) playbook, which was the first to apply a sandbox approach to regulating a new technology in the finance industry.
A sandbox allows commercial technology to be tested in an experimental phase, under the supervision of regulators but without the normal regulatory requirements. This reduces barriers to entry, allows the technology to prove its capabilities, and enables the regulator to better understand the technology and the business it regulates.
A sandbox both enables entrepreneurial development and informs regulatory policy. The “sandbox” concept is particularly utilized in financial innovation and FinTech, where a regulator enables experimental innovation within a framework of controlled risks and supervision.
A sandbox is a “controlled environment or safe space in which FinTech start-ups or other entities at the initial stages of developing innovative projects can launch their businesses under the ‘exemption’ regime in the case of activities that would fall under the umbrella of existing regulations or the ‘not subject’ regime in the case of activities that are not expressly regulated on account of their innovative nature, such as initial coin offerings, cryptocurrency transactions, asset tokenization, etc.”
Sandboxes offer several advantages to technology developers, including the ability to verify and demonstrate an innovative technology by testing it in a live environment with real consumers. In addition, successive trial-and-error testing within a controlled environment mitigates the risks and unintended consequences, such as previously unseen security flaws, that can emerge when a new technology gains market adoption.
Similar reasoning applies to AI activities. A sandbox allows regulators to develop policies and regulations to accommodate, supervise, and control sectoral innovation both within and outside the sandbox. Moreover, it helps them better understand the product itself.
The EC proposal adds sandbox regulations to AI liability regulation within the EU. Some authors have proposed the creation of an EU-level regulatory sandbox, which would make the EU Member States, collectively, a more attractive destination for innovation. Using a regulatory AI sandbox in the EU is not a novel concept.
Outside the EU, the UK and Norway have developed AI sandboxes. The Norwegian Data Protection Agency developed a data-focused sandbox as part of its National Strategy for AI to ensure the ethical and responsible use of data. The aim is to increase supervision of data usage and to inform policy development. Both the EC and the EP have taken matters a step further by recognizing regulatory sandboxing as a highly desirable tool for coping with the regulatory challenges presented by new technologies, especially AI. The EC did so in its Coordinated Plan on Artificial Intelligence, while the EP did so in its Resolution of 12 February 2019 on a comprehensive European industrial policy on AI and robotics (2018/2088(INI)).
The UK ICO developed a regulatory sandbox as a service “to support organizations creating products and services which utilize personal data in innovative and safe ways”. Benefits to organizations taking part in the sandbox include access to ICO expertise and support, increased confidence in the compliance of their finished product or service, a better understanding of the data protection frameworks and how these affect their business, increased consumer trust, and future ICO guidance.
The ICO, along with the Alan Turing Institute, established guidance providing organizations with practical advice to help explain the processes, services, and decisions delivered or assisted by AI to the individuals affected by them. This guidance was developed because organizations increasingly use AI to support or make decisions concerning individuals.
The guidance supports organizations with the “practicalities of explaining AI-assisted decisions and providing explanations to individuals”. Several participants in the ICO’s regulatory sandbox, such as Novartis and the Greater London Authority, reported high satisfaction with the experience.
The authors argue that applying sandbox regulation to the AI sector is more appropriate than a pure strict liability regime when it comes to high-risk activities. Given the constant and rapid developments in the AI field, there is a vital need to balance regulation of the sector, to protect citizens and society, with fostering innovation. That said, the use of sandboxes to regulate AI requires further study. Despite these challenges, sandbox regulation remains well suited to complement the strict liability regime in the context of high-risk AI, because the objective of sandbox regulation differs greatly from that of strict liability rules.