The roll-out of Artificial Intelligence (AI) systems has now moved beyond research and development (R&D), as many public and private organisations across the world have begun deploying AI systems in key sectors such as healthcare and banking. This has led to the sobering realisation that the data that allows AI systems to function efficiently and provide valuable insights can also be skewed in harmful ways, whether intentionally or unintentionally.
Given that data is the currency that lends value to AI systems, any manipulation or misuse of data could lead to extremely damaging outcomes. The extent of harm may vary, but even the misuse of AI in social media or advertising (to say nothing of healthcare or banking) can have far-reaching effects, and AI can also be actively used to commit crimes. Ensuring network security and secure systems for the collection, storage, and use of data is therefore essential to the safe and productive use of AI systems in society.
On the other hand, AI may also hold the answers to a more secure internet. A report that surveyed 85 executives across enterprises found that 69% believed that cybersecurity breaches cannot be stopped without the use of AI, and that 73% of the enterprises were testing uses of AI in cybersecurity. The development of AI systems therefore also provides methods for solving complex problems that cannot be addressed by traditional systems based on fixed algorithms.
The policy challenges for regulatory regimes therefore stem from the lack of research into the uses of AI systems and their impact on cybersecurity. However, as cyber-attacks are on the rise, some countries have moved towards treating network security as a central issue when regulating the use of AI. One of the main approaches that has emerged is to incorporate security from the design and development of AI onwards, i.e. a ‘security by design’ approach. Rather than focusing only on mitigation or counter-attacks after the fact, this approach develops the AI system so that it is prepared to withstand such attacks.
Another policy challenge concerns the development of standards and certification procedures for AI systems. Many believe that such standards and certification procedures must focus on improving the reliability of AI systems by guiding users towards in-house development of AI protection systems, training those systems on adversarial data, and constant monitoring. At the global level, the OECD Principles on AI include robustness, security and safety as a principle, which requires that AI developers manage risk at every stage of AI development to make systems as secure as possible. Developers must also ensure that they can trace the data sets used by AI systems, the process for selecting data, and the decisions taken by the systems. This will help ensure that AI systems can “withstand or overcome adverse conditions, including digital security risks”.
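Although this report is policy-focused, it may help to make concrete what “training on adversarial data” can involve in practice. The following is a minimal, illustrative sketch of adversarial training using the Fast Gradient Sign Method (FGSM), assuming a PyTorch setting; the model, data, and the epsilon parameter are hypothetical placeholders rather than a prescribed implementation.

```python
# Minimal, illustrative sketch of adversarial training (FGSM) in PyTorch.
# All names and hyperparameters are hypothetical placeholders.
import torch
import torch.nn as nn

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.1):
    """Generate adversarial examples via the Fast Gradient Sign Method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Step in the direction that most increases the loss; clamp to [0, 1],
    # assuming inputs are normalised to that range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, loss_fn, optimizer, x, y, epsilon=0.1):
    """One training step on a mix of clean and adversarial examples."""
    x_adv = fgsm_perturb(model, loss_fn, x, y, epsilon)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design intent is that the model sees deliberately perturbed inputs during training, so that small, adversarially chosen changes to the data are less likely to flip its decisions in deployment.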
In light of the above, the instant report captures the regulatory and policy decisions of various countries at the intersection of AI and cybersecurity.