The Fourth Industrial Revolution is transforming everything. Industries are reinventing themselves. New jobs are emerging. The pace of innovation today is higher than ever before. And technologies like cloud computing and artificial intelligence (AI) are enabling the collection and processing of data at an unprecedented scale. AI has the potential to do a world of good and help society overcome some of its biggest challenges. But that potential can only be harnessed if data is collected, aggregated, shared and analyzed at scale. And this is where the complexity kicks in, because this is also where the conversation around AI and ethics begins. There are valid concerns around ownership of data, security, privacy, and the transparency of algorithms.
It’s not just enterprises, academia and governments but also individuals who need to be involved in analyzing and resolving these issues as AI’s growth accelerates and its impact deepens. As a society, we have a shared responsibility to create trusted AI systems. We will have to collaborate and reach a consensus on the principles and values that should govern the development and application of AI. We must ensure that AI-based technologies are designed and deployed in a way that earns the trust of both the people who use them and the people whose data they gather. Ultimately, we need to ensure that applications of AI and technology are improving the world.
Considering the risks, benefits and effects of these technologies, it is imperative that they are aligned with our society’s moral values and ethical principles. At Microsoft, we have identified a core set of six principles that should guide the ethical framework for AI development: AI systems should be designed with protections built around the core principles of (1) fairness, (2) reliability and safety, (3) privacy and security, and (4) inclusiveness, underpinned by the foundational principles of (5) transparency and (6) accountability.
"At Microsoft, we have identified a core set of six principles that guide our work – AI systems should be designed with protections for fairness, reliability and safety, transparency and accountability, privacy and security, and be inclusive."
India has a substantial stake in the AI revolution. To be a leader in AI and reap its social and economic benefits, we require more than just great tech momentum. What is needed now is a mature, balanced and progressive legal framework for data protection that can be agreed upon and adopted by all. The goal is an approach that promotes the development of technologies and policies that protect privacy while facilitating access to the data that AI systems require to operate effectively.
Data is essential for AI to help make informed decisions, and people will not share their data unless they are confident that their privacy is protected and their data is secure. Privacy isn’t just a pillar of trust; it is a business imperative. Privacy and data-driven innovation are compatible. As we work to balance AI-led innovation with the fundamental rights of individuals, we need a principled approach and adequate law enforcement to keep nations safe from cybercrime and other unwanted consequences. A few key aspects need to be considered here:
Manage Data, Manage Security: Data-led innovation cannot happen without adequate security controls. Security must follow data flows and meet international standards. Data security is, more importantly, a function of the processes in place, the access controls granted, and the data classification implemented, rather than of where the data is located. Major cyber-attacks are largely global, and cybercriminals look for vulnerable, unmanaged IT environments to breach. The security and privacy advancements of hyperscale cloud services and globally distributed data centers, by contrast, meet the highest international, regional and national standards and regulations, giving companies the assurance that they can stand by their commitments to protect and safeguard their customers’ data.
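To make this concrete, here is a minimal, hypothetical Python sketch of classification-driven access control, where what matters is the label attached to the data and the caller’s clearance rather than where the data physically resides. All names, labels and ranks below are illustrative assumptions, not any specific product’s API.

```python
from dataclasses import dataclass

# Illustrative classification levels, ordered from least to most restricted.
CLASSIFICATION_RANK = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

@dataclass
class Record:
    payload: dict
    classification: str  # the label travels with the data, wherever it is stored

@dataclass
class Principal:
    name: str
    clearance: str  # highest classification this caller may read

def can_access(principal: Principal, record: Record) -> bool:
    """Access is a function of labels and clearance, not storage location."""
    return (CLASSIFICATION_RANK[principal.clearance]
            >= CLASSIFICATION_RANK[record.classification])

# Usage: the same check applies whether the record sits in one region or another.
analyst = Principal(name="analyst", clearance="internal")
salary_row = Record(payload={"salary": 100000}, classification="confidential")
print(can_access(analyst, salary_row))  # False: clearance is below classification
```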
Enable AI and Security with Cross-Border Data Flow: India’s IT sector companies, tech startups and developers may need access to data not just within the country but also beyond geopolitical borders to build robust solutions and algorithms. For example, building an algorithm to detect and prevent diabetic retinopathy requires studying data from various ethnicities and locations. This makes a regime that supports cross-border data flows essential. At the same time, it is important to protect these flows through appropriate international measures and standards, both legal and technical. Local laws will also need to be interoperable with global standards or contracts that protect personal data regardless of its location. This makes it incumbent on data processing companies to ensure that the personal data they process is managed to a high level of data protection, regardless of where it is transferred, and to provide citizens recourse to the law in case of a breach of trust. Responding proactively or reactively to cybersecurity issues calls for protocols based on globalized, not localized, data; restricting cross-border data flows can only impede an effective and timely response.
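As a purely illustrative sketch of how such safeguards might be enforced in code, the snippet below gates a transfer on a recognized legal mechanism (for example, an adequacy finding or a standard contractual clause). The mechanism names, country codes and mappings are assumptions made for illustration, not a statement of any actual legal regime.

```python
# Hypothetical transfer-safeguard check: a transfer proceeds only when a
# recognized legal mechanism covers the destination. All values are illustrative.
ADEQUATE_DESTINATIONS = {"EU", "JP"}  # assumed adequacy findings
CONTRACTUAL_SAFEGUARDS = {("IN", "US"): "standard_contractual_clauses"}

def transfer_allowed(source: str, destination: str):
    """Return whether a cross-border transfer has a legal basis, and which one."""
    if destination in ADEQUATE_DESTINATIONS:
        return True, "adequacy"
    mechanism = CONTRACTUAL_SAFEGUARDS.get((source, destination))
    if mechanism:
        return True, mechanism
    return False, "no recognized safeguard"

print(transfer_allowed("IN", "EU"))  # (True, 'adequacy')
print(transfer_allowed("IN", "US"))  # (True, 'standard_contractual_clauses')
print(transfer_allowed("IN", "XX"))  # (False, 'no recognized safeguard')
```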
Contextualize Data Sensitivity: The definition of what constitutes ‘sensitive personal data’ should be aligned with international norms. Sensitive personal data could include personal data revealing racial or ethnic origin, financial information, political opinions, religious or philosophical beliefs, sexual orientation, trade union membership, genetic or health information, or biometric data used to uniquely identify a natural person. However, instead of blanket restrictions, restrictions on the processing of personal data should correspond to the context in which the data is processed. For example, an employee’s name in an organization’s internal directory would typically not be considered sensitive and would require less privacy protection than the same name appearing on a list related to credit ratings.
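A minimal sketch of context-dependent sensitivity follows, using the employee-name example above. The contexts, field names and levels are hypothetical; the point is only that the same data element can warrant different protection depending on how it is processed.

```python
# Hypothetical context-dependent sensitivity: the same field ("name") is
# treated differently depending on the processing context.
def sensitivity(field: str, context: str) -> str:
    # Categories treated as sensitive regardless of context (illustrative list).
    always_sensitive = {"health", "biometric", "religion", "sexual_orientation"}
    if field in always_sensitive:
        return "sensitive"
    # Contextual rule: a name in an internal directory needs less protection
    # than the same name on a credit-rating list.
    if field == "name":
        return "sensitive" if context == "credit_rating_list" else "low"
    return "low"

print(sensitivity("name", "internal_directory"))    # low
print(sensitivity("name", "credit_rating_list"))    # sensitive
print(sensitivity("health", "internal_directory"))  # sensitive
```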
Promote Corporate Responsibility Through Documented Risk Assessments: Regulators, advocates, academics, and consumers around the world are increasingly skeptical, believing that the shortcomings of the notice-and-choice model of privacy protection (i.e., consent, opt-in and opt-out approaches) enable companies to do what they want with personal data without sufficiently protecting privacy. The implication is that companies design user interfaces that steer individuals’ decisions, and that the sheer volume of processing decisions, combined with the lack of information about the processing that takes place behind the scenes, makes it difficult for individuals to protect themselves effectively. To counter the perception that companies are exploiting consumers in this way, companies should be required to protect consumers on an ongoing basis through continuous, risk-based analyses designed to ensure that individuals are protected throughout the online experience. Conducting rigorous, documented risk assessments, which can be reviewed on request by relevant government authorities, is fundamental to ensuring that consumers are protected and safeguarded. Identified risks should be mitigated through documented safeguards, such that the benefits of processing personal data outweigh the residual risks.
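One hedged way to picture such a documented, reviewable risk assessment is as a structured record pairing each identified risk with a safeguard and a residual score, with processing approved only if the expected benefit outweighs the total residual risk. The scoring scales and example values below are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    severity: int    # 1 (low) to 5 (high), an assumed scale
    safeguard: str   # the documented mitigation
    residual: int    # severity remaining after the safeguard is applied

@dataclass
class RiskAssessment:
    purpose: str
    benefit_score: int  # assumed 1-10 estimate of the processing benefit
    risks: list

    def approved(self) -> bool:
        """Processing proceeds only if benefit outweighs total residual risk."""
        return self.benefit_score > sum(r.residual for r in self.risks)

assessment = RiskAssessment(
    purpose="train diabetic-retinopathy model",
    benefit_score=8,
    risks=[
        Risk("re-identification of patients", 4, "de-identify and aggregate", 1),
        Risk("unauthorized access", 3, "encryption plus access controls", 1),
    ],
)
print(assessment.approved())  # True: benefit (8) exceeds total residual risk (2)
```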
Given the pace at which AI technologies are evolving, simple steps can guide us into a safer and smarter world. Trusted systems can help the country gain access to vast pools of data, leading to new AI solutions that make an impact on a global scale. They also help our startup ecosystem grow faster and provide solutions not just for India but for the world. However, we must continue to be vigilant about assessing and addressing potential risks in the responsible design and deployment of AI technologies. At the end of the day, what matters is not what AI can do but what AI should do.