Privacy

With the widespread use of social media and the Internet of Things (IoT), there have been increasing concerns over data leaks, control over content and the political influence of social networks. This has led to investigations into how social media platforms collect and use personal data, which in turn has reduced the level of trust users have in such platforms and digital services.


While privacy is universally acknowledged as a fundamental right, protecting individual privacy remains one of the foremost regulatory challenges. Increasingly, big data analytics and machine learning techniques are being used to draw insights from the vast amount of unstructured data available on the internet. At present, most jurisdictions recognise the right to privacy as an offshoot of the right to life and dignity. Any discourse on the right to privacy, though, must also factor in the possible benefits to society from AI, in terms of greater convenience, tailored solutions and efficiency gains in business.


AI systems can now engage in automated decision-making that has widespread political, economic and social impact. The ‘echo chamber’ effect on social media platforms has come to prominence recently; it is a product of profiling users through the scrutiny of their personal data and preferences with AI technologies. Regulatory regimes across the globe have therefore attempted to strike a balance between protecting and upholding the right to privacy and ensuring that the potential benefits of big data analytics and AI are not negated.


Most data protection legislation defines personal information as data points that identify an individual or a device, which may include identifiers such as biometric data, address, bank account details, location, government identification numbers, and genetic data. In this context, data protection regulations in various jurisdictions have charted out a series of data protection rights, which include the right to transparent communication and information, the right of access, the right to rectification, the right to erasure, the right to restriction of processing, the obligation to notify recipients, the right to data portability, the right to object and the right not to be subject to automated decision-making.


To address the potential erosion of privacy by AI, there have been calls for data privacy laws to buttress provisions relating to informed consent, the perceptibility or explainability of decision-making by machines, and the enforcement of protections against creating or exacerbating bias. In most cases, regulators and governments have sought to encourage companies developing AI to account for these data protection rights in the design of the AI system itself, since the nature of AI and the increasingly limited human intervention in AI systems make it difficult to regulate the outcomes (whether intended or not) once these systems are in place. There also appears to be widespread consensus on prioritising the use of anonymised data where possible. Data privacy regulations and policy statements also recommend conducting regular privacy impact assessments of AI systems, to maintain oversight of their functioning and to allow for course-correction where required. Some countries have also provided model impact assessment tools and examples of less intrusive machine learning models. However, it remains to be seen how far countries will uphold the principles of privacy in the face of the significant potential benefits that could arise from widespread adoption of AI systems.