Why all this fuss about data?

The internet revolution has engulfed every facet of our lives and driven us into the smart age, which is also the age of information: almost every activity we undertake involves an exchange of data in some form. That data is indeed the new oil can be justified by noting that some of the biggest companies today hold few tangible assets; data is their biggest asset. Think of Facebook, Uber, Airbnb or Alibaba, for instance. Hence, while it is essential to promote innovation and entrepreneurship in the digital sphere, it is equally crucial to protect the privacy of citizens at the same time. The government, too, has embraced tech-driven administration, as is evident from e-governance initiatives that provide public services online. The combined implication is that ever-increasing quantities of data, or big data, pertaining to individuals are stored on online servers and in cloud locations.

I get that the digital world is data-driven, but where's AI in the picture?

Now, while human beings may be the most evolved species with boundless potential, even their capacity to process data is limited. That is where AI comes into play: with the help of sophisticated algorithms, it can derive useful insights from this big data, giving data the value it commands in today's times. Conversely, the availability of open data is key to exploiting the full potential of AI, because algorithms are extremely data-hungry. This brings, along with the unquestionable advantages of efficiency and convenience, concerns of privacy and data protection.

Why should I care about data protection?

The challenge of security is bigger than one can fathom. An excellent illustration of the lack of anonymity in today's online realm is the photo above, an alteration of the popular 1993 New Yorker cartoon by the political advertising company Campaign Grid. It shows everything Campaign Grid can know about an internet user, including accurate personal details such as age, address, profession, economic status, political affiliation and likely shopping and travel plans. This stands in stark contrast to the early years of the internet, when a user's identity could easily be concealed. The Cambridge Analytica and Facebook scandal shows why there need to be stricter laws governing the data handling and sharing practices of social media companies: they maintain repositories of personal data on millions of users, and unethical use of such data can give some stakeholders undue influence in moulding public opinion.

So what are we doing about it?

Governments across the globe are rising to this challenge of data protection. In May 2018, the EU brought into force the General Data Protection Regulation (GDPR), which harmonises data protection law across the EU while still giving each member state scope to make its own alterations. By widening the definition of personal data and tightening the rules for obtaining valid consent to process it, the regulation puts users in greater control of their confidentiality. It is a promising step towards encouraging responsible data handling by businesses and government institutions alike. The GDPR applies to all companies that process the personal data of people in the European Union, irrespective of where the company is located. Furthermore, Article 17 of the GDPR extends its scope beyond collection and processing to the erasure of such data, giving beneficiaries what is called the 'right to be forgotten'. To ensure thorough understanding and observance of the norms, certain large organisations are required to appoint a Data Protection Officer (DPO), part of whose job is to conduct a data protection impact assessment (DPIA) before processing high-risk data. The all-encompassing concept of 'privacy by design' minimises the risk of data breaches; nevertheless, should a breach of personal data occur, there is a strict 72-hour deadline for notifying the authorities.

How about India?

In the absence of an express law, various piecemeal judicial and legislative developments govern data protection in India. Section 43A of the IT Act, 2000, which deals with cybercrime and e-commerce, is the basis for the Information Technology (IT) Rules, 2011, which prescribe 'reasonable security practices' for handling 'sensitive personal data or information'. These rules limit how organisations may collect, use, retain and disclose the personal data of individuals, and require them to have a privacy policy. The rules are not without their loopholes, though: they apply only to corporate entities, and the disclosure-with-consent clause is rendered powerless when it is a government agency that seeks the information. Hence, by leaving the government out of their ambit, the IT Rules give citizens only marginal control over their personal information. A relook at the adequacy of our laws was prompted by apprehensions, and a subsequent petition in 2012, accusing the Aadhaar scheme of violating the right to privacy. In the landmark Puttaswamy judgement of August 2017, the Supreme Court declared the right to privacy a fundamental right of all citizens. The government then recognised that informational privacy is an intrinsic element of privacy, and set up an expert committee under the chairmanship of Justice BN Srikrishna to draw up a data protection framework for India.

Tell me more about India's data protection laws?

The Indian version of the law borrows heavily from other laws around the world, particularly the rights-based European model and the liberty-based American model. It defines personal data as any information that can reveal the identity of an individual, and sets apart certain classes, such as financial data, biometric data, caste, and religious or political beliefs, as sensitive personal data. A defining principle of this framework is holistic application: it recognises the right to protection of data from both state and non-state actors, finally bringing the government and its bodies, too, within the ambit of a data protection law. The law gives citizens (data principals) a right to be forgotten by restricting organisations (data fiduciaries) from storing their personal information beyond a specified time. It also limits data processing by organisations that enjoy access to individuals' personal data to the bare minimum, and subjects such processing to informed consent, upholding the autonomy of data principals. While there are specified conditions under which fiduciaries may process data without consent, sensitive personal data is excluded from this scope. The law lays down strict guidelines for the transfer of data across national boundaries, providing for data localisation. Organisations with greater access to personal data, such as social media intermediaries, are classified as significant data fiduciaries and must appoint a DPO, placing data protection on a par with an organisation's other core functions. The bill also proposes a central Data Protection Authority to ensure compliance and provide a grievance redressal mechanism. Most importantly, the proposed law is required to be technology-agnostic, in order to adapt to the complex requirements of a dynamic digital world.

The expert committee's report, after taking public comments and recommendations into consideration, was developed into the Personal Data Protection (PDP) Bill, 2019, which was introduced in Parliament in December 2019 and has been referred to a standing committee, whose report is awaited.

What does it mean for the AI community?

There may be a trade-off between privacy and utility, but data protection laws are not intended to stifle the AI ecosystem. To keep innovation and research thriving, the way forward for the AI community is to find the right balance between compliance and data usage. For instance, there are efficient data anonymisation methods that allow useful analysis while concealing the identity of individuals. Techniques like differential privacy, federated learning and split learning have made it possible to train AI models on vast swathes of data without endangering privacy. On the other hand, there are also algorithmic ways to protect privacy, through the use of AI models to detect data leaks and cybersecurity breaches. AI, therefore, is also beneficial to privacy in certain ways.
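The differential privacy mentioned above can be illustrated with its simplest building block, the Laplace mechanism: calibrated random noise is added to a query's result so that no single individual's record noticeably changes the output. The sketch below is a minimal, hypothetical example (the names `laplace_noise` and `dp_count` and the sample data are invented for illustration), not a production implementation.

```python
import math
import random

def laplace_noise(scale):
    # Sample from a Laplace(0, scale) distribution via inverse-CDF sampling.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon):
    """Differentially private count query.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon yields epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical dataset: ages of individuals in a survey.
ages = [23, 35, 41, 29, 52, 61, 19, 44]

# Noisy answer to "how many respondents are 40 or older?"
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
```

Smaller values of `epsilon` mean stronger privacy but noisier answers; the analyst sees only the perturbed count, never the raw records.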

Sources of Article

Image from Flickr

