E-commerce has in many ways empowered consumers through easy access to competitively priced goods and services from a wide range of online businesses, both within India and abroad. This abundance of choice has translated into greater transparency on the price, quality and reputation of providers. Traditional information asymmetries between consumers and businesses are being reduced, enabling consumers to make better decisions. However, this transparency has also exposed consumers to new challenges and risks. In recent years, commercial platforms have transformed how they deliver services using AI, gathering vast amounts of user data, tracking users' digital presence and profiling them based on their behavioural patterns. AI's capability to analyse extensive data and generate personalised user experiences raises major concerns about consumer and data protection rights.
One of the biggest threats is targeted advertising. Companies use big data analytics to delve into individuals' behaviour patterns and personalities, collecting data from sources such as purchase history, website visits, social media likes and third-party cookies. The result is a highly customised advertising strategy that shapes consumer choices and personalises pricing and contract terms based on individual profiles. Drawing on principles of behavioural economics, such profiling exploits consumer biases and poses a significant threat to individuals' right to privacy and freedom of choice. In critical sectors like credit or insurance, where prices correlate with risk profiles, charging different prices to different consumers could further deepen existing inequalities. Biases embedded in data also persist in AI systems, producing discriminatory outcomes in AI-driven advertising; a familiar example is demographic targeting based on past behaviour, such as beauty brands aiming anti-ageing products at women aged 25 to 35.
Another significant concern arising from the use of AI in e-commerce and digital platforms is the use of dark patterns, manipulative user-interface designs that undermine a consumer's decision-making ability. Dark patterns are commonly found in browsers, search engines, apps and cookie consent notices, and they frequently lead to violations of data protection laws. The types most commonly encountered on e-commerce websites and apps include creating a false sense of urgency, fabricating social proof, preselecting options by default, disguising advertisements, forcing registration before a purchase, and repeatedly nagging users to buy the items in their cart.
To counter these threats, the Department of Consumer Affairs (DoCA) issued the Guidelines for Prevention and Regulation of Dark Patterns under Section 18 of the Consumer Protection Act, 2019. These guidelines aim to safeguard consumers from unfair trade practices in e-commerce and curb the use of deceptive design patterns that violate consumer rights. This is in line with the government's broader effort to limit the use of personal data to specified purposes, as laid down in the Digital Personal Data Protection (DPDP) Act, 2023 and its supplementary rules.
However, in the ever-evolving landscape of technological innovation, the integration of Artificial Intelligence necessitates vigilant regulatory oversight to safeguard consumer rights. Industry, along with the government, must ensure the responsible adoption of AI in business practices to maintain trust and growth in the ecosystem. The Consumer Protection Act, 2019 empowers the Central Consumer Protection Authority (CCPA) to regulate unfair trade practices and all matters relating to violations of consumer rights, enabling it to proactively issue guidelines that protect consumers and establish ethical standards among industry stakeholders. The industry, for its part, should adopt a code of ethics upholding the principles of fairness, accountability, transparency, safety and privacy.
To build accountability, companies must ensure that AI-generated content is distinguishable from human-generated content, empowering consumers to make informed choices. Algorithmic transparency means that the outcomes a system generates, along with its capabilities and limitations, are clearly communicated to end users.
Transparency becomes especially crucial for AI chatbots, where product recommendations and grievance redressal draw on purchase history, preferences or product reviews. Moreover, safety must be ingrained in AI advertising practices: algorithms and systems should be rigorously tested to pre-empt discrimination or bias that would infringe consumer rights before they are deployed in business practices. Companies must also obtain explicit consent, in line with data protection regulations such as the DPDP Act, 2023, before leveraging personal information for targeted advertising.
Beyond the regulatory measures required for the responsible adoption of AI, stricter monitoring by both industry and government is needed to curb unfair business practices that use elements of digital choice architecture to subvert consumer decision-making. The government should consider establishing a Centre of Excellence (CoE), in collaboration with industry, to conduct in-depth research on using AI to detect dark patterns and unfair trade practices. This should be developed in line with the IndiaAI Mission to bolster India's AI innovation ecosystem and create a safer online environment for consumers. Furthermore, to raise consumer awareness of dark patterns and unfair trade practices, companies and the Department of Consumer Affairs must coordinate campaigns under the 'Jaago Grahak Jaago' programme that help consumers identify dark patterns and uphold their autonomy.