As governments around the world grapple with the complexity of regulating artificial intelligence (AI), much of today's focus is on formulating AI policy frameworks that can keep the technology in check. Amidst stark warnings on the dangers of AI from global figures such as tech titan Elon Musk and historian Yuval Noah Harari, one societal element deserves far more attention within the AI policy discourse: trust. Trust is vital to human civilisation, cardinal to democracy and organic to media. It now faces a damning breach from new AI tools and technologies that run rampant on social media, often masquerading as human beings or ‘human-generated content’. In the digital space, the difference between who is human and what is artificial is slowly disappearing, opening a new Pandora’s box.
AI is fundamentally different from any previous technology in history: it is the first technology that can make decisions by itself, and the first tool that can generate new ideas on its own, be it text, music or paintings. It is also the first technology that can take power away from human beings, and we have not even begun to understand the psychological impact of AI on people, both at an individual level and at a larger societal scale. Today, users of social media and the wider Internet have no idea whether the other ‘person’ they are communicating with online is actually a human being or an AI bot.

The proliferation of AI tools capable of generating life-like images, text and videos has upended the long-held status quo of societal trust. Using deepfake technology, a type of synthetic media in which AI manipulates or generates content that appears authentic, these tools can masquerade as any politician, world leader, film star or ordinary person. Malicious actors have already used deepfakes to create viral videos of former US President Barack Obama and of Meta CEO Mark Zuckerberg. If such malicious use of AI tools remains unchecked, it can and will destroy public trust in institutions and in democracy itself. The world witnessed early signs of this in the 2020 US Presidential Elections; what of the upcoming US Presidential Elections scheduled for 2024, or the Indian General Elections in 2024? On a larger scale, unregulated AI tools can cause tremendous harm, especially as digitisation progressively permeates democratic institutions, making core functions such as elections vulnerable to inauthenticity and disinformation.
A breach of societal trust on this scale is the new Pandora’s box. Trust underpins almost all human societal functions and embodies the social contract between citizens and their governments worldwide. That contract holds only as long as societal trust endures. Preventing a breach of this trust should be the central focus of AI policymaking.
AI has already captured global attention; the technology’s next shift is to capture the global imagination. And in this new AI arms race, societal trust is being offered up as a willing sacrifice.