A group of researchers set out to examine how AI technologies for drug discovery could be misused, and their experiment confirmed their concerns. To their surprise, the AI suggested 40,000 potentially toxic molecules, some of them similar to VX, in less than six hours. VX, short for "venomous agent X", is the most potent of all nerve agents; a few salt-sized grains are enough to kill a person. Time to take off the rose-tinted glasses, right?

The researchers, including Fabio Urbina, Filippa Lentzos, Cédric Invernizzi and Sean Ekins, were shocked by the results and published their findings in a paper titled 'Dual use of artificial-intelligence-powered drug discovery.' According to the researchers, the outcome was unexpected because the datasets used to train the AI did not include these nerve agents. The authors of the paper also spoke to The Verge.

"In the process, the AI designed not only VX but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases," the paper states. This is a wake-up call for the AI community in the drug discovery domain.

Risks with AI

High-profile physicist Stephen Hawking's warning about AI, delivered at the Web Summit technology conference in Lisbon, Portugal, seems apt here.

As AI becomes more sophisticated and pervasive, the voices warning of its current and potential risks grow increasingly loud. The automation of certain jobs, gender and racial bias, and the development of autonomous weapons are just some of the risks associated with AI. Other threats include:

Deep Fakes: Imagine a future where people remain sceptical of everything they read, listen to, or see online. Would that situation shake the foundations of democracy? Today, an audiotape of any given politician can be modified using machine learning, a subset of AI, to make it appear as if that person voiced racist or sexist beliefs when, in reality, they did not. Video synthesised by artificial intelligence can easily confuse the general public and is capable of destroying the entire political campaign of a leader or party.

Privacy concerns: Companies feed massive amounts of data into AI-driven algorithms, and that data remains vulnerable to breaches. AI can also generate personal data without the consent of the individual, while facial recognition systems raise further privacy concerns. As a result, several countries, and US states including California, Oregon, and New Hampshire, have passed legislation restricting the use of facial recognition cameras.

Widening socioeconomic inequality: AI-driven job loss may widen the socioeconomic inequality that already exists. Work is a driver of social mobility, but the takeover of repetitive tasks by AI will hit workers in low-skilled and manual jobs hardest. People in higher positions or well-paid jobs, on the other hand, can more easily retrain to meet the requirements of future work.

AI Bias: Recent examples of cultural and gender algorithmic bias in AI technologies highlight how AI can abandon the principles of trustworthiness and inclusiveness. Take, for instance, facial identification systems from big tech, which are seriously flawed in plainly identifiable ways, according to research from the University of Maryland: systems from companies like Amazon, Google, and Microsoft are more likely to fail on older and darker-skinned people than on their younger, whiter peers. This bias, according to the study, is not confined to skin colour but also extends to general physical appearance.

Conclusion

Technology is here to augment humans, not replace them, so the future of AI must follow a human-centric approach. In the words of AI researchers Fei-Fei Li and John Etchemendy: "Our future depends on the ability of social- and computer scientists to work side-by-side with people from multiple backgrounds — a significant shift from today's computer science-centric model."

"The creators of AI must seek the insights, experiences and concerns of people across ethnicities, genders, cultures and socio-economic groups, as well as those from other fields, such as economics, law, medicine, philosophy, history, sociology, communications, human-computer interaction, psychology, and Science and Technology Studies (STS). This collaboration should run throughout an application's lifecycle — from the earliest stages of inception through to market introduction and as its usage scales," they wrote in a blog post. There is no doubt that AI has the potential to realise our shared dreams of a better future for all of humanity, but it needs to be channelled in the right direction.
