The world has long weighed the threat of an accidental AI takeover. We tend to conflate intelligence with the drive to achieve dominance, and the confusion is understandable: throughout evolutionary history, intelligence was the primary route to social dominance. Hence, the prospect of intelligence that outperforms humans is genuinely terrifying.
The Future of Life Institute has issued an open letter asking AI labs to pause the training of AI systems more powerful than GPT-4 until stronger guardrails can be put around them.
"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs", stated the report.
The report highlighted the widely endorsed Asilomar AI Principles, marking that advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. According to the Future of Life Institute, this level of planning and management is not happening.
"Should we let machines flood our information channels with propaganda and untruth?", asks the letter.
The Future of Life Institute is a non-profit organization that works to reduce global catastrophic and existential risks facing humanity, particularly the existential risk from advanced AI. It is currently focused on "major risks" in the development, use and governance of transformative technologies.
The institute believes that how powerful technology is developed and used will be the most important factor in determining the prospects for the future of life. Hence, its mission is to ensure that technology continues to improve those prospects.
The open letter has drawn criticism from leading researchers such as Yann LeCun. "Nope. I did not sign this letter. I disagree with its premise," said LeCun on Twitter.
"The year is 1440, and the Catholic Church has called for a six-month moratorium on the use of the printing press and movable type. Imagine what could happen if commoners get access to books," LeCun wrote, adding that they could read the Bible for themselves and society would be destroyed.
Expressing his discontent, LeCun also reposted a comment calling the letter "such a mess of scary rhetoric and ineffective/non-existent policy prescriptions", which noted that "there are important technical and policy issues, and many of us are working on them."
According to Princeton computer science professor Arvind Narayanan, the open letter, ironically but unsurprisingly, further fuels AI hype and makes it harder to tackle real, already occurring AI harms. He also suspects it will benefit the companies it is supposed to regulate rather than society.
Timnit Gebru, who was pushed out of Google over a paper critical of large language models and now leads the DAIR Institute, strongly criticized the letter. In opposition, she and her co-authors released the "Statement from the listed authors of Stochastic Parrots on the 'AI pause' letter".
Calling the letter "horrible", Gebru tweeted “while there are a number of recommendations in the letter that we agree with, these are overshadowed by fearmongering and AI hype, which steers the discourse to the risks of imagined "powerful digital minds" with "human-competitive intelligence."
Gebru argued that the "hypothetical risks" mentioned in the letter are the focus of a dangerous ideology called longtermism.
The Future of Life Institute's open letter currently has more than 1,100 signatories, including Twitter owner and SpaceX and Tesla chief Elon Musk; Apple co-founder Steve Wozniak; writer and historian Yuval Noah Harari; and Tristan Harris of the Center for Humane Technology. The letter's signatories say the pause they are asking for should be "public and verifiable and include all key actors".
Although AI has the potential to revolutionize the world as we know it, experts are concerned about the "significant risks to humanity" that could result from uncontrolled AI. Even Sam Altman, the head of OpenAI, the company behind ChatGPT, has expressed concern that the technology could be used for large-scale disinformation or cyberattacks. As a result, he has called for time to adapt to the potential risks associated with powerful AI.
The petition gained significant support from researchers at Google's DeepMind, Stability AI CEO Emad Mostaque, US AI experts and academics, and engineers at OpenAI partner Microsoft.