The Future of Life Institute recently issued an open letter asking AI labs to pause research on AI systems more powerful than GPT-4 until more guardrails can be put around them. "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs", the letter stated.
The letter was signed by Elon Musk, owner of Twitter and founder of SpaceX and Tesla; Steve Wozniak, co-founder of Apple; Yuval Noah Harari, writer and historian; and many others. At the same time, it drew criticism from researchers including Yann LeCun, VP & Chief AI Scientist at Meta, and Timnit Gebru, researcher at the DAIR Institute.
Andrew Ng, founder of DeepLearning.AI, organized a fireside chat with Yann LeCun, in which the two shared their thoughts on why the six-month pause is a bad idea.
"Calling for a delay in research and development smacks me of a new wave of obscurantism essentially", said Yann LeCun sharing his first thought when he heard about the open letter. "Why slow down the progress of knowledge and science?" he asks.
According to Andrew Ng, recent developments in AI and deep learning have fueled the growth of numerous products, and slowing down research would hold back that progress.
The letter's signatories worry that AI systems on the path to AGI will wipe out humanity. Yann LeCun, however, believes that the AI models of years from now will not be built on today's blueprint; AI will instead develop gradually, and researchers are intelligent enough to build models that align with human values.
Andrew Ng believes that unrealistic expectations of AI models are the primary reason for the current hype around AI solutions, pointing to the optimism he himself once had about self-driving cars.
"We humans are language oriented. We believe that when something is fluent, it is also intelligent", said Yann LeCun speaking about present AI models. He added that AI systems currently available need a more superficial understanding of reality. "The lack of this understanding is why these models can essentially produce nonsense that sounds convincing", said LeCun.
Andrew Ng opines that some of the recommendations in the open letter are not implementable, especially the call for AI labs to slow down. LeCun believes there is no point in regulating the technology and its R&D.
LeCun illustrated the point with an analogy he had previously shared on Twitter: the year is 1440, and the Catholic Church has called for a six-month moratorium on the use of the printing press and movable type, fearing what could happen if commoners got access to books.
He added that regulating R&D would be a step backward for the progress of humanity, when AI could instead bring about a new Renaissance. "Why would we want to stop that?" he asked.
Yann LeCun also cited his article "Don't Fear the Terminator", published in Scientific American four years ago, which argued that an AI apocalypse is unrealistic. "You need to have the motivation to dominate to actually dominate", said LeCun.
"And it exists in humans who are social species. But non-social species do not have this urge. We have designed AI systems in such a manner that they are submissive", he added.