The term AI winter first appeared in a public debate at the annual meeting of the AAAI (American Association for Artificial Intelligence) in 1984.
At the meeting, Roger Schank and Marvin Minsky—two leading AI researchers who had survived the "winter" of the 1970s—warned the business community that enthusiasm for AI had spiralled out of control in the 1980s and that disappointment was inevitable. Three years later, the billion-dollar AI industry began to collapse.
Hype
The AI winter resulted from overpromising by developers, unrealistically high expectations among end users, and extensive promotion in the media. Yet even as AI's reputation rose and fell, the field continued to produce new and valuable technologies. In 2002, AI researcher Rodney Brooks said, "There's this stupid myth that AI has failed, but AI is all around you all the time."
In 2005, Ray Kurzweil agreed: "Many people still believe that the AI winter was the end of the story and that nothing has happened since then in the field of AI. Still, many thousands of AI applications are now deeply built into the infrastructure of every industry." Since the early 1990s, when enthusiasm and hope about AI were at their lowest, they have mostly risen. Then, around 2012, research and business communities became intensely interested in artificial intelligence, especially the subfield of machine learning. This renewed interest led to a massive increase in funding and investment.
AI integration
AI technology became widely used as a component of larger systems in the late 1990s and early 2000s, though it was rarely credited for these successes. As Nick Bostrom observed, "a lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful and common enough, it's not labelled AI anymore." Machine translation, data mining, industrial robotics, logistics, speech recognition, banking software, medical diagnosis, and Google's search engine are a few examples of the commercial success of AI-related technologies.
AI funding
Researchers and economists often judged the severity of an AI winter by tracking which AI projects were being funded, how much money they received, and from whom. Funding trends are often set by the major funding agencies of the developed world. Currently, most funding for AI research in the US and EU comes from DARPA and a civilian funding program called EU-FP7.
Institutional factors
AI research often draws on several different fields, and because of this interdisciplinary character, AI faces the same problems as other types of interdisciplinary research. Funding flows through the established departments, and when budget cuts come, there is a tendency to protect each department's "core contents" at the expense of less traditional, cross-disciplinary research projects.
Economic factors
Economic downturns worsen this "core contents" tendency, and during a crisis investors are more likely to put their money into less risky endeavours. Together, these forces can intensify a recession into an AI winter. It is worth remembering that the Lighthill report appeared during an economic crisis in the UK, when universities had to make budget cuts and the only question was which programs to eliminate.
Conclusion
Many philosophers, cognitive scientists, and computer scientists have made predictions about AI and where it may have failed in the past. For example, Hubert Dreyfus predicted as early as 1966 that the first wave of AI research would fail to fulfil the promises made to the public because it rested on false assumptions.
Furthermore, many now consider the AI winter to be over, as funding, development, deployment, and commercial use of AI have all grown substantially in recent years.