The terms "neats" and "scruffies" refer to one of AI's ongoing philosophical debates. The debate concerns a basic question: how should an intelligent system best be designed?
History
Roger Schank coined the distinction between neat and scruffy in the mid-1970s. He used the terms to distinguish his own work on natural language processing (which represented commonsense knowledge as large, amorphous semantic networks) from the logic-based work of researchers such as John McCarthy.
Nils Nilsson addressed the issue in his 1983 presidential address to the Association for the Advancement of Artificial Intelligence, arguing that the field needed both: "Much of the knowledge we want our programs to have can and should be represented declaratively in some kind of declarative, logic-like formalism. Ad hoc structures have their place, but most of these come from the domain itself."
Neats vs. scruffies
"Neats" use algorithms based on formal paradigms such as logic, mathematical optimization, and neural networks.
"Scruffies" employ whatever variety of algorithms and methods achieves intelligent behaviour. Scruffy programmes may require a significant amount of hand-coding or knowledge engineering. According to scruffies, general intelligence can only be achieved by solving a large number of seemingly unrelated problems. The neat approach, by contrast, resembles physics in its reliance on simple mathematical models.
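The contrast can be sketched in code. In this toy illustration (all names and facts are hypothetical, chosen only for the example), the "neat" style answers questions with one uniform, logic-like inference rule over declarative facts, while the "scruffy" style accumulates hand-coded special cases:

```python
# Neat: knowledge is a set of declarative "X is a Y" facts,
# plus ONE general inference rule applied uniformly.
FACTS = {("penguin", "bird"), ("sparrow", "bird"), ("bird", "animal")}

def neat_is_a(x, kind):
    """Uniform transitive-closure inference over declarative facts."""
    if (x, kind) in FACTS:
        return True
    return any(neat_is_a(mid, kind) for (sub, mid) in FACTS if sub == x)

# Scruffy: behaviour emerges from many ad hoc, hand-engineered checks,
# each patched in to handle one problem at a time.
def scruffy_can_fly(x):
    if x == "penguin":           # special case added by hand
        return False
    if x.endswith("plane"):      # another ad hoc patch
        return True
    return neat_is_a(x, "bird")  # fall back on whatever works

print(neat_is_a("penguin", "animal"))  # True, via one general rule
print(scruffy_can_fly("penguin"))      # False, via a special case
```

The neat version generalizes for free (any new fact participates in the same inference), while the scruffy version handles awkward cases the formalism misses, at the cost of accumulating hand-maintained exceptions.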
Breakthroughs of neats and scruffies in AI
In the twenty-first century, neat solutions to machine learning and computer vision have been highly successful. However, these are narrow solutions to specific problems, leaving the issue of artificial general intelligence (AGI) unsolved. Furthermore, Karl Friston, discussing the free energy principle, refers to physicists as "neats" and AI researchers as "scruffies".
Conclusion
In AI research, the "neat" and "scruffy" labels have long been used to describe viewpoints, reasoning styles, and methodologies. The recent success of deep learning has reignited the debate between these two approaches, and some natural questions arise in this context. Given the history of AI, how can we characterize and classify these positions?
What are the implications of these positions for AI's future? Should AI research be conducted neatly or scruffily? These are questions that researchers are still debating.