The history of artificial intelligence (AI) is more recent, and less advanced, than many politicians assume. The actual history is rather messy, but it runs roughly as follows. In 1936 the mathematician Alan Turing described a theoretical "universal" machine that could, in principle, carry out any computation, and it allowed theorists to explore the ramifications of the computational universe. Around 1948 Turing and David Champernowne wrote a chess program, Turochamp, but the computers of the day were too slow to run it, so Turing executed it by hand. In his 1950 paper "Computing Machinery and Intelligence" he asked directly whether machines could think, a question Norbert Wiener's cybernetics was exploring in the same years from the angle of feedback and control. What makes the Turing machine so fascinating is that nothing more than its simple design, a read-write head and a tape for storing data, is needed in theory to carry out any computation a human could. A personal computer can do the same thing, given enough time and memory.


The mass production of computer programs was not as simple as it looks today. In the 1940s and 1950s, universities, businesses, and government research institutes invested heavily in building specialized equipment, including software written for a particular machine. Few customers had any real knowledge of how a computer operated, so the firm that built the machines usually bore the initial development costs of the software as well, bundling it with the hardware. By a curious coincidence, it was also through the circulation of shared program libraries among universities and companies in those years, through user groups such as SHARE (founded in 1955), that the prevailing view of the economics of computer programs emerged. By the 1960s many firms complained that the difficulty of hiring programmers and of maintaining their existing software was holding back their computing plans, a situation soon labelled the "software crisis".

The Dartmouth workshop on artificial intelligence in 1956 is widely cited as the founding event of today's AI research. It was John McCarthy, who organized the workshop together with Marvin Minsky, Nathaniel Rochester of IBM, and Claude Shannon, who coined the term "artificial intelligence" to summarize researchers' efforts to capture reasoning as a set of rules that is explicit, transparent, and separable from context. A critical tool of this tradition is the heuristic: a rule of thumb, such as "prefer chess moves that capture material", that steers a program through a space of possibilities far too large to search exhaustively. This rule-based model of reasoning, refined by many researchers, gave rise to the first general-purpose symbolic AI programs and, later, to expert systems applied to everything from hospital care to automated contract administration and video game design.
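To make the idea of heuristic reasoning concrete, here is a minimal sketch of greedy best-first search in Python. It is not any historical program; the toy graph, the heuristic values, and all names are invented purely for illustration.

```python
# Illustrative sketch: greedy best-first search guided by a heuristic.
# The graph and heuristic values below are toy assumptions, not taken
# from any historical AI system.
import heapq

def best_first_search(graph, heuristic, start, goal):
    """Always expand the node the heuristic says is closest to the goal."""
    frontier = [(heuristic[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(
                    frontier,
                    (heuristic[neighbor], neighbor, path + [neighbor]),
                )
    return None

# Toy problem: find a route from A to D.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
heuristic = {"A": 3, "B": 1, "C": 2, "D": 0}  # estimated distance to goal
print(best_first_search(graph, heuristic, "A", "D"))  # ['A', 'B', 'D']
```

The heuristic never guarantees the best answer; it only prunes the search, which is exactly the trade-off the early symbolic AI programs made.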

Recent advances in AI began when the data and computing power available for training exploded. In 2012, deep neural networks trained on large datasets began to dominate benchmarks such as image recognition, and attention turned to DeepMind, a London company founded in 2010 and acquired by Google in 2014, whose mission was to mine the world's vast pool of data to advance artificial intelligence. The most famous fruit of that project was AlphaGo, a program that in 2016 defeated one of the world's strongest players at Go, the ancient Chinese board game.

Once machines began to beat humans at games like Go, researchers rapidly analyzed the machines' moves to see whether they could be improved further. Out of this work, a decision-making recipe for Go emerged that served as a gold standard: combine a tree search that simulates many possible continuations with neural networks that suggest promising moves and evaluate board positions. The idea was simple: let the computer make all of the decisions for a particular game by playing it out, over and over, in simulation.
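AlphaGo's actual pipeline, Monte Carlo tree search guided by deep policy and value networks, is far too large to reproduce here, but the core idea of choosing moves by simulating games can be sketched in a few lines. The flat Monte Carlo player below, written for tic-tac-toe rather than Go, is a deliberate simplification and not DeepMind's method; the board encoding and playout count are assumptions made for the example.

```python
# Illustrative sketch: flat Monte Carlo move selection, a simple ancestor
# of the tree search AlphaGo used. Toy tic-tac-toe, not Go.
import random

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_playout(board, player):
    """Play uniformly random moves to the end; return 'X', 'O', or None."""
    board = board[:]
    while True:
        moves = [i for i, cell in enumerate(board) if cell == "."]
        if winner(board) or not moves:
            return winner(board)
        board[random.choice(moves)] = player
        player = "O" if player == "X" else "X"

def best_move(board, player, playouts=200):
    """Pick the move with the highest win rate over random playouts."""
    moves = [i for i, cell in enumerate(board) if cell == "."]
    scores = {}
    for move in moves:
        wins = 0
        for _ in range(playouts):
            trial = board[:]
            trial[move] = player
            if random_playout(trial, "O" if player == "X" else "X") == player:
                wins += 1
        scores[move] = wins / playouts
    return max(scores, key=scores.get)

board = list("X.O.X....")  # X threatens the 0-4-8 diagonal
print(best_move(board, "X"))  # typically 8, completing the diagonal
```

AlphaGo replaced the uniformly random playouts with learned networks, which is what made simulation feasible on a board as vast as Go's.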

AlphaGo proved to be nearly unbeatable in practice, defeating the world-class player Lee Sedol 4-1 in 2016 and the world number one, Ke Jie, 3-0 in 2017, using strategies no human player had devised. It was not the first such milestone: IBM's Deep Blue had beaten world chess champion Garry Kasparov in 1997, and IBM's Watson had won the quiz show Jeopardy! against human champions in 2011. AlphaGo's elegant but distinctly inhuman playing style convinced many observers that, at games like these, humans can no longer beat the strongest AIs.

Now, researchers are exploring what AI can do without human help. In a 2018 study published in Science, DeepMind researchers showed that a successor system, AlphaZero, could master chess, shogi, and Go entirely through self-play, without any human game records at all. The result suggests that new AI techniques might not have to be engineered from scratch for each task. Instead, general learning methods can be reused to create machines that teach themselves skills humans once had to program in by hand.
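The spirit of learning with no human data can be shown with a toy. The sketch below is emphatically not AlphaZero, which relies on deep networks and tree search; it is a tabular self-play learner for the pile game Nim, with invented hyperparameters, meant only to illustrate how strategy can emerge from self-play alone.

```python
# Illustrative sketch: a game learned purely from self-play, in the spirit
# of AlphaZero but radically simplified. Tabular Monte Carlo value learning
# on Nim (take 1-3 stones; whoever takes the last stone wins).
# All hyperparameters are illustrative assumptions.
import random
from collections import defaultdict

Q = defaultdict(float)          # Q[(stones_left, action)] -> estimated value
ALPHA, EPSILON = 0.1, 0.2       # learning rate, exploration rate

def choose(stones, greedy=False):
    actions = [a for a in (1, 2, 3) if a <= stones]
    if not greedy and random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(stones, a)])

def train(episodes=50_000, start=21):
    for _ in range(episodes):
        stones, history = start, []   # history of (state, action) pairs
        while stones > 0:
            action = choose(stones)
            history.append((stones, action))
            stones -= action
        # Whoever moved last won; propagate +1/-1 backward, alternating
        # sign because the players alternate turns.
        reward = 1.0
        for state, action in reversed(history):
            Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
            reward = -reward

train()
# The agent rediscovers the known winning strategy for Nim: always leave
# the opponent a multiple of 4 stones.
print(choose(21, greedy=True))  # typically 1, leaving 20 stones
```

No human strategy is encoded anywhere; the update rule and thousands of self-played games are enough to recover the textbook solution, which is the same principle AlphaZero demonstrated at vastly greater scale.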
