In the 1960s, the still-young artificial intelligence community was full of optimism. AI was not yet ready for commercial use, but it had already caught the attention of both scientists and the general public.

It was during this time that Terry Winograd created SHRDLU, a pioneering program for natural-language understanding, at MIT between 1968 and 1970.

SHRDLU held a simple conversation with a user, via teletype, about a small world of objects, the BLOCKS world, displayed on an early computer screen. The user interacted with the program by moving objects, naming collections, and asking questions about the state of this simplified "blocks world": a virtual box filled with various blocks.

SHRDLU's Origin

SHRDLU ran on a DEC PDP-6 computer with a DEC graphics terminal and was written in the Micro Planner and Lisp programming languages. Later, in the computer graphics labs at the University of Utah, SHRDLU's "world" received a full 3D rendering.

The name SHRDLU comes from ETAOIN SHRDLU, the arrangement of the letter keys on a Linotype machine, ordered by the approximate frequency of the letters in English.

Purpose of SHRDLU

SHRDLU was mainly a language parser that let users converse with the program in plain English. The user told SHRDLU to move things around in the "blocks world," which was made up of simple objects such as blocks, cones, and balls. What made SHRDLU special was how four simple ideas worked together to make its simulation of "understanding" far more convincing.

One was that SHRDLU's world was so simple that all of its objects and places could be described with perhaps 50 words: nouns like "block" and "cone," verbs like "place on" and "move to," and adjectives like "big" and "blue." The program could therefore combine these basic building blocks of language in simple ways, and it was quite good at figuring out what the user meant.
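To make the idea concrete, here is a minimal illustrative sketch in Python (not Winograd's actual Micro Planner/Lisp code; the vocabulary and structure are assumptions chosen for illustration) of how a small, closed vocabulary makes command parsing tractable in a blocks world:

```python
# Toy sketch: a closed vocabulary of nouns, verbs, and adjectives lets a simple
# parser turn commands like "put the green cone on the red block" into structure.

NOUNS = {"block", "cone", "ball", "pyramid", "box"}
ADJECTIVES = {"big", "small", "red", "green", "blue"}
VERBS = {"put", "move", "pick", "stack"}

def parse_noun_phrase(tokens):
    """Collect leading adjectives, then expect a noun, e.g. 'big red block'."""
    adjectives = []
    while tokens and tokens[0] in ADJECTIVES:
        adjectives.append(tokens.pop(0))
    if not tokens or tokens[0] not in NOUNS:
        raise ValueError("expected a noun like 'block' or 'cone'")
    return {"noun": tokens.pop(0), "adjectives": adjectives}

def parse_command(sentence):
    """Parse commands of the form '<verb> <noun phrase> [on <noun phrase>]'."""
    tokens = [w for w in sentence.lower().split() if w not in {"the", "a", "an"}]
    if not tokens or tokens[0] not in VERBS:
        raise ValueError("expected a verb like 'put' or 'move'")
    verb = tokens.pop(0)
    target = parse_noun_phrase(tokens)
    destination = None
    if tokens and tokens[0] in {"on", "onto"}:
        tokens.pop(0)
        destination = parse_noun_phrase(tokens)
    return {"verb": verb, "object": target, "destination": destination}

print(parse_command("put the green cone on the red block"))
# {'verb': 'put', 'object': {'noun': 'cone', 'adjectives': ['green']},
#  'destination': {'noun': 'block', 'adjectives': ['red']}}
```

Because every word the user can type is already known to the program, ambiguity stays low and the parser can map a sentence directly onto an action in the simulated world.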


How did SHRDLU perform?

Another idea was that SHRDLU had a basic memory that supplied context. For example, one could tell SHRDLU to "put the green cone on the red block" and then say "take the cone off"; "the cone" would be understood as the green cone that had just been mentioned. In most cases, even when more adjectives were added, SHRDLU could search back through the conversation to find the proper referent. One could also ask about the past, for example, "Did you pick up anything before the cone?"
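The sketch below (a hypothetical Python illustration of the idea, not SHRDLU's actual mechanism) shows how a dialogue history can resolve a definite reference like "the cone" to the most recently mentioned matching object:

```python
# Toy sketch of anaphora resolution against a dialogue history:
# "the cone" resolves to the most recently mentioned object that matches.

history = []  # most recent mentions are appended at the end

def mention(obj):
    """Record that an object, e.g. {'noun': 'cone', 'adjectives': ['green']}, was discussed."""
    history.append(obj)

def resolve(description):
    """Resolve a definite reference like 'the cone' by scanning history backwards."""
    for past in reversed(history):
        if past["noun"] != description["noun"]:
            continue
        if all(adj in past["adjectives"] for adj in description["adjectives"]):
            return past
    return None  # no matching antecedent; the system would ask for clarification

mention({"noun": "block", "adjectives": ["red"]})
mention({"noun": "cone", "adjectives": ["green"]})
print(resolve({"noun": "cone", "adjectives": []}))
# {'noun': 'cone', 'adjectives': ['green']}  -- "the cone" means the green cone
```

Keeping the full interaction history around is also what allows questions about the past, such as "Did you pick up anything before the cone?"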

SHRDLU could work out, by searching for examples, that blocks could be stacked, but after trying it would discover that it could not stack triangles. Independent of the language parser, the "world" had basic physics that made blocks topple over. In addition, SHRDLU could remember the names of things and how they were put together. For example, someone could say, "A steeple is a small triangle on top of a tall rectangle." SHRDLU could then answer questions about steeples in the blocks world and build new ones.
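A short hypothetical sketch, again in Python and with made-up shape names, illustrates these two remaining ideas: simple world rules (nothing can be stacked on a pointed object) and user-taught concepts such as "steeple":

```python
# Toy sketch: a world rule about stacking, plus user-defined composite concepts.

POINTED_SHAPES = {"pyramid", "cone"}   # nothing balances on a point

def can_stack(top, bottom):
    """World rule: an object can sit on another only if the support is flat."""
    return bottom["shape"] not in POINTED_SHAPES

# User-taught concepts, stored as named structural patterns.
concepts = {}

def define_concept(name, top_shape, bottom_shape):
    """e.g. 'a steeple is a triangle on top of a rectangle'."""
    concepts[name] = {"top": top_shape, "bottom": bottom_shape}

def is_instance(name, top, bottom):
    """Check whether a stacked pair of objects matches a learned concept."""
    pattern = concepts.get(name)
    return (pattern is not None
            and top["shape"] == pattern["top"]
            and bottom["shape"] == pattern["bottom"])

define_concept("steeple", top_shape="triangle", bottom_shape="rectangle")
print(can_stack({"shape": "block"}, {"shape": "pyramid"}))                     # False
print(is_instance("steeple", {"shape": "triangle"}, {"shape": "rectangle"}))   # True
```

Once a concept like "steeple" is defined, the program can both recognise existing steeples in the scene and plan the moves needed to build a new one.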

Conclusion

Chatbots and voice-activated technology have reignited interest in natural language processing (NLP) and natural language understanding (NLU) approaches that can support meaningful human-computer conversations.

Later SHRDLU-like projects, such as Cyc, focused on giving the program far more knowledge from which it could draw conclusions. In SHRDLU, the user issues simple commands to move things around in a virtual world, but there is no real narrative of the kind found in interactive fiction. The game Colossal Cave Adventure, released in 1976–1977, is widely regarded as the first true piece of interactive fiction.

