LMQL (Language Model Query Language) is a programming language for interacting with large language models.

"A computer would deserve to be called intelligent if it could deceive a human into believing that it was human."- Alan Mathison Turing.

Researchers introduced LMQL as an open-source programming language and platform for language model interaction. It is available as a web-based Playground IDE and as a Python package. LMQL generalises natural language prompting, making it more expressive while remaining accessible: prompts are written as natural language text interleaved with code, and the resulting queries can be executed directly against language models such as OpenAI's GPT models. Users can also steer the model's reasoning process with fixed answer templates and intermediate instructions.
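To make the query structure concrete, here is a minimal sketch in the style of LMQL's published examples; the prompt, variable name, and model identifier are illustrative, and exact syntax details may differ between LMQL versions.

```
# A minimal LMQL query: a decoder clause (argmax), a prompt containing a
# template variable [GREETING] that the model fills in, the model to query
# (from), and a constraint on the generated text (where).
argmax
    "Say 'Hello, LMQL!': [GREETING]"
from
    "openai/text-davinci-003"
where
    len(TOKENS(GREETING)) < 10
```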

In LMQL, users can attach high-level logical constraints to the output of the language model. These constraints are automatically compiled into token-level prediction masks that are applied eagerly, during generation rather than after it. This allows constraints to be enforced strictly, ruling out continuations that would violate them, and it gives multi-part prompting and integration stronger guarantees about the output format.
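As a hedged sketch of such a constraint (the task, variable name, and model identifier are illustrative): the where clause below restricts the hole to a fixed set of answers, so during decoding LMQL masks every token that could not lead to one of those answers.

```
# The in-constraint is enforced at the token level, so the model can only
# ever produce one of the three listed labels for SENTIMENT.
argmax
    "Review: The food was great but the service was slow.\n"
    "Sentiment (positive, negative or neutral): [SENTIMENT]"
from
    "openai/text-davinci-003"
where
    SENTIMENT in ["positive", "negative", "neutral"]
```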

Coding criteria

LMQL enhances the capabilities of large language models (LLMs) by combining prompts, constraints, and scripting. Declarative and SQL-like yet built on Python, LMQL extends static text prompting with control flow, constraint-guided decoding, and tool augmentation. This form of scripting lets LMQL express multi-part prompting workflows in a relatively small amount of code.

The researchers use LMQL to enable Language Model Programming (LMP), which generalises prompting from pure text prompts to a combination of text and scripting. LMQL analyses the constraints and control flow of an LMP prompt to produce an efficient inference procedure. The high-level logical constraints are translated, via the language's evaluation semantics, into token masks that are strictly enforced during generation.
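The sketch below, adapted from the packing-list example in the LMQL paper, shows this kind of scripted prompting; the model identifier and list items are illustrative. An ordinary Python loop drives how many times the model is queried, and the same constraint applies to every iteration.

```
# Control flow inside the prompt clause: the for loop issues four
# constrained completions of [THING], each appended to the growing prompt.
argmax
    "A list of things not to forget when going to the sea (not travelling):\n"
    "- Sunglasses\n"
    for i in range(4):
        "- [THING]\n"
from
    "openai/text-ada-001"
where
    THING in ["Volleyball", "Sunscreen", "Bathing Suit"]
```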

Benefits

The team designed LMQL to avoid the significant expense of re-querying and validating generated text. LMQL helps produce text that is close to the desired output on the first attempt, without requiring additional rounds of generation. Furthermore, LMQL constraints let users control or steer the generation process according to their specifications, for example ensuring that the generated text follows certain grammatical or syntactic rules or that certain words or phrases are avoided.
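As an illustration of such output guarantees, the following hedged sketch (prompt, variable names, and model identifier are illustrative) uses LMQL's built-in constraint functions to force a short, sentence-terminated tagline and a purely numeric score, so no post-hoc validation or re-querying is needed:

```
# STOPS_AT ends generation at the first full stop, the token-length bound
# keeps the tagline short, and INT guarantees the score is a number.
argmax
    "Write a one-sentence tagline for a coffee shop: [TAGLINE]\n"
    "Rate it from 1 to 10: [SCORE]"
from
    "openai/text-davinci-003"
where
    STOPS_AT(TAGLINE, ".") and len(TOKENS(TAGLINE)) < 30 and INT(SCORE)
```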

Researchers note that LMQL can capture a variety of advanced prompting techniques, such as interactive flows, that are challenging to implement with existing APIs. Their evaluation shows that LMQL maintains or improves accuracy on various downstream tasks while substantially reducing computation, translating into cost savings of 13 to 85 per cent on pay-to-use APIs.
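One such technique is chain-of-thought prompting with a constrained final answer, sketched below in LMQL's query style (the question, answer choices, and model identifier are illustrative): the reasoning is generated freely, while the final answer is restricted to the valid option labels.

```
# Multi-part prompt: free-form reasoning first, then an answer that the
# token mask restricts to one of the listed choice labels.
argmax
    "Q: A week ago it was 1 September 2021. What is the date 10 days ago in MM/DD/YYYY?\n"
    "Answer choices: (A) 08/29/2021 (B) 08/28/2021 (C) 09/01/2021\n"
    "A: Let's think step by step. [REASONING]\n"
    "Therefore, the answer is [RESULT]"
from
    "openai/text-davinci-003"
where
    len(TOKENS(REASONING)) < 120 and RESULT in ["A", "B", "C"]
```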

Conclusion

LMQL enables users to convey a vast array of standard and advanced prompting techniques concisely and straightforwardly. It's compatible with Hugging Face's Transformers, the OpenAI API, and Langchain. The corresponding developer resources are available at lmql.ai, along with a browser-based Playground IDE for experimentation. 
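For readers who want to call LMQL from Python, here is a minimal sketch of that integration based on the documented @lmql.query decorator; the prompt, function name, and constraints are illustrative, and the default model and decorator options depend on the installed LMQL version and configured API keys.

```python
# pip install lmql
import lmql


@lmql.query
def summarise(text):
    '''lmql
    # function arguments are interpolated into the prompt
    "Summarise the following text in one sentence:\n{text}\n"
    # the where clause constrains the generated summary
    "Summary: [SUMMARY]" where STOPS_AT(SUMMARY, ".") and len(TOKENS(SUMMARY)) < 60
    return SUMMARY
    '''


# Query programs behave like ordinary Python functions.
print(summarise("LMQL combines natural language prompts with constraints and scripting."))
```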

Furthermore, LMQL is a promising innovation: the evaluation indicates that it is a powerful tool that can improve the efficiency and accuracy of language model programming, helping users reach the desired output with fewer resources.
