Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new algorithm called “Co-LLM” that can pair a general-purpose base LLM with a more specialized model and help them work together.

As the general-purpose model drafts an answer, Co-LLM reviews each word (or token) in its response to see where it should call on a more accurate answer from the expert model. This process leads to more accurate replies to medical prompts and to math and reasoning problems. Because the expert model is not needed at every step, it also leads to more efficient response generation.

A collaborative approach for improved accuracy

The researchers argue that such a collaborative process can help large language models (LLMs) improve accuracy. It has been difficult to teach LLMs to recognize when they should collaborate with another model on an answer. Instead of using complex formulas or large amounts of labeled data to spell out where models should work together, the researchers envisioned a more organic approach with Co-LLM.

To decide when a base model needs help from an expert model, the framework uses machine learning to train a “switch variable”: a tool that indicates, for each word, which of the two LLMs is more competent to produce it. The switch is like a project manager spotting the areas where a specialist should be called in.
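The idea can be sketched in a few lines of Python. This is an illustrative toy, not the actual Co-LLM implementation: `base_step`, `expert_step`, and `switch` are hypothetical stand-ins for the base model's decoder, the expert's decoder, and the learned switch variable.

```python
# Minimal sketch of per-token deferral via a learned "switch" variable.
# All names are illustrative; this is not the Co-LLM implementation.

def generate_with_switch(base_step, expert_step, switch, prompt, max_tokens=20):
    """Greedy decoding where the switch decides, token by token,
    whether the base model or the expert model emits the next word."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        if switch(tokens) > 0.5:                 # switch predicts the base model will struggle
            tokens.append(expert_step(tokens))   # defer to the specialist
        else:
            tokens.append(base_step(tokens))     # base model continues
        if tokens[-1] == "<eos>":
            break
    return tokens

# Toy stand-ins: the base model produces generic words, the expert supplies a fact.
base_step = lambda ctx: "filler"
expert_step = lambda ctx: "1884"
# Defer exactly when the context ends with "in", as if a date were needed next.
switch = lambda ctx: 1.0 if ctx and ctx[-1] == "in" else 0.0

generate_with_switch(base_step, expert_step, switch, ["extinct", "in"], max_tokens=1)
# -> ["extinct", "in", "1884"]
```

The key property is that the expert is consulted only when the switch fires, so most tokens cost a single model call.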

If you asked Co-LLM to name some examples of extinct bear species, the two models would draft an answer together. The general-purpose LLM begins to put together a reply, and the switch variable intervenes at the points where it can slot in a better token from the expert model, such as the year when a bear species became extinct.

“With Co-LLM, we’re essentially training a general-purpose LLM to ‘phone’ an expert model when needed,” says Shannon Shen, an MIT PhD student in electrical engineering and computer science and CSAIL affiliate who’s a lead author on a new paper about the approach.  

“We use domain-specific data to teach the base model about its counterpart’s expertise in biomedical tasks and math and reasoning questions. This process automatically finds the parts of the data that are hard for the base model to generate, and then it instructs the base model to switch to the expert LLM, which was pre-trained on data from a similar field.  

“The general-purpose model provides the ‘scaffolding’ generation, and when it calls on the specialized LLM, it prompts the expert to generate the desired tokens. Our findings indicate that the LLMs learn patterns of collaboration organically, resembling how humans recognize when to call upon an expert to fill in the blanks.”
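One plausible way to turn “finds the parts of the data that are hard for the base model” into training signal is sketched below. This is a hedged illustration under an assumption the quote suggests: tokens the base model assigns low probability are labeled as positions where the switch should hand off to the expert. The function name and threshold are hypothetical.

```python
# Hypothetical derivation of training targets for the switch variable:
# tokens the base model finds hard (low probability) become "defer" labels.

def deferral_labels(base_token_probs, threshold=0.1):
    """Return 1 where the switch should defer to the expert, else 0."""
    return [1 if p < threshold else 0 for p in base_token_probs]

deferral_labels([0.9, 0.02, 0.5, 0.01])  # -> [0, 1, 0, 1]
```

The switch is then trained to predict these labels from context, so at inference time it can anticipate where the base model will struggle.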

A combination of flexibility and factuality  

To showcase Co-LLM’s flexibility, the researchers used data like the BioASQ medical set to couple a base LLM with expert LLMs in different domains, like the Meditron model, which is pre-trained on unlabeled medical data. This enabled the algorithm to help answer inquiries a biomedical expert would typically receive, such as naming the mechanisms causing a particular disease.  

Co-LLM gave more accurate replies than either a fine-tuned general-purpose LLM or an untuned specialized model working on its own. Co-LLM can guide two models that were trained differently to work together. In contrast, other effective LLM collaboration approaches, such as “Proxy Tuning,” need their component models to be trained similarly. Additionally, that baseline requires all component models to run simultaneously to produce the answer, whereas MIT’s algorithm activates its expert model only for particular tokens, leading to more efficient generation.

When to ask the expert  

The MIT researchers’ algorithm highlights that imitating human teamwork more closely can increase accuracy in multi-LLM collaboration. The team may draw from human self-correction to further elevate its factual precision: They’re considering a more robust deferral approach that can backtrack when the expert model doesn’t give a correct response. This upgrade would allow Co-LLM to course-correct so the algorithm can still reply satisfactorily.  
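A rough sketch of that backtracking idea follows. Everything here is hypothetical, since the upgrade is only under consideration: the plausibility check `is_plausible` is a placeholder for whatever rejection criterion such a system might use.

```python
# Hedged sketch of a backtracking deferral step: if the expert's token
# fails a plausibility check, discard it and let the base model answer.

def step_with_backtrack(base_step, expert_step, switch, is_plausible, tokens):
    if switch(tokens) > 0.5:
        candidate = expert_step(tokens)
        if is_plausible(tokens, candidate):
            return candidate
        # Backtrack: the expert's suggestion was rejected; fall back to the base.
    return base_step(tokens)

# Toy demonstration: the expert proposes a token the checker rejects.
always_defer = lambda ctx: 1.0
expert = lambda ctx: "implausible-token"
base = lambda ctx: "safe-token"
reject_all = lambda ctx, tok: False

step_with_backtrack(base, expert, always_defer, reject_all, [])
# -> "safe-token"
```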

The team would also like to update the expert model (by only training the base model) when new information is available, keeping answers as current as possible. This would allow Co-LLM to pair the most up-to-date information with strong reasoning power. Eventually, the model could assist with enterprise documents, using the latest information to update them accordingly. Co-LLM could also train small, private models to work with a more powerful LLM to improve documents that must remain within the server.  
