Modular Reasoning, Knowledge, and Language (MRKL)

Modular reasoning, knowledge, and language (MRKL, pronounced "miracle") systems expand the utility of large language models (LLMs) by giving them access to current information, to proprietary information and systems, and to added expertise for specific tasks or types of tasks, such as performing mathematical calculations.

MRKL systems are composed of two major components:

  • An extendable set of expert modules, including a large language model (LLM), each specialized for a specific task such as performing math, retrieving current information via an API call, or accessing a proprietary system to retrieve, say, customer profile information.
     

  • A router that routes each user query to the module that can best respond to it. Module outputs can be returned directly to the user or used as inputs to other modules. A minimal sketch of both components follows this list.
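The MRKL paper doesn't prescribe a concrete interface for these components, but a minimal sketch in Python might look like the following. The registry, the module names, and the `route` function are illustrative assumptions, not a standard API:

```python
from typing import Callable, Dict

# Hypothetical registry of expert modules: each module is a function
# that takes a textual argument and returns a textual result.
MODULES: Dict[str, Callable[[str], str]] = {}

def register_module(name: str, fn: Callable[[str], str]) -> None:
    """Extending the system to a new task is just another registry entry."""
    MODULES[name] = fn

# Example experts: an arithmetic module and a stubbed weather lookup.
register_module("calculator", lambda expr: str(eval(expr)))  # demo only; never eval untrusted input
register_module("weather", lambda city: "-1")  # stub; a real module would call a weather API

def route(module_name: str, argument: str) -> str:
    """Dispatch a single action to the expert module chosen by the router."""
    return MODULES[module_name](argument)
```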
     

MRKL systems provide a number of benefits over fine-tuned, multi-task LLMs used on their own:

  1. Safe fallback. If a user query doesn't match any expert module route, the query can be handled directly by the LLM.
     

  2. Extensibility. The system can be easily and inexpensively extended to new tasks and capabilities through the addition of new modules and routes. Similarly, existing modules can be extended without impact to the overall system.
     

  3. Interpretability. Routing decisions and the operation of the modules produce events that can be logged and later used to show the system's rationale for an answer.
     

  4. Information currency. Modules that integrate with external APIs can provide the model with up-to-date information that enables the system to answer questions (for example, "What is the weather in Paris?") that a static model cannot.
     

  5. Proprietary knowledge. Modules that integrate with internal systems and APIs can provide the model with proprietary information (e.g., "What is the balance of the customer's credit account?") that an isolated model cannot.
     

  6. Composability. By composing modules in multi-input/output chains, the model can correctly respond to complex questions and inputs, as the sketch after this list illustrates.
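To make the safe-fallback and composability points concrete, here is a minimal continuation of the earlier sketch. `call_llm` stands in for whatever LLM API the system uses; it is an assumption, not a real library call:

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a call to the underlying LLM; wire in a real API here."""
    raise NotImplementedError

def route_with_fallback(module_name: str, argument: str) -> str:
    # Safe fallback (benefit 1): a query that matches no expert route
    # is handled directly by the LLM.
    if module_name not in MODULES:
        return call_llm(argument)
    return MODULES[module_name](argument)

# Composability (benefit 6): one module's output feeds the next module's input.
current = route_with_fallback("weather", "Winnipeg")           # "-1"
delta = route_with_fallback("calculator", f"{current} - 1.4")  # ≈ "-2.4"
```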

     

Conceptual Walkthrough

A more detailed architectural view of a MRKL system is shown in the diagram above.

  1. A user submits a query to a generative AI application (for example, a chatbot, or a query interface within an enterprise application).
     

  2. The generative AI application passes the user's query to the MRKL router. The router is shown here as an independent component, invocable through an API, to promote reuse of the service across applications and to decouple the MRKL system from the consuming applications. The router could also be embedded within the application for solutions such as prototypes, or integrated chatbots that provide only a 'thin' user interface in front of the MRKL system.
     

  3. The router uses a tuned LLM to break the user query down into a series of actions, or steps, necessary to arrive at an answer. For example, to answer the query "What is the current temperature in Winnipeg, Manitoba, Canada? How does that compare to the historical average for this time of year?" the LLM may respond with the following conceptual list of actions:

    • Look up the current temperature for Winnipeg using the Weather API
    • Look up the current date using the Calendar
    • Look up the average temperature in Winnipeg on this date using the Search API
    • Find the difference between current temperature and the historical average using the Calculator
       
  4. The router then invokes the appropriate expert module for each action in the list. Continuing with the example from Step 3:

    • The router invokes the Weather API module to retrieve the current temperature for Winnipeg, -1°C.
    • The router invokes the Calendar module to get the current date, November 9, 2023.
    • The router uses the Search API to find the normal temperature in Winnipeg on November 9, 1.4°C.
    • The router uses the Calculator to find the difference between the two temperatures: -1 - 1.4 = -2.4.
       
  5. The router uses the LLM to formulate a response; in our example, "The current temperature in Winnipeg is -1°C. That is 2.4°C cooler than the historical norm of 1.4°C."
     

  6. The formulated response is passed back to the generative AI application and then to the user. A sketch that ties steps 3 through 5 together in code follows.
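Pulling the walkthrough together, the router's plan-and-execute loop might look like the sketch below. It builds on the earlier sketches; the prompts and the one-action-per-line plan format are assumptions for illustration, not a prescribed MRKL protocol:

```python
def plan_actions(query: str) -> list[tuple[str, str]]:
    """Step 3: ask the tuned LLM to decompose the query into (module, argument) actions."""
    plan = call_llm(
        "Break this query into actions, one 'module: argument' pair per line.\n"
        f"Query: {query}"
    )
    # Parse the LLM's reply, assuming one "module: argument" pair per line.
    actions = []
    for line in plan.splitlines():
        module, _, argument = line.partition(":")
        actions.append((module.strip(), argument.strip()))
    return actions

def answer_query(query: str) -> str:
    actions = plan_actions(query)
    # Step 4: invoke the appropriate expert module (or the LLM fallback) for each action.
    results = [route_with_fallback(module, arg) for module, arg in actions]
    # Step 5: have the LLM formulate a natural-language response from the raw results.
    return call_llm(f"Query: {query}\nIntermediate results: {results}\nAnswer the query.")
```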

Contributors

Chris Kirby

Updated: November 30, 2023