# Multi Chain Agent
In this notebook, we will build and test an agent capable of answering questions on different topics. The agent will be composed of:
- An LLM Controller, responsible for selecting the relevant chains: it identifies the most appropriate chains for a given question.
- A Finance Chain that specializes in answering questions about finance.
- A Video Games Chain that focuses on answering questions about video games.
- A Fake Chain that pretends to be an expert on every topic but deliberately provides irrelevant answers.
- An LLM Evaluator able to filter out irrelevant answers by assessing each chain's response for relevance.
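Before diving into council's APIs, the overall flow can be sketched in plain Python. None of the names below (`run_pipeline`, `select`, `score`) are council APIs; this is a toy illustration of "controller picks chains, chains answer, evaluator scores, best answer wins":

```python
from typing import Callable, Dict, List, Tuple

# Illustrative sketch of the controller -> chains -> evaluator flow.
# These names are NOT council APIs; they just mirror the roles above.

def run_pipeline(
    question: str,
    chains: Dict[str, Callable[[str], str]],    # chain name -> answer function
    select: Callable[[str], List[str]],         # "controller": picks chain names
    score: Callable[[str, str], float],         # "evaluator": rates an answer
) -> Tuple[str, float]:
    answers = [chains[name](question) for name in select(question)]
    scored = [(ans, score(question, ans)) for ans in answers]
    return max(scored, key=lambda pair: pair[1])  # best-scoring answer wins

# Toy stand-ins for the LLM-backed components:
chains = {
    "finance": lambda q: "Inflation is a rise in prices." if "inflation" in q else "I don't know",
    "fake": lambda q: "Bananas are yellow.",
}
select = lambda q: ["finance", "fake"]  # the "controller" picks both chains
score = lambda q, a: 0.0 if a in ("Bananas are yellow.", "I don't know") else 1.0

print(run_pipeline("what is inflation?", chains, select, score))
```

The fake chain answers confidently, but the evaluator's score is what decides which answer survives; council implements each of these roles with an LLM instead of the toy lambdas above.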
```python
import logging

logging.basicConfig(
    format="[%(asctime)s %(levelname)s %(threadName)s %(name)s:%(funcName)s:%(lineno)s] %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S%z",
)

# uncomment me to see the engine logs
# logging.getLogger("council").setLevel(logging.DEBUG)
```
```python
import dotenv

from council.chains import Chain
from council.llm import AzureLLM
from council.skills import LLMSkill
```
First, we create a client to call an LLM hosted on Azure.

```python
dotenv.load_dotenv()
azure_llm = AzureLLM.from_env()
```
Then, we create a Skill and a Chain to answer questions about finance.

```python
prompt = "you are an assistant expert in Finance. When asked about something else, say you don't know"
finance_skill = LLMSkill(llm=azure_llm, system_prompt=prompt)
finance_chain = Chain(name="finance", description="answer questions about finance", runners=[finance_skill])
```
And add another Chain, expert in gaming.

```python
game_prompt = "you are an expert in video games. When asked about something else, say you don't know"
game_skill = LLMSkill(llm=azure_llm, system_prompt=game_prompt)
game_chain = Chain(name="game", description="answer questions about Video games", runners=[game_skill])
```
Let’s add a Chain that provides random answers but pretends to always be relevant.

```python
fake_prompt = "you will provide an answer not related to the question"
fake_skill = LLMSkill(llm=azure_llm, system_prompt=fake_prompt)
fake_chain = Chain(name="fake", description="Can answer all questions", runners=[fake_skill])
```
Now, we create an LLM-based controller.

```python
from council.controllers import LLMController

controller = LLMController(chains=[finance_chain, game_chain, fake_chain], llm=azure_llm, top_k=2)
```
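Conceptually, the controller ranks every chain's description against the incoming question and keeps the `top_k` best matches. A crude keyword-overlap stand-in makes the idea concrete (this is illustrative only; council's `LLMController` asks the LLM to do the ranking):

```python
from typing import Dict, List

def rank_chains(question: str, descriptions: Dict[str, str], top_k: int = 2) -> List[str]:
    """Score each chain by word overlap between the question and its description.
    A toy stand-in for LLM-based chain selection; NOT council's implementation."""
    q_words = set(question.lower().split())
    scores = {
        name: len(q_words & set(desc.lower().split()))
        for name, desc in descriptions.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

descriptions = {
    "finance": "answer questions about finance",
    "game": "answer questions about Video games",
    "fake": "Can answer all questions",
}
print(rank_chains("tell me about finance", descriptions))
```

With `top_k=2`, two chains get to answer each question, and the evaluator then decides which answer to keep.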
And an LLM-based evaluator.

```python
from council.evaluators import LLMEvaluator

evaluator = LLMEvaluator(llm=azure_llm)
```
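The evaluator's job is to attach a relevance score to each candidate answer so that irrelevant ones (like the fake chain's) can be filtered out downstream. A toy sketch of that scoring step, using word overlap in place of an LLM judgment (again, not council's actual implementation):

```python
from typing import List, Tuple

def evaluate(question: str, answers: List[str]) -> List[Tuple[str, float]]:
    """Toy relevance check: an answer scores 1.0 if it shares at least one
    word with the question, else 0.0. A stand-in for LLM-based evaluation."""
    q_words = set(question.lower().split())
    return [
        (ans, 1.0 if q_words & set(ans.lower().split()) else 0.0)
        for ans in answers
    ]

answers = [
    "Inflation is the rate at which prices rise.",
    "Bananas grow on trees.",
]
print(evaluate("what is inflation", answers))
```

The fake chain's off-topic answer gets a low score, which is exactly what lets the filter discard it.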
Lastly, let’s wrap up everything by creating an Agent.

```python
from council.agents import Agent
from council.contexts import Budget
from council.filters import BasicFilter

agent = Agent(controller=controller, evaluator=evaluator, filter=BasicFilter())
```
Now, we are ready to invoke the agent.

```python
from council.contexts import AgentContext

context = AgentContext.from_user_message(message="what is inflation?", budget=Budget(20))
# context = AgentContext.from_user_message(message="what are the most popular video games?", budget=Budget(20))
# context = AgentContext.from_user_message(message="what is the age of the captain?", budget=Budget(20))

result = agent.execute(context=context)

print("responses:")
for item in result.messages:
    print("----")
    print("score:", item.score)
    print(item.message.message)
```
Let’s create a test suite with a set of prompts and expected answers. We’ll use an LLM to score the similarity between the agent’s response and the expected response.

```python
import json

from council.agent_tests import AgentTestSuite, AgentTestCase
from council.scorers import LLMSimilarityScorer

tests = [
    AgentTestCase(
        prompt="What is inflation",
        scorers=[
            LLMSimilarityScorer(
                llm=azure_llm,
                expected="Inflation is the rate at which the general level of prices for goods and services is rising, and, subsequently, purchasing power is falling",
            )
        ],
    ),
    AgentTestCase(
        prompt="What are the most popular video games",
        scorers=[LLMSimilarityScorer(llm=azure_llm, expected="The most popular video games are: ...")],
    ),
    AgentTestCase(
        prompt="What are the most popular movies",
        scorers=[LLMSimilarityScorer(llm=azure_llm, expected="The most popular movies are ...")],
    ),
]

suite = AgentTestSuite(test_cases=tests)
result = suite.run(agent)
```
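To see what similarity scoring does, here is a cheap stand-in: Jaccard word overlap between the expected and actual answers. This is illustrative only; council's `LLMSimilarityScorer` asks an LLM for the judgment rather than comparing word sets:

```python
def jaccard_similarity(expected: str, actual: str) -> float:
    """Ratio of shared words to total distinct words across both texts.
    A toy stand-in for LLM-based similarity scoring."""
    a, b = set(expected.lower().split()), set(actual.lower().split())
    return len(a & b) / len(a | b) if a | b else 1.0

print(jaccard_similarity("inflation is rising prices", "inflation means rising prices"))
```

A score near 1.0 means the agent's answer closely matches the expected one; near 0.0 means it drifted off topic.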
And print the test result.
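One common pattern is to dump the suite result as JSON. Assuming the result object exposes a `to_dict()` method (an assumption; check your council version), the call is commented out below, and the same `json.dumps` pattern is shown on a plain dict so the snippet runs standalone:

```python
import json

# Assumption: the test-suite result exposes a to_dict() method; if your
# council version differs, adapt accordingly.
# print(json.dumps(result.to_dict(), indent=2))

# The same json.dumps pattern on a made-up dict, so this cell runs standalone:
sample = {"prompt": "What is inflation", "score": 0.9}
print(json.dumps(sample, indent=2))
```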
The question-answering agent combines the power of the LLM Controller, specialized chains, and the LLM Evaluator to provide accurate and relevant responses. Further improvements and enhancements can be made to refine the agent’s performance and expand its capabilities.