LangChain LLM integration#

In this notebook, we show how easy it is to build Council agents that leverage the power of LangChain to access a wide variety of LLMs.

Setup#

Integration with LangChain is straightforward. To use LangChain with the Council framework, install the “langchain” extra dependency when installing Council via pip:

$ pip install council[langchain]

Example#
[ ]:
# Load environment variables
import dotenv
import os

dotenv.load_dotenv()
# Confirm the OpenAI API key is available without printing its value
print(os.getenv("OPENAI_API_KEY") is not None)

Build a LangChainLLM class to integrate any LangChain LLM into Council.

[ ]:
from typing import Any, Sequence

from council.contexts import LLMContext
from council.llm import LLMBase, LLMMessage, LLMResult

from langchain.llms import BaseLLM


class LangChainLLM(LLMBase):
    langchain_llm: BaseLLM

    def __init__(self, langchain_llm: BaseLLM):
        super().__init__()
        self.langchain_llm = langchain_llm

    def _post_chat_request(self, context: LLMContext, messages: Sequence[LLMMessage], **kwargs: Any) -> LLMResult:
        # Send the latest message as the prompt and wrap the completion into a Council LLMResult
        prompt = messages[-1].content
        result = [self.langchain_llm(prompt, **kwargs)]
        return LLMResult(result)
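
Before plugging in a real model, we can sanity-check the wrapper offline. Below is a minimal sketch using LangChain’s FakeListLLM, a stub that simply replays canned responses (no API key or network access needed; the exact import path may vary across langchain versions).

[ ]:
from langchain.llms.fake import FakeListLLM

# Stub LLM that replays pre-defined responses, useful to verify the adapter wiring
fake_llm = LangChainLLM(langchain_llm=FakeListLLM(responses=["Hello from LangChain!"]))

message = LLMMessage.user_message("Say hello")
fake_llm.post_chat_request(LLMContext.empty(), messages=[message]).choices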

Examples#

HuggingFace pipeline#

Let’s create a LangChain LLM using the HuggingFacePipeline class.

[ ]:
from langchain import HuggingFacePipeline

# Download google/flan-t5-large and run it locally through a Hugging Face pipeline
hf_pipeline = HuggingFacePipeline.from_model_id(model_id="google/flan-t5-large", task="text2text-generation")

Wrap the LangChain LLM into our newly created LangChainLLM.

[ ]:
hugging_face_llm = LangChainLLM(langchain_llm=hf_pipeline)

The Council LLM is now ready to use!

[ ]:
prompt = LLMMessage.user_message("Tell me more about blockchains")
hugging_face_llm.post_chat_request(LLMContext.empty(), messages=[prompt]).choices

And that’s it! Your LangChain LLM is now ready to be used in the Council framework!

OpenAI chat model#

Let’s build a LangChainChatLLM to integrate a LangChain chat model into Council.

[ ]:
from council.llm.llm_message import LLMMessageRole
from langchain.chat_models.base import BaseChatModel
from langchain.schema.messages import BaseMessage, HumanMessage, SystemMessage, AIMessage


class LangChainChatLLM(LLMBase):
    langchain_llm: BaseChatModel

    def __init__(self, langchain_llm: BaseChatModel):
        super().__init__()
        self.langchain_llm = langchain_llm

    @staticmethod
    def convert_message(message: LLMMessage) -> BaseMessage:
        # Map a Council message role to the equivalent LangChain message type
        if message.is_of_role(LLMMessageRole.User):
            return HumanMessage(content=message.content)
        elif message.is_of_role(LLMMessageRole.System):
            return SystemMessage(content=message.content)
        elif message.is_of_role(LLMMessageRole.Assistant):
            return AIMessage(content=message.content)
        else:
            raise ValueError(f"Invalid role {message.role}")

    def _post_chat_request(self, context: LLMContext, messages: Sequence[LLMMessage], **kwargs: Any) -> LLMResult:
        # Convert the full history to LangChain messages; the chat model expects a list
        lc_messages = [LangChainChatLLM.convert_message(msg) for msg in messages]
        result = [self.langchain_llm(messages=lc_messages, **kwargs).content]
        return LLMResult(result)
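
As before, the chat wrapper can be exercised offline first. Here is a minimal sketch using LangChain’s FakeListChatModel stub, which replays canned answers and lets us verify the role conversion end to end (the import path may differ between langchain versions).

[ ]:
from langchain.chat_models.fake import FakeListChatModel

# Stub chat model replaying canned answers; no API key required
fake_chat = LangChainChatLLM(FakeListChatModel(responses=["Verily, it works!"]))

history = [
    LLMMessage.system_message("You are a test assistant."),
    LLMMessage.user_message("Does the adapter work?"),
]
fake_chat.post_chat_request(LLMContext.empty(), messages=history).choices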

Let’s create a LangChain chat model using the ChatOpenAI class.

[ ]:
from langchain.chat_models import ChatOpenAI

# Create the LangChain chat model
lc_chatgpt = ChatOpenAI(model="gpt-3.5-turbo")

# Wrap `ChatOpenAI` into our newly created `LangChainChatLLM`
chatgpt_llm = LangChainChatLLM(lc_chatgpt)

# Build history of messages
messages = [
    LLMMessage.system_message(
        "You are a helpful assistant from times of olde. Always answer using Shakespearian english."
    ),
    LLMMessage.user_message("What is the continent to the South of Mexico?"),
    LLMMessage.assistant_message("Behold methinks it be South America"),
    LLMMessage.user_message("what are the three largest cities in that continent?"),
]

# Call the model
result = chatgpt_llm.post_chat_request(LLMContext.empty(), messages)
print(result.choices, end="")