LangChain (4 Part Series)
1 Mastering LangChain: Part 1 – Introduction to LangChain and Its Key Components
2 Part 2: Mastering Prompts and Language Models with LangChain
3 Part 3: Building Powerful Chains and Agents in LangChain
4 LangChain Part 4 – Leveraging Memory and Storage in LangChain: A Comprehensive Guide
Building Powerful Chains and Agents in LangChain
In this comprehensive guide, we’ll dive deep into the world of LangChain, focusing on constructing powerful chains and agents. We’ll cover everything from understanding the fundamentals of chains to combining them with large language models (LLMs) and introducing sophisticated agents for autonomous decision-making.
1. Understanding Chains
1.1 What are Chains in LangChain?
Chains in LangChain are sequences of operations or tasks that process data in a specific order. They allow for modular and reusable workflows, making it easier to handle complex data processing and language tasks. Chains are the building blocks for creating sophisticated AI-driven systems.
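The core idea — each step's output becomes the next step's input — can be sketched in plain Python, with no LangChain classes at all. The step functions here are illustrative stand-ins for LLM calls:

```python
# A chain is an ordered pipeline: each step's output feeds the next step's input.
def make_chain(*steps):
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

# Toy steps standing in for LLM calls
normalize = str.strip
shout = str.upper
exclaim = lambda s: s + "!"

pipeline = make_chain(normalize, shout, exclaim)
print(pipeline("  hello langchain  "))  # → HELLO LANGCHAIN!
```

LangChain's chain classes add prompt templating, retries, and callbacks on top of this same composition pattern.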
1.2 Types of Chains
LangChain offers several types of chains, each suited for different scenarios:
- Sequential Chains: These chains process data in a linear order, where the output of one step serves as the input for the next. They're ideal for straightforward, step-by-step processes.
- Map/Reduce Chains: These chains map a function over a set of data and then reduce the results to a single output. They're great for parallel processing of large datasets.
- Router Chains: These chains direct inputs to different sub-chains based on certain conditions, allowing for more complex, branching workflows.
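To make the routing idea concrete, here is a minimal sketch in plain Python — the classifier and sub-chain functions are stand-ins for real LLM chains, not LangChain APIs:

```python
# Router chain sketch: dispatch an input to a sub-chain based on a condition.
def classify(text):
    # Stand-in for an LLM-based or rule-based classifier
    return "math" if any(ch.isdigit() for ch in text) else "general"

def math_chain(text):
    return f"[math chain] handling: {text}"

def general_chain(text):
    return f"[general chain] handling: {text}"

routes = {"math": math_chain, "general": general_chain}

def router_chain(text):
    return routes[classify(text)](text)

print(router_chain("What is 2 + 2?"))
print(router_chain("Tell me about Plato."))
```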
1.3 Creating Custom Chains
Creating custom chains involves defining specific operations or functions that will be part of the chain. Here’s an example of a custom sequential chain:
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

class CustomChain:
    def __init__(self, llm):
        self.llm = llm
        self.steps = []

    def add_step(self, prompt_template):
        prompt = PromptTemplate(template=prompt_template, input_variables=["input"])
        chain = LLMChain(llm=self.llm, prompt=prompt)
        self.steps.append(chain)

    def execute(self, input_text):
        # Feed each step's output into the next step
        for step in self.steps:
            input_text = step.run(input_text)
        return input_text

# Initialize the chain
llm = OpenAI(temperature=0.7)
chain = CustomChain(llm)

# Add steps to the chain
chain.add_step("Summarize the following text in one sentence: {input}")
chain.add_step("Translate the following English text to French: {input}")

# Execute the chain
result = chain.execute("LangChain is a powerful framework for building AI applications.")
print(result)
This example creates a custom chain that first summarizes an input text and then translates it to French.
2. Combining Chains and LLMs
2.1 Integrating Chains with Prompts and LLMs
Chains can be seamlessly integrated with prompts and LLMs to create more powerful and flexible systems. Here’s an example:
from langchain import PromptTemplate, LLMChain
from langchain.llms import OpenAI
from langchain.chains import SimpleSequentialChain

llm = OpenAI(temperature=0.7)

# First chain: Generate a topic
first_prompt = PromptTemplate(
    input_variables=["subject"],
    template="Generate a random {subject} topic:"
)
first_chain = LLMChain(llm=llm, prompt=first_prompt)

# Second chain: Write a paragraph about the topic
second_prompt = PromptTemplate(
    input_variables=["topic"],
    template="Write a short paragraph about {topic}:"
)
second_chain = LLMChain(llm=llm, prompt=second_prompt)

# Combine the chains
overall_chain = SimpleSequentialChain(chains=[first_chain, second_chain], verbose=True)

# Run the chain
result = overall_chain.run("science")
print(result)
This example creates a chain that generates a random science topic and then writes a paragraph about it.
2.2 Debugging and Optimizing Chain-LLM Interactions
To debug and optimize chain-LLM interactions, you can use the verbose parameter and custom callbacks:
from langchain.callbacks import StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

class CustomHandler(StdOutCallbackHandler):
    def on_llm_start(self, serialized, prompts, **kwargs):
        print(f"LLM started with prompt: {prompts[0]}")

    def on_llm_end(self, response, **kwargs):
        print(f"LLM finished with response: {response.generations[0][0].text}")

llm = OpenAI(temperature=0.7, callbacks=[CustomHandler()])

template = "Tell me a {adjective} joke about {subject}."
prompt = PromptTemplate(input_variables=["adjective", "subject"], template=template)
chain = LLMChain(llm=llm, prompt=prompt, verbose=True)

result = chain.run(adjective="funny", subject="programming")
print(result)
This example uses a custom callback handler to provide detailed information about the LLM’s input and output.
3. Introducing Agents
3.1 What are Agents in LangChain?
Agents in LangChain are autonomous entities that can use tools and make decisions to accomplish tasks. They combine LLMs with external tools to solve complex problems, allowing for more dynamic and adaptable AI systems.
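The decision loop behind an agent can be sketched without any API calls. In this toy sketch the "LLM" is a scripted stand-in that picks a tool, observes the result, and then finishes — the tool names and decision logic are illustrative only:

```python
# Minimal agent loop sketch: decide, act, observe, repeat until done.
tools = {
    "calculator": lambda expr: str(eval(expr)),  # toy tool; never eval untrusted input
}

def fake_llm(question, observations):
    # Stand-in for an LLM's decision step: use the calculator once, then answer.
    if not observations:
        return ("action", "calculator", "6 * 7")
    return ("finish", f"The answer is {observations[-1]}.")

def run_agent(question, max_steps=5):
    observations = []
    for _ in range(max_steps):
        decision = fake_llm(question, observations)
        if decision[0] == "finish":
            return decision[1]
        _, tool_name, tool_input = decision
        observations.append(tools[tool_name](tool_input))
    return "Stopped: step limit reached."

print(run_agent("What is six times seven?"))  # → The answer is 42.
```

Real LangChain agents follow this same loop, with the LLM choosing tools from their descriptions and a step limit guarding against infinite loops.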
3.2 Built-in Agents and Their Capabilities
LangChain provides several built-in agents, such as the zero-shot-react-description agent:
from langchain.agents import load_tools, initialize_agent, AgentType
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["wikipedia", "llm-math"], llm=llm)

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

result = agent.run("What is the square root of the year Plato was born?")
print(result)
This example creates an agent that can use Wikipedia and perform mathematical calculations to answer complex questions.
3.3 Creating Custom Agents
You can create custom agents by defining your own tools and agent classes. This allows for highly specialized agents tailored to specific tasks or domains.
Here’s an example of a custom agent:
from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent
from langchain.prompts import StringPromptTemplate
from langchain import OpenAI, SerpAPIWrapper, LLMChain
from typing import List, Union
from langchain.schema import AgentAction, AgentFinish
import re

# Define custom tools
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="Useful for answering questions about current events"
    )
]

# Define a custom prompt template
template = """Answer the following questions as best you can: {input}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
Thought: To answer this question, I need to search for current information.
{agent_scratchpad}"""

class CustomPromptTemplate(StringPromptTemplate):
    template: str
    tools: List[Tool]

    def format(self, **kwargs) -> str:
        # Rebuild the agent scratchpad from the intermediate steps so far
        intermediate_steps = kwargs.pop("intermediate_steps")
        thoughts = ""
        for action, observation in intermediate_steps:
            thoughts += action.log
            thoughts += f"\nObservation: {observation}\nThought: "
        kwargs["agent_scratchpad"] = thoughts
        kwargs["tool_names"] = ", ".join([tool.name for tool in self.tools])
        return self.template.format(**kwargs)

prompt = CustomPromptTemplate(
    template=template,
    tools=tools,
    input_variables=["input", "intermediate_steps"]
)

# Define a custom output parser
class CustomOutputParser:
    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        if "Final Answer:" in llm_output:
            return AgentFinish(
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        action_match = re.search(r"Action: (\w+)", llm_output, re.DOTALL)
        action_input_match = re.search(r"Action Input: (.*)", llm_output, re.DOTALL)
        if not action_match or not action_input_match:
            raise ValueError(f"Could not parse LLM output: `{llm_output}`")
        action = action_match.group(1).strip()
        action_input = action_input_match.group(1).strip(" ").strip('"')
        return AgentAction(tool=action, tool_input=action_input, log=llm_output)

# Create the custom output parser
output_parser = CustomOutputParser()

# Define the LLM chain
llm = OpenAI(temperature=0)
llm_chain = LLMChain(llm=llm, prompt=prompt)

# Define the custom agent
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["\nObservation:"],
    allowed_tools=[tool.name for tool in tools]
)

# Create an agent executor
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)

# Run the agent
result = agent_executor.run("What's the latest news about AI?")
print(result)
<span>tools</span><span>=</span><span>tools</span><span>,</span> <span>input_variables</span><span>=</span><span>[</span><span>"</span><span>input</span><span>"</span><span>,</span> <span>"</span><span>intermediate_steps</span><span>"</span><span>]</span> <span>)</span> <span># Define a custom output parser </span><span>class</span> <span>CustomOutputParser</span><span>:</span> <span>def</span> <span>parse</span><span>(</span><span>self</span><span>,</span> <span>llm_output</span><span>:</span> <span>str</span><span>)</span> <span>-></span> <span>Union</span><span>[</span><span>AgentAction</span><span>,</span> <span>AgentFinish</span><span>]:</span> <span>if</span> <span>"</span><span>Final Answer:</span><span>"</span> <span>in</span> <span>llm_output</span><span>:</span> <span>return</span> <span>AgentFinish</span><span>(</span> <span>return_values</span><span>=</span><span>{</span><span>"</span><span>output</span><span>"</span><span>:</span> <span>llm_output</span><span>.</span><span>split</span><span>(</span><span>"</span><span>Final Answer:</span><span>"</span><span>)[</span><span>-</span><span>1</span><span>].</span><span>strip</span><span>()},</span> <span>log</span><span>=</span><span>llm_output</span><span>,</span> <span>)</span> <span>action_match</span> <span>=</span> <span>re</span><span>.</span><span>search</span><span>(</span><span>r</span><span>"</span><span>Action: (\w+)</span><span>"</span><span>,</span> <span>llm_output</span><span>,</span> <span>re</span><span>.</span><span>DOTALL</span><span>)</span> <span>action_input_match</span> <span>=</span> <span>re</span><span>.</span><span>search</span><span>(</span><span>r</span><span>"</span><span>Action Input: (.*)</span><span>"</span><span>,</span> <span>llm_output</span><span>,</span> <span>re</span><span>.</span><span>DOTALL</span><span>)</span> <span>if</span> <span>not</span> <span>action_match</span> <span>or</span> <span>not</span> <span>action_input_match</span><span>:</span> <span>raise</span> 
<span>ValueError</span><span>(</span><span>f</span><span>"</span><span>Could not parse LLM output: `</span><span>{</span><span>llm_output</span><span>}</span><span>`</span><span>"</span><span>)</span> <span>action</span> <span>=</span> <span>action_match</span><span>.</span><span>group</span><span>(</span><span>1</span><span>).</span><span>strip</span><span>()</span> <span>action_input</span> <span>=</span> <span>action_input_match</span><span>.</span><span>group</span><span>(</span><span>1</span><span>).</span><span>strip</span><span>(</span><span>"</span><span> </span><span>"</span><span>).</span><span>strip</span><span>(</span><span>'"'</span><span>)</span> <span>return</span> <span>AgentAction</span><span>(</span><span>tool</span><span>=</span><span>action</span><span>,</span> <span>tool_input</span><span>=</span><span>action_input</span><span>,</span> <span>log</span><span>=</span><span>llm_output</span><span>)</span> <span># Create the custom output parser </span><span>output_parser</span> <span>=</span> <span>CustomOutputParser</span><span>()</span> <span># Define the LLM chain </span><span>llm</span> <span>=</span> <span>OpenAI</span><span>(</span><span>temperature</span><span>=</span><span>0</span><span>)</span> <span>llm_chain</span> <span>=</span> <span>LLMChain</span><span>(</span><span>llm</span><span>=</span><span>llm</span><span>,</span> <span>prompt</span><span>=</span><span>prompt</span><span>)</span> <span># Define the custom agent </span><span>agent</span> <span>=</span> <span>LLMSingleActionAgent</span><span>(</span> <span>llm_chain</span><span>=</span><span>llm_chain</span><span>,</span> <span>output_parser</span><span>=</span><span>output_parser</span><span>,</span> <span>stop</span><span>=</span><span>[</span><span>"</span><span>\n</span><span>Observation:</span><span>"</span><span>],</span> <span>allowed_tools</span><span>=</span><span>[</span><span>tool</span><span>.</span><span>name</span> <span>for</span> <span>tool</span> <span>in</span> 
<span>tools</span><span>]</span> <span>)</span> <span># Create an agent executor </span><span>agent_executor</span> <span>=</span> <span>AgentExecutor</span><span>.</span><span>from_agent_and_tools</span><span>(</span><span>agent</span><span>=</span><span>agent</span><span>,</span> <span>tools</span><span>=</span><span>tools</span><span>,</span> <span>,</span> <span>verbose</span><span>=</span><span>True</span><span>)</span> <span># Run the agent </span><span>result</span> <span>=</span> <span>agent_executor</span><span>.</span><span>run</span><span>(</span><span>“</span><span>What</span><span>’</span><span>s</span> <span>the</span> <span>latest</span> <span>news</span> <span>about</span> <span>AI</span><span>?”</span><span>)</span> <span>print</span><span>(</span><span>result</span><span>)</span>from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent from langchain.prompts import StringPromptTemplate from langchain import OpenAI, SerpAPIWrapper, LLMChain from typing import List, Union from langchain.schema import AgentAction, AgentFinish import re # Define custom tools search = SerpAPIWrapper() tools = [ Tool( name="Search", func=search.run, description="Useful for answering questions about current events" ) ] # Define a custom prompt template template = """Answer the following questions as best you can: {input} Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [{tool_names}] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question Begin! Question: {input} Thought: To answer this question, I need to search for current information. 
{agent_scratchpad}""" class CustomPromptTemplate(StringPromptTemplate): template: str tools: List[Tool] def format(self, **kwargs) -> str: intermediate_steps = kwargs.pop("intermediate_steps") thoughts = "" for action, observation in intermediate_steps: thoughts += action.log thoughts += f"\nObservation: {observation}\nThought: " kwargs["agent_scratchpad"] = thoughts kwargs["tool_names"] = ", ".join([tool.name for tool in self.tools]) return self.template.format(**kwargs) prompt = CustomPromptTemplate( template=template, tools=tools, input_variables=["input", "intermediate_steps"] ) # Define a custom output parser class CustomOutputParser: def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]: if "Final Answer:" in llm_output: return AgentFinish( return_values={"output": llm_output.split("Final Answer:")[-1].strip()}, log=llm_output, ) action_match = re.search(r"Action: (\w+)", llm_output, re.DOTALL) action_input_match = re.search(r"Action Input: (.*)", llm_output, re.DOTALL) if not action_match or not action_input_match: raise ValueError(f"Could not parse LLM output: `{llm_output}`") action = action_match.group(1).strip() action_input = action_input_match.group(1).strip(" ").strip('"') return AgentAction(tool=action, tool_input=action_input, log=llm_output) # Create the custom output parser output_parser = CustomOutputParser() # Define the LLM chain llm = OpenAI(temperature=0) llm_chain = LLMChain(llm=llm, prompt=prompt) # Define the custom agent agent = LLMSingleActionAgent( llm_chain=llm_chain, output_parser=output_parser, stop=["\nObservation:"], allowed_tools=[tool.name for tool in tools] ) # Create an agent executor agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, , verbose=True) # Run the agent result = agent_executor.run(“What’s the latest news about AI?”) print(result)
Conclusion
LangChain’s chains and agents offer robust capabilities for constructing sophisticated AI-driven systems. When integrated with large language models (LLMs), they enable the creation of adaptable, smart applications designed to tackle a variety of tasks. As you progress through your LangChain journey, feel free to experiment with diverse chain types, agent setups, and custom modules to fully harness the framework’s potential.