
LangChain Handbook (Python Edition) #25, Modules: Agents (Part 2)


Agents

In this part of the documentation we cover the different types of agents, regardless of which specific tools they are used with.

For a high-level overview of the different agent types, see the following documentation:

Agent Types

For documentation on how to create a custom agent, see below:

- Custom Agent
- Custom LLM Agent
- Custom LLM Agent (with a ChatModel)
- Custom MRKL Agent
- Custom MultiAction Agent
- Custom Agent with Tool Retrieval

We also have documentation that goes deeper into each agent type:

- Conversation Agent (for Chat Models)
- Conversation Agent
- MRKL
- MRKL Chat
- ReAct
- Self Ask With Search
- Structured Tool Chat Agent

Agent Types

Agents use an LLM to determine which actions to take and in what order. An action can either be using a tool and observing its output, or returning a response to the user. Here are the agents available in LangChain.

zero-shot-react-description

This agent uses the ReAct framework to determine which tool to use based solely on the tool's description. Any number of tools can be provided, and a description is required for every tool.
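For example, a minimal sketch of initializing such an agent (assuming a configured SerpAPI key; the tool description and question mirror the examples later in this guide):

from langchain import OpenAI, SerpAPIWrapper
from langchain.agents import initialize_agent, Tool, AgentType

# A single tool; the agent chooses it purely from this description.
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    )
]

llm = OpenAI(temperature=0)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("How many people live in Canada as of 2023?")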

react-docstore

This agent uses the ReAct framework to interact with a docstore. Two tools must be provided: a Search tool and a Lookup tool (and they must be named exactly that). The Search tool should search for a document, while the Lookup tool should look up a term in the most recently found document. This agent corresponds to the original ReAct paper, specifically the Wikipedia example.
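A minimal sketch, assuming the Wikipedia docstore wrapper and the DocstoreExplorer helper that LangChain shipped at the time of writing (the question is only illustrative):

from langchain import OpenAI, Wikipedia
from langchain.agents import initialize_agent, Tool, AgentType
from langchain.agents.react.base import DocstoreExplorer

# DocstoreExplorer exposes the search/lookup pair that the ReAct docstore agent expects.
docstore = DocstoreExplorer(Wikipedia())
tools = [
    Tool(name="Search", func=docstore.search, description="Search for a document"),
    Tool(name="Lookup", func=docstore.lookup, description="Look up a term in the most recently found document"),
]

llm = OpenAI(temperature=0)
react = initialize_agent(tools, llm, agent=AgentType.REACT_DOCSTORE, verbose=True)
react.run("What profession do Nicholas Ray and Elia Kazan have in common?")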

self-ask-with-search

This agent uses a single tool, which must be named Intermediate Answer. The tool should be able to look up factual answers to questions. This agent corresponds to the original self-ask-with-search paper, where a Google search API was provided as the tool. (A complete runnable example appears in the Self Ask With Search section below.)

conversational-react-description

This agent is designed for conversational settings. The prompt is crafted to make the agent helpful and conversational. It uses the ReAct framework to decide which tool to use, and uses memory to remember previous conversational interactions. (A full example appears in the Conversation Agent section below.)

Custom Agent

This notebook goes through how to create your own custom agent.

An agent consists of two parts:

- Tools: the tools the agent has available to use.
- The agent class itself: this decides which action to take.

In this notebook we walk through how to create a custom agent.

from langchain.agents import Tool, AgentExecutor, BaseSingleActionAgent
from langchain import OpenAI, SerpAPIWrapper
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
        return_direct=True,
    )
]
from typing import List, Tuple, Any, Union
from langchain.schema import AgentAction, AgentFinish


class FakeAgent(BaseSingleActionAgent):
    """Fake Custom Agent."""

    @property
    def input_keys(self):
        return ["input"]

    def plan(
        self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any
    ) -> Union[AgentAction, AgentFinish]:
        """Given input, decide what to do.

        Args:
            intermediate_steps: Steps the LLM has taken to date,
                along with observations
            **kwargs: User inputs.

        Returns:
            Action specifying what tool to use.
        """
        return AgentAction(tool="Search", tool_input=kwargs["input"], log="")

    async def aplan(
        self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any
    ) -> Union[AgentAction, AgentFinish]:
        """Given input, decide what to do.

        Args:
            intermediate_steps: Steps the LLM has taken to date,
                along with observations
            **kwargs: User inputs.

        Returns:
            Action specifying what tool to use.
        """
        return AgentAction(tool="Search", tool_input=kwargs["input"], log="")
agent = FakeAgent()
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
agent_executor.run("How many people live in canada as of 2023?")
> Entering new AgentExecutor chain...
The current population of Canada is 38,669,152 as of Monday, April 24, 2023, based on Worldometer elaboration of the latest United Nations data.
> Finished chain.
'The current population of Canada is 38,669,152 as of Monday, April 24, 2023, based on Worldometer elaboration of the latest United Nations data.'
Custom LLM Agent

This notebook goes through how to create your own custom LLM agent.

An LLM agent consists of the following parts:

- PromptTemplate: the prompt template used to instruct the language model on what to do
- LLM: the language model powering the agent
- stop sequence: instructs the LLM to stop generating as soon as this string is found
- OutputParser: determines how to parse the LLM output into an AgentAction or AgentFinish object

The LLMAgent is used in an AgentExecutor. The AgentExecutor can largely be thought of as a loop that:

- Passes user input and any previous steps to the agent (in this case, the LLMAgent)
- If the agent returns an AgentFinish, returns it directly to the user
- If the agent returns an AgentAction, uses it to call a tool and get an Observation
- Repeats, passing the AgentAction and Observation back to the agent until an AgentFinish is emitted

AgentAction is a response that consists of an action and an action_input. action refers to which tool to use, and action_input refers to the input to that tool. A log can also be provided as further context (useful for logging, tracing, etc.).

AgentFinish is a response that contains the final message to be sent back to the user. It should be used to end an agent run.
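To make this loop concrete, here is a rough simplified sketch of the control flow (illustrative only, not the actual AgentExecutor implementation; run_agent_loop, tool_map, and max_iterations are hypothetical names):

from langchain.schema import AgentFinish

def run_agent_loop(agent, tool_map, user_input, max_iterations=10):
    # Simplified illustration of what AgentExecutor does; not the real implementation.
    intermediate_steps = []
    for _ in range(max_iterations):
        decision = agent.plan(intermediate_steps, input=user_input)
        if isinstance(decision, AgentFinish):
            # The agent is done: hand the final values back to the user.
            return decision.return_values
        # Otherwise it is an AgentAction: run the chosen tool and record the observation.
        observation = tool_map[decision.tool].run(decision.tool_input)
        intermediate_steps.append((decision, observation))
    raise RuntimeError("Agent did not return an AgentFinish within the iteration limit")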

In this notebook we walk through how to create a custom LLM agent.

Set up the environment

Do the necessary imports, etc.

from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser
from langchain.prompts import StringPromptTemplate
from langchain import OpenAI, SerpAPIWrapper, LLMChain
from typing import List, Union
from langchain.schema import AgentAction, AgentFinish
import re
Set up tools

Set up any tools the agent may want to use. These may need to be put in the prompt (so that the agent knows to use them).

# Define which tools the agent can use to answer user queries
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    )
]
Prompt template

This instructs the agent on what to do. Generally, the template should include:

- tools: which tools the agent has access to, and how and when to call them.
- intermediate_steps: tuples of previous (AgentAction, Observation) pairs. These are generally not passed directly to the model; instead, the prompt template formats them in a specific way.
- input: the generic user input

# Set up the base template
template = """Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin! Remember to speak as a pirate when giving your final answer. Use lots of "Arg"s

Question: {input}
{agent_scratchpad}"""
# Set up a prompt template
class CustomPromptTemplate(StringPromptTemplate):
    # The template to use
    template: str
    # The list of tools available
    tools: List[Tool]

    def format(self, **kwargs) -> str:
        # Get the intermediate steps (AgentAction, Observation tuples)
        # Format them in a particular way
        intermediate_steps = kwargs.pop("intermediate_steps")
        thoughts = ""
        for action, observation in intermediate_steps:
            thoughts += action.log
            thoughts += f"\nObservation: {observation}\nThought: "
        # Set the agent_scratchpad variable to that value
        kwargs["agent_scratchpad"] = thoughts
        # Create a tools variable from the list of tools provided
        kwargs["tools"] = "\n".join([f"{tool.name}: {tool.description}" for tool in self.tools])
        # Create a list of tool names for the tools provided
        kwargs["tool_names"] = ", ".join([tool.name for tool in self.tools])
        return self.template.format(**kwargs)
prompt = CustomPromptTemplate(
    template=template,
    tools=tools,
    # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically
    # This includes the `intermediate_steps` variable because that is needed
    input_variables=["input", "intermediate_steps"],
)
Output parser

The output parser is responsible for parsing the LLM output into AgentAction and AgentFinish. This usually depends heavily on the prompt used.

This is where you can change the parsing to do retries, handle whitespace, and so on (a sketch of a more lenient variant appears right after the parser definition below).

class CustomOutputParser(AgentOutputParser):

    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        # Check if agent should finish
        if "Final Answer:" in llm_output:
            return AgentFinish(
                # Return values is generally always a dictionary with a single `output` key
                # It is not recommended to try anything else at the moment :)
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        # Parse out the action and action input
        regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
        match = re.search(regex, llm_output, re.DOTALL)
        if not match:
            raise ValueError(f"Could not parse LLM output: `{llm_output}`")
        action = match.group(1).strip()
        action_input = match.group(2)
        # Return the action and action input
        return AgentAction(tool=action, tool_input=action_input.strip(" ").strip('"'), log=llm_output)
output_parser = CustomOutputParser()
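As noted above, the parser is the natural place to add retries or more forgiving handling. A hypothetical, more lenient variant (not part of the original notebook) might look like this:

class LenientOutputParser(CustomOutputParser):
    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        try:
            # Strip stray whitespace before delegating to the strict parser above.
            return super().parse(llm_output.strip())
        except ValueError:
            # Instead of raising, fall back to treating the whole output as the
            # final answer so a single malformed generation doesn't abort the run.
            return AgentFinish(return_values={"output": llm_output.strip()}, log=llm_output)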
Set up the LLM

Choose the LLM you want to use!

llm = OpenAI(temperature=0)
Define the stop sequence

This is important because it tells the LLM when to stop generating.

This depends heavily on the prompt and model you are using. Generally, you want this to be whatever token you use in the prompt to denote the start of an Observation (otherwise, the LLM may hallucinate an observation for you).
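For the pirate prompt above, observations are written with the prefix "Observation:", so the matching stop sequence is the list passed to the agent in the next step. As a minimal sketch (stop_sequence is just an illustrative name; the notebook passes the list directly to LLMSingleActionAgent):

# Stop generation right before the model would start inventing an observation;
# the string must match the prefix the prompt template uses.
stop_sequence = ["\nObservation:"]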

Set up the agent

We can now combine everything to set up our agent.

# LLM chain consisting of the LLM and a prompt
llm_chain = LLMChain(llm=llm, prompt=prompt)
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["\nObservation:"],
    allowed_tools=tool_names,
)
Use the agent

Now we can use it!

agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
agent_executor.run("How many people live in canada as of 2023?")
> Entering new AgentExecutor chain...
Thought: I need to find out the population of Canada in 2023
Action: Search
Action Input: Population of Canada in 2023

Observation: The current population of Canada is 38,658,314 as of Wednesday, April 12, 2023, based on Worldometer elaboration of the latest United Nations data.
I now know the final answer
Final Answer: Arrr, there be 38,658,314 people livin' in Canada as of 2023!

> Finished chain.
"Arrr, there be 38,658,314 people livin' in Canada as of 2023!"
Adding memory

If you want to add memory to the agent, you will need to:

- Add a place in the custom prompt for the chat_history
- Add a memory object to the agent executor

# Set up the base template
template_with_history = """Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin! Remember to speak as a pirate when giving your final answer. Use lots of "Arg"s

Previous conversation history:
{history}

New question: {input}
{agent_scratchpad}"""
prompt_with_history = CustomPromptTemplate(
    template=template_with_history,
    tools=tools,
    # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically
    # This includes the `intermediate_steps` variable because that is needed
    input_variables=["input", "intermediate_steps", "history"],
)
llm_chain = LLMChain(llm=llm, prompt=prompt_with_history)
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["\nObservation:"],
    allowed_tools=tool_names,
)
from langchain.memory import ConversationBufferWindowMemory
memory = ConversationBufferWindowMemory(k=2)
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True, memory=memory)
agent_executor.run("How many people live in canada as of 2023?")
> Entering new AgentExecutor chain...
Thought: I need to find out the population of Canada in 2023
Action: Search
Action Input: Population of Canada in 2023

Observation: The current population of Canada is 38,658,314 as of Wednesday, April 12, 2023, based on Worldometer elaboration of the latest United Nations data.
I now know the final answer
Final Answer: Arrr, there be 38,658,314 people livin' in Canada as of 2023!

> Finished chain.
"Arrr, there be 38,658,314 people livin' in Canada as of 2023!"
agent_executor.run("how about in mexico?")
> Entering new AgentExecutor chain...
Thought: I need to find out how many people live in Mexico.
Action: Search
Action Input: How many people live in Mexico as of 2023?

Observation: The current population of Mexico is 132,679,922 as of Tuesday, April 11, 2023, based on Worldometer elaboration of the latest United Nations data. Mexico 2020 ...
I now know the final answer.
Final Answer: Arrr, there be 132,679,922 people livin' in Mexico as of 2023!

> Finished chain.
"Arrr, there be 132,679,922 people livin' in Mexico as of 2023!"
Conversation Agent (for Chat Models)

This notebook walks through using an agent optimized for conversation, using chat models. Other agents are often optimized for using tools to figure out the best response, which is not ideal in a conversational setting where you may want the agent to also be able to chat with the user.

This is accomplished with a specific type of agent (chat-conversational-react-description) that expects to be used with a memory component.

!pip install langchain
!pip install google-search-results
!pip install openai
from langchain.agents import Tool
from langchain.memory import ConversationBufferMemory
from langchain.chat_models import ChatOpenAI
from langchain.utilities import SerpAPIWrapper
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from getpass import getpass
SERPAPI_API_KEY = getpass()
search = SerpAPIWrapper(serpapi_api_key=SERPAPI_API_KEY)
tools = [
    Tool(
        name="Current Search",
        func=search.run,
        description="useful for when you need to answer questions about current events or the current state of the world. the input to this should be a single search term.",
    ),
]
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
OPENAI_API_KEY = getpass()
llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY, temperature=0)
agent_chain = initialize_agent(tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)
agent_chain.run(input="hi, i am bob")
> Entering new AgentExecutor chain...
{
    "action": "Final Answer",
    "action_input": "Hello Bob! How can I assist you today?"
}

> Finished chain.
'Hello Bob! How can I assist you today?'
agent_chain.run(input="what's my name?")
> Entering new AgentExecutor chain...
{
    "action": "Final Answer",
    "action_input": "Your name is Bob."
}

> Finished chain.
'Your name is Bob.'
agent_chain.run("what are some good dinners to make this week, if i like thai food?")
> Entering new AgentExecutor chain...
{
    "action": "Current Search",
    "action_input": "Thai food dinner recipes"
}
Observation: 64 easy Thai recipes for any night of the week · Thai curry noodle soup · Thai yellow cauliflower, snake bean and tofu curry · Thai-spiced chicken hand pies · Thai ...
Thought:{
    "action": "Final Answer",
    "action_input": "Here are some Thai food dinner recipes you can try this week: Thai curry noodle soup, Thai yellow cauliflower, snake bean and tofu curry, Thai-spiced chicken hand pies, and many more. You can find the full list of recipes at the source I found earlier."
}

> Finished chain.
'Here are some Thai food dinner recipes you can try this week: Thai curry noodle soup, Thai yellow cauliflower, snake bean and tofu curry, Thai-spiced chicken hand pies, and many more. You can find the full list of recipes at the source I found earlier.'
agent_chain.run(input="tell me the last letter in my name, and also tell me who won the world cup in 1978?")
> Entering new AgentExecutor chain...
{
    "action": "Final Answer",
    "action_input": "The last letter in your name is 'b'. Argentina won the World Cup in 1978."
}

> Finished chain.
"The last letter in your name is 'b'. Argentina won the World Cup in 1978."
agent_chain.run(input="whats the weather like in pomfret?")
> Entering new AgentExecutor chain...
{
    "action": "Current Search",
    "action_input": "weather in pomfret"
}
Observation: Cloudy with showers. Low around 55F. Winds S at 5 to 10 mph. Chance of rain 60%. Humidity76%.
Thought:{
    "action": "Final Answer",
    "action_input": "Cloudy with showers. Low around 55F. Winds S at 5 to 10 mph. Chance of rain 60%. Humidity76%."
}

> Finished chain.
'Cloudy with showers. Low around 55F. Winds S at 5 to 10 mph. Chance of rain 60%. Humidity76%.'
Conversation Agent

This notebook walks through using an agent optimized for conversation. Other agents are often optimized for using tools to figure out the best response, which is not ideal in a conversational setting where you may want the agent to also be able to chat with the user.

This is accomplished with a specific type of agent (conversational-react-description) that expects to be used with a memory component.

from langchain.agents import Tool
from langchain.agents import AgentType
from langchain.memory import ConversationBufferMemory
from langchain import OpenAI
from langchain.utilities import SerpAPIWrapper
from langchain.agents import initialize_agent
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Current Search",
        func=search.run,
        description="useful for when you need to answer questions about current events or the current state of the world",
    ),
]
memory = ConversationBufferMemory(memory_key="chat_history")
llm = OpenAI(temperature=0)
agent_chain = initialize_agent(tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)
agent_chain.run(input="hi, i am bob")
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? No
AI: Hi Bob, nice to meet you! How can I help you today?

> Finished chain.
'Hi Bob, nice to meet you! How can I help you today?'
agent_chain.run(input="what's my name?")
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? No
AI: Your name is Bob!

> Finished chain.
'Your name is Bob!'
agent_chain.run("what are some good dinners to make this week, if i like thai food?")
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Current Search
Action Input: Thai food dinner recipes
Observation: 59 easy Thai recipes for any night of the week · Marion Grasby's Thai spicy chilli and basil fried rice · Thai curry noodle soup · Marion Grasby's Thai Spicy ...
Thought: Do I need to use a tool? No
AI: Here are some great Thai dinner recipes you can try this week: Marion Grasby's Thai Spicy Chilli and Basil Fried Rice, Thai Curry Noodle Soup, Thai Green Curry with Coconut Rice, Thai Red Curry with Vegetables, and Thai Coconut Soup. I hope you enjoy them!

> Finished chain.
"Here are some great Thai dinner recipes you can try this week: Marion Grasby's Thai Spicy Chilli and Basil Fried Rice, Thai Curry Noodle Soup, Thai Green Curry with Coconut Rice, Thai Red Curry with Vegetables, and Thai Coconut Soup. I hope you enjoy them!"
agent_chain.run(input="tell me the last letter in my name, and also tell me who won the world cup in 1978?")
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Current Search
Action Input: Who won the World Cup in 1978
Observation: Argentina national football team
Thought: Do I need to use a tool? No
AI: The last letter in your name is "b" and the winner of the 1978 World Cup was the Argentina national football team.

> Finished chain.
'The last letter in your name is "b" and the winner of the 1978 World Cup was the Argentina national football team.'
agent_chain.run(input="whats the current temperature in pomfret?")
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Current Search
Action Input: Current temperature in Pomfret
Observation: Partly cloudy skies. High around 70F. Winds W at 5 to 10 mph. Humidity41%.
Thought: Do I need to use a tool? No
AI: The current temperature in Pomfret is around 70F with partly cloudy skies and winds W at 5 to 10 mph. The humidity is 41%.

> Finished chain.
'The current temperature in Pomfret is around 70F with partly cloudy skies and winds W at 5 to 10 mph. The humidity is 41%.'
Self Ask With Search

This notebook showcases the Self Ask With Search chain.

from langchain import OpenAI, SerpAPIWrapper
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType

llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Intermediate Answer",
        func=search.run,
        description="useful for when you need to ask with search",
    )
]

self_ask_with_search = initialize_agent(tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True)
self_ask_with_search.run("What is the hometown of the reigning men's U.S. Open champion?")
> Entering new AgentExecutor chain...
 Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Carlos Alcaraz Garfia
Follow up: Where is Carlos Alcaraz Garfia from?
Intermediate answer: El Palmar, Spain
So the final answer is: El Palmar, Spain

> Finished chain.
'El Palmar, Spain'
Structured Tool Chat Agent

This notebook walks through using a chat agent that is capable of using multi-input tools.

Older agents are configured to specify an action input as a single string, but this agent can use the provided tools' args_schema to populate the action input.

This functionality is natively available in the structured-chat-zero-shot-react-description agent type (or AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION).
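For example, a minimal sketch of a multi-input tool, assuming the StructuredTool helper available in recent LangChain versions (multiply is a hypothetical tool; the initialize_agent call is commented out because llm and the imports are set up just below):

from langchain.tools import StructuredTool

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# The args_schema (two integer fields) is inferred from the type-annotated signature,
# so the structured chat agent can fill in both inputs rather than a single string.
multiply_tool = StructuredTool.from_function(multiply)

# agent = initialize_agent([multiply_tool], llm,
#     agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)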

import os

os.environ["LANGCHAIN_TRACING"] = "true"  # If you want to trace the execution of the program, set to "true"
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent
Initialize tools

We will test the agent using a web browser.

from langchain.agents.agent_toolkits import PlayWrightBrowserToolkit
from langchain.tools.playwright.utils import (
    create_async_playwright_browser,
    create_sync_playwright_browser,  # A synchronous browser is available, though it isn't compatible with jupyter.
)

# This import is required only for jupyter notebooks, since they have their own eventloop
import nest_asyncio
nest_asyncio.apply()
async_browser = create_async_playwright_browser()
browser_toolkit = PlayWrightBrowserToolkit.from_browser(async_browser=async_browser)
tools = browser_toolkit.get_tools()
llm = ChatOpenAI(temperature=0)  # Also works well with Anthropic models
agent_chain = initialize_agent(tools, llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
response = await agent_chain.arun(input="Hi I'm Erica.")print(response)
> Entering new AgentExecutor chain...
Action:
```
{
  "action": "Final Answer",
  "action_input": "Hello Erica, how can I assist you today?"
}
```

> Finished chain.
Hello Erica, how can I assist you today?
response = await agent_chain.arun(input="Don't need help really just chatting.")print(response)
> Entering new AgentExecutor chain...

> Finished chain.
I'm here to chat! How's your day going?
response = await agent_chain.arun(input="Browse to blog.langchain.dev and summarize the text, please.")print(response)
> Entering new AgentExecutor chain...
Action:
```
{
  "action": "navigate_browser",
  "action_input": {
    "url": "https://blog.langchain.dev"
  }
}
```
Observation: Navigating to https://blog.langchain.dev returned status code 200
Thought: I need to extract the text from the webpage to summarize it.
Action:
```
{
  "action": "extract_text",
  "action_input": {}
}
```
Observation: LangChain LangChain Home About GitHub Docs LangChain The official LangChain blog. Auto-Evaluator Opportunities Editor's Note: this is a guest blog post by Lance Martin.TL;DRWe recently open-sourced an auto-evaluator tool for grading LLM question-answer chains. We are now releasing an open source, free to use hosted app and API to expand usability. Below we discuss a few opportunities to further improve May 1, 2023 5 min read Callbacks Improvements TL;DR: We're announcing improvements to our callbacks system, which powers logging, tracing, streaming output, and some awesome third-party integrations. This will better support concurrent runs with independent callbacks, tracing of deeply nested trees of LangChain components, and callback handlers scoped to a single request (which is super useful for May 1, 2023 3 min read Unleashing the power of AI Collaboration with Parallelized LLM Agent Actor Trees Editor's note: the following is a guest blog post from Cyrus at Shaman AI. We use guest blog posts to highlight interesting and novel applciations, and this is certainly that. There's been a lot of talk about agents recently, but most have been discussions around a single agent. If multiple Apr 28, 2023 4 min read Gradio & LLM Agents Editor's note: this is a guest blog post from Freddy Boulton, a software engineer at Gradio. We're excited to share this post because it brings a large number of exciting new tools into the ecosystem. Agents are largely defined by the tools they have, so to be able to equip Apr 23, 2023 4 min read RecAlign - The smart content filter for social media feed [Editor's Note] This is a guest post by Tian Jin. We are highlighting this application as we think it is a novel use case. Specifically, we think recommendation systems are incredibly impactful in our everyday lives and there has not been a ton of discourse on how LLMs will impact Apr 22, 2023 3 min read Improving Document Retrieval with Contextual Compression Note: This post assumes some familiarity with LangChain and is moderately technical. TL;DR: We’ve introduced a new abstraction and a new document Retriever to facilitate the post-processing of retrieved documents. Specifically, the new abstraction makes it easy to take a set of retrieved documents and extract from them Apr 20, 2023 3 min read Autonomous Agents & Agent Simulations Over the past two weeks, there has been a massive increase in using LLMs in an agentic manner. Specifically, projects like AutoGPT, BabyAGI, CAMEL, and Generative Agents have popped up. The LangChain community has now implemented some parts of all of those projects in the LangChain framework. While researching and Apr 18, 2023 7 min read AI-Powered Medical Knowledge: Revolutionizing Care for Rare Conditions [Editor's Note]: This is a guest post by Jack Simon, who recently participated in a hackathon at Williams College. He built a LangChain-powered chatbot focused on appendiceal cancer, aiming to make specialized knowledge more accessible to those in need.
If you are interested in building a chatbot for another rare Apr 17, 2023 3 min read Auto-Eval of Question-Answering Tasks By Lance MartinContextLLM ops platforms, such as LangChain, make it easy to assemble LLM components (e.g., models, document retrievers, data loaders) into chains. Question-Answering is one of the most popular applications of these chains. But it is often not always obvious to determine what parameters (e.g. Apr 15, 2023 3 min read Announcing LangChainJS Support for Multiple JS Environments TLDR: We're announcing support for running LangChain.js in browsers, Cloudflare Workers, Vercel/Next.js, Deno, Supabase Edge Functions, alongside existing support for Node.js ESM and CJS. See install/upgrade docs and breaking changes list.ContextOriginally we designed LangChain.js to run in Node.js, which is the Apr 11, 2023 3 min read LangChain x Supabase Supabase is holding an AI Hackathon this week. Here at LangChain we are big fans of both Supabase and hackathons, so we thought this would be a perfect time to highlight the multiple ways you can use LangChain and Supabase together.The reason we like Supabase so much is that Apr 8, 2023 2 min read Announcing our $10M seed round led by Benchmark It was only six months ago that we released the first version of LangChain, but it seems like several years. When we launched, generative AI was starting to go mainstream: stable diffusion had just been released and was captivating people’s imagination and fueling an explosion in developer activity, Jasper Apr 4, 2023 4 min read Custom Agents One of the most common requests we've heard is better functionality and documentation for creating custom agents. This has always been a bit tricky - because in our mind it's actually still very unclear what an "agent" actually is, and therefor what the "right" abstractions for them may be. Recently, Apr 3, 2023 3 min read Retrieval TL;DR: We are adjusting our abstractions to make it easy for other retrieval methods besides the LangChain VectorDB object to be used in LangChain. This is done with the goals of (1) allowing retrievers constructed elsewhere to be used more easily in LangChain, (2) encouraging more experimentation with alternative Mar 23, 2023 4 min read LangChain + Zapier Natural Language Actions (NLA) We are super excited to team up with Zapier and integrate their new Zapier NLA API into LangChain, which you can now use with your agents and chains. With this integration, you have access to the 5k+ apps and 20k+ actions on Zapier's platform through a natural language API interface. Mar 16, 2023 2 min read Evaluation Evaluation of language models, and by extension applications built on top of language models, is hard. With recent model releases (OpenAI, Anthropic, Google) evaluation is becoming a bigger and bigger issue. People are starting to try to tackle this, with OpenAI releasing OpenAI/evals - focused on evaluating OpenAI models. Mar 14, 2023 3 min read LLMs and SQL Francisco Ingham and Jon Luo are two of the community members leading the change on the SQL integrations. We’re really excited to write this blog post with them going over all the tips and tricks they’ve learned doing so. We’re even more excited to announce that we’ Mar 13, 2023 8 min read Origin Web Browser [Editor's Note]: This is the second of hopefully many guest posts. We intend to highlight novel applications building on top of LangChain. 
If you are interested in working with us on such a post, please reach out to harrison@langchain.dev.Authors: Parth Asawa (pgasawa@), Ayushi Batwara (ayushi.batwara@), Jason Mar 8, 2023 4 min read Prompt Selectors One common complaint we've heard is that the default prompt templates do not work equally well for all models. This became especially pronounced this past week when OpenAI released a ChatGPT API. This new API had a completely new interface (which required new abstractions) and as a result many users Mar 8, 2023 2 min read Chat Models Last week OpenAI released a ChatGPT endpoint. It came marketed with several big improvements, most notably being 10x cheaper and a lot faster. But it also came with a completely new API endpoint. We were able to quickly write a wrapper for this endpoint to let users use it like Mar 6, 2023 6 min read Using the ChatGPT API to evaluate the ChatGPT API OpenAI released a new ChatGPT API yesterday. Lots of people were excited to try it. But how does it actually compare to the existing API? It will take some time before there is a definitive answer, but here are some initial thoughts. Because I'm lazy, I also enrolled the help Mar 2, 2023 5 min read Agent Toolkits Today, we're announcing agent toolkits, a new abstraction that allows developers to create agents designed for a particular use-case (for example, interacting with a relational database or interacting with an OpenAPI spec). We hope to continue developing different toolkits that can enable agents to do amazing feats. Toolkits are supported Mar 1, 2023 3 min read TypeScript Support It's finally here... TypeScript support for LangChain.What does this mean? It means that all your favorite prompts, chains, and agents are all recreatable in TypeScript natively. Both the Python version and TypeScript version utilize the same serializable format, meaning that artifacts can seamlessly be shared between languages. As an Feb 17, 2023 2 min read Streaming Support in LangChain We’re excited to announce streaming support in LangChain. There's been a lot of talk about the best UX for LLM applications, and we believe streaming is at its core. We’ve also updated the chat-langchain repo to include streaming and async execution. We hope that this repo can serve Feb 14, 2023 2 min read LangChain + Chroma Today we’re announcing LangChain's integration with Chroma, the first step on the path to the Modern A.I Stack.LangChain - The A.I-native developer toolkitWe started LangChain with the intent to build a modular and flexible framework for developing A.I-native applications. Some of the use cases Feb 13, 2023 2 min read Page 1 of 2 Older Posts → LangChain © 2023 Sign up Powered by Ghost
Thought:

> Finished chain.
The LangChain blog has recently released an open-source auto-evaluator tool for grading LLM question-answer chains and is now releasing an open-source, free-to-use hosted app and API to expand usability. The blog also discusses various opportunities to further improve the LangChain platform.
response = await agent_chain.arun(input="What's the latest xkcd comic about?")print(response)
> Entering new AgentExecutor chain...
Thought: I can navigate to the xkcd website and extract the latest comic title and alt text to answer the question.
Action:
```
{
  "action": "navigate_browser",
  "action_input": {
    "url": "https://xkcd.com"
  }
}
```
Observation: Navigating to https://xkcd.com returned status code 200
Thought: I can extract the latest comic title and alt text using CSS selectors.
Action:
```
{
  "action": "get_elements",
  "action_input": {
    "selector": "#ctitle, #comic img",
    "attributes": ["alt", "src"]
  }
}
```
Observation: [{"alt": "Tapetum Lucidum", "src": "//imgs.xkcd.com/comics/tapetum_lucidum.png"}]
Thought:

> Finished chain.
The latest xkcd comic is titled "Tapetum Lucidum" and the image can be found at //imgs.xkcd.com/comics/tapetum_lucidum.png.
Adding memory

Here is how you can add memory to this agent:

from langchain.prompts import MessagesPlaceholder
from langchain.memory import ConversationBufferMemory
chat_history = MessagesPlaceholder(variable_name="chat_history")
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    memory=memory,
    agent_kwargs={
        "memory_prompts": [chat_history],
        "input_variables": ["input", "agent_scratchpad", "chat_history"],
    },
)
response = await agent_chain.arun(input="Hi I'm Erica.")print(response)
> Entering new AgentExecutor chain...
Action:
```
{
  "action": "Final Answer",
  "action_input": "Hi Erica! How can I assist you today?"
}
```

> Finished chain.
Hi Erica! How can I assist you today?
response = await agent_chain.arun(input="whats my name?")print(response)
> Entering new AgentExecutor chain...
Your name is Erica.

> Finished chain.
Your name is Erica.
