Develop a LangChain agent

This page shows you how to develop an agent by using the framework-specific LangChain template (the LangchainAgent class in the Vertex AI SDK for Python). The agent returns the exchange rate between two currencies on a specified date. Here are the steps:

  1. Define and configure a model
  2. Define and use a tool
  3. (Optional) Store chat history
  4. (Optional) Customize the prompt template
  5. (Optional) Customize the orchestration

Before you begin

Make sure your environment is set up by following the steps in the "Set up your environment" section.
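
If you haven't initialized the SDK in your session yet, a minimal sketch looks like the following (the project, location, and bucket values are placeholders that you must replace with your own):

import vertexai

vertexai.init(
    project="PROJECT_ID",          # Placeholder: your Google Cloud project ID.
    location="us-central1",        # Placeholder: a region that supports Agent Engine.
    staging_bucket="gs://BUCKET",  # Placeholder: a staging bucket for Agent Engine.
)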

Step 1. Define and configure a model

Define the model version to use:

model = "gemini-2.0-flash"

(Optional) Configure the model's safety settings. To learn more about the options available for safety settings in Gemini, see "Configure safety attributes". The following is an example of how you can configure the safety settings:

from langchain_google_vertexai import HarmBlockThreshold, HarmCategory

safety_settings = {
    HarmCategory.HARM_CATEGORY_UNSPECIFIED: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
}

(Optional) Specify model parameters in the following way:

model_kwargs = {
    # temperature (float): The sampling temperature controls the degree of
    # randomness in token selection.
    "temperature": 0.28,
    # max_output_tokens (int): The token limit determines the maximum amount of
    # text output from one prompt.
    "max_output_tokens": 1000,
    # top_p (float): Tokens are selected from most probable to least until
    # the sum of their probabilities equals the top-p value.
    "top_p": 0.95,
    # top_k (int): The next token is selected from among the top-k most
    # probable tokens. This is not supported by all model versions. See
    # https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/image-understanding#valid_parameter_values
    # for details.
    "top_k": None,
    # safety_settings (Dict[HarmCategory, HarmBlockThreshold]): The safety
    # settings to use for generating content.
    # (you must create your safety settings using the previous step first).
    "safety_settings": safety_settings,
}

Create a LangchainAgent using the model's configurations:

from vertexai import agent_engines

agent = agent_engines.LangchainAgent(
    model=model,                # Required.
    model_kwargs=model_kwargs,  # Optional.
)

If you're running in an interactive environment (for example, a terminal or a Colab notebook), you can run a query as an intermediate testing step:

response = agent.query(input="What is the exchange rate from US dollars to SEK today?")

print(response)

The response is a Python dictionary similar to the following example:

{"input": "What is the exchange rate from US dollars to Swedish currency?",
 "output": """I cannot provide the live exchange rate from US dollars to Swedish currency (Swedish krona, SEK).

**Here's why:**

* **Exchange rates constantly fluctuate.** Factors like global economics, interest rates, and political events cause
  these changes throughout the day.
* **Providing inaccurate information would be misleading.**

**How to find the current exchange rate:**

1. **Use a reliable online converter:** Many websites specialize in live exchange rates. Some popular options include:
   * Google Finance (google.com/finance)
   * XE.com
   * Bank websites (like Bank of America, Chase, etc.)
2. **Contact your bank or financial institution:** They can give you the exact exchange rate they are using.

Remember to factor in any fees or commissions when exchanging currency.
"""}

(Optional) Advanced customization

The LangchainAgent template uses ChatVertexAI by default, because it provides access to all the foundation models that are available in Google Cloud. To use a model that isn't available through ChatVertexAI, you can specify the model_builder= argument with a Python function of the following signature:

from typing import Optional

def model_builder(
    *,
    model_name: str,                      # Required. The name of the model
    model_kwargs: Optional[dict] = None,  # Optional. The model keyword arguments.
    **kwargs,                             # Optional. The remaining keyword arguments to be ignored.
):

For a list of chat models supported in LangChain and their capabilities, see "Chat models". The set of supported values for model= and model_kwargs= is specific to each chat model, so you have to refer to its corresponding documentation for details.

ChatVertexAI

Installed by default.

It's used in the LangchainAgent template when you omit the model_builder argument, for example:

from vertexai import agent_engines

agent = agent_engines.LangchainAgent(
    model=model,                # Required.
    model_kwargs=model_kwargs,  # Optional.
)

ChatAnthropic

First, follow their documentation to set up an account and install the package.

Next, define a model_builder that returns ChatAnthropic:

def model_builder(*, model_name: str, model_kwargs = None, **kwargs):
    from langchain_anthropic import ChatAnthropic
    return ChatAnthropic(model_name=model_name, **model_kwargs)

Finally, use it in the LangchainAgent template with the following code:

from vertexai import agent_engines

agent = agent_engines.LangchainAgent(
    model="claude-3-opus-20240229",                       # Required.
    model_builder=model_builder,                          # Required.
    model_kwargs={
        "api_key": "ANTHROPIC_API_KEY",  # Required.
        "temperature": 0.28,                              # Optional.
        "max_tokens": 1000,                               # Optional.
    },
)

ChatOpenAI

You can use ChatOpenAI in conjunction with Gemini's ChatCompletions API.

First, follow their documentation to install the package.

Next, define a model_builder that returns ChatOpenAI:

def model_builder(
    *,
    model_name: str,
    model_kwargs = None,
    project: str,   # Specified via vertexai.init
    location: str,  # Specified via vertexai.init
    **kwargs,
):
    import google.auth
    from langchain_openai import ChatOpenAI

    # Note: the credential lives for 1 hour by default.
    # After expiration, it must be refreshed.
    creds, _ = google.auth.default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
    auth_req = google.auth.transport.requests.Request()
    creds.refresh(auth_req)

    if model_kwargs is None:
        model_kwargs = {}

    endpoint = f"https://{location}-aiplatform.googleapis.com"
    base_url = f'{endpoint}/v1beta1/projects/{project}/locations/{location}/endpoints/openapi'

    return ChatOpenAI(
        model=model_name,
        base_url=base_url,
        api_key=creds.token,
        **model_kwargs,
    )

Finally, use it in the LangchainAgent template with the following code:

from vertexai import agent_engines

agent = agent_engines.LangchainAgent(
    model="google/gemini-2.0-flash",  # Or "meta/llama3-405b-instruct-maas"
    model_builder=model_builder,        # Required.
    model_kwargs={
        "temperature": 0,               # Optional.
        "max_retries": 2,               # Optional.
    },
)

Step 2. Define and use a tool

After you define your model, the next step is to define the tools that your model uses for reasoning. A tool can be a LangChain tool or a Python function. You can also convert a defined Python function to a LangChain tool (a conversion sketch appears after the test output below).

When defining your function, it's important to include comments that fully and clearly describe the function's parameters, what the function does, and what the function returns. This information is used by the model to determine which function to use. You must also test the function locally to confirm that it works.

Use the following code to define a function that returns an exchange rate:

def get_exchange_rate(
    currency_from: str = "USD",
    currency_to: str = "EUR",
    currency_date: str = "latest",
):
    """Retrieves the exchange rate between two currencies on a specified date.

    Uses the Frankfurter API (https://api.frankfurter.app/) to obtain
    exchange rate data.

    Args:
        currency_from: The base currency (3-letter currency code).
            Defaults to "USD" (US Dollar).
        currency_to: The target currency (3-letter currency code).
            Defaults to "EUR" (Euro).
        currency_date: The date for which to retrieve the exchange rate.
            Defaults to "latest" for the most recent exchange rate data.
            Can be specified in YYYY-MM-DD format for historical rates.

    Returns:
        dict: A dictionary containing the exchange rate information.
            Example: {"amount": 1.0, "base": "USD", "date": "2023-11-24",
                "rates": {"EUR": 0.95534}}
    """
    import requests
    response = requests.get(
        f"https://api.frankfurter.app/{currency_date}",
        params={"from": currency_from, "to": currency_to},
    )
    return response.json()

To test the function before you use it, run the following:

get_exchange_rate(currency_from="USD", currency_to="SEK")

The response should be similar to the following:

{'amount': 1.0, 'base': 'USD', 'date': '2024-02-22', 'rates': {'SEK': 10.3043}}
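
If you prefer an explicit LangChain tool object over a bare Python function, one option is to wrap the function with StructuredTool.from_function. The following is a minimal sketch (assuming langchain-core is installed); the resulting tool can be passed to tools= in the same way:

from langchain_core.tools import StructuredTool

exchange_rate_tool = StructuredTool.from_function(
    func=get_exchange_rate,  # Reuses the function (and its docstring) defined above.
    name="get_exchange_rate",
)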

To use the tool inside the LangchainAgent template, add it to the list of tools under the tools= argument:

from vertexai import agent_engines

agent = agent_engines.LangchainAgent(
    model=model,                # Required.
    tools=[get_exchange_rate],  # Optional.
    model_kwargs=model_kwargs,  # Optional.
)

You can test the agent locally by performing test queries against it. Run the following command to test the agent locally using US dollars and Swedish Krona:

response = agent.query(
    input="What is the exchange rate from US dollars to Swedish currency?"
)

The response is a dictionary that's similar to the following:

{"input": "What is the exchange rate from US dollars to Swedish currency?",
 "output": "For 1 US dollar you will get 10.7345 Swedish Krona."}

(Optional) Multiple tools

Tools for LangchainAgent can be defined and instantiated in other ways.

Grounding Tool

First, import the generative_models package and create the tool:

from vertexai.generative_models import grounding, Tool

grounded_search_tool = Tool.from_google_search_retrieval(
    grounding.GoogleSearchRetrieval()
)

Next, use the tool inside the LangchainAgent template:

from vertexai import agent_engines

agent = agent_engines.LangchainAgent(
    model=model,
    tools=[grounded_search_tool],
)
agent.query(input="When is the next total solar eclipse in US?")

The response is a dictionary that's similar to the following:

{"input": "When is the next total solar eclipse in US?",
 "output": """The next total solar eclipse in the U.S. will be on August 23, 2044.
 This eclipse will be visible from three states: Montana, North Dakota, and
 South Dakota. The path of totality will begin in Greenland, travel through
 Canada, and end around sunset in the United States."""}

For details, see "Grounding".

LangChain Tool

First, install the package that defines the tool.

pip install langchain-google-community

Next, import the package and create the tool.

from langchain_google_community import VertexAISearchRetriever
from langchain.tools.retriever import create_retriever_tool

retriever = VertexAISearchRetriever(
    project_id="PROJECT_ID",
    data_store_id="DATA_STORE_ID",
    location_id="DATA_STORE_LOCATION_ID",
    engine_data_type=1,
    max_documents=10,
)
movie_search_tool = create_retriever_tool(
    retriever=retriever,
    name="search_movies",
    description="Searches information about movies.",
)

Finally, use the tool inside the LangchainAgent template:

from vertexai import agent_engines

agent = agent_engines.LangchainAgent(
    model=model,
    tools=[movie_search_tool],
)
response = agent.query(
    input="List some sci-fi movies from the 1990s",
)

It should return a response like the following:

{"input": "List some sci-fi movies from the 1990s",
 "output": """Here are some sci-fi movies from the 1990s:
    * The Matrix (1999): A computer hacker learns from mysterious rebels about the true nature of his reality and his role in the war against its controllers.
    * Star Wars: Episode I - The Phantom Menace (1999): Two Jedi Knights escape a hostile blockade to find a queen and her protector, and come across a young boy [...]
    * Men in Black (1997): A police officer joins a secret organization that monitors extraterrestrial interactions on Earth.
    [...]
 """}

For the complete example, see the notebook.

To see more tools that are available in LangChain, see "Google Tools".

Vertex AI Extensions

First, import the extensions package and create the tool:

from typing import Optional

def generate_and_execute_code(
    query: str,
    files: Optional[list[str]] = None,
    file_gcs_uris: Optional[list[str]] = None,
) -> str:
    """Get the results of a natural language query by generating and executing
    a code snippet.

    Example queries: "Find the max in [1, 2, 5]" or "Plot average sales by
    year (from data.csv)". Only one of `file_gcs_uris` and `files` field
    should be provided.

    Args:
        query:
            The natural language query to generate and execute.
        file_gcs_uris:
            Optional. URIs of input files to use when executing the code
            snippet. For example, ["gs://input-bucket/data.csv"].
        files:
            Optional. Input files to use when executing the generated code.
            If specified, the file contents are expected to be base64-encoded.
            For example: [{"name": "data.csv", "contents": "aXRlbTEsaXRlbTI="}].
    Returns:
        The results of the query.
    """
    operation_params = {"query": query}
    if files:
        operation_params["files"] = files
    if file_gcs_uris:
        operation_params["file_gcs_uris"] = file_gcs_uris

    from vertexai.preview import extensions

    # If you have an existing extension instance, you can get it with
    # code_interpreter = extensions.Extension(resource_name).
    code_interpreter = extensions.Extension.from_hub("code_interpreter")
    return code_interpreter.execute(
        operation_id="generate_and_execute",
        operation_params=operation_params,
    )

Next, use the tool inside the LangchainAgent template:

from vertexai import agent_engines

agent = agent_engines.LangchainAgent(
    model=model,
    tools=[generate_and_execute_code],
)
agent.query(
    input="""Using the data below, construct a bar chart that includes only the height values with different colors for the bars:

    tree_heights_prices = {
      \"Pine\": {\"height\": 100, \"price\": 100},
      \"Oak\": {\"height\": 65, \"price\": 135},
      \"Birch\": {\"height\": 45, \"price\": 80},
      \"Redwood\": {\"height\": 200, \"price\": 200},
      \"Fir\": {\"height\": 180, \"price\": 162},
    }
    """
)

It should return a response like the following:

{"input": """Using the data below, construct a bar chart that includes only the height values with different colors for the bars:

 tree_heights_prices = {
    \"Pine\": {\"height\": 100, \"price\": 100},
    \"Oak\": {\"height\": 65, \"price\": 135},
    \"Birch\": {\"height\": 45, \"price\": 80},
    \"Redwood\": {\"height\": 200, \"price\": 200},
    \"Fir\": {\"height\": 180, \"price\": 162},
 }
 """,
 "output": """Here's the generated bar chart:
 ```python
 import matplotlib.pyplot as plt

 tree_heights_prices = {
    "Pine": {"height": 100, "price": 100},
    "Oak": {"height": 65, "price": 135},
    "Birch": {"height": 45, "price": 80},
    "Redwood": {"height": 200, "price": 200},
    "Fir": {"height": 180, "price": 162},
 }

 heights = [tree["height"] for tree in tree_heights_prices.values()]
 names = list(tree_heights_prices.keys())

 plt.bar(names, heights, color=['red', 'green', 'blue', 'purple', 'orange'])
 plt.xlabel('Tree Species')
 plt.ylabel('Height')
 plt.title('Tree Heights')
 plt.show()
 ```
 """}

For your deployed agent to access the Code Interpreter extension, you must add the Vertex AI User role (roles/aiplatform.user) to the AI Platform Reasoning Engine Service Agent service account. For details, see "Manage access".
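
As a sketch, the binding can be added with gcloud. The service agent address below is an assumption based on the usual service-PROJECT_NUMBER@gcp-sa-aiplatform-re.iam.gserviceaccount.com format, so verify it for your project before running:

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:service-PROJECT_NUMBER@gcp-sa-aiplatform-re.iam.gserviceaccount.com" \
    --role="roles/aiplatform.user"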

For details, see "Vertex AI Extensions".

You can use all (or a subset) of the tools that you have created in LangchainAgent:

from vertexai import agent_engines

agent = agent_engines.LangchainAgent(
    model=model,
    tools=[
        get_exchange_rate,         # Optional (Python function)
        grounded_search_tool,      # Optional (Grounding Tool)
        movie_search_tool,         # Optional (Langchain Tool)
        generate_and_execute_code, # Optional (Vertex Extension)
    ],
)

agent.query(input="When is the next total solar eclipse in US?")

(Optional) Tool configuration

With Gemini, you can place restrictions on how the model uses the tools. For example, instead of allowing the model to generate natural language responses, you can force it to only generate function calls ("forced function calling").

from vertexai import agent_engines
from vertexai.preview.generative_models import ToolConfig

agent = agent_engines.LangchainAgent(
    model="gemini-2.0-flash",
    tools=[search_arxiv, get_exchange_rate],
    model_tool_kwargs={
        "tool_config": {  # Specify the tool configuration here.
            "function_calling_config": {
                "mode": ToolConfig.FunctionCallingConfig.Mode.ANY,
                "allowed_function_names": ["search_arxiv", "get_exchange_rate"],
            },
        },
    },
)

agent.query(
    input="Explain the Schrodinger equation in a few sentences",
)

For details, see "Tool configuration".

Step 3. Store chat history

To track chat messages and append them to a database, define a get_session_history function and pass it in when you create the agent. This function should take in a session_id and return a BaseChatMessageHistory object.

  • session_id is an identifier for the session that these input messages belong to. This lets you maintain multiple conversations at the same time.
  • BaseChatMessageHistory is the interface for classes that can load and save message objects.
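
For quick local experiments before you set up one of the databases below, a minimal in-memory sketch (assuming a recent langchain-core; the history is lost when the process exits) could look like this:

from langchain_core.chat_history import InMemoryChatMessageHistory

session_store = {}  # Maps each session_id to its process-local history.

def get_session_history(session_id: str):
    # Create the history for this session on first use, then reuse it.
    if session_id not in session_store:
        session_store[session_id] = InMemoryChatMessageHistory()
    return session_store[session_id]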

Set up a database

For a list of Google's ChatMessageHistory providers that are supported in LangChain, see "Memory".

First, follow LangChain's documentation to install and use the relevant package to set up the database of your choice (for example, Firestore, Bigtable, or Spanner):

Next, define a get_session_history function as follows:

Firestore (Native Mode)

def get_session_history(session_id: str):
    from langchain_google_firestore import FirestoreChatMessageHistory
    from google.cloud import firestore

    client = firestore.Client(project="PROJECT_ID")
    return FirestoreChatMessageHistory(
        client=client,
        session_id=session_id,
        collection="TABLE_NAME",
        encode_message=False,
    )

Bigtable

def get_session_history(session_id: str):
    from langchain_google_bigtable import BigtableChatMessageHistory

    return BigtableChatMessageHistory(
        instance_id="INSTANCE_ID",
        table_id="TABLE_NAME",
        session_id=session_id,
    )

Spanner

def get_session_history(session_id: str):
    from langchain_google_spanner import SpannerChatMessageHistory

    return SpannerChatMessageHistory(
        instance_id="INSTANCE_ID",
        database_id="DATABASE_ID",
        table_name="TABLE_NAME",
        session_id=session_id,
    )

Finally, create the agent and pass the function in as chat_history:

from vertexai import agent_engines

agent = agent_engines.LangchainAgent(
    model=model,
    chat_history=get_session_history,  # <- new
)

Make sure to pass in a session_id when you query the agent, so that the agent has "memory" of past questions and answers:

agent.query(
    input="What is the exchange rate from US dollars to Swedish currency?",
    config={"configurable": {"session_id": "SESSION_ID"}},
)

You can check that subsequent queries retain the memory of the session:

response = agent.query(
    input="How much is 100 USD?",
    config={"configurable": {"session_id": "SESSION_ID"}},
)

print(response)

Step 4. Customize the prompt template

Prompt templates help to translate user input into instructions for a model, and are used to guide a model's response, helping it understand the context and generate relevant and coherent language-based output. For details, see "ChatPromptTemplates".

The default prompt template is divided sequentially into the following sections:

  • (Optional) System instruction: instructions for the agent to be applied across all queries.
  • (Optional) Chat history: messages corresponding to the chat history of a past session.
  • User input: the query from the user for the agent to respond to.
  • Agent scratchpad: messages created by the agent as it uses tools and performs reasoning to formulate a response to the user (for example, with function calling).

If you don't specify your own prompt template when you create the agent, a default prompt template is generated for you, which in full looks like the following:

from langchain_core.prompts import ChatPromptTemplate
from langchain.agents.format_scratchpad.tools import format_to_tool_messages

prompt_template = {
    "user_input": lambda x: x["input"],
    "history": lambda x: x["history"],
    "agent_scratchpad": lambda x: format_to_tool_messages(x["intermediate_steps"]),
} | ChatPromptTemplate.from_messages([
    ("system", "{system_instruction}"),
    ("placeholder", "{history}"),
    ("user", "{user_input}"),
    ("placeholder", "{agent_scratchpad}"),
])

You are implicitly using the full prompt template when you instantiate the agent in the following example:

from vertexai import agent_engines

system_instruction = "I help look up the rate between currencies"

agent = agent_engines.LangchainAgent(
    model=model,
    system_instruction=system_instruction,
    chat_history=get_session_history,
    tools=[get_exchange_rate],
)

You can override the default prompt template with your own, and use it when constructing the agent, for example:


from vertexai import agent_engines

custom_prompt_template = {
    "user_input": lambda x: x["input"],
    "history": lambda x: x["history"],
    "agent_scratchpad": lambda x: format_to_tool_messages(x["intermediate_steps"]),
} | ChatPromptTemplate.from_messages([
    ("placeholder", "{history}"),
    ("user", "{user_input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = agent_engines.LangchainAgent(
    model=model,
    prompt=custom_prompt_template,
    chat_history=get_session_history,
    tools=[get_exchange_rate],
)

agent.query(
    input="What is the exchange rate from US dollars to Swedish currency?",
    config={"configurable": {"session_id": "SESSION_ID"}},
)

Step 5. Customize the orchestration

All LangChain components implement the Runnable interface, which provides the input and output schemas for orchestration. LangchainAgent requires a runnable to be built for it to respond to queries. By default, LangchainAgent builds such a runnable by binding the model with tools, and uses an AgentExecutor that's wrapped into a RunnableWithMessageHistory when chat history is enabled.
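
For intuition, the default construction resembles the following sketch. This is an illustrative assumption rather than the library's exact code; create_tool_calling_agent is one way to bind a model to tools, and the real template handles prompts and keyword arguments more completely:

def default_like_builder(model, *, tools=None, prompt=None, chat_history=None, **kwargs):
    from langchain.agents import AgentExecutor, create_tool_calling_agent
    from langchain_core.runnables.history import RunnableWithMessageHistory

    # Bind the model to the tools and wrap it in an executor.
    agent = create_tool_calling_agent(model, tools or [], prompt)
    executor = AgentExecutor(agent=agent, tools=tools or [])
    if chat_history:
        # Add session-scoped memory when chat history is enabled.
        return RunnableWithMessageHistory(
            executor,
            chat_history,
            input_messages_key="input",
            history_messages_key="history",
        )
    return executor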

You might want to customize the orchestration if you intend to (i) implement an agent that performs a deterministic set of steps (rather than performing open-ended reasoning), or (ii) prompt the agent in a ReAct-like fashion to annotate each step with its reasons for performing that step. To do so, you have to override the default runnable when creating the LangchainAgent by specifying the runnable_builder= argument with a Python function of the following signature:

from typing import Optional
from langchain_core.language_models import BaseLanguageModel

def runnable_builder(
    model: BaseLanguageModel,
    *,
    system_instruction: Optional[str] = None,
    prompt: Optional["RunnableSerializable"] = None,
    tools: Optional[Sequence["_ToolLike"]] = None,
    chat_history: Optional["GetSessionHistoryCallable"] = None,
    model_tool_kwargs: Optional[Mapping[str, Any]] = None,
    agent_executor_kwargs: Optional[Mapping[str, Any]] = None,
    runnable_kwargs: Optional[Mapping[str, Any]] = None,
    **kwargs,
):

where

  • model corresponds to the chat model that's returned from model_builder (see "Define and configure a model").
  • tools and model_tool_kwargs correspond to the tools and configurations to be used (see "Define and use a tool").
  • chat_history corresponds to the database for storing chat messages (see "Store chat history").
  • system_instruction and prompt correspond to the prompt configuration (see "Customize the prompt template").
  • agent_executor_kwargs and runnable_kwargs are the keyword arguments that you can use to customize the runnable to be built.

This provides different options for customizing the orchestration logic.

ChatModel

In the simplest case, to create an agent without orchestration, you can override the runnable_builder of LangchainAgent to return the model directly:

from vertexai import agent_engines
from langchain_core.language_models import BaseLanguageModel

def llm_builder(model: BaseLanguageModel, **kwargs):
    return model

agent = agent_engines.LangchainAgent(
    model=model,
    runnable_builder=llm_builder,
)
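
You can then query the agent as usual; with no orchestration in place, the response is the chat model's direct reply rather than an AgentExecutor output dictionary:

response = agent.query(input="What is the exchange rate from US dollars to Swedish currency?")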

ReAct

To use a custom ReAct agent based on your own prompt (see "Customize the prompt template") that overrides the default tool-calling behavior, you need to override the runnable_builder of LangchainAgent:

from typing import Sequence
from langchain_core.language_models import BaseLanguageModel
from langchain_core.prompts import BasePromptTemplate
from langchain_core.tools import BaseTool
from langchain import hub

from vertexai import agent_engines

def react_builder(
    model: BaseLanguageModel,
    *,
    tools: Sequence[BaseTool],
    prompt: BasePromptTemplate,
    agent_executor_kwargs = None,
    **kwargs,
):
    from langchain.agents.react.agent import create_react_agent
    from langchain.agents import AgentExecutor

    agent = create_react_agent(model, tools, prompt)
    return AgentExecutor(agent=agent, tools=tools, **agent_executor_kwargs)

agent = agent_engines.LangchainAgent(
    model=model,
    tools=[get_exchange_rate],
    prompt=hub.pull("hwchase17/react"),
    agent_executor_kwargs={"verbose": True}, # Optional. For illustration.
    runnable_builder=react_builder,
)

LCEL Syntax

To construct the following graph using the LangChain Expression Language (LCEL),

   Input
   /   \
 Pros  Cons
   \   /
  Summary

you need to override the runnable_builder of LangchainAgent:

from vertexai import agent_engines

def lcel_builder(*, model, **kwargs):
    from operator import itemgetter
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.runnables import RunnablePassthrough
    from langchain_core.output_parsers import StrOutputParser

    output_parser = StrOutputParser()

    planner = ChatPromptTemplate.from_template(
        "Generate an argument about: {input}"
    ) | model | output_parser | {"argument": RunnablePassthrough()}

    pros = ChatPromptTemplate.from_template(
        "List the positive aspects of {argument}"
    ) | model | output_parser

    cons = ChatPromptTemplate.from_template(
        "List the negative aspects of {argument}"
    ) | model | output_parser

    final_responder = ChatPromptTemplate.from_template(
        "Argument:{argument}\nPros:\n{pros}\n\nCons:\n{cons}\n"
        "Generate a final response given the critique",
    ) | model | output_parser

    return planner | {
        "pros": pros,
        "cons": cons,
        "argument": itemgetter("argument"),
    } | final_responder

agent = agent_engines.LangchainAgent(
    model=model,
    runnable_builder=lcel_builder,
)
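
As a usage sketch, you can then query the agent with a plain string, which is mapped to the {input} variable of the first prompt in the chain:

agent.query(input="scrum methodology")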

LangGraph

To construct the following graph using LangGraph,

   Input
   /   \
 Pros  Cons
   \   /
  Summary

you need to override the runnable_builder of LangchainAgent:

from vertexai import agent_engines

def langgraph_builder(*, model, **kwargs):
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.output_parsers import StrOutputParser
    from langgraph.graph import END, MessageGraph

    output_parser = StrOutputParser()

    planner = ChatPromptTemplate.from_template(
        "Generate an argument about: {input}"
    ) | model | output_parser

    pros = ChatPromptTemplate.from_template(
        "List the positive aspects of {input}"
    ) | model | output_parser

    cons = ChatPromptTemplate.from_template(
        "List the negative aspects of {input}"
    ) | model | output_parser

    summary = ChatPromptTemplate.from_template(
        "Input:{input}\nGenerate a final response given the critique",
    ) | model | output_parser

    builder = MessageGraph()
    builder.add_node("planner", planner)
    builder.add_node("pros", pros)
    builder.add_node("cons", cons)
    builder.add_node("summary", summary)

    builder.add_edge("planner", "pros")
    builder.add_edge("planner", "cons")
    builder.add_edge("pros", "summary")
    builder.add_edge("cons", "summary")
    builder.add_edge("summary", END)
    builder.set_entry_point("planner")
    return builder.compile()

agent = agent_engines.LangchainAgent(
    model=model,
    runnable_builder=langgraph_builder,
)

# Example query
agent.query(input={"role": "user", "content": "scrum methodology"})

What's next