RunnableAgent#

class langchain.agents.agent.RunnableAgent[source]#

Bases: BaseSingleActionAgent

Agent powered by Runnables.

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.

param input_keys_arg: List[str] = []#
param return_keys_arg: List[str] = []#
param runnable: Runnable[dict, AgentAction | AgentFinish] [Required]#

Runnable to call to get agent action.

param stream_runnable: bool = True#

Whether to stream from the runnable or not.

If True, the underlying LLM is invoked in a streaming fashion so that individual LLM tokens can be accessed via stream_log when using the Agent Executor. If False, the LLM is invoked in a non-streaming fashion and individual LLM tokens will not be available in stream_log.
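
A minimal construction sketch (hedged: the RunnableLambda wrapper, the decide function, and the "input" key below are illustrative assumptions, not part of this API; in practice the runnable is usually a prompt | LLM | output-parser chain):

.. code-block:: python

    from typing import Union

    from langchain.agents.agent import RunnableAgent
    from langchain_core.agents import AgentAction, AgentFinish
    from langchain_core.runnables import RunnableLambda

    def decide(inputs: dict) -> Union[AgentAction, AgentFinish]:
        # Finish immediately, echoing the user input as the final answer.
        return AgentFinish(return_values={"output": inputs["input"]}, log="done")

    agent = RunnableAgent(
        runnable=RunnableLambda(decide),
        input_keys_arg=["input"],
        stream_runnable=False,  # plain invoke; token streaming not needed here
    )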

async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: List[BaseCallbackHandler] | BaseCallbackManager | None = None, **kwargs: Any) AgentAction | AgentFinish[source]#

Asynchronously decide what to do, based on past history and current inputs.

Parameters:
  • intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with the observations.

  • callbacks (List[BaseCallbackHandler] | BaseCallbackManager | None) – Callbacks to run.

  • **kwargs (Any) – User inputs.

Returns:

Action specifying what tool to use.

Return type:

AgentAction | AgentFinish
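
A usage sketch for the async path, assuming the agent built in the construction example above:

.. code-block:: python

    import asyncio

    async def main() -> None:
        # Same arguments as plan(); extra keyword arguments are the user inputs.
        result = await agent.aplan(intermediate_steps=[], input="What's the weather?")
        print(result)

    asyncio.run(main())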

classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: BaseCallbackManager | None = None, **kwargs: Any) BaseSingleActionAgent#

Construct an agent from an LLM and tools.

Parameters:
  • llm (BaseLanguageModel) – Language model to use.

  • tools (Sequence[BaseTool]) – Tools to use.

  • callback_manager (BaseCallbackManager | None) – Callback manager to use.

  • kwargs (Any) – Additional arguments.

Returns:

Agent object.

Return type:

BaseSingleActionAgent

get_allowed_tools() List[str] | None#

Get allowed tools.

Return type:

List[str] | None

plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: List[BaseCallbackHandler] | BaseCallbackManager | None = None, **kwargs: Any) AgentAction | AgentFinish[source]#

Based on past history and current inputs, decide what to do.

Parameters:
  • intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with the observations.

  • callbacks (List[BaseCallbackHandler] | BaseCallbackManager | None) – Callbacks to run.

  • **kwargs (Any) – User inputs.

Returns:

Action specifying what tool to use.

Return type:

AgentAction | AgentFinish
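
A usage sketch, assuming the agent from the construction example and a hypothetical tool named "search":

.. code-block:: python

    from langchain_core.agents import AgentAction

    steps = [
        (
            AgentAction(tool="search", tool_input="weather in SF", log="searching"),
            "It is sunny in SF.",
        ),
    ]
    result = agent.plan(intermediate_steps=steps, input="What's the weather in SF?")
    # result is either an AgentAction (call a tool next) or an AgentFinish (stop).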

return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) AgentFinish#

Return response when agent has been stopped due to max iterations.

Parameters:
  • early_stopping_method (str) – Method to use for early stopping.

  • intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with observations.

  • **kwargs (Any) – User inputs.

Returns:

Agent finish object.

Return type:

AgentFinish

Raises:

ValueError – If early_stopping_method is not supported.
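
A sketch of the "force" stopping path, again assuming the agent from the construction example:

.. code-block:: python

    finish = agent.return_stopped_response(
        early_stopping_method="force",
        intermediate_steps=[],
    )
    # finish is an AgentFinish whose "output" notes that the agent was stopped
    # early (e.g. due to the iteration or time limit).
    print(finish.return_values["output"])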

save(file_path: Path | str) None#

Save the agent.

Parameters:

file_path (Path | str) – Path to file to save the agent to.

Return type:

None

Example:

.. code-block:: python

    # If working with agent executor
    agent.agent.save(file_path="path/agent.yaml")

tool_run_logging_kwargs() Dict#

Return logging kwargs for tool run.

Return type:

Dict

property input_keys: List[str]#

Return the input keys.

property return_values: List[str]#

Return values of the agent.
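
With the construction sketch above, these properties mirror the input_keys_arg and return_keys_arg fields (an assumption based on the field names):

.. code-block:: python

    print(agent.input_keys)     # ['input']
    print(agent.return_values)  # [] (return_keys_arg was left at its default)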