Skip to contents

Overview

An AI agent that reads files, queries databases, and calls APIs combines an LLM’s reasoning with executable tools. Because that reasoning is probabilistic and shaped by its inputs, any tool the LLM can call might be called in unexpected or adversarial ways. Agent frameworks must treat tool access as a security boundary.

securetools provides pre-built, security-hardened tools that plug directly into orchestr agents. Each tool factory returns a securer::securer_tool() object that orchestr’s Agent class converts to an ellmer tool automatically when secure = TRUE. You get structural security guarantees (path scoping, parameterized SQL, rate limiting) without writing any security code yourself. The tools enforce constraints by design.

The examples below wire securetools into single-agent ReAct loops, multi-agent supervisor graphs, and mixed toolkits. For tool factory basics, see vignette("securetools"). For orchestr fundamentals, see vignette("quickstart", package = "orchestr").

Setup

library(securetools)
library(orchestr)
library(ellmer)

# Set your LLM provider API key
Sys.setenv(ANTHROPIC_API_KEY = "your-key-here")

ReAct agent with tools

A ReAct (Reason + Act) agent is the standard pattern for tool-using LLMs. The agent receives a task, reasons about what to do next, acts by calling a tool, observes the result, and repeats until it has a final answer. Each iteration gives the LLM another chance to call a tool. A confused or manipulated agent can spiral into unbounded tool use.

securetools makes ReAct loops safe by routing every tool call through parent-process validation before it reaches the sandbox. The LLM reasons freely; its actions are constrained by the guarantees of each tool.
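Path scoping is one of those parent-side checks. The mechanism can be sketched in plain base R — this is an illustration of the idea only, not securetools' implementation, and `path_allowed` is a hypothetical helper:

```r
# Sketch of a path-scope check (illustrative, not securetools code):
# resolve the requested path and confirm it stays under an allowed root.
# A real implementation must also handle symlinks and paths that do not
# exist yet, where normalizePath() may not collapse "..".
path_allowed <- function(path, allowed_dir) {
  full <- normalizePath(path, winslash = "/", mustWork = FALSE)
  root <- normalizePath(allowed_dir, winslash = "/", mustWork = FALSE)
  identical(full, root) || startsWith(full, paste0(root, "/"))
}

root <- tempdir()
inside <- file.path(root, "report.csv")
file.create(inside)

path_allowed(inside, root)                 # TRUE
path_allowed(file.path(root, ".."), root)  # FALSE: ".." resolves outside root
```

Because the check runs in the parent process, a prompt-injected path like `"../../etc/passwd"` is rejected before any sandboxed code touches the filesystem.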

The following diagram shows the execution flow of a ReAct agent with securetools:

  ┌───────────────────────────────────────────────────┐
  │                   ReAct Loop                      │
  │                                                   │
  │   ┌──────────┐    ┌──────────┐    ┌───────────┐   │
  │   │          │    │          │    │           │   │
  │   │  Reason  │───>│   Act    │───>│  Observe  │   │
  │   │  (LLM)   │    │ (tool)   │    │ (result)  │   │
  │   │          │    │          │    │           │   │
  │   └──────────┘    └────┬─────┘    └─────┬─────┘   │
  │        ^               │                │         │
  │        │               v                │         │
  │        │        ┌──────────────┐        │         │
  │        │        │   Validate   │        │         │
  │        │        │  (parent R)  │        │         │
  │        │        │  - rate limit│        │         │
  │        │        │  - allow-list│        │         │
  │        │        │  - path check│        │         │
  │        │        └──────┬───────┘        │         │
  │        │               v                │         │
  │        │        ┌──────────────┐        │         │
  │        │        │   Execute    │        │         │
  │        │        │  (sandbox)   │────────┘         │
  │        │        └──────────────┘                  │
  │        │                                          │
  │        └──────────── loop ────────────────────────┘
  │                                                   │
  │   Done? ──yes──> Return final answer              │
  └───────────────────────────────────────────────────┘

Pass securetools tool objects to the agent() constructor with secure = TRUE so that tool calls run inside a securer sandbox.

# Create security-scoped tools
calc <- calculator_tool()
reader <- read_file_tool(allowed_dirs = "/path/to/project/data")

# Build an agent with tools and secure execution
analyst <- agent(
  "analyst",
  chat = chat_anthropic(
    system_prompt = "You are a data analyst. Use your tools to answer questions."
  ),
  tools = list(calc, reader),
  secure = TRUE
)

# Wrap in a ReAct graph for state management
graph <- react_graph(analyst)

result <- graph$invoke(list(messages = list(
  "Read the file sales.csv from the data directory and calculate the total revenue."
)))

When secure = TRUE, orchestr creates a SecureSession behind the scenes. Each securer_tool is converted to an ellmer tool definition that executes inside the sandbox. Path scoping, AST validation, and rate limits still apply at the parent-process level.

Supervisor with tool specialists

The supervisor pattern applies the principle of least privilege across multiple agents. Specialist workers each carry only the tools they need. A supervisor agent routes incoming requests to the appropriate worker without itself holding direct tool access.

Each worker gets its own SecureSession, so rate limits, allowed directories, and domain allow-lists are isolated per worker. A compromised file worker cannot start making HTTP requests. The supervisor sees only high-level results, not raw tool outputs, which limits information leakage across agent boundaries. You can also tune constraints per role: generous rate limits for the data specialist, tight caps for the file writer.

  ┌──────────────────────────────────────────────────┐
  │                 Supervisor Agent                 │
  │             (no tools, routes only)              │
  │                                                  │
  │    "Read sales.csv, compute mean revenue"        │
  │                                                  │
  │         ┌──────────┬──────────┐                  │
  │         v          v          v                  │
  │   ┌───────────┐ ┌──────────┐ ┌──────────────┐    │
  │   │   File    │ │  Data    │ │  Research    │    │
  │   │ Specialist│ │Specialist│ │  Specialist  │    │
  │   │           │ │          │ │              │    │
  │   │ read_file │ │calculator│ │  fetch_url   │    │
  │   │ write_file│ │ profiler │ │              │    │
  │   └───────────┘ └──────────┘ └──────────────┘    │
  │                                                  │
  │   Each worker has its own SecureSession          │
  │   with isolated rate limits and allow-lists      │
  └──────────────────────────────────────────────────┘

A supervisor graph routes tasks to specialized worker agents, each carrying its own set of tools.

# Data agent: calculation and profiling
data_agent <- agent(
  "data_specialist",
  chat = chat_anthropic(
    system_prompt = paste(
      "You are a data specialist.",
      "Use the calculator for arithmetic and the profiler for data summaries."
    )
  ),
  tools = list(
    calculator_tool(),
    data_profile_tool(max_rows = 50000)
  ),
  secure = TRUE
)

# File agent: reading and writing
file_agent <- agent(
  "file_specialist",
  chat = chat_anthropic(
    system_prompt = paste(
      "You are a file specialist.",
      "Read and write files as requested.",
      "Always specify format = 'auto' when reading."
    )
  ),
  tools = list(
    read_file_tool(allowed_dirs = "/path/to/project/data"),
    write_file_tool(allowed_dirs = "/path/to/project/output")
  ),
  secure = TRUE
)

# Supervisor routes between specialists
supervisor <- agent(
  "supervisor",
  chat = chat_anthropic(
    system_prompt = paste(
      "You coordinate a team.",
      "Route data questions to the data specialist",
      "and file operations to the file specialist."
    )
  )
)

graph <- supervisor_graph(
  supervisor = supervisor,
  workers = list(
    data_specialist = data_agent,
    file_specialist = file_agent
  )
)

result <- graph$invoke(list(messages = list(
  "Read sales.csv, then calculate the mean of the revenue column."
)))

The supervisor needs no tools of its own; it relies on the route tool that supervisor_graph() injects automatically.
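The per-role tuning described earlier — generous limits for the data specialist, tight caps for the file writer — might look like the following sketch. Note the assumption: this vignette documents max_calls only for calculator_tool() and fetch_url_tool(); passing it to write_file_tool() is hypothetical here.

```r
library(securetools)

# Per-role limits: generous for the data specialist, tight for the writer.
# calculator_tool(max_calls = ...) is shown elsewhere in this vignette;
# max_calls on write_file_tool() is an assumption.
data_tools <- list(
  calculator_tool(max_calls = 500)   # cheap, local: a high cap is fine
)
file_tools <- list(
  write_file_tool(
    allowed_dirs = "/path/to/project/output",
    max_calls = 10                   # hard cap on writes per session
  )
)
```

Because each worker's tools live in its own SecureSession, these budgets never interact: exhausting the writer's cap has no effect on the data specialist.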

Rate limiting in agent loops

Rate limiting matters in agent loops because the LLM controls how many iterations occur. A ReAct agent that misunderstands a task might call the calculator 500 times trying to “verify” an answer. A research agent fetching URLs might follow links recursively, hammering an external API. Without hard caps, these loops become runaway execution: wasted tokens, exhausted external rate limits, enormous output that overwhelms downstream processing.

securetools has two rate limiting mechanisms:

  • Lifetime caps (max_calls): the total number of times a tool can be invoked across the entire session. Once hit, every subsequent call returns an error. This is the backstop against runaway loops.
  • Sliding window (max_calls_per_minute): limits burst frequency. Even if you allow 1000 lifetime calls, restricting to 10 per minute prevents overwhelming external services or disk I/O.
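Taken together, the two mechanisms behave like this base R sketch — an illustration of the idea only, not securetools internals, and the error strings are hypothetical:

```r
# Illustrative limiter combining both mechanisms: a lifetime counter
# plus a 60-second sliding window of call timestamps. Not securetools code.
make_limiter <- function(max_calls, max_calls_per_minute) {
  total <- 0
  window <- numeric(0)
  function(now = as.numeric(Sys.time())) {
    if (total >= max_calls) {
      return("error: lifetime call limit reached")    # hypothetical message
    }
    window <<- window[window > now - 60]              # drop stale timestamps
    if (length(window) >= max_calls_per_minute) {
      return("error: per-minute call limit reached")  # hypothetical message
    }
    total <<- total + 1
    window <<- c(window, now)
    "ok"
  }
}

allow <- make_limiter(max_calls = 100, max_calls_per_minute = 2)
allow(now = 0)    # "ok"
allow(now = 1)    # "ok"
allow(now = 2)    # refused: two calls already inside the last 60 s
allow(now = 61)   # "ok" again: the call at t = 0 has aged out
```

Note that the refusal is returned as a value, not thrown; that distinction is what lets the agent loop continue.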

When a rate limit is reached, the tool returns a structured error message to the LLM (not an R exception), giving the agent a chance to adjust its strategy. It might summarize what it has so far instead of fetching more data.

# Cap the calculator at 50 calls per agent session
calc <- calculator_tool(max_calls = 50)

# URL fetch with both lifetime and per-minute limits
fetcher <- fetch_url_tool(
  allowed_domains = c("api.github.com"),
  max_calls = 100,
  max_calls_per_minute = 10
)

researcher <- agent(
  "researcher",
  chat = chat_anthropic(
    system_prompt = "You fetch data from APIs and analyze results."
  ),
  tools = list(calc, fetcher),
  secure = TRUE
)

graph <- react_graph(researcher)

Both limits are tracked per SecureSession, so each new agent session starts with a fresh budget; within a session, a hit limit surfaces to the LLM as a tool result it can react to rather than an exception that aborts the loop.

Mixing securetools with custom tools

Most agents need capabilities beyond what securetools provides directly. You can combine securetools factories with custom securer::securer_tool() definitions in the same agent. All tools run inside the same secure session and share the same sandbox isolation.

# A custom tool alongside securetools
timestamp_tool <- securer::securer_tool(
  name = "timestamp",
  description = "Return the current UTC timestamp.",
  fn = function() {
    format(Sys.time(), tz = "UTC", usetz = TRUE)
  },
  args = list()
)

# Mix custom + securetools
assistant <- agent(
  "assistant",
  chat = chat_anthropic(
    system_prompt = "You help with data tasks and can check the current time."
  ),
  tools = list(
    calculator_tool(),
    read_file_tool(allowed_dirs = "/path/to/data"),
    timestamp_tool
  ),
  secure = TRUE
)

graph <- react_graph(assistant)

result <- graph$invoke(list(messages = list(
  "What time is it? Also, what is 2^10?"
)))

Custom tools follow the same security model as securetools factories: the fn executes inside the sandbox alongside the factory tools, and orchestr handles the ellmer conversion automatically.

Next steps