## What problem does orchestr solve?
Large language models work well on their own, but building a production agent requires more than a single prompt-response exchange. A production agent needs structured reasoning loops, pipelines that chain specialists together, supervisors that route work to the right expert, and controls that keep the whole system observable, checkpointable, and safe.
orchestr is a graph-based orchestration layer for R that sits on top of ellmer (the LLM chat interface) and optionally securer (sandboxed code execution). Instead of writing ad-hoc loops and if-else routing, you define agents, wire them into a graph, and let orchestr handle execution flow, state management, and iteration control.
Agents are nodes, edges define flow, and the graph runtime handles the rest. Whether you need a single ReAct agent, a three-stage pipeline, or a supervisor that dynamically routes to a pool of workers, the same graph primitives apply.
## The ReAct pattern
orchestr’s default execution model follows the ReAct (Reasoning + Acting) pattern. The agent repeatedly thinks about what to do, takes an action (typically a tool call), observes the result, and decides whether to continue or stop:
```
think --> act --> observe
  ^                   |
  |                   |
  +------- loop ------+
  (until done or max iterations)
```
This loop is implemented by react_graph() and runs within a safety cap (max_iterations) to prevent runaway LLM calls.
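The control flow above can be sketched in plain R. This is an illustrative stub, not orchestr's actual implementation: run_react_sketch and stub_step are hypothetical names, and the real loop calls an LLM rather than a stub function.

```r
# Sketch of the ReAct control flow: repeat a think/act/observe step
# until the step reports it is done, or an iteration cap is reached.
run_react_sketch <- function(step_fn, max_iterations = 10) {
  state <- list(done = FALSE, observations = list())
  for (i in seq_len(max_iterations)) {
    state <- step_fn(state)        # think + act + observe in one step
    if (isTRUE(state$done)) break  # the agent decided to stop
  }
  state$iterations <- i
  state
}

# A stub step that records an observation and "finishes" after three rounds
stub_step <- function(state) {
  state$observations <- c(state$observations, length(state$observations) + 1)
  state$done <- length(state$observations) >= 3
  state
}

result <- run_react_sketch(stub_step)
result$iterations  # 3: the stub stopped before hitting the cap of 10
```

The cap matters because a real agent's stopping decision comes from the model itself; max_iterations guarantees termination even if the model never declares itself done.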
## When to use orchestr vs. plain ellmer
If your use case is a single prompt-response exchange, plain ellmer is the right choice. ellmer handles tool call loops internally, supports streaming, and has minimal overhead.
Reach for orchestr when you need one or more of:
- Multi-agent workflows: pipelines, supervisors, or custom graph topologies
- State management: typed state schemas with reducers that accumulate results across nodes
- Checkpointing: save and resume graph execution mid-run
- Observability: automatic span creation when combined with securetrace
- Iteration control: max_iterations caps, conditional routing, interrupt and approval flows
ellmer runs a single agent with tools; orchestr composes multiple agents into governed workflows.
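A rough contrast of the two entry points, using the interfaces shown elsewhere in this guide (assumes an ANTHROPIC_API_KEY is set; the exact return shapes may differ by version):

```r
library(ellmer)
library(orchestr)

# Plain ellmer: one Chat object, one exchange; tool-call loops
# are handled internally by the Chat class.
chat <- chat_anthropic(system_prompt = "You are a helpful analyst.")
chat$chat("Summarize the mtcars dataset in one sentence.")

# orchestr: the same model wrapped as an agent inside a graph,
# which adds state management, checkpointing, and an iteration cap.
graph <- react_graph(agent("analyst", chat = chat_anthropic()))
graph$invoke(list(messages = list(
  "Summarize the mtcars dataset in one sentence."
)))
```

For a one-shot question the first form is simpler and cheaper; the second becomes worthwhile once you need the governance features listed above.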
## Installation

```r
install.packages("orchestr")
```

## API key setup
orchestr uses ellmer for LLM access. Set your provider’s API key before running any examples:
```r
# For Anthropic (Claude)
Sys.setenv(ANTHROPIC_API_KEY = "your-key-here")

# For OpenAI
Sys.setenv(OPENAI_API_KEY = "your-key-here")
```

See ellmer’s documentation for all supported providers.
## Using different providers

orchestr works with any ellmer chat backend. Pass the appropriate chat_*() constructor to agent() and everything works the same regardless of the backing model.
```r
library(orchestr)
library(ellmer)

# OpenAI
agent("analyst", chat = chat_openai(model = "gpt-4o"))

# Google Gemini
agent("analyst", chat = chat_google_gemini(model = "gemini-1.5-pro"))

# Claude via AWS Bedrock
agent("analyst", chat = chat_aws_bedrock(
  model = "anthropic.claude-3-5-sonnet-20241022-v2:0"
))

# GPT-4o via Azure OpenAI
agent("analyst", chat = chat_azure_openai(
  endpoint = "https://my-resource.openai.azure.com",
  deployment_id = "gpt-4o"
))
```

All graph types (react_graph(), pipeline_graph(), supervisor_graph()) work identically regardless of which provider backs the agent.
## Your first agent

An Agent wraps an ellmer Chat object with a name and an optional system prompt. The agent() constructor validates inputs, applies defaults, and returns an R6 Agent instance that manages conversation state and tool registration.
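A minimal sketch of constructing and invoking an agent, following the constructor described above (assumes an ANTHROPIC_API_KEY is set; the agent name and prompt are illustrative):

```r
library(orchestr)
library(ellmer)

# Minimal agent: a name plus an ellmer Chat backend with a system prompt
writer <- agent("writer", chat = chat_anthropic(
  system_prompt = "You write concise, plain-language summaries."
))

# $invoke() sends a message and returns the model's reply
response <- writer$invoke(
  "Explain what an orchestration layer does, in two sentences."
)
cat(response)
```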
## Adding tools
Tools let an agent call R functions as part of its reasoning –
looking up data, running calculations, querying databases. ellmer’s
Chat class handles tool call loops internally. When the
model decides to use a tool, ellmer executes the function and feeds the
result back, so the agent can observe real data and adjust its
response.
```r
summary_tool <- tool(
  function(dataset_name) {
    data <- get(dataset_name, envir = asNamespace("datasets"))
    paste(capture.output(summary(data)), collapse = "\n")
  },
  "Get a summary of a built-in R dataset.",
  arguments = list(
    dataset_name = type_string("Name of a dataset in the datasets package")
  )
)

analyst <- agent("analyst",
  chat = chat_anthropic(
    system_prompt = "You are a data analyst. Use your tools to examine data."
  ),
  tools = list(summary_tool)
)

response <- analyst$invoke("Summarize the mtcars dataset.")
cat(response)
```

## Single-agent graph with react_graph()
react_graph() wraps a single agent in a graph that adds
three things a bare agent lacks: state management (a typed state object
persists across iterations), checkpointing (save and resume mid-run),
and tracing (pass a securetrace Trace to instrument every
iteration). The graph interface ($invoke(),
$stream()) stays the same whether you run one agent or
many.
```r
analyst <- agent("analyst", chat = chat_anthropic(
  system_prompt = "You are a data analyst. Analyze data and provide insights."
))

graph <- react_graph(analyst)
result <- graph$invoke(list(messages = list(
  "What are the key relationships in the mtcars dataset?"
)))
```

Use verbose = TRUE when compiling to see execution flow. With the convenience functions, pass verbose to $invoke().
## Agent pipeline with pipeline_graph()
When a task breaks into distinct stages (profile the data, analyze patterns, write a report), a pipeline chains agents in sequence. Each agent processes the shared state and passes it forward. Pipelines cost less than supervisors because each agent makes exactly one LLM call, and the execution order is fixed at graph construction time.
```r
profiler <- agent("profiler", chat = chat_anthropic(
  system_prompt = "Profile datasets: describe columns, types, missing values, distributions."
))

analyst <- agent("analyst", chat = chat_anthropic(
  system_prompt = "Given a data profile, identify patterns, correlations, and anomalies."
))

pipeline <- pipeline_graph(profiler, analyst)
result <- pipeline$invoke(list(messages = list(
  "Analyze the mtcars dataset focusing on fuel efficiency factors."
)))
```

## Next steps
- Multi-agent workflows: pipelines, supervisor routing, and visualization
- Secure execution: sandboxed code execution with securer
- Traced workflows: observability with securetrace
- Governed agent: the full 7-package stack