securetrace gives you structured tracing, token accounting, and cost tracking for LLM agent workflows in R.
## Quick start

Wrap your workflow in `with_trace()`, break it into spans, and record tokens:

``` r
library(securetrace)

result <- with_trace("my-agent-run", {
  with_span("planning", type = "llm", {
    record_tokens(1500, 300, model = "claude-sonnet-4-5")
    "The answer is 42"
  })
})
```
- `with_trace()` creates a `Trace`, starts the clock, evaluates your code, and ends the trace.
- `with_span()` wraps a single operation (an LLM call, tool use, etc.).
- `record_tokens()` logs input/output tokens and the model on the current span.
Use `current_trace()` and `current_span()` to access the active objects anywhere inside the block.
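Accessors like these are typically backed by a dynamically scoped stack of active contexts. A toy sketch of the idea in base R (this is an illustration only, not securetrace's actual implementation):

``` r
# Toy "current span" tracking: push on entry, pop on exit.
# NOT securetrace's internals -- just the general pattern.
.ctx <- new.env()
.ctx$span <- NULL

with_toy_span <- function(name, expr) {
  prev <- .ctx$span           # remember the enclosing span
  .ctx$span <- name           # this span becomes "current"
  on.exit(.ctx$span <- prev)  # restored on exit, even on error
  force(expr)
}

current_toy_span <- function() .ctx$span

with_toy_span("outer", {
  with_toy_span("inner", current_toy_span())  # "inner" while inside
})
```

Because the pop happens in `on.exit()`, the current span is restored correctly even when the wrapped code throws.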
## Token and cost tracking
Built-in pricing covers Anthropic, OpenAI, Gemini, Mistral, and DeepSeek.
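Prices are quoted per 1M tokens, so a call's cost is simple arithmetic. A quick base-R sanity check of the formula (the $3 input / $15 output per-1M prices used here for claude-sonnet-4-5 are an assumption for illustration, not rates quoted from the package):

``` r
# cost = input_tokens * input_price/1e6 + output_tokens * output_price/1e6
# The $3 / $15 per-1M-token prices below are illustrative assumptions.
llm_cost <- function(input_tokens, output_tokens, input_price, output_price) {
  input_tokens * input_price / 1e6 + output_tokens * output_price / 1e6
}

llm_cost(5000, 1000, input_price = 3, output_price = 15)
#> [1] 0.03
```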
``` r
# All known pricing (per 1M tokens)
costs <- model_costs()
head(names(costs))
#> [1] "claude-opus-4-6" "claude-sonnet-4-5"
#> [3] "claude-haiku-4-5" "claude-3-5-sonnet-20241022"
#> [5] "claude-3-5-haiku-20241022" "claude-3-opus-20240229"

# Cost for a single call
calculate_cost("claude-sonnet-4-5", input_tokens = 5000, output_tokens = 1000)
#> [1] 0.03
```

Register your own models:
``` r
add_model_cost("my-fine-tuned", input_price = 5, output_price = 20)
calculate_cost("my-fine-tuned", input_tokens = 10000, output_tokens = 2000)
#> [1] 0.09
```

Cloud provider model IDs (Bedrock, Vertex) resolve automatically via `resolve_model()`:

``` r
calculate_cost(
  "anthropic.claude-3-5-sonnet-20241022-v2:0",
  input_tokens = 10000, output_tokens = 2000
)
#> [1] 0.06
```

Map internal deployment names with `add_model_alias()`:

``` r
add_model_alias("my-company-claude", "claude-sonnet-4-5")
calculate_cost("my-company-claude", input_tokens = 5000, output_tokens = 1000)
#> [1] 0.03
```

## Exporting traces
Write traces to JSONL for downstream analysis:
``` r
exp <- jsonl_exporter(tempfile("traces", fileext = ".jsonl"))

with_trace("exported-run", exporter = exp, {
  with_span("work", type = "tool", { 42 })
})
#> [1] 42
```

Print to console while debugging:
``` r
debug_exp <- console_exporter(verbose = TRUE)

with_trace("debug-run", exporter = debug_exp, {
  with_span("step", type = "custom", { 1 + 1 })
})
#> --- Trace: debug-run ---
#> Status: completed
#> Duration: 0.00s
#> Spans: 1
#> -- Spans --
#> * step [custom] (ok) - 0.000s
#> [1] 2
```

Set a default exporter so every `with_trace()` auto-exports:

``` r
set_default_exporter(exp)
```

See `vignette("exporters")` for custom exporters, `multi_exporter()`, and the JSONL schema reference.
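JSONL is just one JSON object per line, which is what makes exported traces easy to grep and stream. A minimal base-R sketch of that shape (the field names here are illustrative, not securetrace's documented schema):

``` r
# Write records as JSON Lines: one object per line.
# Field names are illustrative, NOT securetrace's documented schema.
to_json_obj <- function(rec) {
  fields <- vapply(names(rec), function(k) {
    v <- rec[[k]]
    val <- if (is.character(v)) sprintf('"%s"', v) else format(v)
    sprintf('"%s":%s', k, val)
  }, character(1))
  paste0("{", paste(fields, collapse = ","), "}")
}

path <- tempfile(fileext = ".jsonl")
records <- list(
  list(name = "exported-run", status = "completed", spans = 1L),
  list(name = "debug-run",    status = "completed", spans = 1L)
)
writeLines(vapply(records, to_json_obj, character(1)), path)
readLines(path)
```

Each line can then be parsed independently by any JSON tool downstream.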
## Trace summary
Call `$summary()` on a completed trace to see duration, span count, tokens, and cost at a glance:

``` r
tr <- Trace$new("summarized-run")
tr$start()

s <- Span$new("llm", type = "llm")
s$start()
s$set_tokens(input = 5000, output = 1000)
s$set_model("claude-opus-4-6")
s$end()

tr$add_span(s)
tr$end()

tr$summary()
#> Trace: summarized-run (completed) ID: 163a3181d2b2f09bcf64302605ab6a97
#> Duration: 0.00s Spans: 1 Tokens: 5000 input, 1000 output Cost: $0.150000
```

## Next steps
- `vignette("observability")` – spans, events, metrics, error handling, nested workflows.
- `vignette("exporters")` – JSONL schema, console exporter, custom exporters.
- `vignette("cloud-native")` – OTLP, Prometheus, W3C Trace Context.