The phoenix-client package includes a utility that converts ATIF trajectory JSON into OpenTelemetry-compatible span trees and uploads them to Phoenix, so you can visualize and evaluate agent runs in Phoenix’s tracing UI.
Quick start
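A minimal setup is sketched below. The environment variables are the ones this page describes; the converter entry point in the trailing comment is hypothetical, so check the package for the actual function name and signature.

```python
import os

# Phoenix Cloud credentials; both values below are placeholders.
# PHOENIX_CLIENT_HEADERS carries the API key, and
# PHOENIX_COLLECTOR_ENDPOINT points at your Phoenix instance.
os.environ["PHOENIX_CLIENT_HEADERS"] = "api_key=YOUR_API_KEY"
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com"

# With the environment configured, create the client and hand the
# converter your ATIF trajectory file. The call below is illustrative,
# not the package's actual API:
# from phoenix.client import Client
# client = Client()
# upload_trajectory(client, "trajectory.json")
```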
If you’re using Phoenix Cloud, set PHOENIX_CLIENT_HEADERS and PHOENIX_COLLECTOR_ENDPOINT before creating the client. See Connect to Phoenix.
Trace hierarchy
ATIF stores agent steps as a flat list. The converter builds a hierarchical span tree that matches what real-time instrumentors produce; the simplest case is a single-turn conversation (one user message).
Batch uploads and subagent linking
When an agent delegates work to a subagent, the ATIF trajectories reference each other via subagent_trajectory_ref. Upload the parent and child trajectories together in one call, and the converter automatically nests the child’s spans under the parent’s tool span.
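The linking step can be sketched as follows. The exact placement of subagent_trajectory_ref inside an ATIF step is an assumption here; only the overall resolve-by-session_id idea is taken from the text above.

```python
def link_subagents(trajectories):
    """Resolve subagent references across a batch of ATIF trajectories.

    Returns {child_session_id: parent_session_id} for every step that
    carries a subagent_trajectory_ref. Field placement is illustrative;
    the converter uses these links to nest the child's spans under the
    parent's tool span.
    """
    by_id = {t["session_id"]: t for t in trajectories}
    links = {}
    for traj in trajectories:
        for step in traj.get("steps", []):
            ref = step.get("subagent_trajectory_ref")
            if ref and ref in by_id:
                links[ref] = traj["session_id"]
    return links
```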
Continuation merging
When an agent’s context window fills up, Harbor splits the session across multiple trajectory files. The continuation file gets a session_id ending in -cont-N. The converter automatically detects these and merges them into a single trace, so the full agent session is visible as one trace in Phoenix.
Continuation root spans are annotated with metadata.is_continuation = True.
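The detection logic amounts to grouping session IDs by their base ID and ordering by continuation number. A minimal sketch, assuming the -cont-N suffix convention described above:

```python
import re

_CONT = re.compile(r"^(?P<base>.+)-cont-(?P<n>\d+)$")

def group_continuations(session_ids):
    """Group 'abc', 'abc-cont-1', 'abc-cont-2' under base session 'abc',
    ordered by continuation number (the original file sorts first)."""
    groups = {}
    for sid in session_ids:
        m = _CONT.match(sid)
        base, n = (m["base"], int(m["n"])) if m else (sid, 0)
        groups.setdefault(base, []).append((n, sid))
    return {base: [sid for _, sid in sorted(parts)]
            for base, parts in groups.items()}
```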
Attribute mapping
The converter maps ATIF fields to standard OpenInference attributes:

| ATIF field | OpenInference attribute |
|---|---|
| metrics.prompt_tokens | llm.token_count.prompt |
| metrics.completion_tokens | llm.token_count.completion |
| metrics.cached_tokens | llm.token_count.prompt_details.cache_read |
| metrics.cost_usd | llm.cost.total |
| agent.model_name / step model_name | llm.model_name |
| agent.tool_definitions | llm.tools.{i}.tool.json_schema |
| reasoning_content | metadata.reasoning_content |
| session_id | session.id |
| Step messages | llm.input_messages / llm.output_messages |
| Tool calls | llm.output_messages.{i}.message.tool_calls |
| Observations | Tool span output.value |
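For the metrics rows, the mapping is a direct field rename. A sketch of that translation, using only the pairs from the table above (missing fields are skipped rather than emitted as nulls):

```python
def map_metrics(metrics):
    """Translate ATIF step metrics into OpenInference span attributes,
    per the mapping table. Only fields present in the input appear in
    the output."""
    mapping = {
        "prompt_tokens": "llm.token_count.prompt",
        "completion_tokens": "llm.token_count.completion",
        "cached_tokens": "llm.token_count.prompt_details.cache_read",
        "cost_usd": "llm.cost.total",
    }
    return {attr: metrics[field]
            for field, attr in mapping.items() if field in metrics}
```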
Deterministic IDs
Trace and span IDs are derived from the trajectory’s session_id via SHA-256, so uploading the same trajectory twice produces the same trace. This makes uploads idempotent: you can safely re-run without creating duplicates.
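The idempotency property can be illustrated with a small sketch. The truncation scheme here is an assumption (the converter’s exact derivation may differ); what matters is that equal inputs always produce equal IDs.

```python
import hashlib

def deterministic_trace_id(session_id: str) -> str:
    """Derive a stable 128-bit trace ID (32 hex chars) from an ATIF
    session_id via SHA-256. Hashing the same session_id always yields
    the same ID, so re-uploads map onto the existing trace."""
    digest = hashlib.sha256(session_id.encode("utf-8")).hexdigest()
    return digest[:32]  # OpenTelemetry trace IDs are 16 bytes
```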
Known limitations
Each LLM span includes the full conversation history up to that point as llm.input_messages. For very long sessions (roughly 16+ turns with dense tool calls), this can exceed OpenTelemetry attribute size limits, causing span data to be truncated. This matches the behavior of real-time instrumentors and is a known platform-wide limitation.

