Deploy, evaluate, and manage AI agents end-to-end on Microsoft Azure AI Foundry
`foundry-agent/trace/references/conversation-detail.md`
# Conversation Detail — Reconstruct Full Span Tree

Reconstruct the complete span tree for a single conversation to see exactly what happened: every LLM call, tool execution, and agent invocation, with timing, tokens, and errors.

## Step 1 — Fetch All Spans for a Conversation

Use `operation_Id` (trace ID) to get all spans in a single request:

```kql
dependencies
| where operation_Id == "<operation_id>"
| project timestamp, name, duration, resultCode, success,
    spanId = id,
    parentSpanId = operation_ParentId,
    operation = tostring(customDimensions["gen_ai.operation.name"]),
    model = tostring(customDimensions["gen_ai.request.model"]),
    responseModel = tostring(customDimensions["gen_ai.response.model"]),
    inputTokens = toint(customDimensions["gen_ai.usage.input_tokens"]),
    outputTokens = toint(customDimensions["gen_ai.usage.output_tokens"]),
    responseId = tostring(customDimensions["gen_ai.response.id"]),
    finishReason = tostring(customDimensions["gen_ai.response.finish_reasons"]),
    errorType = tostring(customDimensions["error.type"]),
    toolName = tostring(customDimensions["gen_ai.tool.name"]),
    toolCallId = tostring(customDimensions["gen_ai.tool.call.id"])
| order by timestamp asc
```

Also fetch the parent request:

```kql
requests
| where operation_Id == "<operation_id>"
| project timestamp, name, duration, resultCode, success, id, operation_ParentId
```

## Step 2 — Build Span Tree

Use `spanId` and `parentSpanId` to reconstruct the hierarchy:

```
invoke_agent (root) ─── 4200ms
├── chat (LLM call #1) ─── 1800ms, gpt-4o, 450→120 tokens
│   └── [output: "Let me check the weather..."]
├── execute_tool (get_weather) [tool: remote_functions.weather_api] ─── 200ms
│   └── [result: "rainy, 57°F"]
├── chat (LLM call #2) ─── 1500ms, gpt-4o, 620→85 tokens
│   └── [output: "The weather in Paris is rainy, 57°F"]
└── [total: 450+620=1070 input, 120+85=205 output tokens]
```

Present as an indented tree with:
- **Operation type** and name
- **Duration** (highlight if > P95 for that operation type)
- **Model** and token counts (for chat operations)
- **Error type** and result code (if failed, highlight in red)
- **Finish reason** (stop, length, content_filter, tool_calls)

## Step 3 — Extract Conversation Content from invoke_agent Spans

The full input/output content lives on `invoke_agent` dependency spans in `gen_ai.input.messages` and `gen_ai.output.messages`. These JSON arrays contain the complete conversation (system prompt, user query, assistant response):

```kql
dependencies
| where operation_Id == "<operation_id>"
| where customDimensions["gen_ai.operation.name"] == "invoke_agent"
| project timestamp,
    inputMessages = tostring(customDimensions["gen_ai.input.messages"]),
    outputMessages = tostring(customDimensions["gen_ai.output.messages"])
| order by timestamp asc
```

Message structure: `[{"role": "user", "parts": [{"type": "text", "content": "..."}]}]`

Also check the `traces` table for additional GenAI log events:

```kql
traces
| where operation_Id == "<operation_id>"
| where message contains "gen_ai"
| project timestamp, message, customDimensions
| order by timestamp asc
```

## Step 4 — Check for Exceptions

```kql
exceptions
| where operation_Id == "<operation_id>"
| project timestamp, type, message, outerMessage,
    details = parse_json(details)
| order by timestamp asc
```

Present exceptions inline in the span tree at their position in the timeline.

## Step 5 — Fetch Evaluation Results

See [Eval Correlation](eval-correlation.md) for the full workflow to look up evaluation scores by response ID or conversation ID. Use `gen_ai.response.id` values from the Step 1 spans to correlate.
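The Step 2 hierarchy can be rebuilt client-side from the Step 1 query results. Below is a minimal Python sketch, assuming each row arrives as a dict keyed by the column aliases from the Step 1 `project` clause (`spanId`, `parentSpanId`, `timestamp`, `duration`, `operation`, `inputTokens`, `outputTokens`); `build_tree` and `render` are hypothetical helpers, not part of any Azure SDK:

```python
from collections import defaultdict

def build_tree(rows):
    """Group spans by parent; spans whose parent is not a dependency span become roots."""
    by_id = {r["spanId"]: r for r in rows}
    children = defaultdict(list)
    roots = []
    for r in rows:
        parent = r.get("parentSpanId")
        if parent in by_id:
            children[parent].append(r)
        else:
            # Parent is the requests-table row (or missing): treat as a root.
            roots.append(r)
    return roots, children

def render(span, children, indent=0):
    """Return the indented-tree lines for one span and its descendants."""
    label = f"{span.get('operation') or span['name']} ─── {span['duration']}ms"
    if span.get("inputTokens"):
        label += f", {span['inputTokens']}→{span['outputTokens']} tokens"
    lines = ["  " * indent + label]
    for child in sorted(children[span["spanId"]], key=lambda s: s["timestamp"]):
        lines.extend(render(child, children, indent + 1))
    return lines

# Hypothetical rows shaped like the Step 1 query output.
rows = [
    {"spanId": "a", "parentSpanId": None, "name": "invoke_agent",
     "operation": "invoke_agent", "duration": 4200,
     "inputTokens": None, "outputTokens": None, "timestamp": 0},
    {"spanId": "b", "parentSpanId": "a", "name": "chat", "operation": "chat",
     "duration": 1800, "inputTokens": 450, "outputTokens": 120, "timestamp": 1},
]
roots, children = build_tree(rows)
print("\n".join(render(roots[0], children)))
```

Note that the root `invoke_agent` span's parent lives in the `requests` table, not `dependencies`, so it is absent from `by_id` and correctly surfaces as a root — this is why Step 1 fetches the parent request with a separate query.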
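To present the conversation content from Step 3 readably, the `gen_ai.input.messages` / `gen_ai.output.messages` JSON can be flattened into `role: text` lines. A sketch assuming only the message shape shown in Step 3 (`flatten_messages` is a hypothetical helper name):

```python
import json

def flatten_messages(raw: str) -> list[str]:
    """Flatten [{"role": ..., "parts": [{"type": "text", "content": ...}]}] to 'role: text' lines."""
    lines = []
    for msg in json.loads(raw):
        text = " ".join(
            part.get("content", "")
            for part in msg.get("parts", [])
            if part.get("type") == "text"  # skip non-text parts such as tool calls
        )
        lines.append(f"{msg.get('role', '?')}: {text}")
    return lines

raw = '[{"role": "user", "parts": [{"type": "text", "content": "What is the weather in Paris?"}]}]'
print(flatten_messages(raw))  # → ['user: What is the weather in Paris?']
```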