foundry-agent/trace/references/search-traces.md
# Search Traces — Conversation-Level Search

Search agent traces at the conversation level. Returns summaries grouped by conversation or operation, not individual spans.

## Prerequisites

- App Insights resource resolved (see [trace.md](../trace.md) Before Starting)
- Selected agent root, metadata file, and environment confirmed from `.foundry/agent-metadata*.yaml`
- Time range confirmed with user (default: last 24 hours)

## Search by Conversation ID

Keep the selected environment visible in the summary, and add the selected agent name or environment tag filters when the telemetry emits them.

```kql
dependencies
| where timestamp > ago(24h)
| where customDimensions["gen_ai.conversation.id"] == "<conversation_id>"
| project timestamp, name, duration, resultCode, success,
    operation = tostring(customDimensions["gen_ai.operation.name"]),
    model = tostring(customDimensions["gen_ai.request.model"]),
    inputTokens = toint(customDimensions["gen_ai.usage.input_tokens"]),
    outputTokens = toint(customDimensions["gen_ai.usage.output_tokens"]),
    operation_Id, id, operation_ParentId
| order by timestamp asc
```

## Search by Response ID

Auto-detect the response ID format to determine the agent type:

- `caresp_...` → Hosted agent (AgentServer)
- `resp_...` → Prompt agent (Foundry Responses API)
- `chatcmpl-...` → Azure OpenAI chat completions

```kql
dependencies
| where timestamp > ago(24h)
| where customDimensions["gen_ai.response.id"] == "<response_id>"
| project timestamp, name, duration, resultCode, success,
    operation = tostring(customDimensions["gen_ai.operation.name"]),
    model = tostring(customDimensions["gen_ai.request.model"]),
    inputTokens = toint(customDimensions["gen_ai.usage.input_tokens"]),
    outputTokens = toint(customDimensions["gen_ai.usage.output_tokens"]),
    operation_Id, id, operation_ParentId
```

Then drill into the full conversation:

> ⚠️ **STOP — read [Conversation Detail](conversation-detail.md) before writing your own drill-down query.** It contains the correct span tree reconstruction logic, event/exception queries, and eval correlation steps.

Quick drill-down using the `operation_Id` from above:

```kql
dependencies
| where operation_Id == "<operation_id_from_above>"
| project timestamp, name, duration, resultCode, success,
    spanId = id, parentSpanId = operation_ParentId,
    operation = tostring(customDimensions["gen_ai.operation.name"]),
    model = tostring(customDimensions["gen_ai.request.model"]),
    inputTokens = toint(customDimensions["gen_ai.usage.input_tokens"]),
    outputTokens = toint(customDimensions["gen_ai.usage.output_tokens"]),
    responseId = tostring(customDimensions["gen_ai.response.id"]),
    errorType = tostring(customDimensions["error.type"]),
    toolName = tostring(customDimensions["gen_ai.tool.name"])
| order by timestamp asc
```

Also check for eval results: see [Eval Correlation](eval-correlation.md).

## Search by Agent Name

> **Note:** For hosted agents, `gen_ai.agent.name` in `dependencies` refers to *sub-agents* (e.g., `BingSearchAgent`), not the top-level hosted agent.
See "Search by Hosted Agent Name" below.

> 💡 **Hosted-agent versioning:** If you need the deployed version, use the hosted-agent pattern below and parse `gen_ai.agent.id` when it is emitted in `<agent-name>:<version>` format.

```kql
dependencies
| where timestamp > ago(24h)
| where customDimensions["gen_ai.agent.name"] == "<agent_name>"
| summarize
    startTime = min(timestamp),
    endTime = max(timestamp),
    totalDuration = max(timestamp) - min(timestamp),
    spanCount = count(),
    errorCount = countif(success == false),
    totalInputTokens = sum(toint(customDimensions["gen_ai.usage.input_tokens"])),
    totalOutputTokens = sum(toint(customDimensions["gen_ai.usage.output_tokens"]))
    by conversationId = tostring(customDimensions["gen_ai.conversation.id"]),
       operation_Id
| order by startTime desc
| take 50
```

## Search by Hosted Agent Name

For hosted agents, the Foundry agent name (e.g., `hosted-agent-022-001`) appears on `requests` and `traces` — NOT on `dependencies`. Use `requests` as the preferred entry point, materialize the matching request rows, then join downstream spans on `operation_Id`:

```kql
let agentRequests = materialize(
    requests
    | where timestamp > ago(24h)
    | extend
        foundryAgentName = coalesce(
            tostring(customDimensions["gen_ai.agent.name"]),
            tostring(customDimensions["azure.ai.agentserver.agent_name"])
        ),
        agentId = tostring(customDimensions["gen_ai.agent.id"]),
        agentNameFromId = tostring(split(agentId, ":")[0]),
        agentVersion = iff(agentId contains ":", tostring(split(agentId, ":")[1]), ""),
        conversationId = coalesce(
            tostring(customDimensions["gen_ai.conversation.id"]),
            tostring(customDimensions["azure.ai.agentserver.conversation_id"]),
            operation_Id
        )
    | where foundryAgentName == "<agent_name>"
        or agentNameFromId == "<agent_name>"
    | project operation_Id, conversationId, agentVersion
);
dependencies
| where timestamp > ago(24h)
| where isnotempty(customDimensions["gen_ai.operation.name"])
| join kind=inner agentRequests on operation_Id
| summarize
    startTime = min(timestamp),
    endTime = max(timestamp),
    spanCount = count(),
    errorCount = countif(success == false),
    totalInputTokens = sum(toint(customDimensions["gen_ai.usage.input_tokens"])),
    totalOutputTokens = sum(toint(customDimensions["gen_ai.usage.output_tokens"]))
    by conversationId, operation_Id, agentVersion
| order by startTime desc
| take 50
```

If `gen_ai.agent.id` does not contain `:`, continue using the requests-scoped name fields for filtering and treat `agentVersion` as optional enrichment rather than a required key.

## Conversation Summary Table

Present results in this format:

| Conversation ID | Agent Version | Start Time | Duration | Spans | Errors | Input Tokens | Output Tokens |
|-----------------|---------------|------------------|----------|-------|--------|--------------|---------------|
| conv_abc123 | 3 | 2025-01-15 10:30 | 4.2s | 12 | 0 | 850 | 320 |
| conv_def456 | 4 | 2025-01-15 10:25 | 8.7s | 18 | 2 | 1200 | 450 |

Highlight rows with errors in the summary. Offer to drill into any conversation via [Conversation Detail](conversation-detail.md).

## Free-Text Search

When the user provides a general search term (e.g., agent name, error message):

```kql
union dependencies, requests, exceptions, traces
| where timestamp > ago(24h)
| where * contains "<search_term>"
| summarize count() by operation_Id
| order by count_ desc
| take 20
```

## After Successful Query

> 📝 **Reminder:** If this is the first trace query in this session, ensure App Insights connection info was persisted to the selected metadata file for the selected environment (see [trace.md — Before Starting](../trace.md#before-starting--resolve-app-insights-connection)).
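
## Appendix: Response-ID Prefix Detection Sketch

The prefix conventions in "Search by Response ID" can be applied programmatically before choosing a query pattern. The sketch below is illustrative only; the helper name `classify_response_id` and its return labels are assumptions of this example, not part of the skill or any Foundry SDK:

```python
def classify_response_id(response_id: str) -> str:
    """Map a response ID prefix to the agent type that emitted it.

    Prefix conventions from the "Search by Response ID" section:
      caresp_    -> hosted agent (AgentServer)
      resp_      -> prompt agent (Foundry Responses API)
      chatcmpl-  -> Azure OpenAI chat completions
    """
    # Check the most specific prefix first: every "caresp_" ID would
    # otherwise need to be excluded from the "resp_" branch by hand.
    if response_id.startswith("caresp_"):
        return "hosted-agent"
    if response_id.startswith("resp_"):
        return "prompt-agent"
    if response_id.startswith("chatcmpl-"):
        return "chat-completions"
    return "unknown"
```

An "unknown" result is a signal to fall back to free-text search rather than assuming one of the three ID-specific query patterns.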