*Source repo: "Deploy, evaluate, and manage AI agents end-to-end on Microsoft Azure AI Foundry"*
`foundry-agent/trace/references/kql-templates.md`
# KQL Templates — GenAI Trace Query Reference

Ready-to-use KQL templates for querying GenAI OpenTelemetry traces in Application Insights.

**Table of Contents:** [App Insights Table Mapping](#app-insights-table-mapping) · [Key GenAI OTel Attributes](#key-genai-otel-attributes) · [Span Correlation](#span-correlation) · [Hosted Agent Attributes](#hosted-agent-attributes) · [Response ID Formats](#response-id-formats) · [Common Query Templates](#common-query-templates) · [OTel Reference Links](#otel-reference-links)

## App Insights Table Mapping

| App Insights Table | GenAI Data |
|-------------------|------------|
| `dependencies` | GenAI spans: LLM inference (`chat`), tool execution (`execute_tool`), agent invocation (`invoke_agent`) |
| `requests` | Incoming HTTP requests to the agent endpoint. For hosted agents, also carries `gen_ai.agent.name` (Foundry name) and `azure.ai.agentserver.*` attributes — **preferred entry point** for agent-name filtering |
| `customEvents` | GenAI evaluation results (`gen_ai.evaluation.result`) — scores, labels, explanations |
| `traces` | Log events, including GenAI events (input/output messages) |
| `exceptions` | Error details with stack traces |

## Key GenAI OTel Attributes

Stored in `customDimensions` on `dependencies` spans:

| Attribute | Description | Example |
|-----------|-------------|---------|
| `gen_ai.operation.name` | Operation type | `chat`, `invoke_agent`, `execute_tool`, `create_agent` |
| `gen_ai.conversation.id` | Conversation/session ID | `conv_5j66UpCpwteGg4YSxUnt7lPY` |
| `gen_ai.response.id` | Response ID | `chatcmpl-123` |
| `gen_ai.agent.name` | Agent name | `my-support-agent` |
| `gen_ai.agent.id` | Agent identifier | `asst_abc123` |
| `gen_ai.request.model` | Requested model | `gpt-4o` |
| `gen_ai.response.model` | Actual model used | `gpt-4o-2024-05-13` |
| `gen_ai.usage.input_tokens` | Input token count | `450` |
| `gen_ai.usage.output_tokens` | Output token count | `120` |
| `gen_ai.response.finish_reasons` | Stop reasons | `["stop"]`, `["tool_calls"]` |
| `error.type` | Error classification | `timeout`, `rate_limited`, `content_filter` |
| `gen_ai.provider.name` | Provider | `azure.ai.openai`, `openai` |
| `gen_ai.input.messages` | Full input messages (JSON array) — on `invoke_agent` spans | `[{"role":"user","parts":[{"type":"text","content":"..."}]}]` |
| `gen_ai.output.messages` | Full output messages (JSON array) — on `invoke_agent` spans | `[{"role":"assistant","parts":[{"type":"text","content":"..."}]}]` |
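Because everything in `customDimensions` comes back as `dynamic`, cast these attributes with `tostring()`/`toint()` before filtering or aggregating on them. A minimal sketch (the 1-hour window and projected columns are illustrative):

```kql
// Sketch: cast key GenAI attributes out of the customDimensions bag
dependencies
| where timestamp > ago(1h)
| where isnotempty(customDimensions["gen_ai.operation.name"])
| extend
    operation = tostring(customDimensions["gen_ai.operation.name"]),
    model = tostring(customDimensions["gen_ai.request.model"]),
    inputTokens = toint(customDimensions["gen_ai.usage.input_tokens"])
| project timestamp, operation, model, inputTokens
| take 20
```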
Stored in `customDimensions` on `customEvents` (name == `gen_ai.evaluation.result`):

| Attribute | Description | Example |
|-----------|-------------|---------|
| `gen_ai.evaluation.name` | Evaluator name | `Relevance`, `IntentResolution` |
| `gen_ai.evaluation.score.value` | Numeric score | `4.0` |
| `gen_ai.evaluation.score.label` | Human-readable label | `pass`, `fail`, `relevant` |
| `gen_ai.evaluation.explanation` | Free-form explanation | `"Response lacks detail..."` |
| `gen_ai.response.id` | Correlates to the evaluated span | `chatcmpl-123` |
| `gen_ai.conversation.id` | Correlates to conversation | `conv_5j66...` |

> **Correlation:** Eval results do NOT link via id-parentId. Use `gen_ai.conversation.id` and/or `gen_ai.response.id` to join with `dependencies` spans.
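As a sketch of that correlation, eval events can be joined to their evaluated spans on `gen_ai.response.id` (the 24-hour window and projected columns are illustrative):

```kql
// Sketch: attach evaluation results to the spans they scored
let evals = customEvents
    | where timestamp > ago(24h)
    | where name == "gen_ai.evaluation.result"
    | extend
        responseId = tostring(customDimensions["gen_ai.response.id"]),
        evalName = tostring(customDimensions["gen_ai.evaluation.name"]),
        score = todouble(customDimensions["gen_ai.evaluation.score.value"]);
dependencies
| where timestamp > ago(24h)
| extend responseId = tostring(customDimensions["gen_ai.response.id"])
| where isnotempty(responseId)
| join kind=inner evals on responseId
| project timestamp, duration, success, responseId, evalName, score
```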
## Span Correlation

| Field | Purpose |
|-------|---------|
| `operation_Id` | Trace ID — groups all spans in one request |
| `id` | Span ID — unique identifier for this span |
| `operation_ParentId` | Parent span ID — use with `id` to build span trees |

### Operation_Id Join (requests → dependencies)

Use `requests` as the hosted-agent entry point, then carry `operation_Id` forward as the trace key when joining into `dependencies`, `traces`, or `customEvents`:

```kql
let agentRequests = materialize(
    requests
    | where timestamp > ago(7d)
    | extend
        foundryAgentName = coalesce(
            tostring(customDimensions["gen_ai.agent.name"]),
            tostring(customDimensions["azure.ai.agentserver.agent_name"])
        ),
        agentId = tostring(customDimensions["gen_ai.agent.id"]),
        agentNameFromId = tostring(split(agentId, ":")[0]),
        agentVersion = iff(agentId contains ":", tostring(split(agentId, ":")[1]), ""),
        conversationId = coalesce(
            tostring(customDimensions["gen_ai.conversation.id"]),
            tostring(customDimensions["azure.ai.agentserver.conversation_id"]),
            operation_Id
        )
    | where foundryAgentName == "<foundry-agent-name>"
        or agentNameFromId == "<foundry-agent-name>"
    | project operation_Id, conversationId, agentVersion
);
dependencies
| where timestamp > ago(7d)
| where isnotempty(customDimensions["gen_ai.operation.name"])
| join kind=inner agentRequests on operation_Id
| extend
    operation = tostring(customDimensions["gen_ai.operation.name"]),
    model = tostring(customDimensions["gen_ai.request.model"])
| project timestamp, duration, success, operation, model, conversationId, agentVersion, operation_Id
| order by timestamp desc
```

## Hosted Agent Attributes

Stored in `customDimensions` on **both `requests` and `traces`** tables (NOT on `dependencies` spans):

| Attribute | Description | Example |
|-----------|-------------|---------|
| `azure.ai.agentserver.agent_name` | Hosted agent name | `hosted-agent-022-001` |
| `azure.ai.agentserver.agent_id` | Internal agent ID | `code-asst-xmwokux85uqc7fodxejaxa` |
| `azure.ai.agentserver.conversation_id` | Conversation ID | `conv_d7ab624de92d...` |
| `azure.ai.agentserver.response_id` | Response ID (caresp format) | `caresp_d7ab624de92d...` |

> **Important:** Use `requests` as the preferred entry point for agent-name filtering — it has both `azure.ai.agentserver.agent_name` and `gen_ai.agent.name` with the Foundry-level name. To reach downstream spans and related telemetry, carry `operation_Id` forward from the filtered request set and join other tables on that trace key.

> 💡 **Version enrichment:** Some hosted-agent `requests` telemetry emits `gen_ai.agent.id` in `<foundry-agent-name>:<version>` format. When that delimiter is present, split on `:` to recover `agentVersion`; if it is absent, keep filtering on the requests-scoped name fields and leave version blank.

> ⚠️ **`gen_ai.agent.name` means different things on different tables:**
> - On `requests`: the **Foundry agent name** (user-visible) → e.g., `hosted-agent-022-001`
> - On `dependencies`: the **code-level class name** → e.g., `BingSearchAgent`
>
> **Always start from `requests`** when filtering by the Foundry agent name the user knows.

## Response ID Formats

| Agent Type | Prefix | Example |
|------------|--------|---------|
| Hosted agent (AgentServer) | `caresp_` | `caresp_d7ab624de92da637008Rhr4U4E1y9FSE...` |
| Prompt agent (Foundry Responses API) | `resp_` | `resp_4e2f8b016b5a0dad00697bd3c4c1b881...` |
| Azure OpenAI chat completions | `chatcmpl-` | `chatcmpl-abc123def456` |

When searching by response ID, use the appropriate prefix to narrow results. The `gen_ai.response.id` attribute appears on `dependencies` spans (for `chat` operations) and in `customEvents` (for evaluation results).
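For example, a prefix search across both tables might look like the following sketch (the `caresp_` prefix and 7-day window are placeholders to adapt):

```kql
// Sketch: find all spans and eval events whose response ID matches a prefix
let responsePrefix = "caresp_";
union withsource=sourceTable dependencies, customEvents
| where timestamp > ago(7d)
| extend responseId = tostring(customDimensions["gen_ai.response.id"])
| where responseId startswith responsePrefix
| project timestamp, sourceTable, responseId, operation_Id
```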
## Common Query Templates

### Overview — Conversations in last 24h
```kql
dependencies
| where timestamp > ago(24h)
| where isnotempty(customDimensions["gen_ai.operation.name"])
| summarize
    spanCount = count(),
    errorCount = countif(success == false),
    avgDuration = avg(duration),
    totalInputTokens = sum(toint(customDimensions["gen_ai.usage.input_tokens"])),
    totalOutputTokens = sum(toint(customDimensions["gen_ai.usage.output_tokens"]))
    by bin(timestamp, 1h)
| order by timestamp desc
```

### Error Rate by Operation
```kql
dependencies
| where timestamp > ago(24h)
| where isnotempty(customDimensions["gen_ai.operation.name"])
| summarize
    total = count(),
    errors = countif(success == false),
    errorRate = round(100.0 * countif(success == false) / count(), 1)
    by operation = tostring(customDimensions["gen_ai.operation.name"])
| order by errorRate desc
```

### Token Usage by Model
```kql
dependencies
| where timestamp > ago(24h)
| where customDimensions["gen_ai.operation.name"] == "chat"
| summarize
    calls = count(),
    totalInput = sum(toint(customDimensions["gen_ai.usage.input_tokens"])),
    totalOutput = sum(toint(customDimensions["gen_ai.usage.output_tokens"])),
    avgInput = avg(todouble(customDimensions["gen_ai.usage.input_tokens"])),
    avgOutput = avg(todouble(customDimensions["gen_ai.usage.output_tokens"]))
    by model = tostring(customDimensions["gen_ai.request.model"])
| order by totalInput desc
```
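The Span Correlation fields above (`id`, `operation_ParentId`) can also be turned into a simple per-trace template. This sketch lists each span alongside its parent so the span tree can be reconstructed, using the same `<operation_id>` placeholder convention as the other templates:

```kql
// Sketch: list spans of one trace with parent pointers for tree reconstruction
dependencies
| where operation_Id == "<operation_id>"
| extend
    parentSpanId = operation_ParentId,
    operation = tostring(customDimensions["gen_ai.operation.name"])
| project timestamp, id, parentSpanId, operation, duration, success
| order by timestamp asc
```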
### Tool Call Details
```kql
dependencies
| where operation_Id == "<operation_id>"
| where customDimensions["gen_ai.operation.name"] == "execute_tool"
| project timestamp, duration, success,
    toolName = tostring(customDimensions["gen_ai.tool.name"]),
    toolType = tostring(customDimensions["gen_ai.tool.type"]),
    toolCallId = tostring(customDimensions["gen_ai.tool.call.id"]),
    toolArgs = tostring(customDimensions["gen_ai.tool.call.arguments"]),
    toolResult = tostring(customDimensions["gen_ai.tool.call.result"])
| order by timestamp asc
```

Key tool attributes:

| Attribute | Description | Example |
|-----------|-------------|---------|
| `gen_ai.tool.name` | Tool function name | `remote_functions.bing_grounding`, `python` |
| `gen_ai.tool.type` | Tool type | `extension`, `function` |
| `gen_ai.tool.call.id` | Unique call ID | `call_db64aa6a004a...` |
| `gen_ai.tool.call.arguments` | JSON arguments passed | `{"query": "latest AI news"}` |
| `gen_ai.tool.call.result` | Tool output (may be truncated) | `<<ImageDisplayed>>` |

### Evaluation Results by Conversation
```kql
customEvents
| where timestamp > ago(24h)
| where name == "gen_ai.evaluation.result"
| extend
    evalName = tostring(customDimensions["gen_ai.evaluation.name"]),
    score = todouble(customDimensions["gen_ai.evaluation.score.value"]),
    label = tostring(customDimensions["gen_ai.evaluation.score.label"]),
    conversationId = tostring(customDimensions["gen_ai.conversation.id"])
| summarize
    evalCount = count(),
    avgScore = avg(score),
    failCount = countif(label == "fail" or label == "not_relevant" or label == "incorrect"),
    evaluators = make_set(evalName)
    by conversationId
| order by failCount desc
```

> For detailed eval queries by response ID or conversation ID, see [Eval Correlation](eval-correlation.md).

## OTel Reference Links

- [GenAI Spans](https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-spans/)
- [GenAI Agent Spans](https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-agent-spans/)
- [GenAI Events](https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-events/)
- [GenAI Metrics](https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-metrics/)