Design and configure Azure API Management as an AI Gateway for LLM traffic routing and rate limiting
references/sdk/azure-ai-contentsafety-py.md
# Azure AI Content Safety — Python SDK Quick Reference

> Condensed from **azure-ai-contentsafety-py**. Full patterns (blocklist management, image analysis, 8-severity mode)
> are in the **azure-ai-contentsafety-py** plugin skill if installed.

## Install
```bash
pip install azure-ai-contentsafety
```

## Quick Start
```python
from azure.ai.contentsafety import ContentSafetyClient, BlocklistClient
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(endpoint=endpoint, credential=AzureKeyCredential(key))
```

## Non-Obvious Patterns
- Two clients: `ContentSafetyClient` (analyze) and `BlocklistClient` (blocklist management)
- Image from file: base64-encode bytes, pass via `ImageData(content=base64_str)`
- 8-severity mode: `AnalyzeTextOptions(text=..., output_type=AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS)`
- Blocklist analyze: `AnalyzeTextOptions(text=..., blocklist_names=[...], halt_on_blocklist_hit=True)`

## Best Practices
1. Use blocklists for domain-specific terms
2. Set severity thresholds appropriate for your use case
3. Handle multiple categories — content can be harmful in multiple ways
4. Use `halt_on_blocklist_hit` for immediate rejection
5. Log analysis results for audit and improvement
6. Consider 8-severity mode for finer-grained control
7. Pre-moderate AI outputs before showing to users
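Best-practice items 2 and 3 amount to a per-category threshold check over the analysis result. A minimal sketch, assuming each result exposes (category, severity) pairs shaped like the SDK's `categories_analysis` list; the threshold values and the `is_blocked` helper are illustrative, not part of the SDK:

```python
# Per-category severity thresholds (illustrative values; tune per use case).
# In the default 4-level mode, severities come back as 0, 2, 4, or 6.
THRESHOLDS = {"Hate": 2, "SelfHarm": 2, "Sexual": 4, "Violence": 4}

def is_blocked(categories_analysis, thresholds=THRESHOLDS, default=2):
    """Block if ANY category's severity meets or exceeds its threshold.

    `categories_analysis` mimics the SDK result: a list of items with
    'category' and 'severity' fields (here plain dicts for clarity).
    """
    return any(
        item["severity"] >= thresholds.get(item["category"], default)
        for item in categories_analysis
    )

# Hypothetical analysis result: harmless on Hate, severity 4 on Violence.
result = [
    {"category": "Hate", "severity": 0},
    {"category": "Violence", "severity": 4},
]
print(is_blocked(result))  # True: Violence severity 4 meets its threshold
```

Checking every category independently (rather than stopping at the first hit) matters because a single text can score on several axes at once; the `default` threshold covers categories you did not configure explicitly.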