Manage Azure Storage services including Blob, File Shares, Queues, Tables, and Data Lake
references/sdk/azure-storage-file-datalake-py.md
# Data Lake Storage Gen2 — Python SDK Quick Reference

> Condensed from **azure-storage-file-datalake-py**. Full patterns (ACL management,
> async client, directory operations, range downloads) are in the
> **azure-storage-file-datalake-py** plugin skill if installed.

## Install

```shell
pip install azure-storage-file-datalake azure-identity
```

## Quick Start

> **Auth:** `DefaultAzureCredential` is for local development. See [auth-best-practices.md](../auth-best-practices.md) for production patterns.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service_client = DataLakeServiceClient(
    "https://<account>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
```

## Best Practices

- Use a hierarchical namespace for file-system semantics
- Use `append_data` + `flush_data` for large file uploads
- Set ACLs at the directory level and inherit them to children
- Use the async client for high-throughput scenarios
- Use `get_paths` with `recursive=True` for a full directory listing
- Set metadata for custom file attributes
- Consider the Blob API for simple object-storage use cases

## Non-Obvious Patterns

```python
# Large file upload requires append + flush: append_data stages bytes
# at an offset, flush_data(total_length) commits them to the file.
offset = 0
for chunk in chunks:
    file_client.append_data(data=chunk, offset=offset, length=len(chunk))
    offset += len(chunk)
file_client.flush_data(offset)
```
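The append + flush pattern above assumes the data is already split into chunks. A minimal, SDK-independent sketch of that chunking step (the helper name `iter_chunks` and the 4 MiB default are illustrative, not part of the SDK):

```python
def iter_chunks(data: bytes, chunk_size: int = 4 * 1024 * 1024):
    """Yield successive chunk_size-byte slices of data (hypothetical helper)."""
    for start in range(0, len(data), chunk_size):
        yield data[start:start + chunk_size]

# Splitting 10 bytes into 4-byte chunks yields lengths 4, 4, 2.
# Each length feeds append_data's `length` parameter, and the running
# total of lengths is the final offset passed to flush_data.
sizes = [len(c) for c in iter_chunks(b"0123456789", 4)]
# → [4, 4, 2]; flush_data would be called with offset 10
```

Keeping the chunker separate from the upload loop makes the offset arithmetic easy to verify before any network calls are made.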