Ready-to-use snippets covering the SDKs, the CLI, pipeline YAML, and inference code.
Submit a training job (Azure ML SDK v2)
Python — creates a training job on a compute cluster.
from azure.ai.ml import MLClient, command, Input
from azure.identity import DefaultAzureCredential
ml = MLClient(DefaultAzureCredential(), "<sub>", "<rg>", "<workspace>")
job = command(
    code="./src",
    command="python train.py --data ${{inputs.data}} --epochs 10",
    inputs={"data": Input(type="uri_folder", path="azureml://datastores/main/paths/train/")},
    environment="azureml://registries/azureml/environments/sklearn-1.5/labels/latest",
    compute="cpu-cluster",
    display_name="train-churn-v1",
    experiment_name="churn",
)
returned_job = ml.jobs.create_or_update(job)
print(returned_job.studio_url)
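The job's `command` assumes a `train.py` that accepts `--data` and `--epochs`. A hypothetical minimal skeleton of that script (Azure ML mounts the `uri_folder` input and substitutes its local path for `${{inputs.data}}`):

```python
# Hypothetical minimal train.py matching the job command above.
import argparse
from pathlib import Path

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="toy churn trainer")
    parser.add_argument("--data", required=True,
                        help="mounted path of the uri_folder input")
    parser.add_argument("--epochs", type=int, default=10)
    return parser

def train(data_dir: Path, epochs: int) -> None:
    # ... load files from data_dir, fit the model, write it to ./outputs/model ...
    print(f"training on {data_dir} for {epochs} epochs")

# Example invocation (Azure ML passes the real arguments via the job command):
args = build_parser().parse_args(["--data", "/tmp/train", "--epochs", "2"])
train(Path(args.data), args.epochs)
```

Anything written under `./outputs` is captured by the job and can be registered as a model afterwards.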
Register a model (Azure CLI)
az ml model create with path and tags.
az ml model create \
--name churn-classifier \
--version 3 \
--path ./outputs/model \
--type mlflow_model \
--tags env=prod owner=data-team \
--resource-group rg-mlops \
--workspace-name ws-mlops
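The same registration can also be captured in a model YAML file and passed with `--file`. A sketch following the public model schema, with values mirroring the CLI flags above:

```yaml
# model.yml — equivalent to the CLI flags above
$schema: https://azuremlschemas.azureedge.net/latest/model.schema.json
name: churn-classifier
version: 3
path: ./outputs/model
type: mlflow_model
tags:
  env: prod
  owner: data-team
```

Then: `az ml model create --file model.yml --resource-group rg-mlops --workspace-name ws-mlops`.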
Deploy to a Managed Online Endpoint
YAML — endpoint + deployment with a traffic split.
# endpoint.yml
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
name: churn-ep
auth_mode: key
---
# deployment.yml
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: churn-ep
model: azureml:churn-classifier:3
instance_type: Standard_DS3_v2
instance_count: 2
# az ml online-endpoint update -n churn-ep --traffic "blue=90 green=10"
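The traffic string must allocate exactly 100% across deployments or the CLI call fails. A small hypothetical helper to sanity-check a split before calling the CLI (`parse_traffic` is not part of any Azure SDK, just an illustrative sketch):

```python
def parse_traffic(spec: str) -> dict[str, int]:
    """Parse a 'name=pct name=pct' traffic string and check it sums to 100."""
    split = {}
    for pair in spec.split():
        name, pct = pair.split("=")
        split[name] = int(pct)
    total = sum(split.values())
    if total != 100:
        raise ValueError(f"traffic must sum to 100, got {total}")
    return split

print(parse_traffic("blue=90 green=10"))  # {'blue': 90, 'green': 10}
```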
curl with a Bearer token and a JSON payload.
TOKEN=$(az ml online-endpoint get-credentials -n churn-ep --query primaryKey -o tsv)
curl -X POST https://churn-ep.eastus.inference.ml.azure.com/score \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"input_data":{"columns":["age","balance"],"data":[[42,15000]]}}'
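The same call can be built from Python with only the standard library. The URL is the placeholder from the curl example; the key comes from `az ml online-endpoint get-credentials` (the network call is left commented out so the snippet stays offline-safe):

```python
import json
import urllib.request

SCORING_URI = "https://churn-ep.eastus.inference.ml.azure.com/score"  # from the curl example
API_KEY = "<primary-key>"  # output of `az ml online-endpoint get-credentials`

payload = {"input_data": {"columns": ["age", "balance"], "data": [[42, 15000]]}}
req = urllib.request.Request(
    SCORING_URI,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
)
# with urllib.request.urlopen(req) as resp:   # uncomment against a live endpoint
#     print(json.loads(resp.read()))
```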
Azure DevOps pipeline YAML
YAML — stages: build/test, train, and deploy.
# azure-pipelines.yml
trigger: [ main ]
pool: { vmImage: ubuntu-latest }
stages:
  - stage: Build
    jobs:
      - job: build
        steps:
          - script: pip install -r requirements.txt && pytest tests/
  - stage: Train
    jobs:
      - job: train
        steps:
          - task: AzureCLI@2
            inputs:
              azureSubscription: sc-azureml
              scriptType: bash
              scriptLocation: inlineScript
              inlineScript: az ml job create -f jobs/train.yml --stream
  - stage: Deploy
    dependsOn: Train
    jobs:
      - deployment: prod
        environment: production
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureCLI@2
                  inputs:
                    azureSubscription: sc-azureml
                    scriptType: bash
                    scriptLocation: inlineScript
                    inlineScript: az ml online-deployment create -f deploy.yml
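The Train stage references a jobs/train.yml. A sketch of that command-job spec, mirroring the SDK job from the first snippet (same placeholder paths and names):

```yaml
# jobs/train.yml — command job consumed by `az ml job create -f`
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
code: ./src
command: python train.py --data ${{inputs.data}} --epochs 10
inputs:
  data:
    type: uri_folder
    path: azureml://datastores/main/paths/train/
environment: azureml://registries/azureml/environments/sklearn-1.5/labels/latest
compute: azureml:cpu-cluster
experiment_name: churn
```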
Configure a Data Drift Monitor
Python — baseline vs. production via the SDK.
from azure.ai.ml import Input
from azure.ai.ml.entities import (
    MonitorSchedule, MonitorDefinition, DataDriftSignal,
    ProductionData, ReferenceData, ServerlessSparkCompute, CronTrigger,
)

monitor = MonitorSchedule(
    name="churn-drift",
    trigger=CronTrigger(expression="0 8 * * *"),  # run daily at 08:00
    create_monitor=MonitorDefinition(
        compute=ServerlessSparkCompute(instance_type="standard_e4s_v3", runtime_version="3.3"),
        monitoring_signals={
            "data_drift": DataDriftSignal(
                production_data=ProductionData(
                    input_data=Input(path="azureml://endpoints/churn-ep/logs", type="uri_folder")
                ),
                reference_data=ReferenceData(
                    input_data=Input(path="azureml:churn-baseline:1", type="mltable")
                ),
                alert_enabled=True,
            ),
        },
    ),
)
ml.schedules.begin_create_or_update(monitor)  # ml: the MLClient from the first snippet
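What a drift signal measures can be illustrated offline. A toy population stability index (PSI) for one numeric feature — purely illustrative, not the metric implementation the monitoring service uses:

```python
import math

def psi(baseline: list[float], production: list[float], bins: int = 4) -> float:
    """Population Stability Index between two samples of one feature.
    Higher means stronger drift (a common rule of thumb: > 0.2 is significant)."""
    lo, hi = min(baseline), max(baseline)
    def ratios(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / (hi - lo) * bins), bins - 1) if hi > lo else 0
            counts[max(0, idx)] += 1
        return [max(c / len(sample), 1e-6) for c in counts]  # floor avoids log(0)
    b, p = ratios(baseline), ratios(production)
    return sum((pi - bi) * math.log(pi / bi) for bi, pi in zip(b, p))

same = [1.0, 2.0, 3.0, 4.0] * 25
print(round(psi(same, same), 6))  # identical samples -> 0.0
```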