Am I Affected? Detecting the Trivy Compromise in Azure DevOps¶
This page is part of a series on the Trivy supply chain compromise of March 2026. See also: the main article, the attack chain timeline and IOCs, and supply chain risks in Azure DevOps.
Almost all the detection tooling published after the Trivy compromise assumes GitHub Actions. StepSecurity's trivy-compromise-scanner queries GitHub's workflow run API. The IOC hunting guides reference GitHub audit logs. Even Microsoft's own guidance from the Security Blog focuses on GitHub-hosted runners.
If you run Azure DevOps, none of that tooling applies directly. But the underlying questions are the same: did a compromised binary execute in your environment, does your network tell a story that confirms it, and what does your agent filesystem look like today?
This guide works through the detection in Azure DevOps terms.
Step 1: Identify your exposure paths¶
Before looking at logs, establish which of the following applies to your environment. Each has a different detection approach.
Path A — You use MicrosoftSecurityDevOps@1 (MSDO) with Trivy enabled.
This is the most common Azure DevOps exposure path. MSDO distributes Trivy via its own NuGet feed (SecDevTools). Microsoft has not publicly confirmed whether the feed was affected during the March 19–22 window. This path requires the most conservative approach.
Path B — You run Trivy directly as a script step or Docker task.
If your pipeline does something like docker pull aquasec/trivy:latest or downloads the Trivy binary via curl/wget in a bash/PowerShell step, you are directly in scope for the Docker Hub and binary exposure windows.
Path C — You pull container images from a registry that mirrored Docker Hub.
If your organisation uses Azure Container Registry, Artifactory, or Harbor with Docker Hub as an upstream proxy, cached versions of aquasec/trivy:0.69.4, 0.69.5, 0.69.6 or latest may still be present even if you never directly pulled from Docker Hub.
Path D — Your pipeline installs npm packages.
CanisterWorm propagated through the npm ecosystem using stolen tokens. If your pipeline runs npm install or npm ci against any package from the affected scopes, you may have pulled a compromised dependency during the propagation window.
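If you want to script the lockfile check, here is a minimal sketch. The IOC list file (`affected-packages.txt`) is hypothetical: populate it yourself, one package name per line, from the published CanisterWorm IOC lists; no scope names are hardcoded here.

```shell
# audit_lockfile LOCKFILE IOC_LIST
# Naive substring check: reports any package name from IOC_LIST that appears
# quoted in a package-lock.json. Good enough for triage, not a parser.
audit_lockfile() {
  local lock="$1" list="$2" pkg
  while IFS= read -r pkg; do
    [ -z "$pkg" ] && continue
    if grep -qF "\"${pkg}\"" "$lock"; then
      echo "AFFECTED PACKAGE IN LOCKFILE: $pkg"
    fi
  done < "$list"
}
```

Run it as `audit_lockfile package-lock.json affected-packages.txt` against every lockfile your pipelines install from.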
Step 2: Query Azure DevOps pipeline logs¶
Find pipeline runs that executed during the exposure window¶
The primary Trivy exposure window is March 19 17:43 UTC to March 20 05:40 UTC. The Docker Hub window extends to March 23 01:36 UTC.
# Install Azure DevOps CLI extension if not already present
az extension add --name azure-devops
# Set your organisation and project
az devops configure --defaults organization=https://dev.azure.com/YOUR_ORG project=YOUR_PROJECT
# List pipeline runs during the exposure window
az pipelines runs list \
--status completed \
--query "[?startTime >= '2026-03-19T17:00:00Z' && startTime <= '2026-03-23T02:00:00Z'].[id, name, startTime, result, pipeline.name]" \
--output table
If you have many pipelines, narrow it down by searching for pipelines that reference Trivy by name:
# List all pipeline definitions, then filter for Trivy in the name
az pipelines list --query "[].{id:id, name:name}" --output table | grep -i trivy
Then, for each relevant pipeline, inspect its runs. Note that these CLI commands surface run metadata and artifacts, not raw step logs; download the logs from the run's "Download logs" option in the web UI, or via the Build REST API (GET https://dev.azure.com/{org}/{project}/_apis/build/builds/{runId}/logs).
# Show details for a specific run
az pipelines runs show --id RUN_ID
# List artifacts published by the run
az pipelines runs artifact list --run-id RUN_ID
What to look for in logs¶
Search logs for the following strings. Any hit during the exposure window warrants treating the pipeline's secrets as compromised:
scan.aquasecurtiy.org # typosquat C2 domain (note: aquasecurtiy not aquasecurity)
45.148.10.212 # C2 IP address
plug-tab-protective-relay.trycloudflare.com # fallback exfiltration tunnel
tpcp.tar.gz # exfiltration bundle filename
TeamPCP # self-identification string in payload
/tmp/pglog # persistence dropper download path
sysmon.py # persistence script name
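Once you have the run logs downloaded locally (each run's "Download logs" option produces a zip of per-step logs), the string search can be scripted. This is a sketch, not part of any official tooling; the IOC strings themselves are the published ones above.

```shell
# scan_logs DIR -- grep every file under DIR for the published IOC strings
# and print one line per hit.
scan_logs() {
  local dir="$1" ioc f
  local iocs=(
    "aquasecurtiy.org"                             # typosquat C2 domain
    "45.148.10.212"                                # C2 IP address
    "plug-tab-protective-relay.trycloudflare.com"  # fallback tunnel
    "tpcp.tar.gz"                                  # exfiltration bundle
    "TeamPCP"                                      # payload self-identification
    "/tmp/pglog"                                   # dropper download path
    "sysmon.py"                                    # persistence script
  )
  for ioc in "${iocs[@]}"; do
    grep -rFl -- "$ioc" "$dir" 2>/dev/null | while IFS= read -r f; do
      echo "IOC HIT: $ioc in $f"
    done
  done
}
```

Any `IOC HIT` line for a run inside the exposure window means treating that pipeline's secrets as compromised.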
If your pipeline logs are ingested into Azure Monitor or a SIEM, use the following KQL pattern:
// Azure Monitor — search pipeline job logs for IOC strings
AzureDiagnostics
| where TimeGenerated between (datetime(2026-03-19T17:00:00Z) .. datetime(2026-03-23T06:00:00Z))
| where Category == "PipelineRuns" or ResourceType contains "PIPELINES"
| where Message has_any (
"aquasecurtiy.org",
"45.148.10.212",
"tpcp.tar.gz",
"TeamPCP",
"sysmon.py",
"pglog"
)
| project TimeGenerated, Message, ResourceId
If you have Microsoft Defender for DevOps or Microsoft Sentinel with the Azure DevOps connector enabled, you can also hunt in the SecurityAlert and AzureDevOpsAuditing tables, and, where Defender for Endpoint covers your agents, in DeviceNetworkEvents:
// Sentinel — hunt for network connections to C2 infrastructure from ADO-connected workloads
DeviceNetworkEvents
| where TimeGenerated between (datetime(2026-03-19T17:00:00Z) .. datetime(2026-03-23T06:00:00Z))
| where RemoteIP == "45.148.10.212"
or RemoteUrl has "aquasecurtiy.org"
or RemoteUrl has "trycloudflare.com"
| project TimeGenerated, DeviceName, InitiatingProcessName, RemoteIP, RemoteUrl
Step 3: Check for the MSDO Trivy binary specifically¶
For Path A, the key question is which binary was bundled in the NuGet package that MicrosoftSecurityDevOps@1 downloaded and executed. Microsoft has not published this publicly (see issue #155).
To check what version actually ran, look in your pipeline job logs for the MSDO initialisation output. When MSDO starts, it typically logs which tool versions it is using. Search your logs for:
Trivy
trivy
tool: trivy
Running tool: trivy
If you find a log line indicating trivy 0.69.4 was used, treat the pipeline as compromised. If the log shows 0.69.3 or earlier, the binary itself was likely clean — but the MSDO task still ran, and whether the NuGet package distribution mechanism itself was clean remains unconfirmed without a Microsoft response.
If logs are not retained or are inconclusive, the conservative position is to treat any pipeline that ran MicrosoftSecurityDevOps@1 between March 19 and March 22 as potentially affected and rotate the pipeline's accessible secrets accordingly.
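With logs downloaded locally, extracting the version MSDO reported can be scripted. The regex below is an assumption about log format (any "trivy" token followed by a semver), not a documented MSDO output format, so treat an empty result as inconclusive rather than clean.

```shell
# trivy_versions DIR -- list distinct "trivy <x.y.z>" strings found in any
# log file under DIR; matching is case-insensitive and deliberately loose.
trivy_versions() {
  grep -rhoiE 'trivy[^0-9]*[0-9]+\.[0-9]+\.[0-9]+' "$1" 2>/dev/null | sort -u
}
```

Any `0.69.4` in the output means treating that pipeline as compromised per the decision matrix in Step 7.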
Step 4: Check container images in your registry¶
Azure Container Registry¶
# List all Trivy image tags in your ACR
az acr repository show-tags \
--name YOUR_ACR_NAME \
--repository aquasec/trivy \
--output table
# Get the digest for a specific tag
az acr repository show-manifests \
--name YOUR_ACR_NAME \
--repository aquasec/trivy \
--query "[?tags[?@ == '0.69.4' || @ == '0.69.5' || @ == '0.69.6' || @ == 'latest']].[digest, tags, createdTime]" \
--output table
Compare any returned digests against the known malicious digests:
sha256:27f446230c60bbf0b70e008db798bd4f33b7826f9f76f756606f5417100beef3 # Docker Hub 0.69.4
sha256:425cd3e1a2846ac73944e891250377d2b03653e6f028833e30fc00c1abbc6d33 # Docker Hub 0.69.6
If you find a match, the image is compromised and should be deleted immediately. Any pipeline or system that ran it during the exposure window should be treated as having had its secrets exfiltrated.
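Comparing 64 hex characters by eye is easy to fumble, so script the comparison. A minimal sketch using the two digests from the advisory:

```shell
# Known malicious Docker Hub digests from the advisory
BAD_DIGESTS=(
  "sha256:27f446230c60bbf0b70e008db798bd4f33b7826f9f76f756606f5417100beef3"  # 0.69.4
  "sha256:425cd3e1a2846ac73944e891250377d2b03653e6f028833e30fc00c1abbc6d33"  # 0.69.6
)

# check_digest DIGEST -- compare one sha256 digest against the known-bad list
check_digest() {
  local d="$1" bad
  for bad in "${BAD_DIGESTS[@]}"; do
    if [ "$d" = "$bad" ]; then
      echo "COMPROMISED: $d"
      return 0
    fi
  done
  echo "no match: $d"
}
```

Feed it each digest returned by the ACR queries above, e.g. `check_digest sha256:27f4...`.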
Artifactory or Harbor mirrors¶
If you run a self-hosted registry that proxied Docker Hub, the check is the same — look for the compromised digests, not just the tag names. Tags are mutable; digests are not.
# Check a locally cached image's registry digest. Use RepoDigests: the local
# .Id is the image config hash, not the manifest digest you need to compare.
docker inspect aquasec/trivy:0.69.4 --format '{{join .RepoDigests ", "}}'
# Or pull by tag and list digests explicitly
docker pull aquasec/trivy:0.69.4
docker images --digests aquasec/trivy
Step 5: Check self-hosted agents¶
If any of your Azure DevOps pipelines ran on self-hosted agents during the exposure window and executed Trivy or pulled a compromised Docker image, check each agent for the persistence mechanism:
# Check for the TeamPCP Python dropper
ls -la ~/.config/systemd/user/sysmon.py 2>/dev/null && echo "FOUND: sysmon.py present"
# Check for active systemd user services matching known persistence names
systemctl --user list-units | grep -E "sysmon|pgmon|pgmonitor|internal-monitor"
# Check for the CanisterWorm-related service
systemctl --user status pgmon 2>/dev/null
# Check for Kubernetes-related persistence (if agent has kubectl access)
kubectl get daemonsets -n kube-system 2>/dev/null | grep -E "host-provisioner"
# Search for the dropper download path
ls -la /tmp/pglog 2>/dev/null && echo "FOUND: dropper binary present"
If sysmon.py or pgmon.service is found, the persistence mechanism has been installed. Do not simply rotate credentials and continue using the agent. The dropper polls for arbitrary payloads and any new credentials would be immediately re-stolen. Rebuild the agent from scratch.
Step 6: Check network logs¶
If your self-hosted agents or Microsoft-hosted equivalent log outbound connections, search for the following in your firewall, NSG flow logs, or Azure Monitor network data:
Destination IP: 45.148.10.212
Destination domain: scan.aquasecurtiy.org (note the misspelling)
Destination domain: plug-tab-protective-relay.trycloudflare.com
Destination domain: tdtqy-oyaaa-aaaae-af2dq-cai.raw.icp0.io
In Azure Monitor / Log Analytics:
AzureNetworkAnalytics_CL
| where TimeGenerated between (datetime(2026-03-19T17:00:00Z) .. datetime(2026-03-24T00:00:00Z))
| where DestIP_s == "45.148.10.212"
or DestPublicIPs_s has "45.148.10.212"
| project TimeGenerated, DestIP_s, DestPort_d, SrcIP_s, FlowStatus_s
// Or via NSG flow logs
AzureNetworkAnalytics_CL
| where TimeGenerated between (datetime(2026-03-19) .. datetime(2026-03-24))
| where DestPublicIPs_s has "45.148.10.212"
or FQDN_s has "aquasecurtiy"
Step 7: Decision matrix¶
Use this to decide what action to take based on what you found:
| What you found | Conclusion | Action |
|---|---|---|
| MSDO ran during window, version unknown | Uncertain | Rotate all pipeline secrets as precaution. Open Microsoft support ticket. |
| Log shows Trivy 0.69.4 executed | Compromised | Rotate all secrets accessible to that pipeline. Check agent for persistence. |
| Log shows Trivy 0.69.3 or earlier | Binary likely clean | No rotation required for binary exposure. MSDO channel still uncertain. |
| Compromised Docker image digest found in registry | Compromised | Delete image. Rotate all secrets from any pipeline that ran it. |
| sysmon.py or pgmon.service found on agent | Compromised + persistent backdoor | Rebuild agent. Rotate all secrets. Do not rotate-in-place. |
| C2 IP or domain found in network logs | Confirmed exfiltration | Treat all pipeline secrets as stolen. Initiate full IR process. |
| Nothing found | Likely unaffected | Still worth rotating long-lived credentials as hygiene measure. |
Step 8: What to rotate if affected¶
If any of the above checks indicates a compromise, rotate everything accessible to the affected pipeline. In Azure DevOps that typically means:
Service connections — Azure Resource Manager, Docker registry, Kubernetes, SSH, generic. Revoke and re-issue all service principal credentials linked to affected service connections.
Variable groups and pipeline secrets — Any secret stored in a variable group that was linked to an affected pipeline definition should be treated as exposed.
Azure DevOps PATs and service account tokens — If the pipeline had access to any Azure DevOps PAT or service endpoint token, rotate those in Azure DevOps under User Settings > Personal Access Tokens.
Cloud provider credentials — AWS access keys, Azure service principal client secrets, GCP service account keys. If these were accessible as environment variables or mounted files in the pipeline, rotate them at the provider level, not just in the Azure DevOps variable store.
Container registry tokens — ACR service principals, Docker Hub access tokens, any registry credentials stored in the pipeline.
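To make sure nothing is missed, dump the project's service connections with `az devops service-endpoint list -o json > endpoints.json` and turn the dump into a checklist. This helper is a sketch and assumes the CLI's standard JSON fields (`name`, `type`, `id`):

```shell
# list_rotation_targets ENDPOINTS_JSON -- print one ROTATE line per service
# connection in a JSON dump from `az devops service-endpoint list -o json`.
list_rotation_targets() {
  python3 - "$1" <<'PY'
import json, sys
with open(sys.argv[1]) as f:
    endpoints = json.load(f)
for ep in endpoints:
    print("ROTATE: %s (type=%s, id=%s)" % (ep.get("name"), ep.get("type"), ep.get("id")))
PY
}
```

Work through the output line by line, revoking and re-issuing the backing credential for each entry rather than only updating the stored secret.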
This guide reflects the state of public information as of March 29, 2026. The MSDO NuGet feed provenance question (issue #155) remains unanswered by Microsoft. If Microsoft publishes an explicit confirmation or denial, the MSDO-specific steps above should be revisited.
Sources: Aqua Security GHSA-69fq-xp46-6x23, Docker supply chain advisory, Microsoft Azure Pipelines security documentation, SafeDep incident timeline, GitHub issue microsoft/security-devops-azdevops#155