A Bedrock spend spike and an unauthorized InvokeModel call are the same event. One shows up in AWS Cost Anomaly Detection and routes to your FinOps team via Amazon Simple Notification Service (SNS). The other shows up in CloudTrail and, if you’ve written the detection logic, routes to your Security Operations Center (SOC). In most organizations I work with, these two signals never meet. The fastest indicator that someone is abusing an AI service in your AWS org is sitting in a billing dashboard your security team doesn’t watch.
Two Alerts, No Correlation
AWS Cost Anomaly Detection uses machine learning to flag spend deviations by service, account, or cost allocation tag. When it detects something, it sends a notification through SNS or email to whoever owns the cost monitor. That’s typically a FinOps engineer or a finance team lead.
Separately, CloudTrail logs every bedrock:InvokeModel, bedrock:Converse, and bedrock:InvokeModelWithResponseStream call as a management event. GuardDuty can flag suspicious patterns like guardrail removal or anomalous access from unfamiliar principals. If you’ve enabled model invocation logging (which requires explicit opt-in and routes to CloudWatch Logs or S3), you can also capture the request and response payloads.
These are two views of the same activity. One tells you what it cost. The other tells you who did it, from where, and with what permissions. There is no native AWS integration that cross-correlates a cost anomaly alert with the CloudTrail events that generated the spend. Neither team sees both views by default.
FinOps knows spend spiked but can’t assess whether the access was authorized. Security has the access data but didn’t know to look because nobody flagged the cost event to them. I wrote about this organizational dynamic in the first piece in this series. This article is about what to do about it at the detection layer.
The CloudTrail Gaps Worth Knowing
CloudTrail gives you the basics for Bedrock: the principal Amazon Resource Name (ARN), the model ID, the source IP, the timestamp. That’s enough to start an investigation. But there are real gaps that affect detection quality.
In mid-2024, Sysdig’s Threat Research Team found that failed Bedrock API calls were logged in the same format as successful ones, without distinct error codes in the CloudTrail record. AWS resolved that specific issue by August 2024 after Sysdig’s disclosure, but the episode is instructive: security teams should validate CloudTrail log fidelity for newer AI services rather than assuming error codes behave as expected. Splunk has published specific detections for Bedrock access-denied events and for DeleteModelInvocationLoggingConfiguration, which is an anti-forensics indicator worth monitoring. Someone disabling model invocation logging should always generate an alert.
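That last detection is simple to stand up. Here is a minimal sketch, assuming an existing SNS topic your SOC already watches (the rule name and topic ARN are placeholders): an EventBridge rule that matches the CloudTrail record for DeleteModelInvocationLoggingConfiguration and notifies the topic.

```python
import json
import boto3

events = boto3.client("events")

# Match the CloudTrail record emitted when someone disables
# Bedrock model invocation logging (anti-forensics indicator).
pattern = {
    "source": ["aws.bedrock"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["bedrock.amazonaws.com"],
        "eventName": ["DeleteModelInvocationLoggingConfiguration"],
    },
}

events.put_rule(
    Name="bedrock-invocation-logging-disabled",
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)

# Route matches to the SOC's existing notification topic (placeholder ARN).
# The topic's resource policy must allow events.amazonaws.com to publish.
events.put_targets(
    Rule="bedrock-invocation-logging-disabled",
    Targets=[{"Id": "soc-sns", "Arn": "arn:aws:sns:us-east-1:111122223333:soc-alerts"}],
)
```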
Model invocation logging itself is not on by default. Without it, you know a model was invoked but not what data was sent to it. For a security team trying to assess data exfiltration risk through a foundation model, that’s a significant blind spot.
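Enabling it is one API call once a log destination exists. A sketch using boto3, assuming a pre-created CloudWatch Logs group and delivery role (both names are placeholders); the same call accepts an s3Config block if you’d rather land payloads in S3.

```python
import boto3

bedrock = boto3.client("bedrock")

# Opt in to model invocation logging. Without this, CloudTrail shows
# that a model was invoked but not what was sent to it.
bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "cloudWatchConfig": {
            "logGroupName": "/bedrock/model-invocations",  # placeholder
            "roleArn": "arn:aws:iam::111122223333:role/bedrock-logging",  # placeholder
        },
        "textDataDeliveryEnabled": True,
        "imageDataDeliveryEnabled": True,
        "embeddingDataDeliveryEnabled": True,
    }
)
```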
Most SOCs I’ve assessed have CloudTrail feeding their Security Information and Event Management (SIEM) platform. Very few have written detection rules specifically for Bedrock API patterns. The rules exist in published form, but adoption is low because Bedrock wasn’t on the threat model when the detection engineering backlog was last prioritized.
Cost Anomaly Detection Is Fast but Shallow
Here’s what FinOps tooling does well in this context: it catches spend deviations relatively fast. AWS Cost Anomaly Detection runs approximately three times per day against cost data that can lag up to 24 hours, so detection typically occurs within a day of the anomalous spend. For AI services where a single compromised credential can generate thousands of dollars in model invocations in a short window, that’s still potentially faster than a CloudTrail-based detection in an environment where nobody has written rules for Bedrock API patterns.
Earlier this year, Sysdig’s Threat Research Team documented an AWS breach where attackers went from an exposed credential in a public S3 bucket to full administrative control in under ten minutes. The privilege escalation phase, from credential theft to successful Lambda execution, took eight minutes. The compromised IAM user had read/write permissions on Lambda and restricted permissions on Bedrock. In a scenario like that, a cost anomaly alert firing on unexpected Bedrock spend could be an early indicator of compromise, particularly in environments where the SOC hasn’t written Bedrock-specific detection rules.
But Cost Anomaly Detection can tell you that Bedrock spend increased 400% in a specific account. It can’t tell you whether the IAM role generating that spend should exist. It can’t tell you that the role was attached to a Lambda function reachable from outside the VPC. It can’t tell you that model invocations included customer data in the prompt. It’s a financial signal. By itself, it triggers a cost investigation. Paired with CloudTrail context, it triggers a security investigation.
Building the Bridge
The integration doesn’t require a new product. It requires an EventBridge rule and a routing decision.
When AWS Cost Anomaly Detection flags a spend deviation on AI services (Bedrock, SageMaker, or whatever your org has classified as AI-adjacent), that alert should route to two destinations: the FinOps team’s existing workflow and a security triage queue. Not as an escalation. As a standing enrichment. The security team’s first step on receiving that alert is straightforward: pull the CloudTrail events for the flagged service and account over the anomaly window, identify the principals involved, assess whether the access was expected.
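One way to implement that routing, whether you wire it through EventBridge or hang it directly off the existing Cost Anomaly Detection SNS topic, is a small fan-out function. The sketch below assumes a Lambda subscribed to that topic; the triage queue URL, the list of AI-adjacent services, and the alert payload fields are assumptions to validate against your own alerts.

```python
import json
import boto3

sqs = boto3.client("sqs")

# Services your org has classified as AI-adjacent (assumption).
AI_SERVICES = {"Amazon Bedrock", "Amazon SageMaker"}

# Security triage queue (placeholder URL).
TRIAGE_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/ai-cost-anomaly-triage"


def handler(event, context):
    """Fan AI-service cost anomaly alerts out to the security triage queue.

    Subscribed to the SNS topic Cost Anomaly Detection already notifies;
    the FinOps team's existing subscription stays untouched.
    """
    for record in event["Records"]:
        anomaly = json.loads(record["Sns"]["Message"])
        # Root-cause entries name the service driving the anomaly. The field
        # layout here is an assumption; validate against your own alert payloads.
        services = {rc.get("service") for rc in anomaly.get("rootCauses", [])}
        if services & AI_SERVICES:
            sqs.send_message(
                QueueUrl=TRIAGE_QUEUE_URL,
                MessageBody=json.dumps(anomaly),
            )
```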
Going the other direction, detection rules for InvokeModel volume anomalies in CloudTrail should include cost context. If a principal is generating model invocations at a rate that would produce a meaningful cost event, that should be visible in the detection output. Correlating the two signals turns an ambiguous CloudTrail alert into one with financial impact attached, which changes how it gets prioritized.
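A sketch of that enrichment, assuming the anomaly window is already known from the cost alert: count InvokeModel calls per principal over the window from CloudTrail, then pull Bedrock spend for the same period from Cost Explorer. The identity field and the Cost Explorer service name are assumptions to verify against your own data.

```python
from collections import Counter
from datetime import datetime, timezone
import json
import boto3

cloudtrail = boto3.client("cloudtrail")
ce = boto3.client("ce")

# Anomaly window taken from the cost alert (placeholder values).
start = datetime(2025, 6, 1, tzinfo=timezone.utc)
end = datetime(2025, 6, 2, tzinfo=timezone.utc)

# Who generated the invocations during the window?
invocations = Counter()
paginator = cloudtrail.get_paginator("lookup_events")
for page in paginator.paginate(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "InvokeModel"}],
    StartTime=start,
    EndTime=end,
):
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        invocations[detail["userIdentity"]["arn"]] += 1

# What did the same window cost, for the same service, per Cost Explorer?
cost = ce.get_cost_and_usage(
    TimePeriod={"Start": start.strftime("%Y-%m-%d"), "End": end.strftime("%Y-%m-%d")},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    Filter={"Dimensions": {"Key": "SERVICE", "Values": ["Amazon Bedrock"]}},
)

print(invocations.most_common(10))
print(cost["ResultsByTime"])
```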
At the account level, this means AI workloads need dual-owner alerting from provisioning. When an account is vended for AI experimentation, both a cost monitor (Cost Anomaly Detection, scoped to the account) and a security detection (CloudTrail rules for model invocation APIs, scoped to the account) should be provisioned at the same time. Bolting on one after the other creates the gap this entire series is about.
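In an account-vending pipeline, that can look something like the sketch below, which assumes the org trail already delivers to a CloudWatch Logs group and that per-team SNS topics exist; every name, ARN, and threshold is a placeholder to replace with your own.

```python
import boto3

ACCOUNT_ID = "111122223333"                                           # vended account (placeholder)
SOC_TOPIC = f"arn:aws:sns:us-east-1:{ACCOUNT_ID}:soc-alerts"          # placeholder
FINOPS_TOPIC = f"arn:aws:sns:us-east-1:{ACCOUNT_ID}:finops-alerts"    # placeholder
TRAIL_LOG_GROUP = "org-cloudtrail"                                    # CloudTrail log group (placeholder)

ce = boto3.client("ce")
logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# 1. Cost monitor scoped to the vended account, with both teams subscribed from day one.
monitor = ce.create_anomaly_monitor(
    AnomalyMonitor={
        "MonitorName": f"ai-spend-{ACCOUNT_ID}",
        "MonitorType": "CUSTOM",
        "MonitorSpecification": {
            "Dimensions": {"Key": "LINKED_ACCOUNT", "Values": [ACCOUNT_ID]}
        },
    }
)
ce.create_anomaly_subscription(
    AnomalySubscription={
        "SubscriptionName": f"ai-spend-{ACCOUNT_ID}",
        "MonitorArnList": [monitor["MonitorArn"]],
        "Subscribers": [
            {"Type": "SNS", "Address": FINOPS_TOPIC},
            {"Type": "SNS", "Address": SOC_TOPIC},
        ],
        "Frequency": "IMMEDIATE",
        "ThresholdExpression": {
            "Dimensions": {
                "Key": "ANOMALY_TOTAL_IMPACT_ABSOLUTE",
                "MatchOptions": ["GREATER_THAN_OR_EQUAL"],
                "Values": ["100"],
            }
        },
    }
)

# 2. Security detection for the model invocation APIs, provisioned in the same step:
#    a metric filter on the CloudTrail log group plus an alarm to the SOC topic.
#    Add a recipientAccountId condition if this log group is fed by an org-wide trail.
logs.put_metric_filter(
    logGroupName=TRAIL_LOG_GROUP,
    filterName=f"bedrock-invocations-{ACCOUNT_ID}",
    filterPattern='{ ($.eventSource = "bedrock.amazonaws.com") && ($.eventName = "InvokeModel") }',
    metricTransformations=[{
        "metricName": f"BedrockInvocations-{ACCOUNT_ID}",
        "metricNamespace": "Security/AI",
        "metricValue": "1",
    }],
)
cloudwatch.put_metric_alarm(
    AlarmName=f"bedrock-invocation-volume-{ACCOUNT_ID}",
    Namespace="Security/AI",
    MetricName=f"BedrockInvocations-{ACCOUNT_ID}",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=500,  # tune to the account's expected baseline
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[SOC_TOPIC],
)
```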
What Changes Operationally
When these two signal streams connect, triage changes. A FinOps engineer who flags a Bedrock cost spike doesn’t just tag it to a cost center and close the ticket. They route it for an access review, because that’s the standing workflow. A security analyst investigating anomalous InvokeModel calls doesn’t start from zero trying to assess blast radius. They already have the cost data showing financial exposure.
The organizations getting this right aren’t running exotic tooling. They’ve made a routing decision: AI service alerts go to both teams, from the moment the account is provisioned. If your cost anomaly alerts and your CloudTrail detections for AI services currently land in different queues with no shared context, that’s the gap to close first.

Author: Jorge P., Senior Security Engineer, RKON

