Here’s a scenario that’s becoming more common than most teams want to admit.
A cost anomaly alert fires late on a Tuesday. AWS Bedrock spend spiked 340% in a single account over 72 hours. The FinOps engineer traces it to a Lambda function attached to an IAM role with broad model invocation permissions. Nobody on the team recognizes the role. They escalate to Security. Security asks who provisioned it. Nobody knows. The account was inherited eight months ago during an acquisition and never fully inventoried.
That’s a FinOps problem, a security incident, and an AI governance failure all at once. Most organizations aren’t structured to treat it as all three simultaneously, and that’s where the risk compounds.
The Org Chart Didn’t Anticipate AI
The separation of FinOps and Security made sense when the infrastructure was predictable. FinOps owned cost visibility, tagging hygiene, and rightsizing. Security owned IAM, detective controls, and compliance posture. For EC2, RDS, and S3 workloads, these lanes rarely collided in meaningful ways.
AI services broke that assumption quietly.
An over-permissioned IAM role attached to a Bedrock endpoint is both a security finding and an uncapped cost exposure. An untagged SageMaker experiment is a workload running outside your threat model that nobody is tracking spend on. A foundation model accessible to an externally invocable function is a data egress vector and a budget liability that nobody explicitly approved.
Every AI service misconfiguration has two blast radii. One is measured in dollars. The other is measured in data. Most organizations have a team watching each, but nobody watching both at the same time.
Shadow AI Is Already Inside Your AWS Org
This isn’t a future problem. It’s already happening in environments I’ve worked in across PE-backed and enterprise AWS organizations.
Developers are experimenting with Bedrock, SageMaker, and third-party model APIs directly in non-production accounts. Those accounts frequently carry permissive Service Control Policies (SCPs) inherited from early-stage growth or a pre-acquisition baseline that nobody revisited. AI services were never included in the tagging strategy because nobody anticipated needing to. FinOps dashboards aren’t surfacing model invocations with the same visibility as compute spend. Security teams aren’t writing detection logic for bedrock:InvokeModel the way they’d write rules for anomalous S3 access.
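As a sketch of what that detection logic could look like: an EventBridge pattern matching Bedrock invocation events from CloudTrail, plus a check against an allow-list of known roles. This assumes Bedrock data events are being recorded to CloudTrail (InvokeModel is a data event and is not logged by default), and the role names are hypothetical placeholders.

```python
# Sketch: flag Bedrock model invocations from principals nobody recognizes.
# Assumes CloudTrail data-event logging for Bedrock is enabled; role names
# below are illustrative, not a recommendation.
import json

# Roles we expect to invoke models; anything else warrants a security review.
APPROVED_ROLES = {"ml-platform-prod", "bedrock-batch-runner"}

# EventBridge event pattern for Bedrock API calls surfaced via CloudTrail.
invoke_model_pattern = {
    "source": ["aws.bedrock"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["bedrock.amazonaws.com"],
        "eventName": ["InvokeModel", "InvokeModelWithResponseStream"],
    },
}

def is_unapproved_invocation(event: dict) -> bool:
    """True when an InvokeModel event came from a role outside the allow-list."""
    arn = event.get("detail", {}).get("userIdentity", {}).get("arn", "")
    # Assumed-role ARNs look like arn:aws:sts::<acct>:assumed-role/<role>/<session>
    role = arn.split("/")[1] if "/" in arn else ""
    return role not in APPROVED_ROLES

print(json.dumps(invoke_model_pattern, indent=2))
```

The point isn't this specific pattern; it's that Security writes this rule with the same priority it gives anomalous S3 access, and FinOps sees its output.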
The assumptions quietly baked into this are familiar: AI spend is too small to govern yet. Bedrock is a managed service, so the security surface is Amazon's problem. Developers aren't using foundation models in production, because nobody submitted a budget request.
All three of those assumptions are wrong in most environments I’ve assessed.
Where Convergence Actually Matters
What convergence actually requires is a shared accountability model, applied to a specific class of infrastructure that neither team currently owns cleanly. Not a new tool. Not a reorganization.
Visibility has to be symmetric. Cost anomaly alerts on AI services should automatically trigger a security review threshold alongside the budget notification. If FinOps sees a spike in Bedrock invocations, Security should be in that conversation by default, as a standing workflow rather than an escalation. Tag enforcement for AI services needs joint authorship. Right now it’s typically a FinOps initiative that Security never signed off on.
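A minimal sketch of that standing workflow: a handler that receives an AWS Cost Anomaly Detection alert via SNS and flags any AI-service anomaly for security review rather than routing it to budget notifications alone. The field names follow the anomaly alert payload as I've seen it, and the topic name is a hypothetical placeholder; verify both against your own alerts before relying on this.

```python
# Sketch: turn a Cost Anomaly Detection alert into a security-review item,
# so Security is in the conversation by default, not by escalation.
# SECURITY_TOPIC_ARN and the AI service list are illustrative assumptions.
import json
import os

AI_SERVICES = {"Amazon Bedrock", "Amazon SageMaker"}
SECURITY_TOPIC_ARN = os.environ.get("SECURITY_TOPIC_ARN", "")  # hypothetical

def handler(event, context=None):
    """Fan a cost anomaly out to Security when it touches an AI service."""
    flagged = []
    for record in event.get("Records", []):
        anomaly = json.loads(record["Sns"]["Message"])
        for cause in anomaly.get("rootCauses", []):
            if cause.get("service") in AI_SERVICES:
                flagged.append({
                    "service": cause.get("service"),
                    "account": cause.get("linkedAccount"),
                    "impact": anomaly.get("impact", {}).get("totalImpact"),
                })
    # In a real deployment this would publish to SECURITY_TOPIC_ARN via
    # boto3's sns.publish; returning the payload keeps the sketch testable.
    return {"security_review_required": bool(flagged), "items": flagged}
```

The mechanism is trivial. What matters is that both teams agreed on the threshold and both receive the signal.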
IAM is a cost control. Least-privilege access to AI services directly limits cost exposure. An IAM principal that can only invoke one specific model in one specific account can’t generate runaway spend across your org. Unused model access permissions are simultaneously a security finding and a FinOps finding. Treating them as only one or the other means half the organization never sees them.
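Concretely, a least-privilege policy like the following is also a cost cap: the principal can invoke exactly one foundation model in one region and can generate spend nowhere else. The model ID and region are illustrative, and foundation-model ARNs are account-less, which is what makes the scoping work.

```python
# Sketch: an IAM policy that is simultaneously a security control and a
# FinOps control. Model ID and region are illustrative assumptions.
import json

invoke_one_model_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "InvokeApprovedModelOnly",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            # Scoping Resource to one model ID bounds both the attack
            # surface and the spend surface in a single statement.
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
        }
    ],
}

print(json.dumps(invoke_one_model_policy, indent=2))
```

A periodic review that diffs granted model access against actual invocations closes the loop: unused grants get flagged to both teams, not one.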
The account boundary is your best governance primitive. AI workloads should live in dedicated accounts with an explicit cost owner and an explicit security owner, two people who have both agreed they’re responsible. Account vending for AI experimentation needs guardrails at provisioning time: budget thresholds, SCPs scoped to approved model access, egress controls active from day one. Retrofitting governance onto an account after spend has already appeared is governance that consistently arrives too late.
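A sketch of what those provisioning-time guardrails could look like as a Service Control Policy: deny SageMaker resource creation unless a cost-owner tag is supplied, and deny model invocation outside an approved list. The tag key and the approved-model ARNs are illustrative assumptions, not a recommended baseline.

```python
# Sketch: an SCP attached at account-vending time for AI experimentation
# accounts. Tag key and model list are illustrative assumptions.
import json

REQUIRED_TAG = "cost-owner"
APPROVED_MODEL_ARNS = [
    "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-*",
]

ai_vending_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Resource creation without a named cost owner never starts.
            "Sid": "DenyUntaggedAIResources",
            "Effect": "Deny",
            "Action": ["sagemaker:Create*"],
            "Resource": "*",
            "Condition": {"Null": {f"aws:RequestTag/{REQUIRED_TAG}": "true"}},
        },
        {
            # Invocation is limited to the models both owners approved.
            "Sid": "DenyUnapprovedModels",
            "Effect": "Deny",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "NotResource": APPROVED_MODEL_ARNS,
        },
    ],
}

print(json.dumps(ai_vending_scp, indent=2))
```

Pair this with a budget threshold created at vending time and both owners are named before the first dollar of spend, not after.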
What Leadership Usually Gets Wrong
“We’ll govern AI workloads once they reach production.”
By that point the IAM roles are already months old, the tagging debt is structural, and the cost baseline was never established cleanly. The window to set governance boundaries is at provisioning, not at promotion.
“Our FinOps tool will catch runaway AI spend.”
FinOps tooling tells you that spend happened. It doesn’t tell you whether the principal that generated it should have had access at all. Cost anomaly detection and access governance are different signals. They need different owners. The problem is that right now, in most organizations, neither team is watching both signals simultaneously.
The Real Ask
The enterprises handling this well aren’t doing anything exotic. They’ve extended a principle that works everywhere else in mature cloud governance: the account boundary is the unit of ownership, and ownership means someone is accountable for both what it costs and what it can access.
AI just makes the cost of skipping that step visible faster than anything that came before it.
If your FinOps and Security teams aren’t in the same room when AI services get provisioned, they probably need to be. Not because of a compliance mandate. Because the blast radius when they aren’t is bigger than most organizations have priced in.

Author: Jorge P., Senior Security Engineer, RKON

