The compliance industry has normalized 95% false positive rates as a cost of doing business. That is not a volume problem; it is an architecture failure.
Somewhere in the last decade, the compliance industry arrived at an unspoken consensus: a 95% false positive rate is acceptable. Not celebrated, not defended in polite company — but accepted. The number shows up in conference presentations as a known quantity, not as an indictment. Entire departments, budgets, and staffing models are built around the assumption that nearly every alert the screening system generates will lead nowhere.
This article is not about data quality. We covered that last week. This article is about the economics and incentive structures that created the false positive economy, why those structures have been so durable, and why the regulatory environment is about to break them.
Ask a compliance leader whether a 95% false positive rate is acceptable, and they will tell you it is not. Then ask what their actual rate is, and the conversation gets quieter. The gap between what the industry says and what the industry tolerates is the defining feature of the false positive economy.
The consensus did not emerge from negligence. It emerged from a rational calculation: the cost of a missed sanctions hit — seven-figure fines, consent orders, criminal referrals, career-ending reputational damage — dwarfs the cost of a wasted investigation. So institutions set thresholds low, cast nets wide, and absorb the noise. The logic is defensible. The result is not.
A financial institution processing 10,000 screening alerts per day at a 90% false positive rate generates 9,000 alerts that lead to no action. At an industry-average thirty minutes per investigation, that is 4,500 analyst-hours spent daily on nothing. That is not a rounding error. It is a structural condition — and the industry has organized itself around it rather than solving it.
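For anyone who wants to check the arithmetic, here is a back-of-the-envelope sketch in Python. The alert volume, false positive rate, and minutes per investigation are the figures from the example above; the eight-hour analyst day is an assumption.

```python
# Back-of-the-envelope version of the arithmetic above. Alert volume, false
# positive rate, and minutes per investigation come from the example in the
# text; the eight-hour analyst day is an assumption.

alerts_per_day = 10_000
false_positive_rate = 0.90
minutes_per_investigation = 30
hours_per_analyst_day = 8  # assumed

false_positives_per_day = alerts_per_day * false_positive_rate
wasted_hours_per_day = false_positives_per_day * minutes_per_investigation / 60
full_time_equivalents = wasted_hours_per_day / hours_per_analyst_day

print(f"Dead-end alerts per day:      {false_positives_per_day:,.0f}")
print(f"Analyst-hours per day:        {wasted_hours_per_day:,.0f}")
print(f"Full-time analysts absorbed:  {full_time_equivalents:,.0f}")
```

At those volumes, clearing the noise is the full-time job of more than five hundred analysts whose work, by construction, leads nowhere.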
No regulator has ever fined an institution for generating too many alerts. That single fact explains more about the false positive economy than any technology discussion ever will.
The false positive economy persists because the penalty structure is radically asymmetric. False negatives are catastrophic: a missed hit on a sanctioned entity can trigger enforcement actions measured in hundreds of millions of dollars, plus the reputational and criminal exposure that follows. False positives carry no direct regulatory penalty whatsoever.
This creates a one-directional ratchet. Every enforcement action, every consent order, every headline about a compliance failure pushes thresholds lower and nets wider. Nothing pushes them back. The rational institutional response is to over-alert, because the downside of under-alerting is existential and the downside of over-alerting is merely expensive.
The result is a system that optimizes for the appearance of rigor rather than the reality of effectiveness. A compliance program that generates 50,000 alerts per month looks more robust on paper than one that generates 5,000 — even if the second program catches more genuine risk with fewer wasted investigations. Volume has become a proxy for vigilance, and that proxy has never been seriously challenged.
When compliance leaders calculate the cost of false positives, they typically count analyst hours and divide by alert volume. This dramatically understates the problem, because it captures only the first of the five cost categories below; the other four never appear on the compliance budget line.
Cost 1: Investigation waste. This is the one everyone counts. Analyst salaries, technology infrastructure, QA overhead, management time. At a large institution, this runs $12–18 million per year. It is the visible part of the iceberg.
Cost 2: Missed true positives from alert fatigue. This is the cost nobody wants to talk about. When an analyst reviews twenty consecutive false positives, the twenty-first alert — which might be genuine — receives the same reflexive dismissal. The research on alert fatigue is unambiguous: detection accuracy degrades measurably after sustained exposure to false alarms. The false positive economy does not just waste money. It actively undermines the detection it exists to support.
Cost 3: Investigator attrition. Compliance analyst turnover is among the highest in financial services. The primary driver is repetitive, low-value work — the lived experience of the false positive economy. Every departure takes institutional knowledge with it and costs $40,000–80,000 in recruiting and training.
Cost 4: Customer friction. Every false positive in customer onboarding screening is also a customer experience failure. Legitimate applicants flagged for enhanced due diligence experience delays, intrusive questioning, and in some cases outright denial. In competitive markets, this friction drives business to competitors with cleaner screening.
Cost 5: The new regulatory exposure. The EU’s Anti-Money Laundering Authority closed its first major supervisory data call in April 2026, and the methodology explicitly evaluates alert quality — not just alert quantity. For the first time, generating too many false positives is becoming a regulatory liability, not just an operational one.
The false positive economy survives because its costs are distributed across five budget lines that no single executive owns. Investigation waste sits in compliance. Attrition sits in HR. Customer friction sits in revenue. Missed hits sit in legal. And regulatory exposure sits in the future. Nobody adds them up.
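Adding them up is not technically hard. Here is a rough tally in Python: the investigation-waste figure is the midpoint of the range cited above, and every other value is an explicit placeholder, because those costs sit on budget lines that are rarely measured at all.

```python
# Rough tally of the five cost lines described above. The investigation-waste
# figure uses the midpoint of the range cited earlier; every other value is an
# explicit placeholder, because those costs sit on budget lines that are rarely
# measured. The exercise, not the numbers, is the point.

annual_costs = {
    "Investigation waste (compliance)": 15_000_000,  # midpoint of the $12-18M cited above
    "Analyst attrition (HR)":            3_000_000,  # placeholder: departures x $40-80k each
    "Customer friction (revenue)":       5_000_000,  # placeholder: abandoned or lost business
    "Missed true positives (legal)":    10_000_000,  # placeholder: expected enforcement exposure
    "Regulatory exposure (future)":      4_000_000,  # placeholder: remediation under alert-quality rules
}

for line, cost in annual_costs.items():
    print(f"{line:<36} ${cost:>12,}")
print(f"{'TOTAL':<36} ${sum(annual_costs.values()):>12,}")
```

The specific numbers will differ at every institution. The point is that the total only becomes visible when someone is responsible for computing it.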
The obvious response to a 95% false positive rate is to raise matching thresholds. In a rules-based screening system, this is the only lever available. And it is a trap.
Rules-based engines operate on a fixed precision-recall tradeoff. Raising the threshold reduces false positives but simultaneously increases false negatives. There is no setting on a rules-based system that reduces false positives without increasing risk. This is why the threshold ratchet only turns one way.
The precision-recall tradeoff is not a law of physics. It is a property of the architecture. Predictive models trained on millions of adjudicated screening decisions learn which combinations of signals predict genuine matches and which predict noise. They can reduce false positives and false negatives simultaneously. Leading implementations report 70–90% false positive reductions while improving true-positive detection rates.
This is the core technical insight: the false positive problem is not a tuning problem. It is an architecture problem. You cannot tune your way out of it. You have to replace the decision-making framework.
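To make the architectural point concrete, here is a minimal sketch on synthetic data. The signals, thresholds, and model are illustrative placeholders, not a description of any particular vendor's system: the rule alerts on a single name-similarity threshold, while a model trained on adjudicated outcomes learns which combinations of signals predict a genuine match.

```python
# Illustrative contrast between a fixed-threshold rule and a model trained on
# adjudicated dispositions. All data here is synthetic and the feature set is
# a placeholder; the point is the shape of the tradeoff, not the exact numbers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Signals a screening engine might attach to each alert (synthetic).
name_similarity = rng.uniform(0.6, 1.0, n)   # fuzzy name-match score
dob_match = rng.integers(0, 2, n)            # date of birth agrees
country_match = rng.integers(0, 2, n)        # country of residence agrees
id_match = rng.integers(0, 2, n)             # national ID / passport agrees
corroboration = dob_match + country_match + id_match

# Synthetic "ground truth": genuine hits need corroboration, not just a name.
is_true_hit = (
    ((name_similarity > 0.85) & (corroboration >= 2))
    | ((name_similarity > 0.70) & (corroboration == 3))
)

X = np.column_stack([name_similarity, dob_match, country_match, id_match])

# Rules-based engine: alert whenever name similarity clears a fixed threshold.
rule_alerts = name_similarity > 0.80

# Predictive model: learn which combinations of signals predict a genuine match.
model = LogisticRegression(max_iter=1000).fit(X, is_true_hit)
model_alerts = model.predict_proba(X)[:, 1] > 0.5

def precision_recall(alerts, truth):
    true_positives = (alerts & truth).sum()
    return true_positives / alerts.sum(), true_positives / truth.sum()

for label, alerts in [("rule ", rule_alerts), ("model", model_alerts)]:
    precision, recall = precision_recall(alerts, is_true_hit)
    print(f"{label}: precision={precision:.2f}  recall={recall:.2f}")
```

The setup is deliberately simple and evaluated in-sample, so treat the exact scores as illustration only. The relevant behavior is the one described above: the model improves precision and recall together, where the rule can only trade one for the other.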
The false positive economy has been stable for a decade because nothing in the environment penalized it. Three forces are now converging to break that stability.
Regulatory pressure on alert quality. AMLA’s supervisory methodology is the leading indicator. The OCC’s 2026 examination priorities reference “demonstrable effectiveness” of screening controls. FinCEN’s proposed rulemaking includes provisions for measuring screening precision. The message is shifting from “screen everything” to “screen everything well.”
Predictive model maturity. Three years ago, deploying machine learning in compliance screening was a pilot project. In 2026, it is production infrastructure at dozens of major institutions. The “unproven technology” objection is no longer credible.
Feedback loop automation. Agentic AI systems now make it operationally feasible to connect analyst dispositions back to the screening engine. The system gets measurably better over time instead of repeating the same mistakes indefinitely.
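Vendors implement that loop differently, but the underlying pattern is simple. A minimal sketch, with hypothetical names: each analyst disposition is captured as a labeled training example rather than discarded, and the scoring model is refit on the accumulated labels on a schedule.

```python
# Minimal sketch of a disposition feedback loop, with hypothetical names.
# Each analyst decision becomes a labeled example; the screening model is
# periodically retrained on the accumulated labels so the same noise is not
# repeated indefinitely.
from dataclasses import dataclass, field

@dataclass
class Disposition:
    alert_id: str
    features: list[float]      # the signals that generated the alert
    is_true_hit: bool          # the analyst's final adjudication

@dataclass
class FeedbackStore:
    labeled_examples: list[Disposition] = field(default_factory=list)

    def record(self, disposition: Disposition) -> None:
        """Capture an analyst decision as training data instead of discarding it."""
        self.labeled_examples.append(disposition)

    def training_batch(self):
        X = [d.features for d in self.labeled_examples]
        y = [int(d.is_true_hit) for d in self.labeled_examples]
        return X, y

# Dispositions flow in as analysts close alerts...
store = FeedbackStore()
store.record(Disposition("A-1042", [0.92, 1, 0, 1], is_true_hit=True))
store.record(Disposition("A-1043", [0.81, 0, 0, 0], is_true_hit=False))

# ...and the screening model is refit on a schedule, e.g. nightly or weekly.
X, y = store.training_batch()
# model.fit(X, y)  # same kind of model as in the earlier sketch
```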
The false positive economy is not a fact of nature. It is an artifact of asymmetric penalties, first-generation technology, and an industry that organized itself around a problem rather than solving it. For a decade, the equilibrium held because nothing punished false positives. That is changing.
Institutions that move now will find themselves with leaner operations, better detection, lower attrition, and a regulatory posture that demonstrates effectiveness rather than just effort. Institutions that wait will discover that a 95% false positive rate is no longer evidence of conservative compliance. It is evidence that the system cannot tell the difference.