
Sanctions list explosion: managing 50+ data sources without drowning your team

Jonnie Davis · 8 min read

By Jonnie Davis, VP Sales and Partnerships, Zenoo

A mid-sized European payments firm told us last quarter that their sanctions screening bill had doubled in eighteen months, not because transaction volumes had grown, but because the number of lists they needed to screen against had. They went from 12 data sources in 2024 to 34 by the end of 2025. Their compliance team did not double. The budget certainly did not double. But the regulatory expectation did.

This is not an isolated case. The number of sanctions designations globally passed 40,000 in 2025, spread across more than 50 discrete lists maintained by different authorities, each with its own update cadence, data format, and jurisdictional scope. OFAC's SDN list alone added over 2,500 new designations in a single year. The EU's restrictive measures expanded significantly following geopolitical events. FATF's updated guidance on targeted financial sanctions raised the bar on what constitutes adequate screening coverage. And then there are the proprietary feeds: enhanced PEP databases, state-owned enterprise lists, sectoral sanctions data, beneficial ownership-linked designations, and vessel-specific lists that did not exist five years ago.

If your screening programme relies on a single vendor pulling from a single consolidated list, you have a coverage gap. And regulators are increasingly treating that gap as a programme deficiency, not a technology limitation.

Single-vendor screening creates blind spots that regulators now recognise

The traditional model is straightforward: you select a sanctions screening vendor, they provide access to a consolidated list, and your compliance team reviews the alerts. It works until it does not.

The problem is that no single vendor covers every list. Vendors make editorial decisions about which lists to include, how frequently to update them, and how to normalise data across different formats. These decisions are invisible to you unless you audit them, and most compliance teams do not. You trust the vendor. The vendor makes reasonable commercial trade-offs. And somewhere in the gap between their coverage and the regulatory expectation, risk accumulates.

We spoke with a compliance technology lead at a UK-regulated EMI who ran the same batch of 5,000 customer names through three different screening providers in a single week. The results were sobering.

"Provider A returned 312 alerts. Provider B returned 487. Provider C returned 391. The overlap, the set of names all three flagged, was only 58%. Each provider was missing matches that the others caught. We had been relying on Provider A alone for two years. The question that kept me up at night was: how many true matches did we miss?"

This is not a hypothetical compliance problem. OFSI enforcement actions in 2025 included cases where firms were penalised not for failing to screen, but for screening against incomplete list coverage. The EU's AMLA, which begins phased operations in 2025, has signalled that it will assess whether regulated entities have "adequate and comprehensive" screening arrangements. Single-provider models are increasingly difficult to defend as adequate when the data landscape has fragmented this far.

The sanctions data landscape is fragmenting by design

Understanding why this is happening matters for choosing the right response. Sanctions data is fragmenting for structural reasons, not temporary ones.

First, there are more sanctioning authorities than ever. Beyond the traditional programmes (OFAC, EU, UN Security Council, OFSI), there are now meaningful sanctions regimes maintained by Australia, Canada, Japan, Switzerland, and a growing number of other jurisdictions. Each maintains its own list with its own designations. Some mirror the UN list. Many do not.

Second, sanctions designations are becoming more granular. Sectoral sanctions, which restrict specific types of transactions rather than blocking all dealings with a person or entity, require different data structures and screening logic than traditional list-based sanctions. You cannot screen for a sectoral restriction with a simple name match.

Third, proprietary and enhanced data feeds have become a regulatory expectation in practice, if not in letter. Screening against government-issued lists alone may satisfy the minimum statutory requirement, but regulators assessing programme effectiveness increasingly look for adverse media screening, enhanced PEP data, state-owned enterprise identification, and vessel tracking. These data sources come from different providers, in different formats, at different price points.

Fourth, update frequencies are diverging. OFAC may update its list multiple times per week. Some national lists update monthly. Proprietary feeds update in near-real-time. If your screening programme runs on a single consolidated list with a single update cadence, you have latency gaps for some sources and unnecessary processing overhead for others.

The result is that a compliance team trying to maintain genuinely comprehensive screening coverage in 2026 is dealing with a data management problem as much as a compliance problem. And it is a data management problem that most single-vendor screening tools were not designed to solve.

Three orchestration models: parallel, sequential, and ensemble

When we talk to compliance teams about moving beyond single-vendor screening, the conversation usually centres on three models. Each has trade-offs, and the right choice depends on your risk profile, transaction volumes, and budget.

Parallel orchestration sends each screening query to multiple providers simultaneously and aggregates the results. This maximises coverage because every provider's list is checked on every query. The downside is cost: you are paying for multiple screening calls per transaction. You also need a deduplication layer to handle cases where multiple providers flag the same underlying designation with slightly different data. Parallel orchestration suits firms with low tolerance for false negatives and sufficient budget to absorb the per-query cost multiplication.

Sequential orchestration routes each query through a primary provider first. Only queries that return no match (or a match below a confidence threshold) are escalated to secondary providers. This is more cost-efficient because the majority of clear cases are resolved on the first call. The risk is latency: multi-step screening takes longer, which matters for real-time transaction screening. Sequential orchestration works well for customer onboarding screening where response time is measured in seconds rather than milliseconds.

Ensemble orchestration combines elements of both. It maintains a primary provider for baseline screening but runs periodic batch reconciliations against secondary providers to catch anything the primary missed. New designations trigger an immediate parallel check across all providers. Routine re-screening follows the sequential model to control costs. This is the model we see most frequently in mature compliance programmes because it balances coverage, cost, and operational complexity.
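
To make the routing logic concrete, here is a minimal sketch of sequential orchestration with confidence-threshold escalation. The Match shape, the StubProvider stand-in, and the 0.85 threshold are our illustrative assumptions, not any vendor's API:

```python
# Sketch: sequential orchestration with confidence-threshold escalation.
# All names and thresholds here are illustrative, not a vendor API.

from dataclasses import dataclass

@dataclass
class Match:
    provider: str
    list_name: str
    confidence: float  # assumed already normalised to a 0..1 scale

class StubProvider:
    """Stand-in for a vendor adapter; a real adapter would call the vendor API."""
    def __init__(self, name, matches):
        self.name = name
        self._matches = matches  # query -> list of Match

    def screen(self, query):
        return self._matches.get(query, [])

def screen_sequential(query, providers, threshold=0.85):
    """Query providers in priority order; escalate only while no confident match."""
    collected = []
    for provider in providers:
        matches = provider.screen(query)
        collected.extend(matches)
        if any(m.confidence >= threshold for m in matches):
            break  # confident hit on this provider: stop escalating
    return collected
```

In this sketch, a confident hit from the primary provider resolves the query in one call; only inconclusive queries incur the cost of the secondary call, which is exactly why the sequential model is cheaper than parallel.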

The common thread across all three models is that you need an orchestration layer that can route queries, normalise responses from different providers, deduplicate alerts, and present a unified view to your analysts. Without that layer, multi-provider screening becomes a manual coordination exercise that is worse than single-vendor dependency.

False positive management gets harder with more data sources, not easier

Here is the uncomfortable truth about expanding your screening coverage: more data sources means more alerts. And more alerts, without better alert management, means your compliance team drowns in false positives.

A firm screening against 15 lists through a single provider might generate 200 alerts per day. The same firm screening against 50 lists through three providers might generate 600 to 800 alerts per day. If your alert review process does not scale with your screening coverage, you have traded one compliance risk (inadequate coverage) for another (inadequate alert disposition).

The challenge is compounded by the fact that different providers return alerts in different formats, with different confidence scores, and different supporting data. An analyst reviewing an OFAC match from Provider A and a potentially duplicate match from Provider B's proprietary enhanced list needs to determine whether these are the same underlying designation, whether the confidence levels are comparable, and which provider's supporting data is more complete.

"We expanded from one screening provider to three, and our alert volume tripled. But the real problem was not the volume. It was that our analysts were spending 40% of their time trying to reconcile duplicate alerts across providers. We needed a deduplication layer before we needed more providers. We got the sequencing backwards."

Effective multi-source alert management requires three capabilities that most standalone screening tools lack. First, entity resolution across providers: determining that Alert X from Provider A and Alert Y from Provider B relate to the same designated person or entity. Second, confidence score normalisation: translating different providers' scoring methodologies into a comparable scale so your analysts can prioritise consistently. Third, consolidated case management: presenting the analyst with a single alert package that contains all relevant matches from all providers, rather than requiring them to review each provider's output separately.
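
The first of these capabilities, entity resolution across providers, can be illustrated with a deliberately simple sketch that groups alerts under a normalised name key. Production entity resolution would add fuzzy matching, transliteration handling, and identifier-based linking; the names and alert shape below are hypothetical:

```python
# Sketch: grouping multi-provider alerts into one case per resolved entity.
# The token-sort key is a deliberate simplification of real entity resolution.

import unicodedata
from collections import defaultdict

def normalise_name(name):
    """Strip accents, punctuation, case, and token order, so that
    'MÜLLER, Hans' and 'hans muller' resolve to the same key."""
    stripped = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in stripped)
    return " ".join(sorted(cleaned.lower().split()))

def consolidate(alerts):
    """Group alerts from multiple providers into one case per entity key.
    Each alert is assumed to carry 'provider', 'matched_name', 'confidence'."""
    cases = defaultdict(list)
    for alert in alerts:
        cases[normalise_name(alert["matched_name"])].append(alert)
    return dict(cases)
```

Even this crude key collapses the most common class of duplicate alerts, which is two providers flagging the same designation with different name formatting.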

If you are planning to expand your screening data sources, invest in these capabilities first. The coverage improvement is wasted if your team cannot process the resulting alerts efficiently.

Cost per screening: vendor consolidation versus best-of-breed stacking

Cost is where the multi-provider conversation gets difficult. Screening vendors typically price per query or per customer screened. Moving from one provider to three does not triple your costs (sequential orchestration mitigates this), but it does increase them materially. The question is whether the increase is justified by the coverage improvement.

We have analysed cost structures across dozens of implementations, and the economics typically fall into three bands.

Single-vendor screening with a consolidated list provider costs roughly 0.8 to 1.5 pence per query. This is the baseline. It covers the major government lists and basic proprietary enhancements. Coverage is typically 60 to 75% of available sanctions data sources.

Dual-provider orchestration (primary plus one secondary) costs roughly 1.2 to 2.5 pence per query when using sequential routing, because only a fraction of queries reach the secondary provider. Coverage jumps to 80 to 90% of available sources. This is the sweet spot for most regulated firms: meaningful coverage improvement at a manageable cost increment.

Full multi-provider ensemble (three or more providers with parallel reconciliation) costs 2.5 to 4.5 pence per query. Coverage approaches 95% or above. This model is typically justified for firms in high-risk sectors (correspondent banking, trade finance, virtual asset service providers) or firms operating across many jurisdictions simultaneously.

The cost comparison that matters is not the per-query price. It is the fully loaded cost including analyst time for alert review. A cheaper screening provider that generates twice as many false positives may cost more in total than a more expensive provider with better matching precision. When evaluating multi-provider models, factor in the expected alert volume, the deduplication efficiency, and the analyst time per alert. The cheapest screening call is the one that produces a confident, actionable result on the first pass.
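
That fully loaded comparison can be made concrete with a back-of-envelope model. The analyst rate and the example figures are illustrative assumptions, not benchmarks:

```python
# Sketch: fully loaded screening cost = provider fees + analyst review time.
# All rates and figures below are illustrative assumptions.

def fully_loaded_cost(per_query_pence, queries, alert_rate,
                      analyst_mins_per_alert, analyst_rate_per_hour_pence=5000):
    """Total cost in pence: per-query provider fees plus analyst time
    spent dispositioning the alerts those queries generate."""
    provider_cost = per_query_pence * queries
    alerts = queries * alert_rate
    analyst_cost = alerts * analyst_mins_per_alert / 60 * analyst_rate_per_hour_pence
    return provider_cost + analyst_cost

# A cheap provider with a 4% alert rate versus a pricier provider at 2%,
# both reviewed at 8 analyst-minutes per alert over 100,000 queries:
cheap = fully_loaded_cost(1.0, 100_000, 0.04, 8)
precise = fully_loaded_cost(2.0, 100_000, 0.02, 8)
```

Under these assumptions the cheaper per-query provider is the more expensive programme overall, because analyst time on the extra false positives dominates the fee difference.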

Implementation checklist: getting multi-source screening right

If you are moving from single-vendor to multi-source sanctions screening, here is what we recommend based on implementations we have supported across UK and EU regulated firms.

Audit your current coverage first. Before adding providers, understand what your existing provider actually covers. Request their full list inventory, including update frequencies. Compare it against the lists required by every jurisdiction in which you operate. The gap analysis tells you exactly which providers you need to add and which lists they must cover. Do not buy coverage you already have.

Build the deduplication layer before adding sources. As discussed, expanding coverage without deduplication creates an alert management crisis. Your orchestration layer must be able to resolve entities across providers before you start routing queries to multiple destinations. Test deduplication with a historical batch before going live with real-time multi-provider screening.

Define your SLA monitoring framework. Each provider has different uptime commitments, update latencies, and response times. Your orchestration layer needs to monitor these independently and route around providers that are degraded or unavailable. We have seen cases where a provider's list update was delayed by 48 hours without any notification to the customer. If you are not monitoring, you will not know.
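
A minimal staleness check along these lines might look like the following sketch. The feed names and SLA windows are illustrative:

```python
# Sketch: flag providers whose last list refresh breaches their staleness SLA.
# Feed names and windows are illustrative assumptions.

from datetime import datetime, timedelta, timezone

def stale_providers(last_update, max_age, now=None):
    """Return providers whose last successful refresh is older than allowed.
    last_update: provider -> datetime of last refresh (UTC).
    max_age: provider -> timedelta of permitted staleness."""
    now = now or datetime.now(timezone.utc)
    return sorted(p for p, ts in last_update.items() if now - ts > max_age[p])
```

Run against the 48-hour-delay scenario described above, a check like this surfaces the silent failure that the provider never notified anyone about.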

Normalise confidence scores. A 92% match score from Provider A is not equivalent to a 92% match score from Provider B. Each provider uses different matching algorithms, different weighting for name components, and different reference data. Calibrate your score normalisation by running the same test dataset through all providers and mapping their score distributions.
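
One way to run that calibration, sketched below, is to map each provider's raw scores to percentile ranks over the shared test batch. The score values are invented for illustration:

```python
# Sketch: normalise provider scores by percentile rank over a shared test set.
# Calibration scores here are invented for illustration.

import bisect

def build_normaliser(calibration_scores):
    """Return a function mapping a provider's raw score to the fraction of
    calibration scores at or below it (its percentile rank, 0..1)."""
    sorted_scores = sorted(calibration_scores)
    n = len(sorted_scores)

    def normalise(raw):
        return bisect.bisect_right(sorted_scores, raw) / n

    return normalise
```

With this mapping, a raw 92 from a provider that routinely scores in the nineties lands at a lower percentile than a raw 92 from a provider whose scores cluster around 60, which is exactly the comparability problem the calibration is meant to fix.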

Maintain a single audit trail. Regulators expect a complete, chronological record of every screening action: the query, the lists screened, the results, and the disposition. When you screen across multiple providers, this audit trail must consolidate all provider responses into a single record per screening event. If an examiner asks "what lists did you screen this customer against on this date, and what were the results?" you need a single answer, not three separate reports from three different systems.
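
A consolidated record might be shaped like the sketch below. The field names are our assumptions for illustration, not a regulatory schema:

```python
# Sketch: one consolidated audit record per screening event, across providers.
# Field names are illustrative assumptions, not a regulatory schema.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ScreeningRecord:
    query: str
    screened_at: str        # ISO 8601 UTC timestamp
    provider_results: list  # per provider: {"provider", "lists", "matches"}
    disposition: str = "pending"

    def lists_screened(self):
        """Single answer to 'what lists was this customer screened against?'"""
        return sorted({l for r in self.provider_results for l in r["lists"]})

def record_event(query, provider_results):
    return ScreeningRecord(query, datetime.now(timezone.utc).isoformat(),
                           provider_results)
```

The point of the `lists_screened` view is that the examiner's question gets one answer from one record, not three exports from three systems.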

Plan for provider exit. Vendor lock-in is a risk with any technology provider, and it is a more acute risk in a multi-provider model where your orchestration logic depends on specific provider APIs and data formats. Ensure your integration architecture uses abstraction layers that allow you to swap providers without rebuilding your screening workflow. Zenoo's orchestration approach is built around this principle: standardised interfaces that insulate your compliance operations from individual provider dependencies.
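
The abstraction can be as simple as a narrow interface plus one thin adapter per vendor. The vendor client below is hypothetical, invented for illustration, not a real API:

```python
# Sketch: a narrow provider interface plus a per-vendor adapter.
# The vendor client (lookup(), 0-100 scores) is a hypothetical assumption.

from typing import List, Protocol

class ScreeningProvider(Protocol):
    """The only interface the orchestration layer depends on. Swapping a
    vendor means writing a new adapter, not rebuilding the workflow."""
    name: str
    def screen(self, query: str) -> List[dict]: ...

class VendorAAdapter:
    """Hypothetical adapter translating one vendor's client and score scale
    into the common alert shape used everywhere downstream."""
    name = "vendor_a"

    def __init__(self, client):
        self._client = client

    def screen(self, query):
        return [{"matched_name": r["name"], "confidence": r["score"] / 100}
                for r in self._client.lookup(query)]
```

Everything downstream, routing, deduplication, case management, sees only the common shape, which is what makes provider exit a contained change rather than a migration project.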

Test failover before you need it. If your primary provider goes down, does your orchestration layer automatically route to alternatives? How quickly? What happens to in-flight queries? Test this under realistic conditions, not just on paper. The firms that handle provider outages well are the ones that rehearsed the scenario before it happened.
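
A failover path you can rehearse in a test harness might look like this sketch. The ProviderUnavailable convention and the stub providers are illustrative assumptions:

```python
# Sketch: route around unavailable providers in priority order.
# The exception convention and stub providers are illustrative.

class ProviderUnavailable(Exception):
    """Assumed convention: raised by an adapter when its vendor is down."""

def screen_with_failover(query, providers):
    """Try providers in priority order, falling through when one is down."""
    failures = []
    for provider in providers:
        try:
            return provider.name, provider.screen(query)
        except ProviderUnavailable:
            failures.append(provider.name)  # record for SLA reporting
    raise RuntimeError(f"all screening providers unavailable: {failures}")

class DownProvider:
    """Stub simulating a primary outage for rehearsal."""
    name = "primary"
    def screen(self, query):
        raise ProviderUnavailable()

class UpProvider:
    """Stub simulating a healthy secondary."""
    name = "secondary"
    def screen(self, query):
        return []
```

Running a harness like this on a schedule, with the primary deliberately failed, is the rehearsal: you learn whether the fallback fires, how long it takes, and what your in-flight queries see, before a real outage forces the question.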

The regulatory trajectory is clear. AMLA, FATF, and national supervisors are moving towards an expectation of comprehensive, multi-source screening coverage with documented rationale for every architectural decision. The firms that invest in orchestration infrastructure now will be well positioned. The firms that wait for an enforcement action to force the issue will pay more, in every sense.

Sanctions data is not going to consolidate. The number of lists, the variety of data formats, and the divergence of update frequencies will only increase as geopolitical complexity grows and new sanctioning authorities emerge. The question is not whether you need multi-source screening. It is whether your infrastructure can support it without overwhelming your team.

If you are managing sanctions screening across multiple data sources and finding that alert volumes, reconciliation overhead, or coverage gaps are becoming unsustainable, we should talk. Book a demo. 30 minutes. Your data. No slides.

Jonnie Davis is VP Sales and Partnerships at Zenoo, where he works with compliance and operations teams to design screening and onboarding infrastructure that scales with regulatory complexity.

