By Antonio Carro, Head of Delivery, Zenoo
In 2025, a mid-market payment processor in continental Europe received a fine of EUR 3.2 million. Not for onboarding failures. Not for missing sanctions hits. For stale KYB data. Their corporate customer records had not been refreshed in over three years, and during that period, one merchant's beneficial ownership had changed twice, its registered address had moved to a high-risk jurisdiction, and its trading activity had shifted entirely. The firm's compliance file still showed the original onboarding snapshot. The regulator's view was simple: if you cannot demonstrate that your business customer information is current, you cannot demonstrate that you are managing the risk.
This is not an isolated case. Across the UK and EU, enforcement actions increasingly cite outdated corporate customer data as a standalone failure, not merely an aggravating factor. The expectation is clear: KYB is not a one-time event. It is a continuous obligation. And most compliance teams are not meeting it.
Annual refresh for everyone is the wrong answer
The instinct, once a firm recognises the problem, is to mandate an annual refresh for the entire corporate customer base. It feels thorough. It looks defensible. And it is almost always the wrong approach.
Annual refresh cycles treat every business customer as equally likely to change. A FTSE 250 company with stable ownership, consistent trading patterns, and a UK registered office gets the same refresh cadence as a newly onboarded fintech operating across three jurisdictions with a complex holding structure. The result is predictable: compliance analysts spend weeks re-verifying entities where nothing has changed, while genuinely high-risk customers wait their turn in the queue.
The cost is not trivial. For a firm with 2,000 corporate customers, a full annual refresh cycle typically requires 1.5 to 2.5 analyst hours per entity when you account for data gathering, verification, risk reassessment, and documentation. At that volume, you are looking at 3,000 to 5,000 analyst hours per year dedicated purely to periodic reviews. For most mid-market compliance teams, that represents 1.5 to 2.5 full-time equivalent roles consumed entirely by refresh activity, much of it confirming that nothing has changed.
The alternative, at the other end of the spectrum, is worse. Some firms simply do not refresh at all. They onboard, they screen periodically against sanctions lists, and they wait for something to go wrong before they revisit the KYB file. This is not a compliance programme. It is a bet that the regulator will not look too closely.
Risk-tiered refresh scheduling changes the calculation
The regulatory expectation is not that you refresh everything on the same cycle. It is that your refresh frequency reflects the risk. FATF guidance, the EU AML Package, and the FCA's Financial Crime Guide all point in the same direction: a risk-based approach to ongoing due diligence, where the intensity and frequency of review is proportionate to the risk the customer presents.
In practice, this means building a tiered refresh framework with at least three levels.
High-risk entities: refresh every 6 to 12 months. This tier includes corporate customers with complex ownership structures, entities operating in or through high-risk jurisdictions, businesses where beneficial ownership involves PEPs or individuals with adverse media, and any entity where previous reviews have identified discrepancies or concerns. For these customers, a full refresh means re-verifying the corporate structure, confirming beneficial ownership against independent sources, rescreening all connected individuals, and reassessing the overall risk rating.
Medium-risk entities: refresh every 12 to 24 months. This is the bulk of most corporate customer bases. Established businesses with relatively straightforward structures, operating in well-regulated jurisdictions, with no material changes flagged by ongoing monitoring. The refresh here is lighter: confirm that registry data is consistent with what you hold, rescreen key individuals, and verify that the business activity profile remains as expected.
Low-risk entities: refresh every 24 to 36 months. Regulated entities, public companies with transparent ownership, long-standing customers with stable profiles. These still need periodic refresh, but the cadence can be longer and the scope narrower. The key is to ensure that event-driven triggers (ownership changes, jurisdiction changes, adverse media) can promote a low-risk entity to a faster refresh cycle at any point.
The tiering itself is not static. A customer's risk tier should be recalculated whenever new information emerges, whether from a scheduled refresh, a monitoring event, or an external trigger. A low-risk customer whose parent company is acquired by an entity in a high-risk jurisdiction should not wait 36 months for its next review.
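The tiered cadences and event-driven promotion described above can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation: the cadence values, the event names, and the one-tier-per-event promotion rule are all assumptions a firm would replace with its own policy.

```python
from datetime import date, timedelta

# Illustrative refresh cadences per risk tier, in months.
# The article gives ranges (6-12, 12-24, 24-36); these pick one value from each.
TIER_CADENCE_MONTHS = {"high": 6, "medium": 12, "low": 24}

# Hypothetical monitoring events that should promote an entity to a faster cycle.
PROMOTING_EVENTS = {"ownership_change", "jurisdiction_change", "adverse_media"}

def next_review_date(last_review: date, tier: str) -> date:
    """Schedule the next refresh from the entity's current tier."""
    return last_review + timedelta(days=30 * TIER_CADENCE_MONTHS[tier])

def apply_event(tier: str, event: str) -> str:
    """Promote the entity one tier when a material event lands."""
    if event not in PROMOTING_EVENTS:
        return tier
    return {"low": "medium", "medium": "high", "high": "high"}[tier]

# A low-risk entity whose parent is acquired in a high-risk jurisdiction
# moves up a tier, and its next review is pulled forward accordingly.
tier = apply_event("low", "ownership_change")
review = next_review_date(date(2025, 1, 1), tier)
```

The point of the sketch is that retiering is just data: when an event fires, the tier changes and the schedule recalculates, with no manual diary management.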
"We moved from a flat annual review to a three-tier model, and the effect was immediate. Our high-risk entities were getting reviewed more frequently than before, which the regulator explicitly welcomed. And our total review volume dropped by about 40%, because we were no longer spending time re-verifying stable, low-risk companies every twelve months."
Compliance director, UK payments firm
Automation turns refresh from a burden into a background process
Even with risk-tiered scheduling, the manual effort of gathering updated data, cross-referencing it against your records, and flagging discrepancies is substantial. This is where automation makes the difference between a refresh programme that works on paper and one that actually runs.
The refresh workflow has four stages, and each one can be partially or fully automated.
Stage 1: Data collection. Pull the latest corporate data from the relevant registries and commercial providers. Company status, registered address, director list, shareholder data, beneficial ownership filings. For UK companies, this means Companies House PSC data and filing history. For EU entities, it means the relevant national registry plus, where available, beneficial ownership register data. For multi-jurisdictional structures, it means routing queries to the right provider for each jurisdiction in the chain.
Stage 2: Change detection. Compare the freshly collected data against your existing records. Has the registered address changed? Are there new or removed directors? Has the beneficial ownership declaration changed? Has the company's filing status changed (for example, accounts overdue, or a winding-up notice filed)? Automated comparison flags the changes and classifies them by materiality: cosmetic (registered office moved within the same city), notable (new director appointed), or material (beneficial ownership changed, company status changed to dormant).
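The comparison logic in Stage 2 is simple to express. The sketch below uses a hypothetical field-to-materiality map; a real classification would follow the firm's own policy (for example, distinguishing a registered-office move within the same city from a move to a new jurisdiction).

```python
# Hypothetical materiality map: which field changed determines how
# the change is classified. Values here are illustrative only.
MATERIALITY = {
    "registered_address": "notable",
    "directors": "notable",
    "beneficial_owners": "material",
    "company_status": "material",
}

def detect_changes(held: dict, fresh: dict) -> list[tuple[str, str]]:
    """Compare stored KYB data against freshly collected registry data
    and return (field, materiality) pairs for anything that differs."""
    return [
        (field, level)
        for field, level in MATERIALITY.items()
        if held.get(field) != fresh.get(field)
    ]

held = {
    "registered_address": "1 Example St, London",
    "directors": ["A. Smith"],
    "beneficial_owners": ["A. Smith"],
    "company_status": "active",
}
fresh = dict(held, company_status="dormant")
detect_changes(held, fresh)  # [("company_status", "material")]
```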
Stage 3: Rescreening. Any new individuals identified in the updated data (new directors, new beneficial owners) need to be screened against sanctions, PEP, and adverse media databases. This should be fully automated. Existing individuals should be rescreened as well, since their status may have changed since the last check.
Stage 4: Risk reassessment and routing. Based on the changes detected and the screening results, automatically recalculate the entity's risk score. If the score has not changed materially, the refresh is complete and the record is updated with a new review date. If the score has changed, or if material changes were detected, the case is routed to an analyst for manual review with all the relevant context already assembled.
The goal is that the majority of refreshes, particularly for medium and low-risk entities where nothing material has changed, complete without any analyst involvement. The analyst's time is reserved for cases where something has actually changed and a human judgement is needed.
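Stage 4's routing decision can be reduced to a single function. The sketch below assumes an illustrative score-movement threshold and the change tuples from the comparison stage; the actual thresholds would be a documented policy choice.

```python
def route_refresh(changes: list[tuple[str, str]],
                  screening_hits: list[str],
                  old_score: int, new_score: int) -> str:
    """Decide whether a refresh completes automatically or is routed
    to an analyst. The 10-point threshold is an assumed policy value."""
    material = any(level == "material" for _, level in changes)
    score_moved = abs(new_score - old_score) >= 10
    if material or screening_hits or score_moved:
        return "analyst_review"
    return "auto_complete"

# Nothing material changed and the score barely moved: no analyst needed.
route_refresh([], [], old_score=42, new_score=44)   # "auto_complete"

# A beneficial ownership change always gets human eyes.
route_refresh([("beneficial_owners", "material")], [], 42, 44)  # "analyst_review"
```

Note the asymmetry: the automated path only ever closes cases where nothing material happened, so the failure mode of the automation is extra analyst work, not a missed change.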
An orchestrated approach is critical here. No single data provider covers every jurisdiction and entity type with equal depth. Routing verification queries to the best provider for each specific case, whether that is a domestic registry API, a commercial corporate structure provider, or a specialist in a particular jurisdiction, produces better coverage than relying on one source. An orchestration layer handles this routing automatically, pulling from multiple providers through a single integration point so that your refresh workflow does not need to manage a dozen different API connections.
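The routing idea can be illustrated with a toy provider map. The provider names below are placeholders, not a statement of which integrations any platform actually offers; the point is that the workflow calls one entry point and the routing happens behind it.

```python
# Hypothetical jurisdiction-to-provider map; real routing depends on
# the providers actually integrated behind the orchestration layer.
PROVIDERS = {
    "GB": "companies_house_api",
    "DE": "national_registry_de",
}
FALLBACK = "commercial_aggregator"

def provider_for(jurisdiction: str) -> str:
    """Pick the best-fit data source for one jurisdiction."""
    return PROVIDERS.get(jurisdiction, FALLBACK)

def plan_collection(ownership_chain: list[str]) -> dict[str, str]:
    """For a multi-jurisdictional structure, plan one lookup per
    jurisdiction in the chain, all behind a single entry point."""
    return {j: provider_for(j) for j in dict.fromkeys(ownership_chain)}

# A GB trading entity held via a German holding company and a BVI parent.
plan_collection(["GB", "DE", "VG"])
```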
Surfacing stale data before auditors do
The worst time to discover that your KYB data is outdated is during a regulatory inspection. The second worst time is during an internal audit. Both situations mean you are already behind.
Building a stale-data detection capability into your compliance operations is straightforward, but most firms do not do it. Here is what it looks like in practice.
Dashboard-level visibility. Your compliance dashboard should show, at a glance, how many corporate customers are overdue for refresh based on their risk tier. Not just the total number, but the breakdown by tier, by jurisdiction, and by days overdue. If 15% of your high-risk entities are more than 30 days past their refresh date, that is a metric your MLRO needs to see every morning.
Automated escalation. When a refresh is overdue by more than a defined threshold (for example, 30 days for high-risk, 60 days for medium-risk), the system should automatically escalate. This might mean reassigning the review to a senior analyst, flagging it for management attention, or, in extreme cases, triggering a restriction on the customer's activity until the review is completed.
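The dashboard and escalation logic together amount to one pass over the customer base. A minimal sketch, using the example thresholds above (the 90-day low-risk threshold is an added assumption, since the text only specifies high and medium):

```python
from datetime import date

# Days past the refresh due date before escalation, per tier.
# 30 and 60 come from the example above; 90 for low-risk is assumed.
ESCALATE_AFTER = {"high": 30, "medium": 60, "low": 90}

def overdue_breakdown(entities: list[dict], today: date) -> dict:
    """Group overdue entities by tier for the dashboard, and count
    those past the escalation threshold for that tier."""
    summary: dict[str, dict[str, int]] = {}
    for e in entities:
        days_over = (today - e["next_review"]).days
        if days_over <= 0:
            continue  # not yet due
        bucket = summary.setdefault(e["tier"], {"overdue": 0, "escalate": 0})
        bucket["overdue"] += 1
        if days_over > ESCALATE_AFTER[e["tier"]]:
            bucket["escalate"] += 1
    return summary

entities = [
    {"id": "co-1", "tier": "high", "next_review": date(2025, 1, 1)},
    {"id": "co-2", "tier": "medium", "next_review": date(2025, 5, 1)},
]
overdue_breakdown(entities, date(2025, 3, 1))
# co-1 is 59 days overdue, past the 30-day high-risk threshold;
# co-2 is not yet due.
```

The same function, sliced by jurisdiction instead of tier, gives the other dashboard view; the escalation action itself (reassignment, management flag, activity restriction) hangs off the `escalate` count.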
Audit-ready reporting. When the regulator asks, and they will ask, you need to be able to produce a report showing: which entities were refreshed, when, what data sources were used, what changes were found, and what actions were taken. This report should be generated from your system of record, not assembled manually from spreadsheets and email chains. If your refresh process runs through a single platform with a complete audit trail, this report is a query, not a project.
"Before we automated our stale-data reporting, it took us two weeks to prepare for a regulatory visit because we had to manually compile refresh evidence across multiple systems. Now we can generate the full report in under an hour. That alone justified the investment."
Head of financial crime, European e-money institution
The ROI of getting refresh right
Compliance teams are often asked to justify the cost of improving their refresh processes. The business case is clearer than most people think, because the cost of getting it wrong is both quantifiable and increasingly likely.
Consider the cost side first. A well-implemented, risk-tiered, automated refresh programme for a firm with 2,000 corporate customers will typically cost in the range of 1,200 to 1,800 analyst hours per year, compared to 3,000 to 5,000 hours for a flat annual refresh. That is a saving of 1,800 to 3,200 analyst hours, or roughly one to two full-time roles. At a fully loaded cost of GBP 55,000 to GBP 70,000 per analyst, the operational saving alone is GBP 55,000 to GBP 140,000 per year.
Now consider the risk side. AML fines related to ongoing due diligence failures have ranged from EUR 500,000 to well over EUR 10 million in recent enforcement actions across the EU and UK. The FCA's 2025 enforcement strategy explicitly identifies inadequate ongoing customer due diligence as a priority area. Even setting aside the direct financial penalty, the cost of remediation programmes (typically requiring external consultants, additional hires, and system upgrades under regulatory direction) routinely runs to several hundred thousand pounds and consumes 6 to 12 months of management attention.
The calculation is not complicated. The cost of implementing a proper refresh programme is a fraction of the cost of a single enforcement action. And unlike many compliance investments, this one also delivers operational efficiency by freeing analyst time from low-value reviews.
Start with what you have, then build
You do not need to implement the perfect refresh framework on day one. Start with three steps.
First, classify your existing corporate customer base into risk tiers if you have not already. Use your existing risk scoring methodology. The tiers do not need to be perfect; they need to be defensible and documented.
Second, identify how many entities in each tier are overdue for refresh based on the cadence you have defined. This gives you your current exposure and helps you prioritise the backlog.
Third, automate the data collection and change detection stages. This is where the largest time saving comes from, and it is the stage where automation is most mature. You can keep manual review for the risk reassessment stage while you build confidence in the automated scoring.
At Zenoo, we see teams move from a completely manual, flat-cycle refresh to a risk-tiered, partially automated model in 4 to 8 weeks. The full automation, including multi-provider orchestration and automated risk reassessment, typically takes another 4 to 6 weeks beyond that. The point is that you do not need to solve everything at once. Each stage delivers measurable improvement.