Set Up a Compliance Monitoring Programme That Works

February 20, 2026

Most compliance monitoring programmes fail quietly. Not because the firm does nothing, but because the work is disconnected: a test plan that does not match the Business Risk Assessment, QA checks that never reach the first line, and MI that looks busy but cannot evidence control effectiveness when an auditor asks the simple question: “So what changed?”

A compliance monitoring programme should do three jobs at once. It should find weaknesses before they become findings, prove that controls work in practice (not just on paper), and drive measurable improvement in onboarding quality and ongoing AML controls. If your programme is set up to tick a calendar rather than reduce risk, it will cost time and still leave exposure.

Compliance monitoring programme set-up: start with purpose, not a schedule

A good compliance monitoring programme set-up begins with a clear statement of intent: what decisions will monitoring enable, and what regulatory expectations must you be able to evidence?

For AML-driven organisations, monitoring typically needs to cover both the design of controls (are policies, procedures and risk models appropriate?) and their operating effectiveness (are staff actually doing what the policy says, consistently, with defensible rationale?). Regulators and auditors are rarely satisfied by “we have a procedure”. They look for repeatable practice, meaningful escalation, and records that show decisions were made using a risk-based approach.

Before building a testing plan, align on what “good” looks like for your firm. That might mean reducing incomplete KYC files, improving the quality of risk ratings, shortening onboarding time without lowering standards, or ensuring sanctions and PEP screening is evidenced and refreshed correctly. Your programme should then be able to show progress against those outcomes, not just activity.

Anchor monitoring to your risk assessments and regulatory perimeter

Monitoring scope should map directly to your Business Risk Assessment (BRA) and your regulatory obligations, including any sector-specific requirements (for example, gaming, payments, corporate services, or investment services). Where firms go wrong is treating monitoring as a generic AML checklist, independent of their actual exposure.

A risk-based scope is not about doing less. It is about focusing deeper testing where the firm could realistically suffer regulatory action, financial crime exposure, or reputational damage. If you onboard higher-risk customers, operate across multiple jurisdictions, or rely heavily on intermediaries, your monitoring programme should reflect that complexity.

In practice, this means building a control universe that connects:

  • your key risks (products, customers, geographies, delivery channels, third parties)
  • the controls you rely on to mitigate those risks (CDD/EDD, screening, transaction monitoring, ongoing reviews, training, governance)
  • how you will test them (file reviews, process walkthroughs, data checks, thematic reviews)

This mapping is the backbone of audit defensibility. It allows you to explain why you tested what you tested, and why you did not test something else as frequently.
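To make the mapping concrete, here is a minimal sketch of how a control universe could be captured as structured data. The risks, controls, and frequencies below are illustrative placeholders, not a recommended scope; yours must come from your own BRA.

```python
from dataclasses import dataclass

@dataclass
class ControlMapping:
    """One row of the control universe: a risk, the controls that
    mitigate it, and how those controls will be tested."""
    risk: str                # a key risk drawn from the BRA
    controls: list           # controls relied on to mitigate it
    test_methods: list       # file reviews, walkthroughs, data checks
    frequency: str           # driven by residual risk, not habit

# Illustrative entries only; the real universe comes from your BRA.
control_universe = [
    ControlMapping(
        risk="Customers in high-risk geographies",
        controls=["EDD procedures", "Sanctions/PEP screening"],
        test_methods=["file review", "data check on screening logs"],
        frequency="quarterly",
    ),
    ControlMapping(
        risk="Reliance on third-party introducers",
        controls=["Introducer due diligence", "Ongoing reviews"],
        test_methods=["process walkthrough", "thematic review"],
        frequency="semi-annual",
    ),
]

# Every planned test now traces back to a named risk, which is
# exactly the explanation an auditor will ask for.
for row in control_universe:
    print(f"{row.risk}: {row.test_methods} ({row.frequency})")
```

Even a spreadsheet with these four columns achieves the same traceability; the format matters far less than the linkage.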

Define ownership and governance that holds up under scrutiny

A monitoring programme cannot be “owned by compliance” in a vague sense. Regulators expect clear accountability across lines of defence. Compliance may design and run the programme, but the first line must own remediation, and senior management must own risk acceptance.

Set governance early: who approves the annual monitoring plan, who receives reports, and who can challenge or overrule remediation timelines? For higher-risk firms, it is often appropriate for a board or risk committee to see monitoring themes and overdue actions, not just a summary.

Be disciplined about conflict management. If compliance is both advising the business and testing it, document how independence is protected in practice, such as second-review sign-off for high-impact findings or periodic independent internal audit coverage.

Build a testing approach that matches the control

Different controls require different testing. A single method (usually file sampling) is rarely enough to evidence effectiveness across an AML framework.

File reviews are essential for CDD quality, risk scoring rationale, source of funds/source of wealth documentation, and EDD triggers. But they should be complemented by walkthroughs for process adherence (for example, how onboarding decisions are made, how exceptions are approved), and by data-led checks for controls that are system-driven (screening, alert handling, periodic reviews).

Sampling is where “it depends” matters. Small firms can test a meaningful proportion of higher-risk files each quarter, while larger firms may need statistically informed sampling or risk-weighted selection. Either way, avoid vanity sampling that spreads thinly across low-risk files just to create volume. Your samples should be explainable: selected due to higher risk rating, high-risk geography, complex ownership, unusual transactional behaviour, or reliance on third-party introducers.
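As a sketch of what explainable, risk-weighted selection can look like, the stratified approach below sets deeper quotas for higher-risk files. The ratings and quota numbers are assumptions for illustration; calibrate them to your own population and risk appetite.

```python
import random

# Hypothetical file population; ratings would come from your risk model.
files = [
    {"id": "C-1001", "risk_rating": "high"},
    {"id": "C-1002", "risk_rating": "low"},
    {"id": "C-1003", "risk_rating": "medium"},
    {"id": "C-1004", "risk_rating": "high"},
    {"id": "C-1005", "risk_rating": "low"},
]

# Group files by rating, then sample deeper where exposure is higher.
by_rating = {"high": [], "medium": [], "low": []}
for f in files:
    by_rating[f["risk_rating"]].append(f)

# Illustrative quotas: most of the sample sits in higher-risk strata.
quotas = {"high": 2, "medium": 1, "low": 1}

sample = []
for rating, pool in by_rating.items():
    sample.extend(random.sample(pool, min(quotas[rating], len(pool))))

# Each selection is explainable by its stratum, not by chance alone.
for f in sample:
    print(f["id"], f["risk_rating"])
```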

Also decide early how you will rate results. A common pitfall is grading each file but never reaching a clear view on the control itself. You need both: file-level issues and control-level conclusions. For instance, repeated weak rationales for risk ratings may indicate a training and procedure clarity problem, not just individual errors.
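A simple way to force that second conclusion is to roll file-level results up into an explicit control-level verdict. The thresholds below are illustrative assumptions, not a standard; set them to reflect your own tolerance.

```python
# Illustrative results for one control: "risk rating rationale recorded".
file_results = [
    {"file": "C-1001", "passed": False, "issue": "weak rationale"},
    {"file": "C-1003", "passed": True, "issue": None},
    {"file": "C-1004", "passed": False, "issue": "weak rationale"},
]

fail_rate = sum(not r["passed"] for r in file_results) / len(file_results)

# Illustrative thresholds only; calibrate to your risk appetite.
if fail_rate >= 0.3:
    conclusion = "ineffective: pattern suggests training/procedure gap"
elif fail_rate > 0:
    conclusion = "partially effective: isolated errors, monitor"
else:
    conclusion = "effective"

print(f"Control conclusion ({fail_rate:.0%} fail rate): {conclusion}")
```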

Establish evidence standards that make findings defensible

Monitoring is only as strong as its working papers. If you cannot show how you reached a conclusion, you should assume it will not stand up to an audit challenge.

Define minimum evidence requirements for each test type. For file reviews, document exactly what you looked at (screenshots are not always necessary, but references to system notes, timestamps, and document versions are). For walkthroughs, capture who was interviewed, what process steps were observed, and how you verified statements. For data testing, keep the query logic, parameters, and the output sample used.

Be careful with subjectivity. Statements such as “CDD is satisfactory” are weak unless tied to specific criteria. Use consistent review templates with clear pass/fail thresholds and a field for rationale. This is not bureaucracy – it is what allows your programme to be repeatable, trainable, and credible.
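As one possible shape for such a template, the sketch below makes every criterion and the rationale mandatory, so no review can be filed as a bare "satisfactory". The criterion wording is illustrative; draw yours from policy and procedure.

```python
# Illustrative review criteria; yours come from policy and procedure.
CRITERIA = [
    "Identity verified against approved document list",
    "Beneficial ownership established and evidenced",
    "Risk rating rationale recorded and defensible",
    "Screening performed and results dispositioned with notes",
]

def review_file(file_id: str, outcomes: dict, rationale: str) -> dict:
    """Record one file review; every criterion needs an explicit result."""
    missing = [c for c in CRITERIA if c not in outcomes]
    if missing:
        raise ValueError(f"Unscored criteria: {missing}")
    if not rationale.strip():
        raise ValueError("A rationale is required; 'satisfactory' alone fails")
    return {
        "file": file_id,
        "passed": all(outcomes[c] for c in CRITERIA),
        "outcomes": outcomes,
        "rationale": rationale,
    }
```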

Turn findings into action that the first line can deliver

Monitoring only reduces risk if issues are translated into practical remediation. That requires findings written in operational language, with clear cause and consequence.

A well-formed finding explains: what happened, why it matters, how often it happens, and what needs to change. It also distinguishes between isolated error and systemic breakdown. If you identify missing beneficial ownership evidence on several corporate clients, the remediation may not be “tell staff to be careful”. It may be a clearer ownership decision tree, a system validation rule, or a revised EDD trigger that forces review for complex structures.

Set up an action tracking mechanism with ownership, deadlines, and status. But keep it proportionate. Over-engineered trackers collapse under their own weight and lead to performative updates. What matters is that actions close, are tested for effectiveness, and are re-opened if the issue persists.
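A tracker does not need workflow software to work. As a minimal sketch (field names and statuses are assumptions), something this small already answers the questions that matter: who owns it, when is it due, and has it been retested?

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RemediationAction:
    finding: str
    owner: str            # a first-line owner, not "compliance"
    due: date
    status: str = "open"  # open -> closed -> retested (or reopened)

    def is_overdue(self, today: date) -> bool:
        return self.status == "open" and today > self.due

actions = [
    RemediationAction(
        finding="Beneficial ownership evidence missing on corporate files",
        owner="Head of Onboarding",
        due=date(2026, 4, 30),
    ),
]

# Overdue items, not raw volume, are what governance should see.
overdue = [a.finding for a in actions if a.is_overdue(date.today())]
print(overdue)
```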

Reporting that informs decisions, not just compliance comfort

MI and reporting should be built for the people who can change outcomes: onboarding teams, operations leaders, the MLRO, and senior management. A report that reads like a checklist will not change behaviour.

Effective reporting combines themes, risk impact, and trend direction. It highlights repeat issues, control drift, and areas where policy is not reflected in practice. It also distinguishes between design gaps (policy/procedure or system gaps) and execution gaps (training, capacity, supervision).

Avoid reporting that is all red/amber/green without context. If you rate a thematic review as “amber”, explain what that means for the firm’s exposure and what is being done about it. If you are seeing improvement, show it with credible measures, such as reduced rework rates in onboarding, fewer EDD breaches, improved timeliness of periodic reviews, or reduced false positives due to tuned screening parameters.

Bake in regulatory change without rebuilding the programme every year

Regulatory change is constant, particularly for AML obligations, sanctions regimes, and supervisory expectations. The monitoring programme should not be rewritten each time guidance updates, but it must be sensitive to change.

A practical approach is to maintain a small “change lens” within the programme: a process to identify new or updated requirements, assess impact on controls, update testing criteria, and schedule targeted thematic reviews. This keeps the programme relevant while avoiding disruption.
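A change lens can be as light as one structured record per change, reviewed on a fixed cycle. The fields below are an illustrative sketch of the minimum worth capturing, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class RegulatoryChange:
    """One entry in the change lens: what changed and what it touches."""
    source: str             # e.g. updated guidance, new sanctions regime
    affected_controls: list # entries from the control universe
    testing_impact: str     # criteria update or targeted thematic review
    review_scheduled: str   # when the targeted testing will run

change_log = [
    RegulatoryChange(
        source="Updated screening guidance (illustrative)",
        affected_controls=["Sanctions/PEP screening"],
        testing_impact="Refresh test criteria; thematic review of alerts",
        review_scheduled="Q3",
    ),
]
```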

It also helps to maintain a forward-looking view of your control environment. If you are introducing a new onboarding platform, outsourcing screening, or expanding into a new geography, monitoring should treat that as a risk event. Test early, test more frequently, and shorten the feedback loop while the change beds in.

Common set-up mistakes and how to avoid them

The most expensive mistakes tend to look harmless at the start.

One is treating monitoring as the same as onboarding QA. QA can be valuable, but compliance monitoring must go beyond completeness checks and prove adherence to risk-based decisions, escalation, and governance.

Another is producing a plan that is not achievable with available resources. If the plan is not delivered, credibility suffers and risks remain untested. A smaller plan executed well, with deeper testing in higher-risk areas, is usually safer than a broad plan that slips every quarter.

A third is failing to retest. Without retesting, you cannot evidence that remediation worked, and you may repeat the same findings each year. Retesting should be built into the calendar from the outset.

When to bring in specialist support

If your firm has had an audit finding, a regulatory enquiry, rapid growth, or material process change, external support can help you reset the programme quickly without losing operational momentum. Specialist reviewers can also provide an independent view on whether your testing approach, evidence, and governance are proportionate to your risk profile.

Where this is done well, the output is not a thick report that sits on a shelf. It is a monitoring framework that your teams can run, with reporting that drives action. If you want a partner to help design or uplift a monitoring programme with clear, implementable recommendations, Complipal supports AML compliance and due diligence teams with advisory-led monitoring and internal audit work built for regulatory scrutiny.

A closing thought

Set up your monitoring programme so that it makes you slightly uncomfortable in the right places. If it only confirms that everything is fine, it is probably not testing the controls that matter most.