AML Controls Testing Plan That Stands Up
When an AML programme fails, it is rarely because a policy was missing. It fails because controls were assumed to work – and no-one could prove they did. The uncomfortable moment usually arrives in an audit, a regulatory visit, or after a suspicious activity event when you need to demonstrate what happened, why it happened, and what your governance did about it.
A controls testing plan for AML programmes is how you turn good intentions into evidence. Done properly, it gives senior management confidence that onboarding decisions are consistent, monitoring is meaningful, escalation works, and deficiencies are found early – before they become findings.
What a controls testing plan is (and what it is not)
Controls testing is not a one-off “health check” and it is not a collection of screenshots gathered the week before an audit. It is a structured approach to verifying that AML controls are designed appropriately, operating as intended, and producing reliable outcomes.
The nuance is important. Design effectiveness asks: if the control is followed, would it reasonably prevent or detect the risk? Operating effectiveness asks: is it actually being followed, consistently, by the right people, using the right tools and data? A plan that tests only one of these gives you false comfort.
In practice, a defensible plan ties directly to your Business Risk Assessment (BRA) and your AML/CFT obligations. It prioritises the controls that protect your highest-risk decisions – client acceptance, beneficial ownership, ongoing monitoring, sanctions screening, PEP treatment, and escalation to the MLRO.
Start with a risk-based scope that executives can defend
The scope should not be “everything we do in AML”. It should be the control areas that matter most for your risk profile and regulatory exposure.
Begin by mapping your key risks to control objectives. For example, if your BRA identifies elevated exposure to cross-border payment flows, the control objectives might include: screening quality at onboarding, transaction monitoring calibration, and timely escalation of unusual activity.
Then decide what sits in scope for this cycle. A quarterly cycle may focus on high-risk onboarding and sanctions, while semi-annual testing may cover training, governance reporting, and quality assurance. The trade-off is coverage versus depth. Smaller teams often try to test too many controls lightly; regulators generally prefer fewer controls tested properly, with clear evidence and remediation.
A practical way to set scope is to apply three filters: inherent risk (where harm is most likely), change (new products, vendors, rules, or volumes), and known weakness (previous findings, incidents, or high override rates).
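The three filters can be applied as a simple scoring exercise. A minimal sketch in Python, assuming illustrative control areas and a 1–3 scale per filter; the names, weights, and scores are not a prescribed methodology:

```python
from dataclasses import dataclass

@dataclass
class ControlArea:
    name: str
    inherent_risk: int   # 1-3: where harm is most likely
    change: int          # 1-3: new products, vendors, rules, or volumes
    known_weakness: int  # 1-3: prior findings, incidents, override rates

    @property
    def priority(self) -> int:
        # Equal weighting is an assumption; adjust to your risk appetite.
        return self.inherent_risk + self.change + self.known_weakness

# Illustrative scores only.
areas = [
    ControlArea("Sanctions screening", 3, 2, 1),
    ControlArea("High-risk onboarding", 3, 1, 3),
    ControlArea("Training completion", 1, 1, 1),
]

# Highest-priority areas go into this cycle's scope.
for area in sorted(areas, key=lambda a: a.priority, reverse=True):
    print(area.name, area.priority)
```

Even a rough scoring table like this forces the scope discussion to be explicit and gives executives something concrete to defend.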
Define the control universe in plain language
A common reason testing plans fail is that control descriptions read like policy statements rather than testable activities. If a control cannot be observed, it cannot be tested.
For each control, write a short statement that includes the trigger, owner, action, and evidence. “Compliance reviews high-risk files” is not enough. “A second-line reviewer completes a documented CDD quality review for all high-risk onboardings before activation, evidencing sign-off in the case management tool” is testable.
This is also where you clarify whether the control is preventive (stops a risk event) or detective (identifies it after the fact). Both are valid, but they are tested differently and have different tolerance for timing.
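The trigger, owner, action, and evidence elements can be captured as a structured record, which makes gaps obvious before testing starts. A sketch with a hypothetical control entry; the field names and example values are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Control:
    control_id: str
    trigger: str
    owner: str
    action: str
    evidence: str
    control_type: str  # "preventive" or "detective"

    def is_testable(self) -> bool:
        # A control can only be tested if every element is populated.
        return all([self.trigger, self.owner, self.action, self.evidence])

# Hypothetical entry based on the CDD review example above.
cdd_review = Control(
    control_id="CDD-02",
    trigger="High-risk onboarding approved, pre-activation",
    owner="Second-line reviewer",
    action="Documented CDD quality review",
    evidence="Sign-off recorded in the case management tool",
    control_type="preventive",
)
print(cdd_review.is_testable())  # True
```

A control universe held in this shape also makes it trivial to report coverage: which controls were tested this cycle, which are preventive versus detective, and which have no observable evidence at all.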
Decide what “good” looks like before you start testing
Testing collapses when pass/fail criteria are vague. Your plan should define the standard for each control, including timeliness, completeness, and escalation thresholds.
Take ongoing monitoring as an example. “Alerts are reviewed” is too loose. Better criteria might specify that alerts are dispositioned within a defined timeframe, that rationale is recorded, that supporting evidence is attached, and that escalation occurs when risk indicators are met.
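Explicit criteria like these can be expressed as a pass/fail check over alert records. A sketch assuming a hypothetical alert record shape and an illustrative five-day SLA; your fields and thresholds will differ:

```python
from datetime import datetime, timedelta

SLA = timedelta(days=5)  # illustrative review deadline

def alert_passes(alert: dict) -> bool:
    reviewed = alert.get("reviewed_at")
    if reviewed is None:
        return False
    on_time = reviewed - alert["raised_at"] <= SLA
    has_rationale = bool(alert.get("rationale", "").strip())
    has_evidence = bool(alert.get("evidence_refs"))
    # Escalation is only required when risk indicators are met.
    escalated_ok = (not alert["risk_indicators_met"]
                    or alert.get("escalated", False))
    return on_time and has_rationale and has_evidence and escalated_ok

alert = {
    "raised_at": datetime(2026, 1, 5),
    "reviewed_at": datetime(2026, 1, 8),
    "rationale": "Pattern consistent with payroll run; no adverse media.",
    "evidence_refs": ["case-1812"],
    "risk_indicators_met": False,
}
print(alert_passes(alert))  # True
```

The value is not the code itself but the discipline it enforces: every criterion in the standard has to be stated precisely enough to evaluate.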
This is where it “depends” by sector. A payments firm with high volumes might justify time-to-review metrics and triage rules; a corporate service provider may focus more on narrative quality and source of wealth substantiation. The point is to make the standard explicit so testing is fair, consistent, and repeatable.
Build a sampling approach that matches the risk
Sampling is not just a number. It is a rationale.
For high-risk controls, judgemental sampling often makes more sense than purely random selection. You want to see the edge cases: high-risk geographies, complex ownership, unusually high transaction velocity, manual overrides, and repeat alerts.
For stable, low-risk controls, random samples can be efficient and still credible. But even there, include a “change” lens. If you have a new screening tool, a new onboarding workflow, or a new outsourcing arrangement, increase coverage temporarily.
Your plan should document sample sizes, selection method, period covered, and data source. If you cannot reproduce the population you sampled from, you will struggle to defend the work later.
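A reproducible draw can combine a judgemental layer (edge cases always included) with a seeded random fill, and record the population and method alongside the sample. A sketch under assumed field names and sample sizes:

```python
import random

def draw_sample(population, n, seed, period):
    # Judgemental layer: always include manual overrides (edge cases first).
    musts = [c for c in population if c.get("manual_override")]
    rest = [c for c in population if not c.get("manual_override")]
    rng = random.Random(seed)  # fixed seed so the draw can be reproduced
    fill = rng.sample(rest, max(0, min(n - len(musts), len(rest))))
    chosen = musts + fill
    return {
        "population_size": len(population),
        "sample_size": len(chosen),
        "selection_method": f"overrides plus random fill, seed={seed}",
        "period": period,
        "items": sorted(c["case_id"] for c in chosen),
    }

# Illustrative population: 250 cases, one manual override in every fifty.
population = [
    {"case_id": f"C-{i:04d}", "manual_override": i % 50 == 0}
    for i in range(250)
]
record = draw_sample(population, n=25, seed=2026, period="2026-Q1")
print(record["sample_size"], record["selection_method"])
```

Storing the seed, period, and population size with the sample is what lets you reproduce the selection later and defend it under challenge.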
Specify the testing methods and the evidence you will accept
A strong plan avoids fuzzy methods. For each control, state whether you will test via walkthrough, inspection, reperformance, observation, or data analysis.
Walkthroughs help validate understanding and identify hidden workarounds, but they are not sufficient on their own. Inspection of case files, system logs, and approvals gives you the evidence trail. Reperformance – for example, rescreening a sample of clients against the sanctions and PEP rules in place at the time – can be powerful where tool configuration is a concern.
Evidence standards should be clear. Screenshots without identifiers or timestamps are weak. Better evidence includes system-generated audit trails, case notes showing rationale, policy versions applicable at the time, and proof of independent review where required.
Also decide how you will handle tools that are partially automated. If monitoring is vendor-driven, your plan should still test your governance over the vendor: tuning decisions, model changes, back-testing, and the controls around data quality.
Don’t ignore governance controls – they are often the finding
Regulators frequently focus on whether AML is governed, not just performed.
Include testing of: MLRO escalation pathways, suspicious activity decision records, board or committee reporting quality, management information (MI) accuracy, training completion and effectiveness, and the closure of previous actions. These controls can feel secondary, but they are often where accountability breaks down.
A practical test is to take a sample of issues raised in the last period and trace them end-to-end: when identified, who owned them, whether deadlines were met, and whether the fix actually addressed root cause.
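The end-to-end trace can be reduced to a handful of checks per issue. A sketch assuming a hypothetical issue-log shape with owner, deadline, and root-cause fields; adapt the fields to whatever your action log actually records:

```python
from datetime import date

def trace_issue(issue: dict) -> dict:
    closed = issue.get("closed_on")
    return {
        "id": issue["id"],
        "has_owner": bool(issue.get("owner")),
        "met_deadline": closed is not None and closed <= issue["due_by"],
        "root_cause_addressed": issue.get("root_cause_fixed", False),
    }

# Illustrative entry from a prior-period issue log.
issue = {
    "id": "FIN-014",
    "owner": "Head of Onboarding",
    "identified_on": date(2026, 1, 10),
    "due_by": date(2026, 3, 31),
    "closed_on": date(2026, 2, 20),
    "root_cause_fixed": True,
}
print(trace_issue(issue))
```

Running this over last period's issue log quickly surfaces the governance failures that matter: unowned actions, missed deadlines, and fixes that never touched root cause.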
Plan the cadence, ownership, and independence
Testing that is always postponed is a signal of poor control culture. Set a cadence that matches your business rhythm and risk. Many firms benefit from quarterly thematic testing with a rolling annual plan, so you avoid the “annual scramble” and get earlier feedback.
Define roles clearly: first line owns controls, second line sets standards and performs oversight testing, and internal audit provides independent assurance. Smaller firms may combine responsibilities, but you should still document how you avoid marking your own homework – for example, peer review, MLRO sign-off, or external support for higher-risk areas.
Where resource is a constraint, be honest and risk-based. It is better to have a plan that can be delivered than an ambitious schedule that becomes performative.
Reporting that leads to change, not paperwork
Testing results should be written for decision-makers. That means moving beyond “non-compliant” labels and explaining impact.
A useful finding explains: what failed, how often, why it failed, and what risk it creates. It also distinguishes between control design gaps (the process cannot work as written) and operating gaps (the process is fine but not followed).
Ratings should be consistent and tied to remediation urgency. If everything is “high”, nothing is. If everything is “low”, your plan loses credibility.
Most importantly, include actionable recommendations that fit the business. If onboarding quality is inconsistent, the remedy might be clarifying risk acceptance criteria, improving checklists, tightening second-line reviews for specific triggers, or changing system validations. The right answer depends on your operating model and volumes.
Remediation tracking is part of the plan
A controls testing plan is incomplete if it ends at reporting. Define how actions will be logged, owned, time-boxed, and verified.
Verification should test whether the fix works in practice. If you update a CDD template, you should re-test a sample of new files. If you tune transaction monitoring rules, you should validate alert quality and coverage after the change.
Where remediation is delayed, your plan should require an agreed risk acceptance statement and interim controls. This is often what protects you in regulatory discussions: not perfection, but disciplined governance.
When to bring in external support
There are scenarios where independent support materially strengthens your position: preparing for a regulatory inspection, responding to a significant incident, validating a new monitoring system, or addressing repeat findings.
An external team can also help you benchmark what “good” looks like across similar firms and challenge internal assumptions. If you need a partner to design or execute risk-based testing with clear, regulator-ready reporting, Complipal supports organisations with AML internal audit and controls assurance that is built for real operational uplift.
A sensible approach is to keep routine testing in-house, then use targeted independent reviews for higher-risk themes or where independence is essential.
Make the plan live, not static
A testing plan should evolve as your risks change. If your product mix shifts, if you enter a new market, if typologies change, or if regulatory expectations tighten, your testing priorities should move with them.
The best plans are used in management conversations throughout the year – not filed away until year-end. If your controls testing creates earlier visibility, clearer ownership, and faster fixes, it becomes a resilience tool, not an administrative burden.
Choose one control area you would not want to explain under pressure, and test it properly this quarter. That simple discipline is how AML programmes mature in a way regulators, auditors, and boards recognise.