As AI becomes more embedded in due diligence, regulatory expectations are becoming clearer.

At XapienCon, compliance, legal and risk leaders across financial services, professional services, philanthropy, and other regulated sectors talked about how AI is already being assessed. The discussion focused on real audits, investigations, and supervisory conversations.

Across jurisdictions and industries, the direction of travel is consistent. Guidance from the US Department of Justice, principles applied by the FCA, and obligations emerging from the EU AI Act all point to the same conclusion. AI now sits inside the compliance environment and is no longer a separate or experimental category.

Here are some important lessons from the speakers for due diligence teams preparing for 2026.

Read more: What we learned at XapienCon: 10 lessons for due diligence in 2026

AI now belongs inside due diligence governance

Regulators increasingly expect AI to sit within established compliance frameworks, not alongside them.

DOJ guidance makes clear that the use of AI does not change an organisation’s responsibility to maintain adequate procedures. The FCA’s principles-based approach similarly focuses on outcomes and accountability, regardless of whether decisions involve automation. The EU AI Act reinforces this responsibility by requiring defined governance, risk management, and human oversight for higher-risk uses.

In practice, that means due diligence teams should be able to explain where AI is used, what role it plays, and who is accountable for its output. That understanding should come from policy, training, and day-to-day habits, not just vendor documentation.

Decisions need to be explainable to non-technical audiences

Explainability is becoming an expectation. Regulators are paying close attention to how conclusions are reached. They expect to see clear sources, documented reasoning, and evidence of human review. Outputs that can only be understood through vendor explanation or technical translation introduce avoidable risk.

This expectation reflects the EU AI Act’s emphasis on transparency and traceability, and it mirrors the FCA’s focus on understanding and control. AI tools that produce clear, sourced outputs allow teams to explain not just what decision was taken, but why it was reasonable at the time.

Explainability also strengthens internal decision making. When risk can be clearly interrogated and discussed, teams are better equipped to apply judgement, escalate appropriately, and defend outcomes.

Adequacy is frequently judged later

Due diligence decisions are rarely scrutinised when they are made.

Regulatory review often comes much later, when circumstances have changed and files are examined in hindsight. At that point, regulators assess whether the process was reasonable, proportionate, and appropriate given the risk profile at the time.

This is where concepts such as adequate procedures under the UK Bribery Act and DOJ guidance come in. Clear records, consistent approaches, and evidence of judgement matter more than speed when decisions are revisited.

In 2026, assume that routine decisions could be reviewed long after they felt routine.

Coverage needs to match real risk exposure

Many screening tools perform reliably in familiar, lower-risk markets. Risk tends to sit elsewhere.

Higher-risk jurisdictions often involve language barriers, limited public records, and complex corporate structures. Due diligence teams need to understand how their tools perform in these environments and where limitations exist.

Both the FCA’s focus on proportionality and the EU AI Act’s risk-based controls reinforce the need to align coverage with exposure. Regulators are increasingly interested in where tools perform least well, not where they are strongest.

Acknowledging limits, and knowing how they are addressed, helps build regulatory confidence.

Positive identification remains foundational

An idea that resonated across industries was that if a counterparty is asking to pay or be paid, they have a footprint.

That footprint may appear in a corporate registry, tax record, or commercial history. When a search returns little or no information, it should trigger closer examination, not reassurance.

Regulators expect teams to be able to positively identify who they are dealing with. Gaps in identification usually point to data limitations or unresolved identity issues rather than low risk.

In 2026, reliance on name matching alone will be harder to defend.

Informal AI use is becoming a regulatory risk

Several discussions highlighted a growing gap between formal tools and what teams revert to under pressure.

Regulators are increasingly aware of practices such as running names through public AI tools, using AI outside documented workflows, or treating AI checks as informal context. When AI influences a decision, regulators expect it to be governed.

There is no indication of an AI exemption emerging. If anything, informal use is becoming an additional risk factor.

Regulators are already aligned

While enforcement timelines differ, regulatory principles are converging.

The DOJ emphasises understanding and control. The FCA applies an outcomes-focused approach with no exception for automation. The EU AI Act formalises expectations around transparency, oversight and documentation.

Together, they underline that AI use in 2026 should be understood, governed, and reviewable.

What this means for due diligence teams

Regulators are now less interested in whether AI is used and more interested in whether its use is controlled, explainable, and proportionate to the risk present. Platforms that help teams demonstrate those qualities reduce uncertainty when decisions are revisited later.

Xapien is designed as a first-line due diligence platform that does exactly this. Reports are generated quickly, but they are built on positive identification, structured discovery and fully traceable sources. Every assertion can be inspected and understood without technical interpretation or reliance on the vendor.

It organises and summarises information that has already been established as relevant, making outputs easier to review, challenge, and defend, both internally and under regulatory scrutiny.

AI is now part of the compliance environment. Xapien makes that environment easier to navigate.

Discover the new standard in due diligence.