On March 31st, our head of Comms and Marketing, Jess Denny, ran a session at the Ethisphere Global Ethics Summit in New York. The workshop was called AI in Third Party Due Diligence: Eliminating the trade-off between risk control and speed. The room was full of compliance, legal, and risk professionals, including compliance officers from some of the world’s largest organisations. Here’s what we took away from the experience:

1. The appetite to implement AI is real, but so is the uncertainty

Almost everyone in the room was feeling pressure from board-level leadership to introduce AI into their compliance programmes. The real question was how to adopt AI safely, defensibly, and in a way that integrates well with their current workflows.

What struck us was how consistent that uncertainty was across very different organisations. Whether someone was running a lean compliance team or managing a global programme, the core tensions were the same: speed versus rigour, board expectations versus operational reality, and a genuine concern about AI implementation going wrong.

2. Starting from a shared definition

One of the first things Jess did was establish common ground on what we actually mean by “AI.” There’s a significant difference between general-purpose tools like ChatGPT or Copilot and purpose-built platforms designed specifically for due diligence workflows. Generic AI models can hallucinate, lack consistency, and largely operate as ‘black boxes’, meaning users cannot see where their outputs are drawn from. In a regulated context, compliance outputs must be traceable, auditable, and repeatable, not inconsistent and opaque.

3. What defensibility actually requires

The FCPA’s September 2024 guidance made something clear: regulators are already assuming AI is part of your compliance programme. But crucially, there is no AI exemption. Responsibility still sits with the Chief Compliance Officer. If you’re using AI, you need to be able to explain how it fits in your compliance workflow and why you’re using it.

That means asking the right questions of any tool you’re evaluating. Is it accurate? Not just “does it find things?”, but does it avoid both false positives and missed risks? Is it explainable, so you can walk your board or a regulator through how it reached a conclusion? Is it consistent, producing the same structured output each time? Can it be embedded in your current processes?

4. Purpose-built AI supports human judgement

In the interactive part of the session, attendees worked in small groups to map out their own due diligence workflows. They identified where the biggest bottlenecks were and at which points AI could serve as an accelerator. By the end, nearly everyone had identified specific, concrete ways AI could integrate into their existing work. They were clear that AI platforms would not fully replace human judgement, but would support and scale it.

What comes next

The sense in the room was clear: boards want AI, and the compliance function is still working out how to adopt it. This session aimed to close the gap between those two positions. To further support the attendees, we gave them all copies of our AI Buyer’s Guide, a practical resource for compliance teams evaluating AI tools. If you’re starting that process, you can read Part 1 here.