Reflections from the C5 panel featuring our Global Partnerships Director Emily Morgan, Anna Chimerek (Compliance Director, AbbVie), and Sarah Woodget (Chief Compliance Officer, News UK), moderated by Samer Jannoun (Head of Regional Ethics & Compliance, Special Oversight, Meta)
The conversation about AI in compliance has shifted. Two years ago, the question on most compliance leaders’ minds was whether to use AI at all. Today that question has been settled: employees are already using it, vendors are already harnessing it, and regulators are already asking how it is being governed. The harder and more interesting question is no longer whether to implement AI, but how. Specifically, organisations are asking how to do so in a manner that is measurable and defensible, and that actually delivers.
The C5 panel, composed of compliance leaders across pharma, media, and tech, sketched out the landscape. In their eyes, the technology may be new, but the principles of good compliance still apply.
Compliance is the natural governance layer
The panel was unanimous on one thing: compliance teams are uniquely equipped to lead AI governance. They already operate the frameworks AI governance demands: risk classification, escalation pathways, accountability structures, audit trails. In many ways, AI usage is simply a new risk to be assessed, controlled, and reviewed. Compliance has done this before. What is unfamiliar is the speed at which the underlying technology is changing and the scale at which it is being rolled out, both of which raise major structural questions that organisations are working hard to solve.
Ownership, shadow AI, and the problems still to solve
When compliance, legal, privacy, and cyber all have some claim to AI governance, the practical effect is often that no function fully owns it. The most mature organisations represented on the panel had moved past this through cross-functional AI governance committees with board-level visibility, treating AI risk as comparable to any other category of enterprise risk.
This intentional governance matters, especially given the rise of “shadow AI,” which the panel flagged as the most immediate and least controlled risk on the corporate agenda. Employees and contractors are using consumer or unapproved tools, often without understanding the data residency, privilege, or confidentiality implications. Top-down policy alone does not solve this. Governance has to be embedded into culture from the ground up, and built into procurement and training processes.
Another theme was that using AI does not relieve a company of liability when things go wrong. On this point, panellists drew on a real employment screening case in which an AI tool was found to have systematically discriminated against applicants, and the company, not the vendor, was the responsible party. Regulators and courts are increasingly clear that the deploying organisation is accountable for outcomes, whoever built the tool. That makes proactive testing before go-live, documented human oversight, and full auditability baseline requirements. “The computer says no” is not a defensible position.
For media organisations, Sarah Woodget surfaced a further concern: AI-generated content is increasingly being ingested by AI tools, creating the prospect of feedback loops in which compounding errors propagate through the research pipeline. Detection tools exist, but they are imperfect. Human judgement and source verification remain essential, and are arguably more valuable now than ever before.
What measurable deployment actually looks like
So what does responsible deployment look like in practice? The panel offered two concrete examples. Anna Chimerek described an AI voice-transcription service designed to capture and flag issues in field interactions with healthcare professionals. It was built cross-functionally with IT, legal, and privacy, reflecting the benefit of distributing AI governance across teams, and included features that could be localised to each geography. Emily Morgan demonstrated how Xapien compresses due-diligence research that previously took days into 20 minutes, with full source traceability and multilingual coverage.
Both illustrate the same principles of responsible AI implementation: cross-functionality, clear ownership, documented oversight, evidence trails, and outcomes that can be tested.
The questions worth asking your vendor (and yourselves)
The most useful artefact from the discussion was a checklist: the questions every compliance officer should be asking their AI vendor. They cluster around seven properties; a short sketch of how the first three are typically measured follows the list.
- Accuracy: does the system produce correct outputs, and how is that measured?
- Precision: does it avoid surfacing irrelevant noise?
- Recall: does it avoid missing what matters?
- Source traceability: can you show where every claim comes from, in a form an auditor or a regulator would accept?
- Explainability: can you describe how the system reached its conclusion?
- Configurability: can the system be adapted to fit your needs and existing processes?
- Consistency: will it behave the same way next quarter?
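To ground the first three properties, here is a minimal sketch of how accuracy, precision, and recall are conventionally computed for a flagging tool, measured against human-reviewed outcomes. The figures and labels below are purely illustrative, not drawn from any panellist’s system.

```python
# Minimal sketch: measuring accuracy, precision, and recall for a
# hypothetical compliance flagging tool. Labels are illustrative only.

# Ground truth from human reviewers (True = genuine issue) and the tool's flags.
ground_truth = [True, True, False, False, True, False, False, True]
tool_flags   = [True, False, False, True, True, False, False, True]

pairs = list(zip(ground_truth, tool_flags))
tp = sum(t and f for t, f in pairs)            # genuine issues the tool flagged
fp = sum((not t) and f for t, f in pairs)      # noise: flags with no real issue
fn = sum(t and (not f) for t, f in pairs)      # misses: real issues left unflagged
tn = len(pairs) - tp - fp - fn                 # non-issues correctly left alone

accuracy = (tp + tn) / len(pairs)   # how often the tool is right overall
precision = tp / (tp + fp)          # of everything flagged, how much was real?
recall = tp / (tp + fn)             # of everything real, how much was flagged?

print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f}")
```

The trade-off is the point of asking both questions: a tool tuned only for precision stays quiet and misses what matters, while one tuned only for recall buries reviewers in irrelevant noise.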
Vendors who cannot answer these questions cleanly, with documentation, are not yet ready for compliance-grade deployment. Vendors who can are the ones worth working with.
The gap is closing
The encouraging signal from the panel is that the gap between AI enthusiasm and measurable deployment is closing, particularly in the organisations that took governance seriously from the start. These leaders are not waiting for regulators to dictate the standard; they are setting it themselves. They are intentionally maintaining human oversight of AI tools, and are asking the right questions before buying a new platform.
These leaders are doing what good compliance functions have always done: insisting that what is deployed is what was designed, and that what was designed can stand up to scrutiny, regulatory or otherwise.
This legacy of due diligence is a model worth following.