Reshma Raja from RPC on 5 guardrails for adopting AI safely
AI is on the radar for many law firms this year. In fact, over 60% of large firms are actively exploring its potential. However, there’s still hesitation around how to use the technology safely while adhering to the SRA Code of Conduct. After all, law firms have a responsibility to protect their clients’ data and maintain high-quality legal services.
But with the right guardrails in place, firms can harness AI’s capabilities without risking client privacy or their own reputation. In a recent webinar, we brought together Janis Wong, Policy Advisor of Data and Technology Law at The Law Society, and Reshma Raja, General Counsel and Partner at RPC, to discuss risk management strategies.
Missed the live event? Here are our key takeaways from the conversation.
1. Prioritise data protection and privacy
Data protection is one of the biggest risks lawyers consider when it comes to using AI. Regulatory frameworks are a key concern, as Raja points out. Many clients also have specific requirements about where their data is hosted, especially when AI tool suppliers are involved, and it’s often unclear where that data will be stored, whether within the EU or elsewhere.

Client confidentiality and consent are fundamental to the SRA Code of Conduct, standards that lawyers uphold in every aspect of their work. Raja emphasises that “you can’t just bury that in the terms and conditions you use for a client on what data is going to go on an AI tool. It has to be very much informed consent. The client needs to know what tools are being used and what data you’re using.”
2. Implement robust information security testing
Robust information security testing is crucial, especially when onboarding suppliers. Raja points out that while law firms typically maintain strong protocols, the rise of AI makes rigorous security even more essential: it’s about making sure the tools are secure and the data put into them is protected.

Discrimination and bias are also a big concern, Raja notes, because the quality of AI output depends heavily on the input. This highlights the need for proper training, so that users input content that is neither offensive nor discriminatory. The goal is to use AI tools that minimise bias, although Raja acknowledges that some potential for bias may still exist. It comes down to understanding how a tool works and being able to explain it clearly, not just to users, but to clients and stakeholders too.
3. Maintain human judgement and curiosity
Overreliance on AI and the loss of human judgement concern Raja. “You can’t get away from the fact that there is no human brain behind it,” she said. In legal services, people buy people. Raja elaborates that through the relationships she built while fee-earning, she got to know her clients and how their businesses worked. She remains unconvinced that an AI tool can replicate this understanding, and is concerned that relying on one could lead to a disconnect from the ethical and moral decision-making, and the commercial acumen, essential in legal practice.

At Xapien, our approach to due diligence is to surface information efficiently and transparently, empowering humans to apply their judgement. While lawyers rely on our AI to do the legwork, it’s ultimately they who make the nuanced decisions.
4. Establish a governance framework
Having a governance framework is absolutely critical, says Raja. By governance framework, she means ensuring that proper policies are in place and that your people are educated about them. She notes that many law firms are adopting a policy where AI is allowed, but client data or any other sensitive information must never be entered into it. Establishing AI committees and running workshops are essential steps to putting those policies into action, and together they form a solid foundation.

When brainstorming and assessing use cases, it’s crucial to evaluate realistically which initiatives will yield the greatest return on investment. Raja points out that firms won’t have an unlimited budget, so resources need to be allocated wisely at board level.
5. Work closely with AI providers
We’re witnessing a significant shift from vendors simply providing solutions to adopting a more partnership-based model. For law firms, this means tools tailored to their specific needs and support through the change management process. Nick Morgan, our Head of Legal Professionals, agrees that law firms must lean on their AI vendors. “It’s our role as technologists to help you embrace this technology in a way that you feel comfortable,” he said.

Raja points out that there is a lot of hard selling happening, and she’s not convinced that every tool on the market can deliver what it promises. This matters from a risk appetite and risk tolerance perspective. She concludes that firms must accept the practical reality that one size does not fit all, and determine what works best for their own needs.
Looking for a legal AI tool you can trust? Talk to us today and see how Xapien can support your firm.