May 10 • 2 min read

4 key learnings from the WSJ Risk & Compliance Forum 2023

On Tuesday (May 9), a group of industry experts gathered to unpack the latest Russia sanctions, data privacy laws, climate risks, and more. As you might expect, Artificial Intelligence (AI) came up repeatedly in these discussions. For those who missed out or simply need a recap, we’ve compiled our four biggest takeaways.

1) Europe aims to set the global standard for AI regulation

By the end of 2023, the EU is set to adopt the Artificial Intelligence Act, marking Europe’s ambition to set the global standard for AI regulation. Just as it did with the GDPR, Europe is determined to take the lead in regulating AI. And while many regulations are still in development, they share common goals: detecting bias, ensuring human oversight, providing transparency, and explaining how AI arrives at decisions.

2) Growing reliance on AI raises privacy concerns

When it came to regulatory pressure, the advice from speakers was clear: start preparing a framework for responsible AI use. One key point made during the discussion was the importance of involving senior executives and subject matter experts from the beginning when building AI applications. This is especially true in jurisdictions such as the US, where regulators like the Federal Trade Commission (FTC) stress the need to substantiate claims about AI and avoid making unrealistic promises about what the technology can do.

3) The importance of hiring AI and data experts

As we just mentioned, companies building AI-powered solutions need to involve subject matter experts from the get-go to help govern their work. This starts with hiring experts in AI, data analytics, and even cybersecurity, so they can work together to thoroughly evaluate the risks and learn how to mitigate them. By bringing these experts on board internally, businesses can promote responsible use of AI and get the most out of its powerful capabilities.

4) ChatGPT is still a massive talking point

ChatGPT remains a big topic of conversation, with speakers across the board discussing its opportunities and limitations. One interesting point was made about why not everything can be automated: sometimes a human touch is necessary to keep things authentic and accurate. One speaker compared the Q&A side of AI to a very articulate and well-informed teenager who is unaware of what they don’t know, and will keep providing responses based solely on mathematical calculations.

Final thoughts: Limitations of ChatGPT answers in research

As our Chief Technology Officer, Shaun O’Mahony, said in a recent interview: “ChatGPT itself is a remarkable tool. But answers generated are not always truthful, or transparently sourced to the original content”. While search lies at its core, acting as a research tool isn’t where ChatGPT is at its best. Read the full interview with Shaun to find out why, and how Xapien fills that gap.
