ChatGPT creates massive opportunities, but human input remains vital
Our CEO Shaun O’Mahony and Head of Comms and Marketing Jess Denny discuss ChatGPT’s incredible potential, when used in the right context.
In just a few months, OpenAI’s GPT-3, and now GPT-4, has disrupted how the world researches and works. What does generative AI like ChatGPT mean for Xapien?
OpenAI’s GPT models are an amazing feat of engineering. I’ve worked at the intersection of data, security and deep tech for 20 years, and even a year ago, I would not have predicted that such capable generative AI would arrive so soon.
ChatGPT itself is a remarkable tool. With a single prompt, it can produce natural-sounding conversations, tell you about practically any topic you can think of, debug computer code, write songs, and draft Congressional speeches.
With that in mind, it is unsurprising that more than 100 million people have used it since its release in November, making it the fastest-growing app in history.
When used as a research tool, there are a number of apparent similarities between Xapien and ChatGPT, although our users are slightly less numerous!
Search lies at both our cores. With ChatGPT, users no longer need to sift through Google links to find the information they need. Instead, they can ask a question and be presented with a written answer, compiled from multiple complex datasets. They can even ask for those results to be written out as a Shakespearean sonnet.
Microsoft has made major investments in OpenAI in the hope its technology will overhaul its underused search engine, Bing, and shift market share away from Google, which currently accounts for 84% of the global search market.
Here at Xapien, our technology also transforms research. We have built a unique system that harnesses the linguistic knowledge embedded deep within Large Language Models to build a tool that saves you from spending hours on multiple search platforms, inputting details and analysing results. We have re-imagined background research and due diligence on both individuals and organisations.
But are there also some important differences?
Indeed. When used for producing iterative, unstructured creative work, ChatGPT is outstanding. It’s less useful when you want to use that output as knowledge in an enterprise setting or for regulated purposes. It’s worth reflecting on why.
Answers generated by tools like GPT are incredibly detailed and nuanced, but importantly, they are not always truthful, nor are they transparently sourced back to the original content, as Xapien’s are.
We are a very specific tool that tells you everything you need to know about a person or business – and everything you can know about them using internet research. This is more challenging than you might think, for two core reasons.
The first is the issue of common names. At the beginning of any search, you have to do the research, but you also have to do the analysis. If you search for someone with a common name, you have to unpick whether this is your guy. Is this the ‘Joe Bloggs’ I’m looking for?
If you put a common name into most search tools you get millions of results. There is then significant work to analyse, identify and resolve entities. Only then can you summarise what you find.
It is very hard for an automated solution to do that, but we’ve built a tool that uses advanced machine learning and natural language processing models to handle the incredible complexity of resolving identity from mentions across articles and structured commercial data, while seamlessly translating, transliterating and much more.
The outcome is fully-automated AI reports about the specific person or organisation that you were interested in, in minutes.
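To make the common-name problem concrete, here is a deliberately tiny sketch of the idea behind entity resolution: two mentions of the same name are only merged into one candidate identity when their surrounding context overlaps. All names, attributes and the matching rule are illustrative assumptions, not Xapien’s actual algorithm.

```python
def resolve_mentions(mentions):
    """Group (name, context) mentions into candidate identities.

    Toy rule: mentions of the same name belong to the same person only
    if they share at least one contextual attribute (employer, city, ...).
    Real entity resolution is far more sophisticated; this just shows
    why the analysis step is needed at all.
    """
    clusters = []  # each cluster: {"name": str, "context": set of attributes}
    for name, context in mentions:
        for cluster in clusters:
            if cluster["name"] == name and cluster["context"] & context:
                cluster["context"] |= context  # merge evidence into the cluster
                break
        else:
            clusters.append({"name": name, "context": set(context)})
    return clusters

mentions = [
    ("Joe Bloggs", {"Acme Ltd", "London"}),
    ("Joe Bloggs", {"London", "fraud case"}),  # shares "London": same person
    ("Joe Bloggs", {"Sydney", "surf shop"}),   # no overlap: a different Joe
]
print(len(resolve_mentions(mentions)))  # → 2 distinct candidate identities
```

Three raw mentions of ‘Joe Bloggs’ collapse into two candidate people, which is exactly the unpicking a human researcher would otherwise do by hand.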
The second issue is traceability. These generative AI models ingest vast amounts of data from disparate sources. Tracing the original source of a tiny fragment of knowledge (a sentence or less) is tough.
This means that most NLP-AI technologies cannot currently demonstrate how they got to an answer. The head of Google Search, Prabhakar Raghavan, neatly summarised the resulting risks of errors and bias.
He said: “This kind of artificial intelligence we’re talking about right now can sometimes lead to something we call hallucination. This then expresses itself in such a way that a machine provides a convincing but completely made-up answer.”
It’s clear that no credible human action can, or should, be taken using the output from these technologies alone.
Xapien also consumes vast amounts of data from disparate sources. However, integrity is a core pillar of the Xapien platform: we trace every fact we present to the user right back to the original records or articles we sourced it from. That is another way we differ from most generative, black-box AI tools.
This sourcing and traceability are key to pushing beyond generative AI models’ current limitations.
We’re aware our users require full traceability from what we summarise and distil for them, all the way back to the raw sources, such as news articles, comments and corporate records. You would rightly be cautious about making a recommendation to a senior stakeholder (or the regulator) on the basis of a ChatGPT answer. You have to be able to show your working. Hence the explosion of the whole field of Explainable AI, something we take very seriously at Xapien.
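The ‘show your working’ principle can be sketched as a data-modelling choice: every summarised statement carries a pointer back to the record it was distilled from, so nothing in the report is unsourced. The field names, URL and example fact below are all hypothetical, not Xapien’s real data model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SourcedFact:
    """A summarised statement that keeps a pointer back to its raw source."""
    statement: str   # the distilled fact shown to the user
    source_url: str  # where the original record lives
    snippet: str     # the original passage the statement came from

def summarise_with_sources(extracted):
    """Build a report in which every line is traceable to an original record."""
    return [SourcedFact(statement, url, snippet)
            for statement, url, snippet in extracted]

facts = summarise_with_sources([
    ("Director of Acme Ltd since 2019",
     "https://example.org/companies/acme",
     "...appointed director on 12 March 2019..."),
])
for fact in facts:
    print(f"{fact.statement}  [source: {fact.source_url}]")
```

Because the source travels with the fact, a reviewer can always click through from the summary to the raw article or record, which is what makes the output defensible to a stakeholder or regulator.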
It sounds like the bigger picture is that these AI technologies cannot, on their own, provide solutions to real-world problems. How should this be managed?
No tool should depend on any single AI ‘brain’. At Xapien, where we solve the problem of finding useful information about any person or business you might engage with, we take the process of human analysis and research and break it into different elements, each requiring its own discrete technologies, which then interact at different stages.
Secondly, society and regulators alike still need humans to make the judgement calls. They should be able to trust the information the AI provides, but we are clear that there is a long way to go before we could allow AI to make critical business decisions on your behalf.
At a certain point, a human has to take ownership. There will always be a point where AI stops and humans start.
So people shouldn’t be worrying that AI will take their jobs?
Their jobs might evolve to involve fewer repetitive tasks, but there will always be a need for a human. ChatGPT and Xapien take different approaches here, but both need a human. Humans are accountable for their actions and, for now, they are the ultimate decision makers in all cases. The job of technology is to inform those decisions.
Tools like ChatGPT depend heavily on a human asking the right question to get a useful answer. Humans then have no way of verifying whether that answer is right or wrong.
In a world where ‘prompt engineering’ (determining particular words and instructions to feed into generative models like ChatGPT) has now become a profession, even a simple task like asking GPT to summarise an article can yield hugely different results depending on how the question is phrased.
At Xapien, from the outset, we minimise what the human needs to ‘know’: you don’t need any specialist knowledge to get the best out of the tool. This means anyone can use our AI and benefit from all the ground-clearing research work it can do.
We completely eliminate the risk of asking the wrong question, or framing it incorrectly. Xapien is designed to always answer the same question: ‘What do I need to know about this person or organisation before I do business with them?’ That is not a challenge a tool like ChatGPT can answer; it would give assorted random answers from across the internet. Instead, the team that built Xapien understands what business users need to know, and the tool is designed to find, analyse and summarise the specific information that helps individuals and organisations make decisions about their counterparts.
With Xapien, you don’t need to be an expert at asking questions to get all the information you need about your subject – whereas generative AI has created a whole new job category of ‘prompt engineer’.
However, our clients do need to review the results to make them useful for their organisation. The research we provide can help inform the decision, but that is as far as the AI’s input goes.
We don’t think it is the role of AI to tell you whether or not you should work with someone. That is a nuanced decision, dependent on many factors that extend beyond their online profile. We are firm believers that the role of AI is to make humans’ jobs easier – to do the heavy lifting of data – in an ever-exploding data marketplace. It doesn’t replace them. It just helps them do their jobs better, and faster.
So are technologies like ChatGPT going to be used at Xapien?
AI technologies sit at the heart of what we do at Xapien. Many of the things technologies like ChatGPT do are not necessarily new; they have just never been executed so effectively in a single conversational package.
There are over 20 unique algorithms at work under the bonnet in Xapien, each addressing a discrete part of the puzzle that Xapien solves for users, be that extracting job roles from text, cleaning addresses or summarising thousands of fragments of information. Our R&D team have been working with the GPT models since they were first released, and these models already perform specific tasks within some of our algorithms. There’s some really exciting work due in our summer release that will harness generative models to summarise real-world events for our customers. Uniquely, though, our summaries will be hallucination-free and entirely traceable right back to the original content that Xapien found and read.
Exciting stuff. Thanks Shaun!
To find out more about Xapien’s use of AI and NLP get in touch with one of our team.
Search engines are great but they are only the starting point. Finding, reading and condensing the full picture is slow, hard, and painstaking work. Xapien can help.