What are Large Language Models, and how does Xapien use them?


In this instalment of our Explainable AI (XAI) series, we’re going to break down what LLMs are and why they’re so powerful. But first, let’s quickly recap why we create these XAI blogs.

Imagine being handed the answer to a maths problem without any of the steps taken to get there. It would be difficult to trust that the answer is accurate.

Similarly, if an AI makes decisions but can’t explain them, we might not trust it. That’s the purpose of these blogs: to help you understand how our technology works and build trust in its capabilities, especially for tasks that have regulatory implications, like due diligence.

What are Large Language Models?  

Large Language Models (LLMs) are advanced computer programs that understand and generate human-like language. They can read, write, and understand text in a way that comes remarkably close to how we do.

However, there was a time when training machines meant feeding in text and explicitly telling them how to recognise words and patterns, such as treating capitalised words as entities.

This way of training was painstaking and limited what language models could do. But the paradigm shifted with the introduction of word embeddings.

What are word embeddings? 
 
Word embeddings are a way of representing words as numbers (or numerical vectors), which helps machines understand the relationships between words and their meanings. Since a word can mean different things depending on the sentence, word embeddings adjust those numbers to fit the context. In effect, this teaches machines to be flexible about word meanings, just as humans are.
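To make this concrete, here’s a minimal sketch of how vector similarity captures word relationships. The three-dimensional vectors below are invented for illustration; real embeddings have hundreds of dimensions and are learned from data, and this is not Xapien’s actual model.

```python
from math import sqrt

# Toy 3-dimensional word embeddings. These values are made up for
# illustration; real models learn much larger vectors from text.
embeddings = {
    "king":   [0.9, 0.8, 0.1],
    "queen":  [0.9, 0.7, 0.2],
    "banana": [0.1, 0.1, 0.9],
}

def cosine_similarity(a, b):
    """Similarity of two vectors: near 1.0 = similar meaning, near 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Words with related meanings end up pointing in similar directions.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))   # high
print(cosine_similarity(embeddings["king"], embeddings["banana"]))  # low
```

Because meaning is encoded as direction in the vector space, “closeness” between words becomes a simple calculation a machine can do at scale.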

LLMs took this concept to a new level by encoding entire sentences and paragraphs into numbers to preserve their meaning. This has enabled Xapien to handle the complexity that comes with resolving identities from mentions across text in news, media and unstructured data on the entire indexed web. And that leads us to the next part…  

How does Xapien use Large Language Models? 

Named Entity Recognition (NER) 

Named Entity Recognition (NER) is a subtask of Natural Language Processing (NLP) that finds and categorises names of entities in text. Consider a common word like “Paris.” On its own, it lacks context. While humans might examine its placement in a sentence to determine whether it refers to the city or a person’s name, this task is challenging for machines. But LLMs can do this at scale. 

Since they can model the relationships between words and grammatical structures (such as verbs) using numbers, LLMs can distinguish between ‘Paris’ the location and ‘Paris’ the person. This mathematical approach might seem alien to how we naturally understand language, but it’s highly effective for machines.
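As an illustration of the idea, here’s a toy sketch of context-based disambiguation. The two-dimensional context vectors and label prototypes below are invented for this example; a real NER model learns such representations from huge amounts of text rather than a hand-written table.

```python
from math import sqrt

# Invented 2-dimensional vectors for a few context words.
# Dimension 0 loosely means "place-ness", dimension 1 "person-ness".
context_vectors = {
    "flew": [0.9, 0.1],
    "to":   [0.7, 0.2],
    "in":   [0.8, 0.1],
    "said": [0.1, 0.9],
    "she":  [0.1, 0.9],
    "met":  [0.2, 0.8],
}
prototypes = {"LOCATION": [1.0, 0.0], "PERSON": [0.0, 1.0]}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def classify_mention(sentence):
    """Label 'Paris' by comparing the average of its context-word
    vectors against each entity-type prototype."""
    vecs = [context_vectors[w] for w in sentence.lower().split()
            if w in context_vectors]
    avg = [sum(col) / len(vecs) for col in zip(*vecs)]
    return max(prototypes, key=lambda label: cosine(avg, prototypes[label]))

print(classify_mention("We flew to Paris in June"))  # LOCATION
print(classify_mention("Paris said she met him"))    # PERSON
```

The key point is that the entity label comes from the surrounding words’ vectors, not from a hand-written rule about the word ‘Paris’ itself.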

For Xapien, it means searching through the entire indexed web and quickly identifying and distinguishing names, places, and other entities in news, blogs, databases, and other unstructured data. This is valuable because, as the example of ‘Paris’ showed, words can have different meanings depending on the context. Being able to accurately determine whether an article refers to a city, an individual, or something else entirely is crucial for determining whether that article is relevant to the subject of a Xapien report.  

That leads us to how Xapien understands whether sentences contain risk words.

Risk identification 

For risk identification, we teach our technology to tell the difference between words that might seem the same but have different meanings depending on the situation.  

The old-school way was to explicitly input “risk” words and their synonyms or variations. This meant accounting for different language variations and grammatical forms, and it was prone to error. Now, our models instead judge new sentences by their proximity in meaning to expert-provided examples:

Example one: John Smith was charged with killing 3 people 

Example two: James charged his phone. 

LLMs would encode a new sentence such as “Amy was charged with murder” and determine which example it’s closer in meaning to. Of course, it’s example one. 
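The comparison above can be sketched as a nearest-example lookup. The two-dimensional sentence vectors below are invented stand-ins; in practice a sentence encoder produces the vectors from the text itself, and this is not Xapien’s production system.

```python
from math import sqrt

# Expert-provided example sentences with invented embedding vectors
# and their labels. Real vectors would come from a sentence encoder.
examples = {
    "John Smith was charged with killing 3 people": ([0.9, 0.1], "risk"),
    "James charged his phone":                      ([0.1, 0.9], "no risk"),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def classify(sentence_vector):
    """Return the label of the expert example closest in meaning."""
    vector, label = max(examples.values(),
                        key=lambda ev: cosine(sentence_vector, ev[0]))
    return label

# "Amy was charged with murder" would encode close to the first example:
print(classify([0.85, 0.15]))  # risk
```

Because new sentences are compared by meaning rather than matched against a keyword list, rephrasings like “Amy was charged with murder” are caught without anyone having listed that exact wording.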

What do LLMs mean for background research? 

Just as manual keyword-based research limits what human analysts can find, training machines on rules limited how much they could find in open-source data. Xapien, by contrast, can uncover risks and opportunities that we, as humans, wouldn’t even know where to start searching for.

For instance, it will surface every issue it knows to be related to “fraud”. This gives a much broader lens for catching obscure risks in your due diligence. Equally, it can help spot broader business opportunities in research tasks such as prospecting and market analysis.

Interested in how Xapien could work for you? Book a demo with the team. 
