What you need to know about Xapien and the EU’s AI Act
TLDR: The Act doesn’t currently restrict or otherwise apply to Xapien’s product or our users, because of how we handle open-source information. But we’re keeping a close eye on it for any developments that might affect us and you.
Recap of the EU’s Artificial Intelligence Act
The EU’s Artificial Intelligence Act is designed to respect and protect the fundamental rights of individuals, prevent the misuse of AI that could potentially undermine democratic processes, safeguard the Rule of Law, and mitigate the environmental impact of AI.
It also aims to nurture innovation and establish Europe as a global leader in artificial intelligence. To achieve this, it has categorised AI systems based on their perceived risks and impact levels and laid out specific requirements when using them.
What’s considered high-risk AI?
High-risk AI refers to AI systems that pose significant risks to people’s safety or fundamental rights.
Each high-risk AI system must undergo a thorough assessment before being placed on the market, and its performance and compliance are monitored throughout its lifecycle.
The European Union (EU) categorises these systems into two main groups for regulatory purposes:
AI Systems in EU-regulated products
This category includes AI technologies used in products that fall under the EU’s existing product safety legislation. For example: toys, aviation equipment, cars, medical devices, and lifts. If an AI system forms part of a product governed by these EU safety rules, the AI system itself is treated as high-risk.
AI systems in specific areas requiring EU registration
AI systems operating in certain sensitive domains must be registered in an EU database. These domains include…
Management and operation of critical infrastructure: AI systems involved in managing facilities and systems vital for societal and economic functioning, like power grids and water supply.
Education and vocational training: AI applications that influence educational content, methods, or access.
Employment, worker management, and access to self-employment: This includes AI in hiring processes, employee monitoring, and workplace decision-making.
Access to and enjoyment of essential private services and public services and benefits: AI systems that might impact the distribution or quality of crucial services like healthcare or social security.
Law enforcement: AI used in policing, investigations, or other law enforcement activities.
Migration, asylum, and border control management: AI involved in the control and management of immigration, asylum-seeking processes, and border security.
Assistance in legal interpretation and application of the law: AI systems aiding in legal decision-making or interpretation.
A closer look at the four risk categories
Minimal risk AI
These AI systems pose a low risk to users and society.
Example: AI-driven recommendation engines for books or music.
Regulatory requirement: Subject to minimal regulatory requirements, primarily focusing on transparency to users.
Limited risk AI
AI systems in this category present a slightly higher level of risk compared to minimal risk AI.
Example: Chatbots and other systems that interact directly with people.
Regulatory requirement: Specific transparency obligations are mandated. Users must be informed that they are interacting with an AI, allowing them to adjust their expectations and reliance on the system.
High risk AI
AI systems used in critical sectors like transportation, healthcare, education, employment, and the legal sector.
Example in education: AI used in student evaluation or admissions processes.
Example in the legal sector: Predictive policing, recidivism risk assessment tools, or algorithms for judicial decision-making.
Regulatory requirement: These systems must undergo rigorous conformity assessments to ensure safety, transparency, and non-discrimination. They are subject to strict data governance, documentation requirements, and must include human oversight to mitigate risks.
Unacceptable risk AI
AI systems that pose a clear threat to the safety, livelihoods, and rights of individuals.
Example: Social scoring systems by governments leading to discrimination, or AI systems using subliminal techniques to manipulate behaviour in harmful ways.
Regulatory requirement: Such AI systems are banned outright due to their potential for significant harm.
What does the Act mean for Xapien?
In short, it doesn’t affect our ability to deliver our product to our customers in the same way we always have. Guidance on the regulation and its application is still emerging, so we’ll closely monitor developments, collaborate with industry bodies, and keep our clients informed. We prioritise staying ahead of AI regulation and compliance to meet customer needs and exceed industry standards. For more on how Xapien uses open-source information to generate fully-sourced research reports, contact our team using the form below.
AI insights, straight to your inbox
Search engines are great, but they are only the starting point. Finding, reading and condensing the full picture is slow, hard, painstaking work. Xapien can help.