Monitoring fraud under the Artificial Intelligence Act
EU bans on certain AI practices go into effect on 2 February 2025. Some institutions may assume that the bans only apply to extreme practices, which they would never engage in. But the ban on using AI systems to assess the risk that someone has committed, or will commit, a crime shows that this is not the correct approach. A more in-depth analysis reveals that some market practices now considered standard, especially in financial services, may prove questionable once the bans enter into force. This is particularly true for monitoring of money-laundering risk and, more broadly, fraud risk.
Art. 5 of the Artificial Intelligence Act (Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence) includes a list of banned AI practices. It is a core provision of the AI Act, and violation of these bans can attract the harshest fines (up to EUR 35 million or 7% of total annual worldwide turnover, whichever is higher). It is also one of the first provisions of the AI Act to take effect, on 2 February 2025 (most of the provisions do not take effect until 2 August 2026).
One banned practice relates to assessing the risk of committing crimes. Art. 5(1)(d) prohibits “the placing on the market, the putting into service for this specific purpose, or the use of an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics; this prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity.”
Under this provision, it appears that a practice is banned when all of the following conditions are met (see the illustrative sketch after this list):
- The practice consists of placing on the market, putting into service or using an AI system of the sort and manner described in this provision
- The AI system is used to assess or predict the risk of a natural person committing a criminal offence
- The assessment is based solely on profiling of a natural person or solely on assessing the natural person’s personality traits and characteristics.
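Purely by way of illustration (and emphatically not as legal advice), the cumulative structure of these conditions can be sketched in code. Every name below is my own hypothetical simplification of what are, in reality, fact-sensitive legal assessments:

```python
from dataclasses import dataclass

@dataclass
class RiskAssessmentPractice:
    """Hypothetical, simplified description of a practice to be screened."""
    is_ai_system: bool            # the tool qualifies as an "AI system" under the AI Act
    targets_natural_person: bool  # assesses or predicts a natural person's crime risk
    solely_profiling: bool        # based solely on profiling (automated processing of personal data)
    solely_traits: bool           # based solely on personality traits and characteristics

def falls_under_ban(p: RiskAssessmentPractice) -> bool:
    """The conditions are cumulative: if any one is absent, Art. 5(1)(d) does not apply."""
    return (p.is_ai_system
            and p.targets_natural_person
            and (p.solely_profiling or p.solely_traits))
```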
The notion of “profiling” under the AI Act incorporates the definition in the General Data Protection Regulation, i.e. “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.”
What is at stake in the ban?
What does the ban protect?
To properly understand this, it is useful to turn to recital 42 of the AI Act, which makes it clear that the ban is intended to protect the fundamental principle of the presumption of innocence, i.e. the right of each person to be treated as not having committed a criminal offence until the person has been legally convicted of the offence. This presumption may be associated primarily with guarantees in the context of judicial procedure, but it has much broader implications. It is a general right to be treated, in various socio-economic contexts, as a person who does not commit prohibited acts, unless found guilty by a court.
In particular, the presumption of innocence in the context of AI systems may be violated when negative consequences are drawn against a person (e.g. refusal to enter into a contract, or imposing less favourable terms for providing services) based not on a criminal conviction, but on an analysis of specific behaviour by AI systems. In extreme cases, an individual may experience negative consequences based exclusively on statistical calculations of the probability that, in the future, the person will behave in a certain way. Thus an individual might be denied the use of a service not on the basis of his or her actual behaviour, but solely on the basis of predictions regarding future behaviour. With AI systems, the danger lies in shifting forward the moment from which negative consequences may be drawn against a natural person. Under the presumption of innocence, such consequences cannot be drawn until the person has been validly convicted. AI systems could draw them earlier, without treating a conviction as a prerequisite. In practical terms, this could mean disregarding the presumption of innocence.
What type of analysis does the ban apply to?
First of all, the ban applies only to risk assessments made using AI systems. Without discussing in detail how an “AI system” is defined, I would simply point out that while this is a broad definition, it does not cover all systems used for risk assessment. Therefore, the first key challenge is determining whether the system used for risk assessment qualifies as an AI system at all.
Regardless, Art. 5 of the AI Act provides that assessing or predicting the risk of a natural person committing a criminal offence is a banned AI practice. The distinction between “assessment” and “prediction” suggests that these concepts should not be conflated. Assuming that their meanings are different, one potential interpretation I propose to consider is that “prediction” refers to estimating the likelihood of committing a criminal offence in the future, while “assessment” refers to determining whether a particular person is committing a criminal offence at the time of the assessment or has committed a criminal offence before. This would lead to the conclusion (assuming that the other conditions in the provision are met) that this ban would apply both to evaluation of the risk of committing a criminal offence (as a prediction of future criminal offences) and evaluation of the likelihood that a certain person is committing a criminal offence at the time of the assessment or committed an offence in the past.
What methods of analysis are banned?
First, under this ban, the danger against which natural persons must be protected is evaluation of the risk of committing a criminal offence exclusively on the basis of profiling, that is, solely on the basis of automated processing of personal data for the purpose of evaluating certain characteristics of a person (in this case, criminal behaviour). By analogy with well-established interpretations of Art. 22 GDPR (which refers to decisions based solely on automated processing of personal data), it should be recognised that a risk evaluation is based solely on profiling when it is done without human involvement. In other words, it will be prohibited to evaluate the risk of a person’s commission of a criminal offence (in the past, at the time of the assessment, or in the future) exclusively by means of an AI system, without human involvement. This opens up a great deal of controversy over what exactly human participation means and what sort of human participation will safely prevent the evaluation from being based exclusively on automated processing, but that is a broader topic. At this point, suffice it to say that Art. 5 bans evaluation of the risk of committing a criminal offence based solely on the automated processing of personal data.
The drafters of the AI Act also considered it particularly dangerous to the presumption of innocence to evaluate the risk of committing a criminal offence based solely on the individual’s personality and characteristics. In simple terms, it would violate fundamental rights to make determinations based exclusively on “who someone is” or “what someone is like,” apart from the person’s actual behaviour. Such determinations can be particularly hurtful and discriminatory as they are often based only on historical statistics on specific populations, which may be irrelevant in the individual case (even if the person exhibits characteristics that statistically increase the likelihood of committing a criminal offence). In practice, this would penalise a person (e.g. by denying them access to a service) not because of anything they have actually done, but because of “what they’re like” or “who they are.”
In this context, another key nuance is that an evaluation based exclusively on a person’s characteristics may or may not be fully automated. This follows from the clear separation in the act between “profiling” and evaluations based on a person’s characteristics. Thus it seems that evaluations that are not fully automated (e.g. using an AI system accompanied by the human factor) will also be banned, as long as a person’s characteristics are the sole basis for evaluating the risk. Conversely, where an evaluation is based solely on profiling, the ban covers not only profiling based on data about a person’s characteristics, but also profiling based on other categories of personal data (e.g. transactions the person has executed).
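To make the interplay between the two limbs concrete, the earlier sketch can be refined. This is only my assumed reading (the Art. 22 GDPR analogy for “solely on profiling”, and “solely on traits” regardless of human involvement), not a settled interpretation:

```python
def first_limb_applies(human_involvement: bool, data_categories: set[str]) -> bool:
    # "Solely on profiling": read here, by analogy to Art. 22 GDPR, as an
    # evaluation carried out by automated processing of personal data with no
    # meaningful human involvement, whatever categories of data are used.
    solely_profiling = not human_involvement
    # "Solely on traits and characteristics": the person's characteristics are
    # the only input; human involvement does not take the practice out of the ban.
    solely_traits = data_categories == {"traits"}
    return solely_profiling or solely_traits
```

On this reading, a human in the loop helps only if the evaluation also draws on something other than the person’s characteristics.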
What types of evaluations are expressly permitted by Art. 5?
The foregoing analysis leads to the conclusion that the AI Act does not completely ban the use of AI systems to evaluate the risk of committing criminal offences. Only determinations made in a specific manner are banned. This is confirmed by Art. 5 of the AI Act and by recital 42, which identifies cases where evaluation of risk using AI systems will be permissible and consistent with the act. As stated in Art. 5(1)(d), “this prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity.”
As explained in recital 42, “Natural persons should never be judged on AI-predicted behaviour based solely on their profiling, personality traits or characteristics, such as nationality, place of birth, place of residence, number of children, level of debt or type of car, without a reasonable suspicion of that person being involved in a criminal activity based on objective verifiable facts and without human assessment thereof.”
For these reasons, it should be assumed that it will be compliant with the act to use an AI system to evaluate the risk of committing a criminal offence if the following conditions are met (modelled in the sketch after this list):
- The evaluation relates to the person’s involvement in criminal activity
- The evaluation is based on objective and verifiable facts directly related to criminal activity
- The evaluation is done by a human with the support of an AI system.
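Under the same caveats, the express carve-out in the second sentence of Art. 5(1)(d) could be modelled as a separate, cumulative check (again a hypothetical sketch, with parameter names of my own choosing):

```python
def within_express_carveout(ai_only_supports_human: bool,
                            facts_objective_and_verifiable: bool,
                            facts_directly_linked_to_crime: bool) -> bool:
    # The AI system merely supports a human assessment of a person's involvement
    # in a criminal activity, and that assessment is already grounded in
    # objective, verifiable facts directly linked to the criminal activity.
    return (ai_only_supports_human
            and facts_objective_and_verifiable
            and facts_directly_linked_to_crime)
```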
At the same time, the express indication in Art. 5 of how to use an AI system to evaluate the risk of committing a criminal act without falling afoul of the ban creates some interpretive doubts. It raises the question of whether this is the only use of an AI system to evaluate the risk of committing criminal acts that will comply with the act. On one hand, Art. 5(1)(d) clearly indicates the banned methods of performing risk evaluation using AI systems (based solely on profiling or analysis of a person’s characteristics). On the other hand, it expressly points to only one specific use of AI systems that would comply with the act. But in practice, it seems there may be many intermediate variants in the use of AI systems, i.e. variants that don’t constitute a banned method of using AI systems but are not expressly allowed by Art. 5(1)(d). An example of such an intermediate variant would be an AI system supporting human evaluation based on an analysis of both actual behaviour and a person’s characteristics, or a system focused solely on predicting the commission of a criminal act in the future, working under human supervision and using various data for making predictions, not just a person’s characteristics.
It seems that it would be a mistake to assume that such intermediate options are covered by the ban, because they do not exhibit the features referred to in the first part of Art. 5(1)(d). Thus the method described in the second part of Art. 5(1)(d) should be regarded as merely an example of a system not covered by the ban. This does not preclude other uses of AI systems to evaluate the risk of committing criminal offences, in other configurations, which are compliant with the act (as long as they do not exhibit the features indicated in the first part of Art. 5(1)(d)).
The second part of Art. 5(1)(d) contains another potentially valuable interpretive clue: that the facts used as the basis for the risk evaluation should be “objective and verifiable.” What exactly these terms mean will likely be the subject of lively debate in the legal literature, but it seems that by indicating this interpretive track, the drafters expressed the expectation that any risk assessment using an AI system, to the extent it is based on a person’s actual behaviour, should rely on facts meeting the criteria of objectivity and verifiability.
It should also be stressed that the banned practices only apply to evaluation of the risk of commission of criminal offences by natural persons. If the same evaluation is made regarding legal persons or other organisational units, it will not be subject to the restrictions under the AI Act.
Examples
The comments above may seem abstract and convoluted, so I will illustrate how Art. 5(1)(d) works using some specific examples.
Example 1: An AI system supports a human in evaluating the risk of committing a criminal offence based solely on a client’s characteristics (including nationality, age, and residence).
Such a system will be banned, as the evaluation is made solely based on a person’s characteristics. That a human being is involved in the process is irrelevant.
Example 2: An AI system operating without human intervention evaluates the profile of a client’s risk of committing a criminal offence, based solely on the client’s historical transaction data.
Such a system would likely be considered banned, as it is based solely on profiling of the person.
Example 3: An AI system operating without human intervention evaluates a client’s profile for the risk of committing a criminal offence, based solely on the individual’s personal characteristics.
Such a system will be banned because the evaluation is made solely on the basis of profiling, which in this case is also performed on the basis of data about the person’s characteristics.
Example 4: An AI system evaluates the risk of committing a criminal offence based solely on the characteristics of a client that is a legal person (including the location of the client’s registered office or branch).
Probably such a system would be permissible, since the banned practices apply exclusively to risk evaluations of natural persons.
Example 5: An AI system evaluates the risk of committing a criminal offence based solely on the characteristics of a client that is a legal person, but, for purposes of the analysis, taking into account the personal characteristics of the company’s management board members.
This is a problematic example. Here, on the surface the risk evaluation concerns a legal person. But it hardly seems possible to overlook that an element of the evaluation involves evaluation of natural persons (management board members) based solely on their personal characteristics. Thus, in this case, the evaluation is also performed indirectly in relation to a natural person.
Example 6: An AI system supports a human in evaluating the risk of committing a criminal act based solely on the client’s characteristics and data relating to the client’s historical transactions.
Such a system will probably be considered permissible, as the evaluation is not made solely on the basis of profiling (a human is involved in the evaluation), nor based solely on the person’s characteristics (it also takes into account information about actual behaviour, i.e. completed transactions).
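For what it is worth, running Examples 1, 2, 3 and 6 through the hypothetical first_limb_applies helper sketched earlier reproduces the conclusions above. Example 4 falls away upstream because it does not concern a natural person, and Example 5 is left out as a genuinely hard case:

```python
examples = {
    "1: human + AI, solely client traits":
        first_limb_applies(human_involvement=True, data_categories={"traits"}),
    "2: no human, solely transaction history":
        first_limb_applies(human_involvement=False, data_categories={"transactions"}),
    "3: no human, solely personal characteristics":
        first_limb_applies(human_involvement=False, data_categories={"traits"}),
    "6: human + AI, traits plus transaction history":
        first_limb_applies(human_involvement=True, data_categories={"traits", "transactions"}),
}
for label, banned in examples.items():
    # Expected: Examples 1, 2 and 3 banned; Example 6 not banned
    print(f"Example {label} -> {'banned' if banned else 'not banned'}")
```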
Which areas require particular care?
These examples are simplified and offer only a small sample. In practice, the systems used to evaluate risk are more nuanced. Whether the evaluation is automatic or quasi-automatic, with the entry into force of the AI Act it is necessary to analyse whether these systems, or their components, exhibit the features of a banned AI practice. This is particularly relevant in the areas of AML compliance and fraud prevention, where the analysis is chiefly aimed at evaluating the risk of committing a criminal offence, including prediction of future offences.
Areas and processes where the risk evaluation is made exclusively by automated means (without human involvement), or based on a person’s traits or characteristics, are particularly risky. When evaluating systems for compliance with the AI Act, it should also be borne in mind that the human factor must really be present, not a sham.
The paradox of regulation
In conclusion, it should be highlighted that the ban in question contains a certain paradox. It applies only to evaluations performed using AI systems. The same practices (e.g. evaluating the risk of commission of a criminal offence based solely on a person’s characteristics) carried out without the use of AI systems will not be explicitly banned (at least not by the AI Act). Thus there may be calls for greater systemic consistency and a ban on certain practices regardless of the technique used to implement them. There is a real risk of harming individuals as a result of evaluating them based solely on who they are, or how they act, regardless of whether the evaluation is made by a sophisticated algorithm or by a human.
Krzysztof Wojdyło, adwokat, New Technologies practice, Wardyński & Partners
This article originally appeared on the newtech.law blog