Is AI the Magic Bullet for Payments?

April 25, 2024 | Expertise
Adam Vissing

Prospective clients regularly ask us if IXOPAY uses artificial intelligence, primarily in the context of transaction routing. Considering the hype surrounding AI, these questions are to be expected. But there is an inherent assumption behind these questions: that using AI is the solution and invariably delivers benefits. However, like any tool, AI is no magic bullet. Before we pitch a specific feature to our clients, it needs to provide real, tangible benefits. At the time of writing, we remain unconvinced that AI routing offers any meaningful advantages - beyond perhaps making good marketing copy - although the reasons for this are nuanced and complex. We will take a look at some of these reasons below, as well as the challenges that AI brings.

A Quick Introduction to Machine Learning

Before we dive into the why, we should clear up a couple of common misconceptions. When people ask about “AI”, they generally mean applied machine learning (ML). This technology has actually been around for many decades and is one of many sub-disciplines of artificial intelligence. Machine learning relies on large datasets to train a model. A good example of this is optical character recognition (OCR). To train the model, the system is provided with training data containing known characters. The system is then tasked with identifying the characters in the set, with the system’s answers compared to the actual (known) characters. Initially, the success rate is very low as the system makes random guesses. However, by providing feedback on whether each answer is correct or not (what is known as “validating”), the system slowly converges on a model that identifies characters with a high degree of accuracy.
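This guess-compare-adjust loop can be sketched in a few lines of Python. The 3x3 pixel "glyphs", the perceptron update rule and all parameters below are invented purely for illustration - real OCR systems are far more sophisticated, but the feedback loop is the same in spirit:

```python
import random

# Toy "OCR": teach a perceptron to tell a 3x3 pixel 'X' from an 'O'.
X_GLYPH = [1, 0, 1, 0, 1, 0, 1, 0, 1]  # label +1
O_GLYPH = [1, 1, 1, 1, 0, 1, 1, 1, 1]  # label -1

def noisy(glyph, flips=1):
    """Flip a random pixel to simulate scanning noise."""
    g = glyph[:]
    for i in random.sample(range(9), flips):
        g[i] ^= 1
    return g

random.seed(0)
training_data = ([(noisy(X_GLYPH), +1) for _ in range(50)]
                 + [(noisy(O_GLYPH), -1) for _ in range(50)])

weights = [0.0] * 9
bias = 0.0

def predict(pixels):
    score = bias + sum(w * p for w, p in zip(weights, pixels))
    return +1 if score >= 0 else -1

# Repeatedly guess, compare against the known label, and nudge the
# weights whenever the guess was wrong (the "feedback" step).
for _ in range(20):
    for pixels, label in training_data:
        if predict(pixels) != label:
            for i in range(9):
                weights[i] += label * pixels[i]
            bias += label

accuracy = sum(predict(p) == y for p, y in training_data) / len(training_data)
print(f"training accuracy: {accuracy:.0%}")
```

The model starts out guessing at random and converges on a reliable classifier purely through repeated feedback - exactly the process described above, just at miniature scale.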

What has changed in the past decades is not the technique behind machine learning, but computer processing power and the cost thereof. This has made it far quicker and cheaper to process large data sets and train models iteratively. Training involves repeating the process of generating output, evaluating it, and providing feedback many thousands or millions of times. These advances in processing power have allowed models such as ChatGPT to be trained on vast amounts of data (e.g. text available on the internet), to an extent that was not previously feasible.

Data Quality and the Cold Start Problem

Of course, the quality of the training data plays an important role. No amount of computing power can make up for a data set that is lacking in quantity and/or quality. This applies just as much to training a model to handle transaction routing as it does to large language models like ChatGPT. Machine learning is great at recognizing patterns, but if those patterns do not reflect real world circumstances or key data is missing, the AI model will invariably fail at the task it has been trained for.

This plays directly into the “cold start” problem. To understand this, think of YouTube, Spotify or Amazon, all of which use recommendation systems. When a new user signs up to the platform, nothing is initially known about their tastes and preferences. This data can only be gathered as the user interacts with the platform - watching videos, listening to music or searching for products. Collecting this data takes time. The same also applies to new content added to the platform. When content is first added, there is no information on how users have interacted with that content and thus limited information on who it may appeal to. Handled poorly, this can lead to results like Spotify revealing in 2014 that 20% of its content had never been played.

A similar cold start problem exists for transaction routing. When a new merchant is first onboarded, we typically have no historical transaction data for that merchant. As a result, there is no readily available dataset on which to train a model for that merchant’s particular use case. One approach would be to use data from merchants in a similar industry, but that is not without issues. For example, many cryptocurrency exchanges use IXOPAY, and most of their transactions are US-based. A limited number of issuers accounts for the vast majority of US credit cards, and only a limited pool of acquirers is willing to partner with cryptocurrency exchanges. This means that all these exchanges deal with the same issuers and acquirers. You would therefore expect similar results across all these exchanges, given that they are operating in the same industry and processing similar transactions via the same banks. But in practice, this is not the case. We see exchanges with authorization rates well above 80% and others whose authorization rates are just above 50%.

Blind Spots in the Data and the Risk of Training Bias

What is the reason for these stark differences? One important factor is issuer outreach. Exchanges with higher authorization rates have invested time and resources into reaching out to issuing banks and regulators, assuring them that they are legitimate businesses with solid AML and fraud management processes. This issuer outreach affects authorization rates directly without being represented in the transaction data at all.

The merchant’s merchant category code (MCC) is another factor that affects authorization rates. Again, this information is not captured in the transaction data. Yet an issuer or acquirer’s risk appetite is influenced by the MCC, affecting the likelihood of a transaction being authorized. This critical information also flies under the radar.

Issues with training data are not the only concern for merchants, however; they also face additional risks when dealing with opaque AI. The payment industry is rife with kickback agreements, and the interests of PSPs and merchants can be at odds with one another. There is a risk that a routing AI will prioritize the PSP offering the highest kickback to the orchestrator, rather than the option most likely to benefit the merchant. With no way to know the basis for these decisions, merchants should be wary of such black boxes. And even if there is no bias at play, training the model using the wrong incentives can still lead to unwanted results.

We can see this in ChatGPT’s hallucinations. ChatGPT was trained specifically to create plausible output, i.e. output that appears similar to texts created by real humans. It was not trained to create output that is factually correct, however desirable this may be. Even small adjustments to the weights used to train a model can have a significant effect on its behavior. There is also the risk of an AI converging on a local optimum rather than the global optimum. But the challenge of creating a suitable AI model is not only technical in nature.
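The risk of converging on a local rather than global optimum is easy to demonstrate with plain gradient descent. The double-well function, learning rate and starting points below are invented purely for illustration:

```python
def f(x):
    # A "double-well" function with two valleys; the left one is deeper.
    return x**4 - 4 * x**2 + 0.5 * x

def grad(x):
    # Derivative of f, used to walk downhill.
    return 4 * x**3 - 8 * x + 0.5

def descend(x, lr=0.01, steps=2000):
    """Plain gradient descent from a given starting point."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

x_a = descend(1.0)   # starts in the right-hand (shallower) basin
x_b = descend(-1.0)  # starts in the left-hand (deeper) basin

print(f"from x=1.0:  settles at {x_a:.3f}, f = {f(x_a):.3f}")
print(f"from x=-1.0: settles at {x_b:.3f}, f = {f(x_b):.3f}")
```

Both runs "succeed" in the sense that they stop improving, but only one finds the true minimum; the other gets stuck in the nearest valley and never escapes. A routing model trained this way could likewise settle on a strategy that looks optimal locally while a better one exists.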

Lack of Legislative Clarity

Legislation and compliance introduce another layer of complexity. Upcoming AI legislation may require explainability, and thus address the aforementioned black box issue from the perspective of merchants. But for financial service and payment providers - likely to be subject to some of the strictest regulatory requirements - this opens up a whole new can of worms. Explaining why an AI made a particular decision is easier said than done when the engineers who design these systems are themselves unable to answer these questions. If we look at the example of image recognition, we can see how tiny manipulations of an image - editing just a few select pixels - can alter the way the system categorizes it. What is still easily recognizable as a giraffe or remote control to a human is now detected as something else entirely. If even the system’s designers cannot explain why these minute changes affect its output, fulfilling any future explainability requirements will be extremely challenging, if not impossible.
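A deliberately simple toy model shows the mechanism. The “image”, weights and labels below are all invented; real adversarial examples target deep neural networks, but the principle - tiny input changes aligned with the model’s sensitivities - is analogous:

```python
# Toy "image classifier": a linear model over a 4-pixel image.
# All numbers here are made up for illustration.
image   = [0.8, 0.2, 0.6, 0.4]
weights = [0.5, -2.0, 0.3, -1.8]
bias    = 0.9

def classify(pixels):
    score = bias + sum(w * p for w, p in zip(weights, pixels))
    return "giraffe" if score > 0 else "something else"

original = classify(image)  # score = 0.9 + 0.4 - 0.4 + 0.18 - 0.72 = 0.36

# Nudge only the two pixels the model is most sensitive to
# (largest absolute weight), each by a barely visible 0.15.
perturbed = image[:]
perturbed[1] += 0.15
perturbed[3] += 0.15
flipped = classify(perturbed)  # score drops by 0.57, crossing zero

print(original, "->", flipped)
```

To a human the two "images" are nearly identical, yet the model’s verdict flips - and explaining that flip requires dissecting the model’s internal weights, not anything visible in the input.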

Furthermore, any system processing card data - including the systems used for training - needs to be PCI-compliant. Cardholder data used in training is also subject to the GDPR and other data privacy laws. This matters because patterns in a cardholder’s transaction data significantly affect the chance of a transaction being successful. An unusual transaction amount, for example, is a strong indicator of possible fraud and thus affects the likelihood of the transaction being authorized. With future EU legislation requiring businesses to provide immediate access to all data stored on an individual, handling this data will become more complex and processes will need to be put in place to disclose this information.

Another open legal question is that of liability. Who is liable if decisions made by an AI result in monetary losses? The payment orchestration platform? The merchant who opted into the service, fully aware that it was AI-based? With these questions unanswered, a system built today could end up no longer fit for purpose a few months down the line when new legislation comes into effect.

AI Still Has Its Place

This is not to deny the potential of AI and machine learning. Research in the field has demonstrated a remarkable ability to optimize strategies and even exploit unexpected system behaviors. There are many applications where machine learning is already proving its value, including in the field of payments. Many fraud and risk management services already use this technology to help detect fraud. Pattern recognition - which is at the heart of risk and fraud detection - is an area where machine learning really shines. These cases also involve a clear liability shift: if a provider deems a transaction safe, the provider assumes liability for any fraudulent transactions. In fact, risk management solutions that use AI to analyze transactions can already be integrated into IXOPAY.

However, when it comes to transaction routing, and the many factors that determine the likelihood of a transaction being authorized, we see little evidence that current “AI-based” systems offer any benefits over a well-designed routing strategy devised by a human. AI routing does, however, introduce additional layers of complexity, opacity and legal uncertainty, as well as the risk of unintended consequences. Given the lack of demonstrable, tangible benefits of AI to routing, we prefer to err on the side of caution, careful not to overpromise on a technology that has yet to mature. We will soon enter the inevitable trough of disillusionment following a few years of hype and unrealistic expectations. When we do, that will not mean that AI has failed. It will simply indicate that some of the expectations were unrealistic and that many AI capabilities were added more as a result of FOMO than because they delivered actual benefits. When we feel that we can offer our merchants real benefits through AI, we will do so. But we do not want to promise benefits we cannot deliver just to surf the latest hype wave.


IXOPAY is a best-of-breed payment orchestration platform offering flexible and independent global payment processing options. Fully PCI-DSS Level 1 certified and highly scalable, IXOPAY caters to the needs of enterprise merchants and white label clients, including payment service providers (PSPs), acquirers and independent sales organizations (ISOs). Built upon modern, easily extendable architecture, IXOPAY provides smart transaction routing with cascading, state-of-the-art risk and fraud management, fully automated reconciliation and settlements processing, comprehensive reporting and access to hundreds of acquirers, payment service providers and alternative payment methods.

IXOPAY is trusted by national and international enterprises and has offices in Austria and the USA. The company has grown from 2 to around 160 employees by delivering innovative products and solutions.

For more information, visit:

About the Author

Adam Vissing

VP Sales & Business Development

Adam has worked in the enterprise software and high-tech industry for several decades and has a degree in Computer Science & Management. His role at IXOPAY involves helping our clients streamline and scale their global payment processes.