Is AI Being Abused to Commit Chargeback Fraud?
AI has been a very hot topic over the last three years.
To most people, AI was something limited to the bounds of science fiction. The arrival of large language model (LLM) technologies like ChatGPT changed that, though. Now, even relative tech novices can have AI-enabled services right at their fingertips.
The changes brought about by this leap forward have been incredible. But, the more widespread use of LLMs has not been without issues. We can look at chargebacks as an illustrative example here.
Merchants have used AI-enabled technologies to fight fraud and manage chargeback activities for many years. Trying to review and respond to each claim is a daunting task for human staff, given the sheer volume of chargeback requests submitted daily by cardholders. Leveraging AI can streamline chargeback cases, helping merchants address inefficiencies in a flawed system and protect their businesses from fraudulent claims.
At the same time, there’s a real danger that fraudsters could exploit this technology to help automate the process of filing bad chargeback claims. With AI at their disposal, scammers can broaden their operations and overwhelm merchants, as well as financial institutions, with junk chargeback claims.
Understanding How LLMs Work
Before we get into the weeds, let’s take a step back and make sure we’re clear on how large language models work, and how LLMs differ from “true” artificial intelligence.
A large language model works by analyzing and annotating collections of written content, then identifying trends and associations in that content. Here’s an example: an LLM can recognize that the phrase “Declaration of Independence” frequently appears alongside terms like “Thomas Jefferson” and “1776.” This allows it to respond to questions about the year in which the Declaration of Independence was signed with “1776,” and questions about the author of the document with “Thomas Jefferson.”
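To make this concrete, here’s a deliberately simplified sketch of association-based answering. Real LLMs learn these relationships through neural networks trained to predict text, not by literal co-occurrence counting; the tiny corpus and the `associated_terms` helper below are invented purely for illustration.

```python
# Toy illustration of association-based answering. This is NOT how
# production LLMs work internally; it only demonstrates the intuition
# that terms appearing together in training text become linked.
from collections import Counter

corpus = [
    "The Declaration of Independence was drafted by Thomas Jefferson.",
    "The Declaration of Independence was adopted in 1776.",
    "Thomas Jefferson later became the third U.S. president.",
]

# Common filler words we ignore when counting associations.
STOPWORDS = {"the", "of", "was", "by", "in", "later", "became"}

def associated_terms(phrase: str, texts: list[str]) -> Counter:
    """Count words that appear alongside `phrase` in the same sentence."""
    phrase_words = set(phrase.lower().split())
    counts: Counter = Counter()
    for text in texts:
        if phrase.lower() in text.lower():
            for word in text.replace(".", "").split():
                if word.lower() not in phrase_words | STOPWORDS:
                    counts[word] += 1
    return counts

# "Thomas", "Jefferson", and "1776" surface as associated terms, which is
# how a purely statistical model can answer who/when questions.
print(associated_terms("Declaration of Independence", corpus))
```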
We should also touch on hallucinations: a phenomenon in which an LLM produces inaccurate output, whether because of flawed or insufficient training data or a misinterpretation of context.
LLMs can relate outputs to the questions posed. They lack genuine comprehension, though. They don’t “think” like humans, which distinguishes them from true AI. This can cause real problems; take the example of Air Canada, which was recently ordered to honor refunds its AI chatbot had promised to customers, even though those customers weren’t actually entitled to them.
These systems can be highly advanced and capable of self-improvement. But, their tendency to generate large amounts of plausible-sounding (yet often erroneous) information limits the usefulness of LLMs in many capacities.
Can LLMs Help Scammers Commit Fraud?
The short answer is “yes.”
Most chargebacks are initiated by individual cardholders. But, a notable — and by all accounts, growing — percentage are executed by organized criminal groups. For these fraudsters, the focus is on volume.
They can file hundreds of chargeback claims daily. That way, even if half of those disputes are rejected by banks or challenged successfully by merchants, they can still amass substantial profits at the expense of merchants and financial institutions.
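To put rough numbers on the economics (every figure below is a hypothetical assumption, purely for illustration):

```python
# Back-of-the-envelope math on volume-based chargeback fraud.
# All figures are invented assumptions for illustration only.
claims_filed_per_day = 200   # fraudulent disputes submitted daily
success_rate = 0.50          # half are rejected or successfully challenged
avg_claim_value = 75.00      # average dollar value per dispute (USD)

daily_proceeds = claims_filed_per_day * success_rate * avg_claim_value
print(f"Illicit proceeds per day: ${daily_proceeds:,.2f}")  # $7,500.00
```

Even with a coin-flip success rate, the losses add up quickly, which is why volume matters more to these groups than polish.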
Deliberate and organized chargeback abuse is nothing new. But, it’s historically been hard to pull off at scale, due to the administrative burden involved. After filing, scammers may need to respond to inquiries from card providers, which demands a high level of accuracy to avoid detection. Synthetic identities are another crucial element: no seasoned criminal would use their real identity, so they fabricate identities using stolen data instead.
LLMs simplify these tasks. The technology can generate vast amounts of persuasive text quickly, allowing fraudsters to interact with targets much as an AI chatbot would. These efforts may not be flawless, but that’s inconsequential; as noted above, the focus is on volume. And, with a sufficiently high number of attempts, some portion is going to get by undetected.
Responding to the Threat of AI-Enabled Chargeback Fraud
It’s totally feasible for LLMs to produce significant volumes of convincingly crafted text that could assist in fraudulent activities. But, you shouldn’t make the mistake of assuming that anti-fraud firms are lagging behind.
On the contrary, the anti-fraud mechanisms employed by major payment processors scrutinize a lot more than just written content. Contemporary fraud detection technologies can look at thousands of indicators, regardless of how minor or trivial they may appear, and create a comprehensive threat evaluation for every transaction. The same applies to dispute claims.
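As a simplified sketch of that idea, consider the toy scorer below. The signals, weights, and threshold are all invented for illustration; real detection engines weigh thousands of indicators using trained machine-learning models rather than a handful of hand-set rules.

```python
# Hypothetical multi-signal risk scoring for an incoming dispute claim.
# Every signal, weight, and threshold here is an invented assumption.
from typing import Callable

SIGNALS: dict[str, Callable[[dict], float]] = {
    "new_account":       lambda d: 0.30 if d["account_age_days"] < 30 else 0.0,
    "dispute_history":   lambda d: min(d["disputes_last_90_days"] * 0.15, 0.60),
    "device_mismatch":   lambda d: 0.25 if d["device_mismatch"] else 0.0,
    "proof_of_delivery": lambda d: 0.20 if d["delivery_confirmed"] else 0.0,
}

def risk_score(dispute: dict) -> float:
    """Combine many weak indicators into a single threat score in [0, 1]."""
    return min(sum(rule(dispute) for rule in SIGNALS.values()), 1.0)

claim = {
    "account_age_days": 12,       # brand-new account
    "disputes_last_90_days": 4,   # frequent disputer
    "device_mismatch": True,      # claim filed from an unrecognized device
    "delivery_confirmed": True,   # carrier confirmed the package arrived
}

score = risk_score(claim)
print(f"risk={score:.2f}:", "flag for review" if score >= 0.5 else "accept")
```

A real system would learn these weights from labeled outcomes rather than hard-coding them, but the principle is the same: the written claim is only one input among many.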
At Chargebacks911®, our technology relies on a combination of machine learning and human expertise to identify and intercept invalid claims, as well as pinpoint recurring chargeback triggers. Our intelligent Source Detection technology can help merchants:
- identify the true reason for chargebacks
- see higher dispute win rates
- identify additional revenue opportunities
- minimize processing costs
- lower their overall number of chargebacks
- eliminate false positives and unnecessary declines
Even if the textual components of a fraudulent chargeback are flawless, there are still ample opportunities for fraudsters to make mistakes. Our proven record demonstrates that our continuously updated systems are well-equipped to tackle AI-driven fraud effectively. Click here to learn more and get started today.