We are living through a time described as the “exponential age.” The rapid rate at which technology is developing is staggering, particularly in relation to artificial intelligence (AI).
Inevitably, this has led to the question: Can we, as compliance practitioners, keep up?
Pia Ffoulkes, vice president for the United Kingdom and Ireland, and the Middle East and Africa, at regtech vendor Silent Eight, has a key message on AI for financial crime practitioners looking to prepare themselves for the future: “Embrace it.”
“I came from a background of being at traditional screening vendors. When I took the plunge into startup, scale-up life, I was so excited to learn everything about AI,” she said. “The advice I would give to anybody who’s starting in AI—or if it’s starting to be brought into your organization—is consume everything you can. This is the time to be a sponge.”
Real-time results
AI encompasses a range of technologies with applications across all areas of modern life. Within the financial crime compliance context, natural language processing (NLP), natural language generation (NLG), and machine learning are rapidly coming to the fore.
NLP enables unstructured data—such as emails, chat messages, web pages, news articles, legal documents, and SWIFT transaction messages—to be analyzed quickly to extract useful information and insights. NLG can then be used to summarize key findings and decisions.
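For a flavor of what such extraction looks like in practice, consider the minimal sketch below, which uses the open-source spaCy library to pull named entities out of a free-text payment message. The sample message, and the idea of feeding its entities into a downstream screening step, are illustrative assumptions rather than any particular vendor's pipeline.

```python
# A minimal sketch of NLP entity extraction, assuming spaCy and its
# small English model are installed:
#   pip install spacy
#   python -m spacy download en_core_web_sm
# The payment message below is invented for illustration.
import spacy

nlp = spacy.load("en_core_web_sm")

message = (
    "Payment of USD 48,500 from Acme Trading Ltd in London "
    "to Horizon Imports in Dubai, reference INV-2291."
)

doc = nlp(message)

# Print each detected entity with its type (ORG, GPE, MONEY, etc.),
# the kind of structured output a screening engine could act on.
for ent in doc.ents:
    print(f"{ent.text!r:>30} -> {ent.label_}")
```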
Meanwhile, machine learning solutions and algorithms can automatically classify and cluster transactions, customers, and other data points to rapidly identify anomalies and potential fraud.
“All of this happens in real time versus sitting in the analyst queues for days and months sometimes, enabling institutions to manage risks better,” Ffoulkes said. “Our clients have observed a significant reduction in response times for escalating and managing financial crime exposure.
“Imagine finding a needle in a haystack. That’s how the current compliance processes usually come across. The analysts go through hundreds and thousands of case investigations to identify the true financial crime risks. Sometimes it’s already too late by the time these risks are identified and actioned.”
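To make the needle-in-a-haystack point concrete, the sketch below uses scikit-learn's IsolationForest to flag outlying transactions in a simulated dataset. The two features (amount and hour of day), the simulated values, and the contamination setting are illustrative assumptions, not a production monitoring design.

```python
# A minimal sketch of unsupervised anomaly detection on transactions,
# assuming scikit-learn is installed. Features and data are invented
# purely for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulate routine transactions: modest amounts, business hours.
normal = np.column_stack([
    rng.normal(200, 50, size=500),   # amount
    rng.normal(13, 2, size=500),     # hour of day
])
# A few outliers: large amounts at unusual hours.
odd = np.array([[9_500, 3], [12_000, 2], [8_800, 4]])
transactions = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(transactions)

# predict() returns -1 for anomalies, 1 for inliers.
flags = model.predict(transactions)
print(f"Flagged {np.sum(flags == -1)} of {len(transactions)} transactions")
```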
Job security
The flip side of these potential improvements in efficiency and effectiveness is a fear for some over job security. Ffoulkes is keen to debunk the idea that AI is going to take people’s jobs.
“AI is not here to replace you,” she said. “It should be a tool that you can add to your skill set, so if you can understand it, you can work with it and you will become much more experienced.”
“I don’t see the financial crime practitioner disappearing,” she added. “I see them becoming more skilled and more specialized. There won’t be this need to take on the straightforward investigative processes that they’re doing today. Hopefully, we’ll see Level 1-type tasks all automated. There’s still going to be a need for practitioners, but they are going to be far more progressed in their skill set and far more focused on risk assessment, decisioning, and collating data.”
Pro-innovation path
This jibes with the message currently coming from the U.K. government, which published a policy paper on AI at the end of March.
“[O]ur vision for a future AI-enabled country is one in which our ways of working are complemented by AI rather than disrupted by it,” Michelle Donelan, secretary of state for science, innovation, and technology, said in the paper. “In the modern world, too much of our professional lives is taken up by monotonous tasks—inputting data, filling out paperwork, scanning through documents for one piece of information, and so on. AI in the workplace has the potential to free us up from these tasks, allowing us to spend more time doing the things we trained for.”
AI anxieties
Nevertheless, there is growing unease around AI, including in an open letter published by a group of corporate technology leaders. The letter provoked fierce debate after calling on AI labs to immediately pause training of AI systems more powerful than GPT-4 for at least six months.
The open letter stated, “AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems.”
A lot of concern around AI relates to the use of data to train it. Data privacy issues, for example, saw JPMorgan restrict staff use of ChatGPT, while Italy’s data protection authority blocked the popular chatbot for a period.
Ffoulkes explained, “AI solutions often require large amounts of training data to be successful, so before embarking on any AI journey, a financial institution should understand the applicable laws in the relevant jurisdictions that it operates in. Every country and every financial firm will have its own unique risk appetite when it comes to sharing and processing of client data, so it is important to make sure that any potential regtech partner is aware of those nuances and is willing to be flexible to meet those specific needs.”
Regulatory approaches
A variety of approaches are being taken to AI regulation, ranging from light-touch oversight to calls for a full-scale halt to development.
With the AI Act, the European Union has proposed rules built around a risk-based approach, categorizing AI technology into four levels of risk: unacceptable, high, limited, and minimal. Late last year, the European Council adopted a general approach position on the legislation, but it remains under discussion in the European Parliament.
In China, the Shanghai regulations on promoting the development of the AI industry went into effect in October. This provincial-level legislation introduced a graded management system, an ethics council, and incentives for AI development. Also of note within these regulations, relevant municipal departments will oversee the creation of a list of infraction behaviors, with a disclaimer stating there will be no administrative penalty for “minor infractions.”
Meanwhile, many governments are putting regulatory sandboxes at the heart of their AI strategies, allowing live testing of AI innovations within controlled, regulated environments.
Criminal capabilities
While the debate rages on around the best approach to AI regulation, its illegal uses continue to flourish. Just as the concepts of machine learning and NLP are moving into mainstream conversation, so too are the terms “deepfakes,” “bots,” and “zombies.”
Criminals are now able, with alarming ease and minimal financial cost, to use AI to create fake identity documents and open fraudulent accounts through which to launder illicit proceeds. Major institutions are also seeing an increase in coordinated attacks by cybercriminals using botnets: networks of infected computers controlled remotely. By increasing their own AI capabilities, criminals are making their attacks more sophisticated and harder to detect, including through “smart” malware that can adapt and evolve to override traditional antivirus software.
All of this is adding to the suspicion around AI, but Ffoulkes believes the technology is also the key to finding solutions.
“If you can prepare yourself by having the best technology and the best people, then at least you stand a chance of being able to fight some of these cybersecurity attacks that we’re seeing today,” she said.
In the right hands
Ffoulkes pointed out how AI is revolutionizing areas like complex transaction monitoring, where an AI model can investigate thousands of alerts in minutes, if not seconds.
Further, supervised and unsupervised machine learning techniques have proven invaluable when studying large datasets of historical alert investigations and “learning” what suspicious (and nonsuspicious) alerts look like. These learnings can then be applied to anti-money laundering and sanctions detection to uncover risks that previously went undetected or, in Silent Eight’s case, used to learn and replicate the decision-making process of a human investigator to automatically adjudicate and close alerts, according to Ffoulkes.
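As a toy illustration of the supervised side of this (and not a representation of Silent Eight's proprietary models), the sketch below trains a scikit-learn classifier on hypothetical historical alert dispositions and uses its scores to propose auto-closure or escalation; every feature, label, and threshold here is an invented assumption.

```python
# A toy sketch of supervised alert triage, assuming scikit-learn.
# This is not Silent Eight's model; features, data, and the thresholds
# are invented assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=1)

# Hypothetical historical alerts:
# [name-match score, amount in thousands, prior alert count]
X = rng.random((1_000, 3)) * [1.0, 100.0, 10.0]
# Hypothetical analyst dispositions: 1 = suspicious, 0 = false positive.
# A fake rule gives the example learnable structure.
y = ((X[:, 0] > 0.8) & (X[:, 1] > 50)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

clf = RandomForestClassifier(n_estimators=100, random_state=1)
clf.fit(X_train, y_train)

# The probability of "suspicious" can drive auto-closure of low-risk
# alerts and escalation of high-risk ones to a human investigator.
probs = clf.predict_proba(X_test)[:, 1]
print(f"Alerts proposed for auto-closure: {np.sum(probs < 0.05)}")
print(f"Alerts proposed for escalation:  {np.sum(probs > 0.95)}")
```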
Yet the human element is still there and very much required.
“It’s the practitioners who will come up with the ideas for the AI developers to use. AI needs to learn from them,” Ffoulkes said.
“AI can do amazing things but remember it still learns from human behavior and decisions. I don’t see AI as a replacement for human investigators but instead a tool that the most successful investigators will wield to their benefit,” she said. “The more knowledge that the practitioners have, the more knowledge it feeds back. It’s just a constant, continuous loop of learning.”
The International Compliance Association is a sister company to Compliance Week. Both organizations are under the umbrella of Wilmington plc.