In recent years, there has been an explosion of artificial intelligence (AI)-powered products and services, some of which are amazing.
Generative AI has helped us generate content, write and review code, analyze data, create professional headshots from a small sample of existing photos, and much more. Everyone seems to be in love with what generative AI can do.
There is no denying that generative AI has made some things easier, but that convenience comes at a hefty price: AI use is still largely unregulated. As a result, we are facing novel legal challenges and harms, one of which is AI-powered fraud.
We are not the only ones benefiting from the convenience of generative AI; scammers are, too. Bad phishing emails used to be easy to identify by their poor spelling and grammar. Now, scammers can use ChatGPT to generate eloquently written content, making them appear far more credible.
My research into sextortion scams found that many scammers offered an excuse for their poor grammar and spelling, aware that sloppy writing can serve as a warning sign that a message is a scam. Recent sextortion emails, however, appear more sophisticated, with no spelling or grammar mistakes, making them feel far more believable and sinister.
This is only the tip of the iceberg. Generative AI has also made it extremely easy to impersonate a real person, enabling fraudsters to create clever scams that are nearly impossible to distinguish from genuine situations.
There has been a surge of scams in which fraudsters clone a child's voice to trick parents into believing their child is in distress. Some parents have been told their child has been kidnapped and will be harmed if they do not pay a ransom.
Such scams are incredibly harmful, evoking intense fear that encourages quick compliance. Fear impairs our rational thinking and triggers the fight-or-flight response. Those with a low tolerance for fear will likely want to comply immediately, hoping the issue will be resolved. This primal response is intensified further because hearing your child in distress is gut-wrenching for any parent.
Telling deep fakes from reality can be difficult. While using deep fakes for fraud is not a novel concept, it has become far more mainstream now that generative AI is easy and free to access. Romance scammers now routinely rely on deep fakes to establish trust and to support elaborate scenarios designed to persuade victims to part with very large sums of money.
Fraud is an ever-growing problem, but with deep fakes, fraudsters are likely to profit even further because they are tampering with our sense of reality. Reality is defined by what we see and hear; deep fakes blur the line between what is real and what is not, playing directly into the hands of bad actors.
Deep fakes have been connected to fake news, harmful synthetic media created for revenge or extortion, and bank account takeovers. They are also becoming more advanced and increasingly difficult to distinguish from genuine content without detection tools. As such, they have the potential to irreparably damage people's lives and reputations.
In their book “Tools and Weapons: The Promise and the Peril of the Digital Age,” Brad Smith and Carol Ann Browne wrote, “The time has come to recognize a basic but vital tenet: When your technology changes the world, you bear a responsibility to help address the world that you have helped create.”
However, governments have been slow to address AI harms, possibly because the effects of AI are not immediately apparent or easily recognizable. While big companies talk about responsible AI use and acknowledge the dangers, many do not follow best practices.
Laws lag behind technology by several years. Even with new legislation on the way, it is unclear what recourse victims will have, since companies are not always transparent about their AI algorithms.
Generative AI needs regulation because of the harm it can cause to unsuspecting individuals. This is especially true of deep fakes, which can damage individuals and society alike through the erosion of trust and privacy and the distortion of reality.
As AI technology moves forward, it is imperative that education on existing and potential harms follows swiftly, along with effective legislation that safeguards the integrity of information and punishes the misuse of biometric data.
Dr. Martina Dove is a researcher with a passion for fraud prevention. Her research concentrates on the individual characteristics that make people susceptible to fraud and on the scam and persuasion techniques used by fraudsters.
The International Compliance Association is a sister company to Compliance Week. Both organizations are under the umbrella of Wilmington plc.