Control and delete: How regulators can shut down companies’ AI investments


Companies are increasingly putting their faith in artificial intelligence to realize the business benefits the technology seems to promise, from increased productivity to smoother operations and decision-making. But they are also opening themselves up to new and potentially crippling sanctions if they cannot answer questions about how their AI operates, whether it complies with privacy laws, and whether it violates other rules around copyright and data provenance.

“Algorithmic disgorgement,” as the process is known, has the potential to disrupt many businesses, not just those involved in AI development. Fines for data misuse by organizations large and small are already legend, thanks to legislation like the European Union’s General Data Protection Regulation (GDPR), and experts warn that the risk of such a sanction with AI is “substantial” because too few organizations know how the cutting-edge technology they are using works or what data it has been trained on. Without an answer to both, “companies may be pushing their luck,” said Clare Walsh, director of education at the professional body the Institute of Analytics.

So far, this regulatory tool has seldom been used, and only in cases where the degree of harm has been high. But algorithmic disgorgement may become a new regulatory trend for resolving abuses or noncompliance caused by AI adoption, particularly by companies that don’t understand the underlying technology.
