Artificial intelligence tools are being adopted at a rapid clip in many organizations, in some cases without proper guardrails and without taking ethical considerations into account.

The first question many organizations ask when it comes to using AI is, “Can we do it?”

Rather, they should ask, “Should we do it?” Or, more to the point, “How can we use AI in a way that fits the culture and goals of our organization?”

Using AI in business presents opportunities and risks in equal measure. Organizations should craft practical frameworks that foster responsible AI adoption that aligns with organizational values and societal expectations. Those that have implemented thoughtful and ethical policies and procedures on AI use will have something to refer to when questions inevitably arise.

Sahil Agarwal, co-founder and CEO of Enkrypt AI, will discuss ethical considerations for AI use during a panel at Compliance Week’s Ethics & Compliance Summit, held March 19-20 at Boston University.

Enkrypt AI offers tools and resources that help companies navigate AI use by ensuring the responsible and secure use of AI technology, “empowering individuals and enterprises alike to harness its potential for the greater good,” according to its website.

Agarwal’s session will use real-world examples to examine key ethical principles for AI use, including fairness, transparency, accountability, and bias prevention. Attendees will gain practical strategies for integrating ethical AI practices into compliance workflows and engage in scenario-based learning to apply ethical principles to AI implementation.

“Attendees will leave equipped with practical frameworks to foster responsible AI adoption that aligns with organizational values and societal expectations,” the description of the panel states.

Large organizations, particularly in financial services, have embraced AI tools to find competitive advantages: expanding efficiencies in fraud and money laundering monitoring and detection, in credit decisioning, and in customer service through chatbots and virtual assistants, according to the American Bankers Association.

But there are indications that smaller firms are also finding ways to gain the benefits of AI, even if they are not sure how best to do it.

According to a survey released Thursday by security and automation platform Drata, “The State of GRC 2025,” all 300 business executives who responded said they expect employees to increase their use of AI technologies in the next 12 months. The firms surveyed each had fewer than 2,500 employees.

However, the survey found that only 10 percent of those firms have a governance, risk, and compliance (GRC) program fully prepared to manage AI use.

In the same survey, 44 percent of companies said they anticipate a massive or complete overhaul of the GRC function itself as a result of AI, while 73 percent expect to see the shift within the next six months or sooner.

At Compliance Week’s AI & Compliance Summit held at Boston University in October, panelists discussed how AI could be deployed safely and ethically as well as how to adopt AI tools the right way.

Compliance Week and subject matter experts have also discussed how the Department of Justice, in its Evaluation of Corporate Compliance Programs (ECCP), believes compliance should have input in how AI is used, as well as whether the DOJ has perhaps overstepped.

How are compliance teams using AI?

Compliance professionals said they are already using AI in transaction monitoring and to mine enforcement actions for analytics ideas. Others are running AI-powered searches for conflicts of interest, bribery risk, and other corporate policy violations; applying behavioral analytics; deploying policy chatbots; and putting the technology to work in several other ways.

In a recent survey conducted by Compliance Week and Resolver, nearly one in five respondents (19 percent) said they were using AI tools to drive efficiency in regulatory reporting. Other popular uses included policy generation (17 percent), scoping obligations into assessments (16 percent), and controls suggestions and mapping (15 percent). In all, compliance teams reported that AI tools are helping them spend less time reacting and more time planning ahead.
