Artificial intelligence (AI) is rapidly transforming the business landscape, and this is especially true for anyone working in compliance. But while AI offers immense potential to streamline processes, enhance decision-making, and mitigate risks, it also introduces a new set of challenges that compliance professionals must navigate.

Those challenges include determining which poses the bigger risk for your organization: using AI or not using it. Finding your roadmap to a complete AI policy might be impossible as the technology continues to evolve, but refusing to start the journey isn’t an option either.

For many companies, AI risk remains opaque as the technology goes through its growing pains, but for others, being left behind while a competitor ethically innovates its way to the top is a risk they can’t take. These issues and more will be addressed at Compliance Week’s AI & Compliance Summit, held Oct. 8-9 at Boston University.

AI as a compliance catalyst

Many companies already leverage AI, or its earlier incarnations, to enhance operations.

Eric Brotten, a healthcare compliance and privacy executive, said that “before AI was AI,” it was called “predictive analytics” and was used for health insurance claims and for suggesting proactive treatments and interventions.

With anti-fraud units using AI to continuously monitor and flag suspicious transactions, the long-standing integration of AI-like technologies across sectors has laid the groundwork for AI’s adoption in compliance.

The evolution and easy accessibility of generative AI, in which a machine mimics human cognitive functions such as “learning” and “problem solving” to produce various types of content, is rapidly making AI a daily workplace tool.

Anghrija Chakraborty, compliance business partner at AstraZeneca, has been educating herself about AI on a personal level, finding it useful for “distilling lengthy notes into concise insights” and experimenting with compliance case study generation.

These capabilities could free up valuable time for compliance teams to focus on strategic initiatives and complex issues. She cautioned, however, that the output should be approached with a healthy dose of skepticism and meticulous fact-checking.

Setting aside the important issue of accuracy for a moment, AI’s ability to process vast amounts of data quickly offers potential advantages for compliance functions. Beyond automating routine tasks, AI can be a powerful tool for identifying patterns, anomalies, and potential compliance risks.

For example, AI-powered systems can analyze large datasets to detect unusual transactions, identify potential conflicts of interest, or assess the effectiveness of compliance training programs.
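
To make this concrete, here is a minimal sketch of how a team might flag unusual transactions using scikit-learn’s IsolationForest. The file name, column names, and contamination rate are hypothetical assumptions, not a specific vendor’s implementation.

```python
# Minimal anomaly-detection sketch for transaction monitoring.
# Assumes a CSV with hypothetical columns: amount, hour_of_day, vendor_id.
import pandas as pd
from sklearn.ensemble import IsolationForest

transactions = pd.read_csv("transactions.csv")  # hypothetical file
features = transactions[["amount", "hour_of_day", "vendor_id"]]

# IsolationForest treats points that are easy to isolate as outliers;
# contamination is the assumed share of suspicious transactions.
model = IsolationForest(contamination=0.01, random_state=42)
transactions["flag"] = model.fit_predict(features)  # -1 = anomaly

suspicious = transactions[transactions["flag"] == -1]
print(f"{len(suspicious)} transactions flagged for review")
```

Flagged items still need human review; the model only prioritizes what analysts look at first.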

AI can also improve accessibility and understanding of compliance information.

Diana Kelley, chief information security officer at Protect AI, shared an example of a compliance lead using AI to provide easy-to-understand policy summaries in response to natural language queries. This can enhance employee engagement and understanding of compliance requirements.
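
A minimal sketch of what such a helper might look like appears below, assuming the OpenAI Python SDK (v1+); the model name, file name, and prompt wording are placeholders rather than what Kelley’s example actually used.

```python
# Hypothetical policy Q&A helper: answers an employee's question by
# summarizing the relevant policy excerpt in plain language.
from openai import OpenAI  # assumes the openai>=1.0 Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_policy(policy_text: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Summarize the policy excerpt in plain language. "
                        "Answer only from the provided text."},
            {"role": "user",
             "content": f"Policy:\n{policy_text}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(summarize_policy(open("gifts_policy.txt").read(),
                       "Can I accept a $50 gift card from a vendor?"))
```

Grounding answers in the supplied policy text, rather than the model’s general knowledge, is the key design choice for reducing fabricated answers.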

Another area where AI can have an impact is in risk assessment. By leveraging machine learning algorithms, compliance teams can develop more sophisticated and accurate risk models.

AI can analyze historical data to identify trends and patterns, enabling organizations to proactively address emerging risks. Additionally, AI-powered tools can be used to assess the potential ethical implications of new products, services, or business practices.
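
As an illustration, a basic risk model might look like the following sketch, which trains a logistic regression on historical case outcomes; the dataset and feature names are hypothetical.

```python
# Sketch of a simple compliance risk model trained on past case outcomes.
# Hypothetical columns: prior_incidents, training_overdue, region_risk,
# plus a label `violation` (1 = confirmed compliance issue).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

cases = pd.read_csv("historical_cases.csv")  # hypothetical dataset
X = cases[["prior_incidents", "training_overdue", "region_risk"]]
y = cases["violation"]

# Hold out a test set so the model is judged on cases it hasn't seen.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print("Hold-out accuracy:", model.score(X_test, y_test))
# predict_proba() can then rank new business units by violation risk,
# letting the team prioritize proactive reviews.
```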

Perhaps most significantly, AI is already being put to good use in data breach mitigation. A recent IBM report found that the 67 percent of companies relying on AI and automation to assist with security and prevention spent $2.2 million less when a breach did occur. They also detected and contained incidents 98 days faster than those not relying on AI. With the average breach cost hitting $4.9 million in 2024, the business case for deploying AI is a no-brainer.
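
The arithmetic behind that business case is simple enough to sketch directly from the cited IBM figures:

```python
# Back-of-the-envelope math using the IBM figures cited above.
avg_breach_cost = 4.9e6  # average breach cost in 2024 (USD)
ai_savings = 2.2e6       # average savings for AI/automation users

print(f"Estimated cost with AI in place: ${avg_breach_cost - ai_savings:,.0f}")
print(f"Savings as a share of the average: {ai_savings / avg_breach_cost:.0%}")
# -> roughly $2.7 million, or about 45 percent of the average breach cost
```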

AI as a risk creator

While AI offers promising opportunities, it also presents significant challenges.

Cricket Snyder, chief compliance officer at the Jefferson County Commission in Alabama, noted, “The pace at which AI is advancing brings up some real concerns about ethical usage, data privacy, and security.”

Versions of ChatGPT and Google Gemini, for example, are free, but at a privacy cost. While conversations with a chatbot may feel like a one-on-one affair, interactions are not private.

Large language model (LLM) providers increasingly require users’ explicit permission before using submitted data for training purposes.

Avoid sharing sensitive or private information with any of the publicly available chatbots.

As Kelley pointed out, “If companies are using public LLMs or compromised open-source models for their AI solutions, there is a real risk of data leakage.”

Compliance professionals can lead the way in getting their organizations ahead of AI risk, which should feel familiar from our experiences with social media (e.g., protecting intellectual property, confidential information, and data privacy). Do you know what your employees are using and how? Training and communication plans, acceptable use policies, and case studies can all be implemented by compliance to increase awareness and reduce risk.

The World Economic Forum has helpful resources for organizations, detailing eight principles for the ethical use of AI, as enumerated by Barbara Cosgrove, vice president and chief privacy officer at Workday: 

  1. Define what “AI ethics” means in your organization.
  2. Build ethical AI into the product development and release framework.
  3. Create cross-functional groups of experts.
  4. Bring customer collaboration into the design, development, and deployment of responsible AI.
  5. Take a lifecycle approach to bias in machine learning.
  6. Be transparent.
  7. Empower your employees to design responsible products.
  8. Share what you know and learn from others in the industry.

Chakraborty also raised another important concern, stating, “AI adoption in compliance raises concerns around bias.”

If the data used to train an AI model is biased, the model’s outputs will likely also be biased. Ensuring AI systems are fair and accurate is crucial to maintaining trust and avoiding discriminatory outcomes.

Finally, the complexity of AI algorithms can make it difficult to understand how decisions are made, raising concerns about transparency and accountability. It is essential to establish clear guidelines for who is responsible for the decisions made by AI systems and to provide transparency into how these systems operate.

Snyder stressed the importance of a proactive approach to managing AI-related risks. She recommends developing a framework to assess potential issues such as data privacy, algorithm bias, and system integrity.

A Microsoft case study on operationalized technology ethics laid out four lessons on designing responsible and ethical technology. By identifying and implementing an accountability infrastructure for these risks upfront, organizations can mitigate potential harm and build trust in their AI systems.

Key actions for compliance professionals

To effectively navigate the AI landscape, compliance professionals should prioritize the following actions:

  1. Develop AI literacy: You need to use AI at a basic level to evaluate its potential and risks. Sign up for a free Claude or Gemini account and start playing around with it. Have some fun experimenting.
  2. Understand your organization’s AI landscape: Gain visibility into current and potential AI applications within your organization.
  3. Build a robust compliance framework: Develop clear policies, procedures, and controls for AI. For starters, a simple AI acceptable use policy (similar to a social media use policy) could be implemented.
  4. Foster collaboration: Effective AI governance requires collaboration among compliance, IT, legal, and business functions. By working together, organizations can identify potential issues early on and develop solutions that balance innovation with risk mitigation. Consider a cross-functional working group that meets periodically.
  5. Prioritize continuous learning: The AI landscape is rapidly evolving. Stay informed about emerging technologies, best practices, and regulatory developments to maintain your expertise. Follow a few trusted AI experts on LinkedIn, attend occasional webinars from your industry association, and read up on developments from validated news sources.

As with all technological innovation, AI represents a huge opportunity for compliance functions to enhance efficiency, effectiveness, and risk management. So long as we approach AI with a balanced perspective, recognizing both its potential benefits and associated challenges, we can play a critical role in shepherding our organizations’ ethical approach to AI.