Webcast details: July 25, 2024 – 2 p.m. ET 
CPE Credit(s): 1

Register for this webcast

As artificial intelligence (AI) continues to advance rapidly and organizations expand their use of it to optimize efficiency and productivity, implementing internal AI policies to ensure regulatory compliance and minimize exposure remains a hot topic.

Unfortunately, understanding organizational usage, knowing where to start with policy implementation, and coordinating with other departments are all exceedingly difficult. AI regulations also continue to expand and evolve, especially for compliance and audit professionals, meaning strategies can change before they are ever put into action.

Making things even more complicated, the use of AI by third-party vendors is a growing black hole for most organizations. As your third-party relationships grow, so does your exposure to bad actors. If ignored, the unknowns surrounding third-party AI usage can become as problematic as internal usage, if not more so.

Join LogicGate’s Senior Director of Solutions Engineering & Enablement Annmarie Rombalski, GRC Content & Strategy Manager Elli Sullivan, and Senior Manager of Enterprise Security Anthony Matar as they explore emerging AI regulations and frameworks, such as the NIST AI RMF, and how compliance teams can implement effective processes and controls to proactively identify, assess, and mitigate the largest AI risks, ensuring compliance with internal policies and external regulations.

They will cover how to:

  • Gain clarity around AI governance policies and procedures
  • Implement controls for AI risk mitigation
  • Thoughtfully integrate AI governance into cybersecurity
  • Assess AI risk from third-party vendors

Speakers:
Annmarie Rombalski - Senior Director, Solutions Engineering & Enablement
Elli Sullivan - GRC Content & Strategy Manager
Anthony Matar - Senior Manager, Enterprise Security