How to adopt AI tools the right way at your company, from people who’ve tried


Companies are adopting artificial intelligence (AI) tools at a breakneck pace, but it’s increasingly clear that they must set guardrails early. AI leaders say that approaching the technology with safety and ethics in mind will help companies capture its benefits while avoiding the significant risks it poses.

Executives who have launched, or are launching, AI efforts at their companies shared stories about how they’re exploring the potential of AI-powered virtual assistants while also taking stock of the risks posed by the technology, particularly generative AI (GenAI).

GenAI can create new content, such as text, art, and simulations, based on user prompts, often entered through a chatbot. Companies in industries as varied as banking and financial services, pharmaceuticals, retail, and manufacturing are testing potential use cases. Early adopters shared stories of how AI could smooth workflows, reduce manual processes, summarize documents, and find new patterns in data. But they also increasingly recognize the risks.

At Compliance Week’s inaugural AI & Compliance Summit, held at Boston University earlier this month, experts discussed key GenAI compliance risks, ranging from data protection and privacy to algorithmic bias and fairness. The event was held under the Chatham House Rule to encourage collaboration and sharing. Attendees also heard a presentation on creating an AI acceptable use policy.
