Speakers at Compliance Week AI & Compliance Summit talk future rules around technology


While companies are exploring and building artificial intelligence (AI) technology, lawmakers and regulators are trying to identify what ground rules they need to set. Companies and governments alike view these guardrails as essential to ensuring safe and responsible use of the technology.

Much of the debate about AI lately has focused on virtual assistants, which are often powered by generative AI (GenAI) — technology that can create new content like text, art, and simulations based on user prompts, often entered in a chat box.

Companies are testing its effectiveness in industries ranging from banking and financial services to pharmaceuticals, retail, and manufacturing. Early adopters shared stories of how AI could smooth workflows, reduce manual processes, summarize documents, and find new patterns in data. But they also increasingly recognize the risks.

At Compliance Week’s inaugural AI & Compliance Summit, held at Boston University earlier this month under the Chatham House Rule, compliance experts discussed key areas of GenAI compliance risk, ranging from data protection and privacy to algorithmic bias and fairness.
