By Aaron Nicodemus | 2023-06-06T12:00:00
Generative artificial intelligence tools like OpenAI's ChatGPT, Microsoft's ChatGPT-powered Bing, and Google's Bard offer significant potential, along with risks that require thorough assessment before implementation.
But can—or should—generative AI be used by the compliance department?
Compliance professionals must determine whether any potential uses of generative AI by their employer violate state or federal laws, rules, or regulations. They should insist safeguards be implemented to prevent or detect plagiarism and the improper use of intellectual property, as well as violations of individual privacy.
2024-06-07T22:34:00Z By Adrianne Appel
Compliance has been "sleeping on" artificial intelligence, two panelists said at Compliance Week's Women in Compliance Summit. The profession should be positioned to lead on AI governance at the business level.
2023-07-21T15:29:00Z By Kyle Brasseur
Technology companies including Google, Meta, and OpenAI agreed to a series of voluntary commitments they’ll make regarding their management of risks when developing artificial intelligence systems.
2023-07-06T15:33:00Z By Neil Hodge
Not all companies can rely on bans or restrictions on employee use of generative artificial intelligence like ChatGPT. Instead of telling people what they can't do, focus on what they can do.
2024-12-13T16:47:00Z By Aaron Nicodemus
When the DOJ released its revised Evaluation of Corporate Compliance Programs, it turned some heads. Tucked into a section on risk assessments was a strongly worded series of questions that appeared to shoulder compliance teams with the responsibility for ensuring the safe use of AI tools by their firms.
2024-12-12T14:32:00Z By Aaron Nicodemus
The Department of Justice’s Evaluation of Corporate Compliance Programs has made the importance of artificial intelligence governance frameworks clear, but it didn’t say what role compliance should play. Here’s the answer.
2024-11-14T20:36:00Z By Adrianne Appel
The U.S. Department of the Treasury’s Financial Crimes Enforcement Network issued an alert to financial institutions about their obligations to report deepfakes, warning artificial intelligence has given bad actors additional tools in their arsenal.