By Aaron Nicodemus | 2023-06-06T12:00:00
Generative artificial intelligence tools like OpenAI’s ChatGPT, Microsoft’s ChatGPT-powered Bing, and Google’s Bard offer significant potential, as well as risks, that require thorough assessment before implementation.
But can—or should—generative AI be used by the compliance department?
Compliance professionals must determine whether any potential uses of generative AI by their employer violate state or federal laws, rules, or regulations. They should insist safeguards be implemented to prevent or detect plagiarism and the improper use of intellectual property, as well as violations of individual privacy.
2024-06-07T22:34:00Z By Adrianne Appel
Compliance has been “sleeping on” artificial intelligence, two panelists said at Compliance Week’s Women in Compliance Summit. The profession should be positioned to lead on AI governance at the business level.
2023-07-21T15:29:00Z By Kyle Brasseur
Technology companies including Google, Meta, and OpenAI agreed to a series of voluntary commitments they’ll make regarding their management of risks when developing artificial intelligence systems.
2023-07-06T15:33:00Z By Neil Hodge
Not all companies can rely on bans or restrictions on employee use of generative artificial intelligence like ChatGPT. Instead of telling people what they can’t do, focus on what they can do.
2024-11-14T20:36:00Z By Adrianne Appel
The U.S. Department of the Treasury’s Financial Crimes Enforcement Network issued an alert to financial institutions about their obligations to report deepfakes, warning artificial intelligence has given bad actors additional tools in their arsenal.
2024-07-31T15:31:00Z By Adrianne Appel
A nationwide rental outlet affiliated with Rent-a-Center and its chief executive have been sued by the Consumer Financial Protection Bureau for allegedly deceiving five million consumers about the terms of credit agreements.
2024-07-24T17:54:00Z By Neil Hodge
A lack of risk visibility is causing companies to reject customers, and potentially lose money, over fears of violating anti-money laundering and sanctions regulations.