Artificial intelligence (AI) may be the business world’s tech trend du jour, but politicians like former President Donald Trump and current Vice President Kamala Harris haven’t devoted much time to discussing their respective views on the technology while campaigning to be the next President of the United States. Still, each candidate has offered a few clues as to how they would govern this next wave of innovation.

During the Republican National Convention in July, for example, Trump brought up AI just once during his 92-minute speech. When he touched on the topic, Trump didn’t discuss AI’s potential to upend labor markets, remake artistry, improve education, or even the philosophical concerns raised by movies like “The Terminator.” Instead, he told the more than 25 million people who’d tuned in that when he thinks of AI, he thinks about its energy consumption.

“AI will need twice the electricity that’s available now in our country. Can you imagine?” he asked in his short aside about the technology.

Harris, meanwhile, appears to have taken a broader view, saying in a speech last November that AI has the potential to “fight the climate crisis, make medical and scientific breakthroughs, explore our universe, and improve everyday life for people around the world.”

Understanding how a Trump or Harris White House would govern AI will be critical to how companies navigate the increasingly frenzied enthusiasm around the technology. The promises and risks associated with AI appear massive and grow by the day, as companies and governments race to offer their own answers to the seemingly life-changing technology. For Trump and Harris, though, AI is a potentially hot-button topic, intersecting with other political footballs from the business world, such as diversity, equity, and inclusion (DEI) initiatives and environmental, social, and governance (ESG) investing.

How the next president will tackle these questions will be a central theme of discussions at Compliance Week’s AI & Compliance Summit, which is being held Oct. 8-9 at Boston University. The event will gather business leaders, academics, and government officials to discuss some of the biggest questions around AI, including business adoption standards, ethical guardrails, and its application in decision making.

The topic of AI and government is broad, touching on questions about everything from national security to the economy to the operations of the government workforce itself. In one panel, “The Future of AI Executive Orders: Scenarios for 2024 and Beyond,” attendees will attempt to map out how nascent government interactions with the technology might evolve.

Some attendees are already skeptical about how quickly the U.S. government will act, having watched it fail to meaningfully rein in the tech industry on issues such as data privacy and the internet’s impact on mental health.

“A lot of regulation and legislation comes out of catastrophes,” said Mark Quinn, director of regulatory affairs at Cetera Financial Group, who will be speaking at the summit. “If there is a catastrophe, I think you could assume that either or both administrations would jump on it and do something.”

What type of catastrophe, though, is hard to gauge. Peter Cohan, an associate professor of management at Babson College who will discuss the government’s potential approach alongside Quinn, said the threshold could lie anywhere between “the world blowing up or where we are now.”

Powering AI demand

While the federal government’s approach to AI may not be clear, AI’s crossover with key government and business issues increasingly is. On Sept. 17, two months after Trump expressed his concerns about AI’s energy needs, Microsoft announced a $100 billion investment from BlackRock and others to support AI power infrastructure, an effort that will include reopening nuclear generation units at Pennsylvania’s Three Mile Island plant. The agreement, a long-term contract with the plant’s owner, Constellation Energy, could run through 2054.

Though nuclear energy is often classified as a clean, low-carbon resource under ESG frameworks, Three Mile Island in particular is a sore subject. The plant’s 1979 partial meltdown remains the most serious commercial nuclear accident in U.S. history.

Though some U.S. states, such as California and Colorado, have begun enacting laws around AI, Congress has yet to pass legislation addressing the technology since its most recent boom. Lawmakers have also said little so far about Microsoft’s Three Mile Island investment.

In fact, one of the federal government’s few official actions on AI came last October, when President Joe Biden issued an executive order calling for tech developers to share information about AI projects that use immense computing power, according to a White House fact sheet. Republican lawmakers largely lambasted the move as executive overreach and reportedly plan to defang its impacts should the party win the White House in November.

“We will repeal Joe Biden’s dangerous executive order that hinders AI Innovation, and imposes radical leftwing ideas on the development of this technology. In its place, Republicans support AI development rooted in free speech and human flourishing,” reads the party platform approved by the Republican National Committee this summer.

The fracas, though small compared with debates over immigration, crime, and civil rights, underscored how executive orders alone will not meet the challenges presented by AI.

“Any policies that would have substance would probably have to result in some sort of legislation rather than some sort of an executive order,” Cohan said.

Still, while Trump has broadly stayed out of the AI debate, Harris has indicated where she would likely lean, depicting the technology’s harms as existential questions for the individuals they affect.

“When a senior is kicked off his healthcare plan because of a faulty AI algorithm, is that not existential for him?” Harris asked during her November 2023 speech. “When a woman is threatened by an abusive partner with explicit, deep-fake photographs, is that not existential for her?”

Still early days for AI

For now, AI appears to still be in the testing phase for many businesses. Those that have rolled the technology out, such as Google with its AI Overviews or Air Canada with its customer service chatbot, have sparked backlash. Google, which made its name as a top destination on the internet by directing people to relevant and accurate information they were seeking, began recommending that people add glue to pizza to keep the cheese from falling off. Air Canada, meanwhile, was found liable by a Canadian tribunal after it refused to honor a discount its AI-powered chatbot promised a customer who’d bought last-minute tickets to attend a funeral.

Quinn said companies need to be given latitude and support to experiment. “We can’t close our eyes to it and be naive and act like it doesn’t exist, we’ll be left in the dust by all of our competitor nations and competitive businesses,” he warned.

On at least some parts of that, Trump and Harris may agree. In December 2020, Trump issued an executive order promoting the use of “trustworthy” AI in the U.S. government.

“The ongoing adoption and acceptance of AI will depend significantly on public trust,” Trump wrote in the order nearly four years ago. “Agencies must therefore design, develop, acquire, and use AI in a manner that fosters public trust and confidence while protecting privacy, civil rights, civil liberties, and American values.”

Two years later, the Biden administration used Trump’s executive order as a building block of its Blueprint for an AI Bill of Rights.