In March, a number of tech and business executives, Elon Musk and Steve Wozniak included, signed an open letter calling for a pause on advanced AI development as the launch of OpenAI's GPT-4 sent shockwaves through the industry.
The letter came as alarm bells sounded about everything from AI's capacity to disrupt jobs and spread disinformation to more existential fears about the technology itself.
But Alex Karp, Palantir's billionaire CEO, says these fears are secondary to the benefits of AI, particularly when it comes to using the technology to protect the United States through military applications.
In a recent New York Times op-ed, Karp outlined why he believes we need to continue moving forward with AI development in order to properly integrate the technology with the country's "electrical grids, defense and intelligence networks."
"In the absence of understanding, the collective reaction to early encounters with this novel technology has been marked by an uneasy blend of wonder and fear," he wrote. "We must not, however, shy away from building sharp tools for fear they may be turned against us."
"Our adversaries will not pause to indulge in theatrical debates about the merits of developing technologies with critical military and national security applications. They will proceed," he added.
He said the current debate was similar to the "Oppenheimer moment," likening the development of AI to that of nuclear devices.
"We must not grow complacent," he wrote. "We will again have to choose whether to proceed with the development of a technology whose power and potential we do not yet fully apprehend."
While there have been some efforts to assuage the existing fears around AI (seven of the biggest AI companies, including OpenAI, Google, and Amazon, agreed to make "voluntary commitments" to the Biden administration on building safer AI tools), other experts have echoed Karp's sentiments more broadly.
Yann LeCun, Meta's chief AI scientist, referred to the idea that rampant AI development could threaten humanity as "preposterously ridiculous," while Bill Gates and Marc Andreessen have expressed the need to continue pushing forward with development.
"We should not try to temporarily keep people from implementing new developments in AI, as some have proposed," Gates wrote earlier this month. "Cyber-criminals won't stop making new tools. Nor will people who want to use AI to design nuclear weapons and bioterror attacks. The effort to stop them needs to continue at the same pace."
Andreessen, who is an investor in a number of AI startups, has said the single greatest AI-related risk right now is that "China wins global AI dominance," adding that the "United States and the West should lean into AI as hard as we possibly can."
Karp, of course, has particular interest in implementing AI into the military. As he wrote in the Times, his company's platforms are used for "target selection, mission planning and satellite reconnaissance."
"The depth of engagement with and demand for our new Artificial Intelligence Platform is without precedent," Karp said in Palantir's first quarter earnings report released in May.
Dan Ives, managing director of equity research at Wedbush Securities, predicted the company's stock will soar by 54% over the next year, calling Palantir's "AI fortress" "unmatched."