Sam Altman: Back at OpenAI, but questions on initial firing and future of AI persist

The news of his ouster had sent shockwaves throughout the AI world, raising trust concerns around the burgeoning technology.

Published: Nov 25, 2023 17:33 IST

OpenAI’s Sam Altman speaks during a conference in Laguna Beach, California on October 17. OpenAI announced on November 21 that Altman would return as its CEO, days after his shock dismissal plunged the pioneering artificial intelligence firm into crisis. | Photo Credit: PATRICK T. FALLON/AFP

On November 21, Sam Altman returned to lead OpenAI less than five days after his surprise dismissal, which kicked off a tug of war for his talent, left the company in disarray, and laid bare deep board divisions over the mission of one of the world’s most valuable startups.

OpenAI’s new interim board, which will not include Altman at the outset, will be led by Bret Taylor, a former co-CEO of Salesforce Inc. The other directors are Larry Summers, the former US treasury secretary, and existing member Adam D’Angelo, the co-founder and CEO of Quora Inc.

Altman had been fired on November 17 after clashing with the board over his drive to transform OpenAI from a nonprofit organisation focused on the scientific exploration of artificial intelligence into a business that builds products, attracts customers, and lines up the funding needed to power AI tools. Members of the former board harboured concerns about the potential harms of powerful, unchecked AI.

Greater diversity

Job one for the interim board will be finding new directors who can strike a better balance between OpenAI’s business imperatives and the need to protect the public from tools capable of creating content that misinforms, worsens inequality, or makes it easier for bad actors to inflict violence.

The reconstituted board should reflect greater diversity, said many people, including Ashley Mayer, CEO of Coalition Operators, a venture capital firm. “I’m thrilled for OpenAI employees that Sam is back, but it feels very 2023 that our happy ending is three white men on a board charged with ensuring AI benefits all of humanity,” she wrote on the social media site X. “Hoping there’s more to come soon.”

A person close to the negotiations said that several women were suggested as possible interim directors, but parties could not come to a consensus. Both Laurene Powell Jobs, the billionaire philanthropist and widow of Steve Jobs, and former Yahoo CEO Marissa Mayer were floated, but deemed to be too close to Altman, this person said. Former Secretary of State Condoleezza Rice was also considered, but her name was dismissed as well. Ultimately, the board will include women, this person said.

Communication issues

Investors will also expect changes in the ways the board communicates with stakeholders. Executives at Microsoft Corp., which has said it will invest as much as $13 billion in OpenAI, were outraged after being given only a brief heads-up about the board’s plans to fire Altman, people with knowledge of the matter have said.

Some investors and executives at OpenAI have also complained that the board has not sufficiently explained its rationale for dismissing Altman. Board members said Altman was not “consistently candid in his communications”.

In the days since, board members and staffers have said that the CEO’s removal was unrelated to “malfeasance” or “safety,” leaving an information vacuum. Microsoft CEO Satya Nadella said publicly that he has not been given an explanation.

Emmett Shear, who was named interim CEO by the board on November 19, told people close to OpenAI that he did not plan to stay in the role if the board could not clearly communicate to him in writing its reasoning for Altman’s sudden firing, according to people with knowledge of the matter.

Microsoft’s role in OpenAI

Microsoft, whose AI strategy hinges on the startup’s technology, will likely have representation on the new board, whether as an observer or, possibly, with one or more seats, according to one person with knowledge of the matter. Although Altman agreed not to take a board seat initially in order to get the deal done, he too will probably join the board eventually, another person said.

Altman also agreed to an internal investigation into the conduct that led to his dismissal, another person said. OpenAI’s earlier board members included D’Angelo, OpenAI co-founder and Chief Scientist Ilya Sutskever, Tasha McCauley of GeoSim Systems, and Helen Toner, director at Georgetown University’s Center for Security and Emerging Technology.

Other investors beyond Microsoft were incensed by the board’s move. They included Vinod Khosla of Khosla Ventures. “I have not talked to the board members who participated” in the decision to fire Altman, Khosla said on Wednesday in an interview with Bloomberg Technology. “I think it’s errant behavior on their part.” McCauley and Toner have declined to comment on the firing and its fallout. Altman also declined to comment.

Sutskever—who is renowned in the field of AI, dating back to his research at the University of Toronto—later apologised for his role in the dismissal of Altman and went so far as to sign a letter threatening to leave OpenAI unless the board resigned.

Groundbreaking research to which Sutskever contributed is credited with helping usher in the modern AI age. A lawyer for Sutskever said the executive is “thrilled that Sam is back as CEO” and that he remains employed at the company. “I admire Ilya a lot” for changing his mind, Khosla said. He “absolutely” deserves a second chance, he added.

Challenges ahead

One of the big questions for OpenAI is to what extent Altman can continue pursuing outside ventures. In the months before he was booted from the company, he was travelling the globe to raise billions of dollars from some of the world’s largest investors for a new chip venture, codenamed Tigris, people with knowledge of the matter have said.

The idea was to spin up an AI-focussed chip company that could produce semiconductors to compete against those from Nvidia Corp., which currently dominates the market for artificial intelligence processors, these people said.

Altman has also been looking to raise money for an AI-focussed hardware device that he’s been developing in tandem with former Apple Inc. design chief Jony Ive. Those side projects will be another issue the board will have to consider as he settles back into the CEO role. “Sam has got broad interests and broad investments,” Nadella said.

Future of AI

There is a lot that remains unknown about Altman’s initial ousting. Regardless, the news sent shockwaves throughout the AI world—and, because OpenAI and Altman are such leading players in this space, may raise trust concerns around a burgeoning technology that many people still have questions about.

“The OpenAI episode shows how fragile the AI ecosystem is right now, including addressing AI’s risks,” said Johann Laux, an expert at the Oxford Internet Institute focussing on human oversight of artificial intelligence.

The turmoil also accentuated the differences between Altman and members of the company’s previous board, who have expressed various views on the safety risks posed by AI as the technology advances. Multiple experts added that the drama highlights how governments, not big tech companies, should be calling the shots on AI regulation, particularly for fast-evolving technologies like generative AI.

“The events of the last few days have not only jeopardised OpenAI’s attempt to introduce more ethical corporate governance in the management of their company, but it also shows that corporate governance alone, even when well-intended, can easily end up cannibalised by other corporates’ dynamics and interests,” said Enza Iannopollo, principal analyst at Forrester.

The lesson, Iannopollo said, is that companies alone cannot deliver the level of safety and trust in AI that society needs. “Rules and guardrails, designed with companies and enforced by regulators with rigour, are crucial if we are to benefit from AI,” she added.

Unlike traditional AI, which processes data and completes tasks using predetermined rules, generative AI (including chatbots like ChatGPT) can create something new.

Regulating AI

Tech companies are still running the show when it comes to governing AI and its risks, while governments around the world work to catch up. In the European Union, negotiators are putting the final touches on what is expected to be the world’s first comprehensive AI regulations. But they have reportedly been bogged down over whether and how to include the most contentious and revolutionary AI products, the commercialised large language models that underpin generative AI systems including ChatGPT.

Also Read | ChatGPT to now have customised versions in bid to beat back competition

Chatbots were barely mentioned when Brussels laid out its initial draft legislation in 2021, which focussed on AI with specific uses. But officials have been racing to figure out how to incorporate these systems, also known as foundation models, into the final version.

Meanwhile, in the US, President Joe Biden signed an ambitious executive order in October seeking to balance the needs of cutting-edge technology companies with national security and consumer rights.

The order—which will likely need to be augmented by congressional action—is an initial step that is meant to ensure that AI is trustworthy and helpful, rather than deceptive and destructive. It seeks to steer how AI is developed so that companies can profit without putting public safety in jeopardy.

(with inputs from Bloomberg and AP)
