When OpenAI’s board asked Sam Altman to return a day after they fired him, he initially felt defiant, hurt and angry.

“It took me a few minutes to snap out of it and get over the ego and emotions to then be like, ‘Yeah, of course I want to do that,’” he told me by phone on Wednesday. “Obviously, I really loved the company and had poured my life force into this for the last four and a half years full time, but really longer than that with most of my time. And we’re making such great progress on the mission that I care so much about, the mission of safe and beneficial AGI.”

After an attempted boardroom coup that lasted five days, Altman officially returned as CEO of OpenAI on Wednesday. The company’s biggest investor, Microsoft, is also planning to take a non-voting observer seat on the board.

During our interview, Altman repeatedly declined to answer the main question on everyone’s mind: exactly why he was fired to begin with. OpenAI’s new board, led by Bret Taylor, is going to conduct an independent investigation into what went down. “I very much welcome that,” Altman told me.

Below is my full interview with OpenAI CEO Sam Altman and CTO Mira Murati, lightly edited for clarity:

Sam, I would like to address first the elephant in the room, which is that we still don’t know exactly why you were fired to begin with. Why do you think you were fired?

Sam Altman: The board is gonna do an independent review here. I very much welcome that. I don’t have much else to say now but I’m looking forward to learning more.

Why do you think the board said it lost trust in you?

That will be a better question for them.

You said on X just now that “it’s clear that there were real misunderstandings” between yourself and members of the board. What were those misunderstandings?

I don’t feel ready to go talk about that yet. I think it’s very important to let this review process run. I’m happy to talk about anything forward looking. And I imagine there’ll be some time where I’m very happy to talk about what happened here, but not now.

Can you tell me why you can’t talk about it right now?

I just want to like let this process go and not interfere.

You talked about Ilya Sutskever [OpenAI’s chief scientist] in your note [to employees]. Can you let me in a little bit on why he changed his mind and decided to side with everyone else?

Mira Murati: We don’t know. You’d have to ask Ilya that.

Sam, what was, in hindsight, the main driving force here that got you to come back?

Altman: It was really interesting. Saturday morning, some of the board called me and asked if I’d be up for talking about it. And my immediate reaction was sort of one of defiance, it was like, “Man, I’m hurt and angry, and I think this sucks.”

“It took me a few minutes to snap out of it and get over the ego and emotions”

And then pretty immediately I started thinking about like, obviously, I really loved the company and had poured my life force into this for the last four and a half years full time, but really longer than that with most of my time. And we’re making such great progress on the mission I care so much about, the mission of safe and beneficial AGI. But also the people here and all of the partners who have taken such big bets on us, and Mira and the leadership team and all of the people here who do incredible work. It took me a few minutes to snap out of it and get over the ego and emotions to then be like, “Yeah, of course I want to do that.”

So the board asked you to come back?

Yeah.

And you were initially hesitant?

Not for long. There’s a lot of feelings there after that happened to me.

It was clear that the employees were with you. How big of a factor do you think that was?

Definitely we have come through this with a stronger and more unified and focused and committed team. I thought we had great conviction and focus before and now I think we have like way, way, way more. So that’s my silver lining to all of this.

Throughout this whole thing, we did not lose a single employee, a single customer. Not only did they keep the products up even in the face of very difficult to manage growth, they also shipped new features. Research progress continued.

Do you want back on the board?

This is gonna sound like a PR talking point: it’s not my area of focus right now. I have a mountain of very difficult, important, and urgent work. I want to be able to do my job well, but it’s not like [being] on the board or not. That’s not the thing I’m spending my time thinking about right now.

What does “improving our governance structure” mean? Is the nonprofit holding company structure going to change?

“I totally get why people want an answer right now. But I also think it’s totally unreasonable to expect it.”

It’s a better question for the board members, but also not right now. The honest answer is they need time and we will support them in this to really go off and think about it. Clearly our governance structure had a problem. And the best way to fix that problem is gonna take a while. And I totally get why people want an answer right now. But I also think it’s totally unreasonable to expect it.

Why do you think that’s unreasonable? I think people see a lot of vagueness about what happened. And it seems like it was disagreements, not malfeasance, or anything like that.

Oh, just because designing a really good governance structure, especially for such an impactful technology is not a one week question. It’s gonna take a real amount of time for people to think through this, to debate, to get outside perspectives, for pressure testing. That just takes a while.

Is anything about OpenAI’s approach to safety work changing as a result of the events that just unfolded?

Murati: No. This has nothing to do with safety.

The reports about the Q* model breakthrough that you all recently made, what’s going on there?

Altman: No particular comment on that unfortunate leak. But what we have been saying — two weeks ago, what we are saying today, what we’ve been saying a year ago, what we were saying earlier on — is that we expect progress in this technology to continue to be rapid, and also that we expect to continue to work very hard to figure out how to make it safe and beneficial. That’s why we got up every day before. That’s why we will get up every day in the future. I think we have been extraordinarily consistent on that.

Without commenting on any specific thing or project or whatever, we believe that progress is research. You can always hit a wall, but we expect that progress will continue to be significant. And we want to engage with the world about that and figure out how to make this as good as we possibly can.

Last question: I am sure you’re still thinking through all this. I know it’s very fresh. What lesson have you learned from this whole saga?

I think I don’t yet have like a neat pithy soundbite answer there. Clearly a lot, but I’m still stumbling through it all. I mean, definitely there will be a lot to say there, but I don’t think I have a ready to go... all I would have is like a long rambling answer at this point.

Okay, we’ll save it for another time.

After we hang up, Altman calls back moments later.

I learned that the company can truly function without me, and that’s a very nice thing. I’m very happy to be back, don’t get me wrong on that. But I come back without any of the stress of “Oh man, I gotta do this, or the company needs me or whatever.” I selfishly feel good because either I picked great leaders or I mentored them well. It’s very nice to feel like the company will be totally fine without me, and the team is ready and has leveled up.
