In an interview on the Big Technology Podcast, Sam Altman seemed to struggle to answer tough questions about OpenAI’s path to profitability.
At about the 36-minute mark, the interviewer asked the big question about revenues and spending. Sam Altman said OpenAI’s losses are tied to continued increases in training costs while revenue is growing. He said the company would be profitable much earlier if it were not continuing to grow its training spend so aggressively.
Altman said concern about OpenAI’s spending would be reasonable only if the company reached a point where it had large amounts of computing capacity it could not monetize profitably.
The interviewer asked:
“Let’s, let’s talk about numbers since you brought it up. Revenue’s growing, compute spend is growing, but compute spend still outpaces revenue growth. I think the numbers that have been reported are OpenAI is supposed to lose something like 120 billion between now and 2028, 29, where you’re going to become profitable.
So talk a little bit about like, how does that change? Where does the turn happen?”
Sam Altman responded:
“I mean, as revenue grows and as inference becomes a larger and larger part of the fleet, it eventually subsumes the training expense. So that’s the plan. Spend a lot of money training, but make more and more.
If we weren’t continuing to grow our training costs by so much, we would be profitable way, way earlier. But the bet we’re making is to invest very aggressively in training these big models.”
At this point the interviewer pressed Altman harder about the path to profitability, this time citing the $1.4 trillion in spending commitments against roughly $20 billion in revenue, a commitment about 70 times current revenue. This was not a softball question.
The interviewer pushed back:
“I think it would be great just to lay it out for everyone once and for all how those numbers are gonna work.”
Sam Altman’s first attempt at an answer stumbled into word salad:
“It’s very hard to like really, I find that one thing I certainly can’t do it and very few people I’ve ever met can do it.
You know, you can like, you have good intuition for a lot of mathematical things in your head, but exponential growth is usually very hard for people to do a good quick mental framework on.
Like for whatever reason, there were a lot of things that evolution needed us to be able to do well with math in our heads. Modeling exponential growth doesn’t seem to be one of them.”
Altman then regained his footing with a more coherent answer:
“The thing we believe is that we can stay on a very steep growth curve of revenue for quite a while. And everything we see right now continues to indicate that we cannot do it if we don’t have the compute.
Again, we’re so compute constrained, and it hits the revenue line so hard that I think if we get to a point where we have like a lot of compute sitting around that we can’t monetize on a profitable per unit of compute basis, it’d be very reasonable to say, okay, this is like a little, how’s this all going to work?
But we’ve penciled this out a bunch of ways. We will of course also get more efficient on like a flops per dollar basis, as you know, all of the work we’ve been doing to make compute cheaper comes to pass.
But we see this consumer growth, we see this enterprise growth. There’s a whole bunch of new kinds of businesses that, that we haven’t even launched yet, but will. But compute is really the lifeblood that enables all of this.
We have always been in a compute deficit. It has always constrained what we’re able to do.
I unfortunately think that will always be the case, but I wish it were less the case, and I’d like to get it to be less of the case over time, because I think there’s so many great products and services that we can deliver, and it’ll be a great business.”
The interviewer then sought to clarify the answer, asking:
“And then your expectation is through things like this enterprise push, through things like people being willing to pay for ChatGPT through the API, OpenAI will be able to grow revenue enough to pay for it with revenue.”
Sam Altman responded:
“Yeah, that is the plan.”
Altman’s comments define a specific threshold for evaluating whether OpenAI’s spending is a problem. He points to unused or unmonetizable computing power as the point at which concern would be justified, rather than current losses or large capital commitments.
In his explanation, the limiting factor is not willingness to pay, but how much computing capacity OpenAI can bring online and use. The follow-up question makes that explicit, and Altman’s confirmation makes clear that the company is relying on revenue growth from consumer use, enterprise adoption, and additional products to cover its costs over time.
Altman’s path to profitability rests on a simple bet: that OpenAI can keep finding buyers for its computing as fast as it can build it. Eventually, that bet either keeps winning or the chips run out.
Watch the interview starting at about the 36-minute mark: