HN.zip

OpenAI Is Just Another Boring, Desperate AI Startup

176 points by speckx - 144 comments
vessenes [3 hidden]5 mins ago
Ed... I wrote him a long note about how wrong his analysis of oAI was earlier this year. He wrote back and said "LOL, too long." I was like "Sir, have you read your posts? Here's a short version of why you're wrong." (In brief: if you depreciate models over even, say, 12 months, they are already profitable. Given they still offer 3.5, three years is probably a fairer depreciation schedule. On those terms, they're super profitable.)

No answer.

llbbdd [3 hidden]5 mins ago
That's disappointing to hear. I've generally liked Ed's writing, but all his posts on AI / OAI specifically feel like they come from a place of seething animosity more than an interest in being critical or objective. At least one of his posts repeated the claim that essentially all AI breakthroughs in the last few years are completely useless, which is just trainwreck hyperbole no matter where you fall on the spectrum regarding its utility or potential. I regularly use it for things now that feel like genuine magic, in wonder to the point of annoying my non-technical spouse, for whom it's all just stuff computers can do. I don't know if OpenAI is going to be a gazillion-dollar business in ten years, but they've certainly locked in enough customers - who are getting value out of it - to sustain themselves for a while.
bayarearefugee [3 hidden]5 mins ago
If you spend more money training the model and offering it as a service (with all the costs that that entails) than you earn back directly from that model's usage, it can only be profitable if you use voodoo economics to fudge it.

Luckily we live in a time period where voodoo economics is the norm, though eventually it will all come crashing down.

vessenes [3 hidden]5 mins ago
You're right, but that's not what's happening. Every major model trained at Anthropic and oAI has been profitable. Inference margins are on the order of 80%.
true_religion [3 hidden]5 mins ago
That’s true, but OpenAI and its proponents say each model is individually profitable so if R&D ever stops then the company as a whole will be profitable.
nickff [3 hidden]5 mins ago
The problem with this argument is that if R&D ever stops, OpenAI will not be differentiated (because everyone else will be able to catch up), so their pricing power will disappear, and they won't be able to charge much more than the inference costs.
singron [3 hidden]5 mins ago
The depreciation schedule doesn't affect long term profitability. It just shifts the profits/loss in time. It's a tool to make it appear like you paid for something while it's generating revenue. Any company would look really profitable for a while if it chose long enough depreciation schedules (e.g. 1000 years), but that's just deferring losses until later.
vessenes [3 hidden]5 mins ago
No, it would in fact be appropriate to match the cost of model training (incurred over a few months) with the lifetime of its revenue. That's not some weird shifting; it helps you understand where the business is at. In this case, on a per-model basis: very profitable.
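The two positions in this exchange can be made concrete with a toy model. All figures below are invented for illustration (a hypothetical $5B training run, $500M/month in inference revenue, an assumed 80% gross margin); only the shape of the result matters:

```python
# Toy per-model P&L under different straight-line depreciation schedules.
# All figures are hypothetical, in $M.

def monthly_pnl(train_cost, monthly_rev, gross_margin, dep_months):
    """Accounting profit per month while the model is in service."""
    depreciation = train_cost / dep_months      # straight-line depreciation
    gross_profit = monthly_rev * gross_margin   # inference revenue net of serving costs
    return gross_profit - depreciation

# Same cash flows, three different schedules:
for months in (3, 12, 36):
    print(f"{months:>2}-month schedule: {monthly_pnl(5000, 500, 0.8, months):+.1f} per month")
```

Total cash burned is identical in every row; the schedule only changes when the loss shows up on the income statement (singron's point), while matching the training cost to the revenue lifetime is how you judge whether a given model is a good business on its own (vessenes's point).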
mrandish [3 hidden]5 mins ago
> I wrote him a long note about how wrong his analysis on oAI was earlier this year.

Why don't you consider posting it on HN, either as a response in this thread or as its own post? There's clearly interest in teasing out how much of OAI's unprecedented valuation is hype and/or justified.

EvgeniyZh [3 hidden]5 mins ago
> Given they still offer 3.5, three years is probably a more fair depreciation schedule.

But usage should drop considerably as soon as the next model is released. Many startups down the line exist in the hope of a better model. Many others can switch to a better/cheaper model quite easily. I'd be very surprised if usage of 3.5 is anywhere near what it was before the release of the next generation, even given all the growth. New users just use the new models.

vessenes [3 hidden]5 mins ago
Probably true! If revs shift to a new cheaper model that’s not bad though.
atonse [3 hidden]5 mins ago
I agree with you. Even Anthropic's CEO said EXACTLY this. He said, if you actually look at the lifecycle of each model as its own business, then they are all very profitable. It's just that while we're making money from Model A, we've started spending 10x on Model B.
jfengel [3 hidden]5 mins ago
Exponential cost growth with, at best, linear improvement is not a promising business projection.

Perhaps at some point we'll say "this model is profitable and we're just gonna stick with that".

pmdr [3 hidden]5 mins ago
Well, have they actually come up with data supporting this?
vessenes [3 hidden]5 mins ago
Yes, and it's furnished to some of the world's best investors, who are happy to pay valuations between 150 and 500b to access it.
treadump [3 hidden]5 mins ago
OpenAI expects multi-year losses before turning consistently profitable, so saying they are already profitable based solely on an aggressive depreciation assumption overstates the case
surgical_fire [3 hidden]5 mins ago
The problem with this "depreciation" rationale is that it presumes all the cost is in training, ignoring that actually serving the models is also very expensive. I certainly don't believe they would be profitable, and vague gestures at some hypothetical depreciation sound like accounting shenanigans.

Also, the whole LLM industry is mostly trying to generate hype, at a possible future where it is vastly more capable than it currently is. It's unclear if they would still be generating as much revenue without this promise.

BoorishBears [3 hidden]5 mins ago
Your brief doesn't make sense, maybe you need to expand?

They're only offering 3.5 for legacy reasons: pre-DeepSeek, 3.5 did legitimately have some things that open source hadn't caught up on (like world knowledge, even as an old model), but that's done.

Now the wins come from relatively cheap post-training, and a random Chinese food delivery company can spit out a 500B-parameter LLM that beats what OpenAI released a year ago, for free, with an MIT license.

Also, as you release models you're enabling both distillation of your own models and more efficient creation of new models (as the capabilities of the LLMs themselves are increasingly useful for building, data labeling, etc.)

I think the title is inflammatory, but the reality is if AGI is really around the corner, none of OpenAI's actions are consistent with that.

Utilizing compute that should be catapulting you towards the imminent AGI to run AI TikTok and extract $20 from people doesn't add up.

They're on a treadmill with more competent competitors than anyone probably expected grabbing at their ankles, and I don't think any model that relies on them pausing to cash in on their progress actually works out.

giorgioz [3 hidden]5 mins ago
Why 3.5 or 3 years for depreciation? Models have been retrained much faster than that. I would guess more in the 3-month range.
t-writescode [3 hidden]5 mins ago
Because businesses and people rely on consistentish responses, which you can get on models you’ve already validated your prompts on.

Dropping old models means breaking paying customers, which is bad for business.

boringg [3 hidden]5 mins ago
I mean they are clearly trying to get some attention here. I wouldn't bite.

OpenAI is many things but I don't think I would call it boring or desperate. The title seems more desperate to me.

vessenes [3 hidden]5 mins ago
The lad doth continue to protest as valuations reach $1T. I wonder if he passed on some early stock and just can't get over it
triceratops [3 hidden]5 mins ago
> He wrote back and said "LOL, too long."

Some nerve

gorjusborg [3 hidden]5 mins ago
Seriously. It took more time to respond with disrespect than to just ignore it.
triceratops [3 hidden]5 mins ago
I meant more the nerve to say it was "too long". Ed Zitron may be right but he's got no right to accuse anyone else of writing too many words.
sharkjacobs [3 hidden]5 mins ago
It's a rude response from someone whose public persona is famously rude and abrasive. It's also worth considering the difference between publishing 10000 words to an audience of subscribers, and sending 10000 words unsolicited to a stranger.
atonse [3 hidden]5 mins ago
Especially rude given that, if he felt it was too long, he could've had an AI summarize it.

But this shows a certain intellectual laziness/dishonesty and immaturity in the response.

Someone's taken the time to write a response to your article, you can choose to learn from it (assuming it's not an angry rant), or you could just ignore it.

In fact, that completely dismisses this stupid article for me.

gip [3 hidden]5 mins ago
Fully disagree. OpenAI has 800 million active users and has effectively democratized cutting-edge AI for an amazing number of people everywhere. It took much longer for the Internet or the mobile Internet to have such an impact.

So "boring"? Definitely not.

justapassenger [3 hidden]5 mins ago
They moved the industry, that's for sure.

But at this point - there's nothing really THAT special about them compared to their competition.

rco8786 [3 hidden]5 mins ago
You can say the same thing about any number of hugely popular and profitable companies in the world.
criley2 [3 hidden]5 mins ago
You could say the same about Apple, word for word.
IgorPartola [3 hidden]5 mins ago
Apple has physical stores that will provide you timely, top-notch customer service. While not perfect, their mobile App Store is the best available in terms of curation and quality. Their hardware lineup is not so diverse, so it's stable for long-term use. And they have mindshare in a way that is hard to move off of.

Let's say Google or Anthropic releases a new model that is significantly cheaper and/or smarter than an OpenAI one; nobody would stick to OpenAI. There is nearly zero cost to switching, and it is a commodity product.

rco8786 [3 hidden]5 mins ago
Let's say Google releases a new phone that is significantly cheaper and/or smarter than an Apple one; nobody would stick to Apple. There is nearly zero cost to switching, and it is a commodity product.

The AI market, much like the phone market, is not a winner take all. There's plenty of room for multiple $100B/$T companies to "win" together.

trenchpilgrim [3 hidden]5 mins ago
> Let's say Google release a new phone that is significantly cheaper and/or smarter than an Apple one. nobody would stick to apple.

I don't think this is true over the short to mid term. Apple is a status symbol to the point that Android users are bullied over it in schools and on dating apps. It would take years to reverse the perception.

vessenes [3 hidden]5 mins ago
And it's up to a $1bn+ monthly revenue run rate, with no ads turned on. It's the first major consumer tech brand to launch since Facebook. It's an incredible business.
myroon5 [3 hidden]5 mins ago
> first major consumer tech brand to launch since Facebook

from my recollection, post-FB $75B+ market cap consumer tech companies (excluding financial ones like Robinhood and Coinbase) include:

Uber, Airbnb, Doordash, Spotify (all also have ~$1bn+ monthly revenue run rate)

scarface_74 [3 hidden]5 mins ago
Spotify goes back and forth from barely profitable to losing money every quarter. They have to give 70% of their revenue to the record labels and that doesn’t count operating expenses.

As Jobs said about Dropbox, music streaming is a feature not a product

myroon5 [3 hidden]5 mins ago
I listed multiple candidates so disputing one wouldn't dispute my main point ;)

Hyperbole to say no major consumer tech brands have launched for decades

aprilthird2021 [3 hidden]5 mins ago
> It's the first major consumer tech brand to launch since Facebook.

Wrong

throw219080123 [3 hidden]5 mins ago
Zero moat.
jamesjyu [3 hidden]5 mins ago
Huge brand moat. Consumers around the world equate AI with ChatGPT. That kind of recognition is an extremely difficult thing to pull off, and also hard to unseat as long as they play their cards right.
minimaxir [3 hidden]5 mins ago
"Brand moat" is not an actual economic concept. Moats indicate how easy/hard it is to switch to a competitor. If OpenAI does something user-adversarial, it takes two seconds to switch to Anthropic/Gemini (the exception being Enterprise contracts/lock-in, which is exactly why AI companies prioritize that). The entire reason that there are race-to-the-bottom price wars among LLM companies is that it's trivial for most people to switch to whatever's cheapest.

Brand loyalty and users not having sufficient incentive by default to switch to a competitor is something else. OpenAI has lost a lot of money to ensure no such incentive forms.

corentin88 [3 hidden]5 mins ago
McDonald’s has brand moat. So does Coca-Cola. And many more products. The switching cost is null, but the brand does it all.
minimaxir [3 hidden]5 mins ago
Again, that's brand loyalty, not a brand moat.
vessenes [3 hidden]5 mins ago
I don't completely agree. Brand value is huge. Product culture matters.

But say you're correct, and follow the reasoning from there: posit "All frontier model companies are in a red queen's race."

If it's a true red queen's race, then some firms (those with the worst capital structure / costs) will drop out. The remaining firms will trend toward 10%-ish net income - just over cost of capital, basically.

Do you think inference demand and spend will stay stable, or grow? Raw profits could increase from here: if inference demand grows 8x, then oAI, as margins go down from 80% to 10%, would keep making $10bn or so a year in FCF at current spend; they'd decide whether to put that into R&D, just enjoy it, or acquire smaller competitors.
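The arithmetic in that scenario can be checked directly. The revenue figure below is an assumption chosen so that current gross profit lands on $10bn; the point is only that an 8x demand increase exactly offsets a margin fall from 80% to 10%:

```python
# Margin-compression sketch (hypothetical figures, in $bn).

def gross_profit(revenue, margin):
    return revenue * margin

today = gross_profit(12.5, 0.80)             # assumed $12.5bn revenue at 80% margin
commoditized = gross_profit(12.5 * 8, 0.10)  # 8x demand at a commoditized 10% margin

# Both scenarios yield roughly the same absolute gross profit (~$10bn).
print(round(today, 6), round(commoditized, 6))
```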

Things you'd have to believe for it to be a true red queen's race:

* There is no liftoff - AGI and ASI will not happen; instead we'll just incrementally get logarithmically better.

* There is no efficiency edge possible for R&D teams to create/discover that would make for a training / inference breakaway in terms of economics

* All product delivery will become truly commoditized, and customers will not care what brand AI they are delivered

* The world's inference demand will not be a case of Jevons paradox as competition and innovation drive inference costs down, and therefore we are close to peak inference demand.

Anyway, based on my answers to the above questions, oAI seems like a nice bet, and I'd make it if I could. Even the most "inference doomerish" scenario (capital markets dry up, inference demand stabilizes, R&D progress stops) still leaves oAI in a very, very good position in the US, in my opinion.

dabockster [3 hidden]5 mins ago
The moat, imo, is mostly the tooling on top of the model. ChatGPT's thinking and deep research modes are still superior to the competition. But as the models themselves get more and more efficient to run, you won't necessarily need to rent them or rent a data center to run them. Alibaba's Qwen mixture-of-experts models are living proof that you can have GPT levels of raw inference on a gaming computer right now. How are these AI firms going to adapt once someone is able to run about 90% of raw OpenAI capability on a quad-core laptop at 250-300 watts max power consumption?
Handy-Man [3 hidden]5 mins ago
Having sticky 800M WAU is a moat.
trcf22 [3 hidden]5 mins ago
I wouldn't necessarily say so. I guess that's why they are trying to "pulse" people and "learn" from you instead of just providing decent, unbiased answers.

In Europe, most companies and governments are pushing for either Mistral or OS models.

Most devs, who, if I understand it correctly, are pretty much the only customers willing to pay $100+ a month, will switch in a matter of minutes if a better model kicks in.

And they lose money on pretty much all usage.

To me, a company like Anthropic, which mostly focuses on a target audience + does research on bias, equity and such (very leading research, but still), has a much better moat.

didip [3 hidden]5 mins ago
I am with you, and they still have so many dials to tweak. Ads is one of the big dials.

Training costs can be brought down. New algorithms can still be invented. So much headroom.

And this is not just for OpenAI. I think Anthropic and Gemini also have similar room to grow.

ActionHank [3 hidden]5 mins ago
Using inventions from other people or those who have now left the company.

They have no moat, their competitors are building equivalent or better products.

The point of the article is that they are a bad business because it doesn't pan out long term if they follow the same path.

llmslave [3 hidden]5 mins ago
It's going to be one of the most consequential companies in human history.
zapataband2 [3 hidden]5 mins ago
You mean it's thanks to the incredible invention known as the Internet that they were able to "democratize cutting-edge AI to an amazing number of people"

OpenAI didn't build the delivery system they built a chat app.

epolanski [3 hidden]5 mins ago
Idk, it's a company with 4.5B in revenues in H1 2025.

It's not insane numbers, but it's not bad either. YouTube had those revenues in... 2018, 13 years after launching.

There's definitely a huge upside potential in openai. Of course they are burning money at crazy rates, but it's not that strange to see why investors are pouring money into it.

raincole [3 hidden]5 mins ago
It's an insane number considering how little they monetize it. Free users aren't even seeing ads right now, and they already have $4.5B in revenue. I think $100B by 2029 is a very conservative number.
silisili [3 hidden]5 mins ago
I'm in awe they are still allowing free users at all. And I'm one of them. The free tier is enough for me to use it as a helper at work, and I'd probably pay for it tomorrow if they cut off the free tier.
sdesol [3 hidden]5 mins ago
> I'm in awe they are still allowing free users at all.

I am not.

> The free tier is enough for me to use it as a helper at work, and I'd probably pay for it tomorrow if they cut off the free tier.

You are sort of proving the point that this isn't crazy. They want to be the dealer of choice, and they can afford to give you the hit now for free.

dfsegoat [3 hidden]5 mins ago
...not monetized yet: I can't find the post, but a previous HN post linked an article showing that OpenAI had hired someone from Meta's ads leadership, so I took that to mean it's a matter of time.

edit: I believe it was Fidji Simo et al.

https://www.pymnts.com/artificial-intelligence-2/2025/openai...

blibble [3 hidden]5 mins ago
> Idk, it's a company with 4.5B in revenues in H1 2025.

giving away dollar bills for a nickel each is not particularly impressive

kevmo314 [3 hidden]5 mins ago
I would be pretty impressed by anyone who managed to do that nominally. Moving a dozen billion dollars alone seems not trivial to do.
dreamcompiler [3 hidden]5 mins ago
Blowing a giant hole in Hoover Dam while somebody pees in Lake Mead would also be impressive. It just won't stay impressive for very long.

Even if the guy peeing is a world champion urinator named Sam.

mossTechnician [3 hidden]5 mins ago
The insane numbers are the ones you find when you look at their promises, like reaching $125 billion in revenue by 2029 (which they predict will be the first year they are profitable) https://www.reuters.com/technology/artificial-intelligence/o...
pizlonator [3 hidden]5 mins ago
> it's a company with 4.5B in revenues in H1 2025

That's a lot of money to be getting from a subscription business and no ads for the free tier

Not hard to see upside here

scarface_74 [3 hidden]5 mins ago
It’s when they are losing four times as much. Are their marginal costs per subscriber even positive?
dabockster [3 hidden]5 mins ago
Yeah, how much profit will they make if they're able to go for-profit? Revenue doesn't tell me anything.
thinkingtoilet [3 hidden]5 mins ago
It's not hard to make 4.5B when you lose 13.5B. If you give me 18B, I would bet I could lose 13.5B no problem.
simonw [3 hidden]5 mins ago
It is hard though. Getting people to hand $4.5B to a company is difficult no matter how much money you are losing in the process.

I mean sure, you can get there instantly if you say "click here to buy $100 for $50", but that's not what's happening here - at least not that blatantly.

wang_li [3 hidden]5 mins ago
Didn't they post a loss of $5 billion last year, and aren't they on track for a loss of $8-9 billion this year?
MaysonL [3 hidden]5 mins ago
No: they're on track to lose $30B or so (they lost $13.5B in H1 2025)
saltyoldman [3 hidden]5 mins ago
Even if that's the case, they have eaten multiple times that amount of other companies' lunch. Companies that currently run ads, whereas ChatGPT does not (but will).
davidcbc [3 hidden]5 mins ago
Have they?

GOOG is at record highs, FB is at record highs, MSFT is at record highs

jklinger410 [3 hidden]5 mins ago
I feel like this guy is getting a bit too big for his britches.
dcre [3 hidden]5 mins ago
This is not a direct response to this piece, but I wrote a short post about the kinds of egregious errors Zitron is comfortable making, in this case his insistence that the "cost of inference" is going up.

https://crespo.business/posts/cost-of-inference/

rkho [3 hidden]5 mins ago
> Edward Benjamin Zitron (born 1986 or 1987) is an English technology writer, podcaster, and public relations specialist. He is a critic of the technology industry, particularly of artificial intelligence companies and the 2020s AI boom.
alkyon [3 hidden]5 mins ago
Their only moat is that they started the 'AI revolution'. More shock waves like the DeepSeek release are still to come. Not to mention that the LLM->AGI transition in the near future is a moot point. They're riding the wave, but for how much longer?
jemiluv8 [3 hidden]5 mins ago
Most things written about this subject are already polarizing. I'd believe this if there were more internal company data, rather than just some outsider using the same secondary data that OpenAI seemingly manipulates, drawing conclusions with so many logical holes in them that they won't hold half a litre of water for five minutes.

I'm filing this under click-bait.

jmyeet [3 hidden]5 mins ago
I, personally, think that OpenAI is way overhyped and will never deliver on that promise. But... it might not matter.

There is going to be an awful lot of disruption to the economy caused by displacing workers with AI. That's going to be a massive political problem. If these people get their way, in the future AI will do all the work but there'll be no one to buy their products because nobody is employed and has money.

But I just don't see one company dominating that space. As soon as you have an AI, you can duplicate it. We've seen with efforts like DeepSeek that replicating it once it's done is going to require significantly less effort. So that means you just don't have the moat you think you do.

Imagine the training costs get to $100M and require thousands of machines. Well, within a few years it's going to be $1M or less.

So the question is: can OpenAI (or any other company) keep advancing to outpace Moore's Law? I'm not convinced.

But here's why it might not matter: Tesla. Tesla should not be a trillion dollar company. No matter how you value it on fundamentals, it should be a fraction of that. Value it as a car maker, an energy company or whatever and it gets nowhere near $1T. Yet, it has defied gravity for years.

Why? IMHO because it's become too large to fail and, in part, it's now an investment in the wealth transfer that is going on and will continue from the government to the already wealthy. The government will make sure Tesla won't fail as long as it's friendly to the administration.

As much as AI is hyped, it's still incredibly stupid and limited. We may get to the point where it's "smart" enough to displace a ton of people but it's so expensive to run it's cheaper to employ humans.

bluelightning2k [3 hidden]5 mins ago
I usually like his analysis but this one smacks of extreme & unnecessary bias.

Describing GPT5 as underwhelming seems subjective and somehow also wrong. It's won me and many other devs over. And Sora 2 is also clearly impressive.

herpdyderp [3 hidden]5 mins ago
GPT5 completely demolishes every other AI model for me. With minimal prompting it repeatedly and correctly makes massive refactors for me. All the other models pump out garbage on similar tasks.
techblueberry [3 hidden]5 mins ago
One of the consequences of this: is there actual economic growth here? If it becomes a browser / social network / workplace productivity etc. company, it's basically becoming another Google/Microsoft. While great for OpenAI, is there a lot of space for them to find new money rather than just taking it from Google/Microsoft/Facebook?
taylorius [3 hidden]5 mins ago
Sounds like they've realised AGI isn't just round the corner, and are retrenching to productise the things their tech can do well. (which is a lot).
afinlayson [3 hidden]5 mins ago
I wouldn't write them off yet, but if their funding dries up and there's no more money to support their spending habits, this will seem like a great prediction. Giving away stuff that's usually expensive for free is a great way to get numbers. It worked for Facebook, Uber, and many others, but it doesn't mean you'll become a profitable company.

Anyone with enough money can buy users. For example, they could start an airline tomorrow where flights are free and get a lot of riders, but if they don't figure out how to monetize, it'll be a very short experiment.

whiplash451 [3 hidden]5 mins ago
A bunch of this week’s OpenAI announcements address monetization, actually.
leptons [3 hidden]5 mins ago
If they charged users what it actually costs to run their service, almost nobody would use it.
fzeroracer [3 hidden]5 mins ago
It's this, and it's really funny to see users here argue about how the revenue is really good and whatnot.

OpenAI is only alive because it's heavily subsidizing the actual cost of the service they provide using investor money. The moment investor money dries up, or the tech industry stops trading money to artificially pump the market or people realize they've hit a dead end it crashes and burns with the intensity of a large bomb.

Imustaskforhelp [3 hidden]5 mins ago
> The moment investor money dries up, or the tech industry stops trading money to artificially pump the market or people realize they've hit a dead end it crashes and burns with the intensity of a large bomb.

You have hit the nail on the head. To me, it is natural for investor money to dry up, as nobody should believe that things will always go the right way, yet it seems that OpenAI and many others are just on the edge... so really it's a matter of when, not if.

So in essence this is a time bomb. Tick tock, the time starts now, and they might be desperate because of it, as the article notes.

tobias3 [3 hidden]5 mins ago
I thought it was starting when Ilya said that scaling has plateaued. Now confirmed with GPT-5. Now they'll need to sell a pivot from AGI to productization of what they already have with a valuation that implies reaching AGI?
AlexandrB [3 hidden]5 mins ago
Reminds me of MoviePass or early stage Uber. Everything is revolutionary and amazing when VCs are footing the bill. Once you have to contend with market pricing things tend to change.
jemiluv8 [3 hidden]5 mins ago
I feel like people speculating on the unsustainability of their losses probably value what they know more than what they don't know.

In this case however, what you don't know is more relevant than what you do know. Despite the author's knowledge of publicly available information, I believe there is more the author is not aware of that might sway their arguments. Most firms keep a lot of things under wraps. Sure they are making lots of noise - everyone does.

thenaturalist [3 hidden]5 mins ago
I guess rather they value... math?

The numbers don't add up and there are typical signs of the Magnificent 7 engaging in behavior to hide financials/ economics from their official balance sheets and investors.

PE firms & M7s are teaming up to create SPVs, which then build and operate data centers.

By the wonders of regulation and financial alchemy, that debt/expenditure doesn't then need to be reported as infra investment on their books.

It's like the subprime mortgage mix all over again, just this time it's about selling lofty future promises to enterprises who're going to be left holding the bag on outdated chips or compute capacity without a path to ROI.

And there are multiple financial industry analysts besides Ed Zitron who raise the same topics.

Worthwhile listen: https://www.theringer.com/podcasts/plain-english-with-derek-...

jemiluv8 [3 hidden]5 mins ago
Enterprises are always selling lofty future promises.

And your subprime mortgage reference, suggesting they are manipulating information to inflate the value of the firm, doesn't cleanly apply here. For one, here is a company that seems to have faithfully represented its obscene losses, and here we are already comparing it to the likes of Enron. Enron never reported financial data that could be categorized as losses.

I see lots of people speculating about these losses and I really wish someone investing in openai could come out and say something vague about why they are investing.

Once again, I need not tell you, the information available to the general public is not the same as that which is available to anyone that has invested a significant amount into openai.

So once again, rein in your tendency to draw conclusions from the obscene losses they have reported, especially since I'm positive you do not have the right context to properly evaluate whether these losses make sense or not.

timkam [3 hidden]5 mins ago
What would be a balanced perspective? Perhaps that oAI may now be another "boring" startup in that it is no longer primarily about moving the technology frontier, but about further scaling while keeping churn low, with margins (in the broader sense, i.e. for now prospective margins) becoming increasingly important?
intalentive [3 hidden]5 mins ago
>the only real difference is the amount of money backing it

Judging by how often Sam Altman makes appearances in DC, it's not just money that sets OpenAI apart. It's likely also a strategically important research and development vehicle with implicit state backing, like Intel or Boeing or Palantir or SpaceX. The losses don't matter, they can be covered by a keystroke at the Fed if necessary.

bmau5 [3 hidden]5 mins ago
>Post ragebaiting bearish article on AI to hackernews

>Make front page

redwood [3 hidden]5 mins ago
Ads are the obvious path; they just haven't had time to pull it off yet. Plus it's going to be hard to pull off without weakening the experience, so they'd like to push that out as much as possible, similar to how Google has only eroded the experience slowly over time. Their biggest competitor is Google.
exitb [3 hidden]5 mins ago
They don’t really need to make much money on ads. They just need to weaken the free user experience to convert as many as possible into paid subscribers, then shake off the rest.
cityzen [3 hidden]5 mins ago
Why do you think they are starting these video sites? Ads are most certainly going to blow up on all the AI video platforms.
abalashov [3 hidden]5 mins ago
I liked the broader article, "Why Everybody is Losing Money on AI", more for the overhead perspective:

https://www.wheresyoured.at/why-everybody-is-losing-money-on...

shredprez [3 hidden]5 mins ago
Three inline subscription CTAs, a subscription pop-up, and a subscribe-wall a few paragraphs in.

Oof!

Reacting to what I could read without subscribing: turns out profitably applying AI to status-quo reality is way less exciting than exploring the edges of its capabilities. Go figure!

GuinansEyebrows [3 hidden]5 mins ago
It's a shame; I generally agree with most of what Ed has to say, and I think his arguments come from a good place, but the website is pretty irritating and I find his delivery breathless and melodramatic to the point of cliche (not befitting the serious nature of the topics he argues). I had to stop listening to his podcast because of the delivery; it's not an uncommon situation for other CZM podcasts, but at least some of them handle their editorial content with a little more maturity (shout out to Molly Conger's Weird Little Guys podcast).

I hate to make the comparison between two left-ish people who yell for a living just because they're both British, but it kinda feels like Ed is going for a John Oliver type of delivery, which only really works well when you have a whole team of writers behind you.

nickledave [3 hidden]5 mins ago
I love almost all of the CZM shows, and even I have a hard time making it all the way through a full-on rant from Ed :/ and I agree with him. Sorry, Ed.
g8oz [3 hidden]5 mins ago
He's a performative contrarian. His arguments are...fine but not worth the spittle that comes along with it.
johanvts [3 hidden]5 mins ago
Is GPT-5 so bad? I often find it pretty impressive, if slow.
tibbar [3 hidden]5 mins ago
The intensely negative reaction to GPT-5 is a bit weird to me. It tops the charts in an elaborate new third-party evaluation of model performance at law/medicine, etc [0]. It's true that it was a bit of an incremental improvement to o3, but o3 was a huge leap in capabilities over GPT-4, a completely worthy claimant to be the next generation of models.

I will be the first person to say that AI models have not yet realized the economic impact they promised - not even close. Still, there are reasons to think that there's at least one more impressive leap in capabilities coming, based on both frontier model performance in high-level math and CS competitions, and the current focus of training models on more complex real-world tasks that take longer to do and require using more tools.

I agree with the article that OpenAI seems a bit unfocused and I would be very surprised if all of these product bets play out. But all they need is one or two more ChatGPT-level successes for all these bets to be worth it.

[0] https://mercor.com/apex/

HarHarVeryFunny [3 hidden]5 mins ago
> based on both frontier model performance in high-level math and CS competitions

IMO the only takeaway from those successes is that RL for reasoning works when you have a clear reward signal. Whether this RL-based approach to reasoning can be made to work in more general cases remains to be seen.

There is also a big disconnect between how these models do so well in benchmark tasks like these that they've been specifically trained for, and how easily they still fail in everyday tasks. Yesterday I had the just released Sonnet 4.5 fail to properly do a units conversion from radians to arcsec as part of a simple problem - it was off by a factor of 3. Not exactly a PhD level math performance!
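(For reference, since the conversion the comment mentions is easy to check by hand: one radian is 180/π degrees, and each degree is 3600 arcseconds. A minimal sketch of the arithmetic, with an illustrative helper name of my own choosing:)

```python
import math

# 1 radian = (180 / pi) degrees; 1 degree = 3600 arcseconds
ARCSEC_PER_RADIAN = math.degrees(1) * 3600  # ≈ 206264.806

def radians_to_arcsec(rad: float) -> float:
    """Convert an angle in radians to arcseconds."""
    return rad * ARCSEC_PER_RADIAN

# Sanity check: pi radians = 180 degrees = 648000 arcseconds
print(radians_to_arcsec(math.pi))  # ≈ 648000.0
```

A factor-of-3 error here would correspond to something like confusing degrees with hours-of-right-ascension territory; either way, it's elementary arithmetic, which is the commenter's point.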

tibbar [3 hidden]5 mins ago
I mean, I agree. There is not yet a clear path/story as to how a model can provide a consistently expert-performance on real-world tasks, and the various breakthroughs we hear about don't address that. I think the industry consensus is more just that we haven't correctly measured/targeted those abilities yet, and there is now a big push to do so. We'll see if that works out.
strongpigeon [3 hidden]5 mins ago
I think a lot of it is a reaction to the hype before the launch of GPT-5. People were sold on, and were expecting, a noticeably big step (akin to GPT 3.5-4), but in reality it's not that much noticeably better for the majority of use cases.

Don't get me wrong, I actually quite like GPT-5, but this is how I understand the backlash it has received.

tibbar [3 hidden]5 mins ago
Yeah that is fair. I admit to being a bit bummed out as well. One might almost say that if o3 was effectively GPT-5 in terms of performance improvement, then we were all really hoping for a GPT-6, and that's not here yet. I am pretty optimistic, based on the information I have, that we will see GPT-6-class models which are correspondingly impressive. Not sure about GPT-7 though.
vessenes [3 hidden]5 mins ago
I agree. I mean, I can get o3 right from the API if I choose, but 5-Thinking is better than o3, and 5-Research is definitely better than o3 pro in both ergonomics and output quality. If you read reddit about 4o, the group that formed a parasocial relationship with 4o and relied on its sycophancy seems to be the main group complaining. Interesting from a product market fit perspective, but not worrying as to "Is 5 on the whole significantly better than 4 / o1 / o3?" It is. Well, 5-mini is a dumpster fire, and awful. But I do not use it. I'm sure it's super cheap to run.

Another way to think of oAI's business situation is: are customers using more inference minutes than a year ago? I definitely am. Most definitely. For multiple reasons: agent round trip interactions, multimodal parsing, parallel codex runs..

wahnfrieden [3 hidden]5 mins ago
> The GPT-5 upgrade for ChatGPT was a dud

All Claude Code users are moving to Codex as a result. I don't call that a dud

tptacek [3 hidden]5 mins ago
40 minute read.
qsort [3 hidden]5 mins ago
The problem with this guy is that it's always the same 40 minutes on loop.
tptacek [3 hidden]5 mins ago
I'm not even saying he's "wrong"; I wouldn't want to be long OpenAI (I don't think they're doomed but that's too much risk for my blood). But I would bet all my money that Zitron has no idea what he's talking about here.
qsort [3 hidden]5 mins ago
Yeah, I'm also not saying that, I'm not "OMG AGI tomorrow" either. I think he was one of the first to voice concerns about the financial situation of AI companies and that was valuable, but if you look at his blog he's basically written the same post nonstop for two years. How many times do you need to say that?
tptacek [3 hidden]5 mins ago
(I also flatly disbelieve in AGI).
x0x0 [3 hidden]5 mins ago
Mind sharing why you think that (genuinely curious)?

I think Ed hit some broad points, mostly (i) there were some breathless predictions (human level intelligence) that aren't panning out; (ii) oh wow they burn a ton of cash. A ton; (iii) and they're very Musky: lots of hype, way less product. Buttressed with lots of people saying that if AI did a thing, then that would be super useful; much less showing of the thing being done or evidence that it's likely to happen soon.

None of which says these tools aren't super useful for coding. But I'm missing the link between super useful for coding and a business making $100B / year or more which is what these investments need. And my experience is more like... a 20% speed improvement? Which, again, yes please... but not a fundamental rewriting of software economics.

ctoth [3 hidden]5 mins ago
I have been strongly-tempted to make Zitron and Marcus GPTs... But every time I think about getting started I realize a simple shell script would work better.

Oh wait Claude did a better job than I would have:

https://claude.ai/share/32c5967a-1acc-450a-945a-04f6c554f752

SpaceManNabs [3 hidden]5 mins ago
wow claude gave them both pretty scathing descriptions and you didn't even provide much context lol.

maybe claude is funny.

jihadjihad [3 hidden]5 mins ago
Maybe if you click on all the links and go down rabbit holes. It doesn't take more than a few minutes to get through otherwise.
calmworm [3 hidden]5 mins ago
Read time estimated by AI*
dankobgd [3 hidden]5 mins ago
they failed to estimate that when they create a popup, i will close the website so it's 0 min read
rahkiin [3 hidden]5 mins ago
That can’t be right. It is not that long at all.
AstroBen [3 hidden]5 mins ago
The rest is paywalled
teacpde [3 hidden]5 mins ago
I didn't even realize that; had to double check, and saw the "read full story" banner
outside1234 [3 hidden]5 mins ago
It also has no moat. I started using Gemini this month and honestly haven't missed OpenAI for a second.
pmdr [3 hidden]5 mins ago
Ed's newsletter, on HN, unflagged?! Maybe the bubble really is about to pop.

I've read pretty much all his posts on AI. The economics of it are worrying, to say the least. What's even more worrying is how much the media isn't talking about it. One thing Ed's spot on about: the media loved parroting everything Sam and Dario and Jensen had to say.

kkukshtel [3 hidden]5 mins ago
How long until disciples of Zitron realize that he is just feeding them sensationalist doomer slop in order to drive his own subscription business. Maybe never!

I find he exhibits the same characteristics of the things that drove people like Red Letter Media in the early aughts to be "successful". Make something so long and tedious that arguing with its points would require something twice as long, and as such the ability to simply motion to an uncontested 40-minute longread is used as a surrogate for any actual argument. Said differently, it's easy for AI skeptics to share this as a way of backing up their own point. It's 40 minutes long, how could it be wrong!

sidibe [3 hidden]5 mins ago
I don't disagree that openai is desperate, this is a fierce competition and Google has a pretty huge head start in a lot of ways, but I wonder at what point these people who constantly dismiss LLMs and AI will change their tune? I understand hating it and wishing we could all agree to stop things, I do too, but if you can't find any uses for it at this point it's clear you're not trying
danielmarkbruce [3 hidden]5 mins ago
Yup, what a pack of desperate losers. They should already be at $50 bill revenue, 90% gross margin, 60% operating margin, no capex. Unit economics can't possibly change, they have eked out every last ounce of efficiency in training, inference, caching and hit every use case possible. It's all just really terrible.
verylargeprime [3 hidden]5 mins ago
Speaking of boring and desperate, if you browse the posts on this "newsletter" for more than 2 minutes it's clear that the sole author is a giant bozo who also happens to be in love with himself.

I'd rather read a trillion lines of AI slop.

trhway [3 hidden]5 mins ago
not surprising given that in the internal fight the boring execs won, and the excitingly brilliant researchers lost and left the company.
SpaceManNabs [3 hidden]5 mins ago
It would be awesome if this blog post was made by an OpenAI [investor / stakeholder / whatever that non profit has] in order to drive up engagement for defending or hyping up OpenAI's efforts.

Epic ragebait dude.

gfodor [3 hidden]5 mins ago
Do people really buy this nonsense? I mean just this week Sora 2 is creating videos that were unimaginable a few months ago. People writing these screeds at this point to me seem like they’re going through some kind of coping mechanism that has nothing to do with the financials of AI companies and everything to do with their own personal fears around what’s happening with machine intelligence.
jstanley [3 hidden]5 mins ago
So, wait, you're saying that these guys just aren't impressed by the AI technology, and that is blinding them to the fact that the AI companies' economics look really good?

That is a laughable take.

The AI technology is very very impressive. But that doesn't mean you can recover the hundreds of billions of dollars that you invested in it.

World-changing new technology excites everyone and leads to overinvestment. It's a tale as old as time.

gfodor [3 hidden]5 mins ago
I’m saying that seeing dubious economics is blinding people from accepting what’s actually going on with neural networks, and it leads to them having a profoundly miscalibrated mental model. This is not like analyzing a typical tech cycle. We are dealing with something here that we don’t really understand and transcends basic models like “it’s just a really good tool.”
saubeidl [3 hidden]5 mins ago
That pseudo-religious angle that seems to have infected a lot of the tech industry is part of what "doomers" like myself or Zitron criticise.

It is just a really good tool. And that's fine. Really good tools are awesome!

But they're not AGI - which is basically the tech-religious equivalent to the Second Coming of Christ and about as real.

The fear isn't about the practicability of the tool. It's about the mania caused by the religious component.

gfodor [3 hidden]5 mins ago
It’s not a religious angle, we literally don’t know how or why these models work.

Yes we know how to grow them, but we don’t know what is actually going on inside of them. This is why Anthropic’s CEO wrote the post he did about the need for massive investment in interpretability.

It should rattle you that deep learning has these emergent capabilities. I don’t see any reason to think we will see another winter.

jstanley [3 hidden]5 mins ago
100% agreed, but that is not a reason to spend this much money on it.
jstanley [3 hidden]5 mins ago
Ah, this time it's different. Understood.

(To be clear, I do agree that AI is going to drastically change the world, but I don't agree that that means the economics of it magically make sense. The internet drastically changed the world but we still had a dotcom bubble.)

gfodor [3 hidden]5 mins ago
Yep, this time it’s different.
zapataband2 [3 hidden]5 mins ago
lol "unimaginable" aka boring creepy slop that drives engagement on facebook for old people.
jennyholzer [3 hidden]5 mins ago
I buy it. I perceive you and people who talk like you (read: LLM Boosters) as literal cult members.
gfodor [3 hidden]5 mins ago
Yeah, I know. It’s weird to admit this kind of obvious error in public though, speaks to a very big epistemological hole on your part.
dcreater [3 hidden]5 mins ago
There's no point wasting time on a blatantly biased opinion, even if it has some truths to some extent somewhere in the tirade

And no we didn't need a subscription reminder every 10s of interaction

periodjet [3 hidden]5 mins ago
oh god no please, not this guy again