Define “talent”. Software developers focused on squeezing LLMs into everything regardless of necessity? I’m likely jaded as an infra monkey, but you’ll have to pay me 3x as much to fix your broken servers after the bubble pops.
“AI businesses are putting profits over sanity and safety”.
Remove “AI” from that sentence and you’ll see that’s just more normal business practices that have been going on for decades.
The real question is: what profit? It’s a sinkhole, in the hope of maximizing reach.
Oh, you can trust OpenAI and Nvidia to make a profit. You and your pension funds? What pension funds?
This is a ball-and-cup game designed to create cash flow where there is none. The ‘profit’ is the assets, dividends, and capital gains the shareholder class will syphon out before the bubble bursts.
I sure hope they secretly sabotage on their way out to protect society
What profits?
Now what does that tell us about the sanity and safety?
This is many companies right now.
Mine is run from the top down (all executives) by people who use LLMs for fucking everything.
Everyone fucking hates them at this point. We all think they are fucking trash.
My older bro, who WAS in tech, uses it the same way: no thought of his own, he always uses ChatGPT to find answers.
I used to find answers on Google, but Google web search doesn’t return good results anymore. So I end up using AI.
The MBA class has long been this way.
Maybe it’s time to grab some fellow employees and make an employee-owned business.
I would love for this to happen in so many industries but everything takes so much capital to get started :/
What industry is your company in?
After $1T poured into global warming, why can’t AI replace them?
That’s why they are trying to peddle data centers to India, for low-cost maintenance.
And it’s not even working. Not one of the AI companies is profitable. So they’re putting the hope for profits some time in the future over sanity and safety.
“Fuck me Sam, I don’t have any more ideas on how to turn a profit. We’ve tried everything. How about we just give the AI its own infrastructure and bank account with the instructions ‘make money’ and see what it does? I know that safety guy advised against it before, but he no longer works here. I mean, if it becomes a singularity event, at least it’s our singularity event to control”
Didn’t they do this with an AI vending machine already and it started selling tungsten cubes at a massive loss?
In a way this is what’s most scary, because they are desperate. Safety concerns be damned, they are all racing to be the first to make a breakthrough in the direction of AGI.
If we ever get there, this is not the way it should be done. I hope they remember that we need to have a world where they can spend their money.
That’s a tall ask for desperate people to consider.
Steve Burke (of GN) described the absurdity pretty well, within the context of the currently uncertain Nvidia and OpenAI deal:
Nvidia offered OpenAI $100B in investment, money that it didn’t have, as long as OpenAI gave that money back to Nvidia to lease GPUs that haven’t been made, to then put in data centres that haven’t been constructed, which will be powered by electricity that hasn’t come online, to then rent to users who haven’t subscribed, to provide them features that haven’t come to fruition.
And hope you’ve propped up the economy enough by the end of it that the government has to bail you out… sorry, I meant provide a “backstop”.
I don’t think they even care about profits anymore.
Billionaires live on the balance sheet, not the P&L statement.
What about all the user data they sell to third parties? I’d be interested in knowing how that contributes to this
Midjourney is profitable
Source?
Midjourney are the worst of the worst when it comes to specifically training on stolen work from artists who dedicated their lives to it. They opened the floodgates to what we see now with mass theft of content by not getting sued into oblivion. Fuck them and their creepy little fuck face of a CEO.
They’re also not providing a large language model, so they actually did have a path to profitability. It’s keeping LLMs updated and running that costs so much money that companies trying to do so are losing billions, and Midjourney doesn’t have that problem.
It’s just that their path to profitability was built on plagiarism on an astonishing scale. You’re spot on, they should have been utterly destroyed right at the start.
Couldn’t agree more. They really did help greenlight the “stealing everything is fair game” mentality. They came out before ChatGPT (GPT-3 at the time), when LLMs were not yet hoovering up everything, including copyrighted content.
So what’s the problem? This looks self-correcting to me, if none of the AI companies are profitable then they’re going to go away. Short their stock and make a fortune.
Shorting isn’t just a bet that a stock will fail, but also on when.
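A rough sketch with made-up numbers (a hypothetical 2% monthly borrow fee and a forced buy-back at +50%), just to show how being right about the crash but wrong about the timing still loses money:

```python
# Toy model of a short position. All numbers are made up for illustration.
ENTRY = 100.0        # price per share when the short is opened
BORROW_FEE = 0.02    # assumed 2% of entry price per month to borrow the shares
MARGIN_CALL = 1.5    # assumed forced buy-back if the price rises 50%

def short_pnl(monthly_prices):
    """Profit/loss per share, closing early if a margin call hits first."""
    fees = 0.0
    for price in monthly_prices:
        fees += ENTRY * BORROW_FEE
        if price >= ENTRY * MARGIN_CALL:
            return ENTRY - price - fees   # squeezed out before the drop arrives
    return ENTRY - monthly_prices[-1] - fees

# The bubble keeps inflating for months before it finally pops:
print(short_pnl([110, 125, 140, 155, 170, 40]))  # -63.0 despite the eventual crash
# The same crash, but it comes right away:
print(short_pnl([90, 70, 40]))                   # 54.0
```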
Then invest in competitors; they’ve got a more flexible timeframe.
What competitors? Competitors to AI as a technology?
The companies that continue with human staff where others are replacing theirs, for example. Outsourcers providing those staff.
I’m not sure what bubble you live in, but not everybody can be a Bay Area nepo baby like you, and not being one doesn’t make them less correct.
If you’re not investing anything in anything then this really isn’t your problem.
Imagine your entire life is viewed through the lens of actions you can take in the stock market. What a sad life.
The original comment that this subthread descends from was about the profitability of AI companies.
Profitability has pretty much nothing to do with stock prices these days.
The problem is the cost of that correction is going to fall on us. Or did we forget that the flavor of capitalism we live under is the “privatize the profits, socialize the losses” kind? We’re not the ones in the casino, but we’re the ones who will lose our shirts when they lose.
So what’s the problem?
“What’s the problem” with the entire American economy being moored to a bunch of companies all acting as flaky as Enron and friends during the dot-com crash?
Edit: just realized FaceDeer is obsessed with AI stuff, so he’s probably here just to troll with questions he already knows the answers to.
It’s only self-correcting if the powers that be are losing money, which they aren’t, because they are either liquidating important assets to pad their pockets or just using economic magic to make trillions appear out of nowhere. They’ll only feel it when their company or the economy collapses, and at that point they’ll make off with their ill-gotten gains.
Where’s this infinite well of investment money coming from? “Economic magic” is pretty vague.
I wrote it intentionally vaguely because the reality is that we don’t know exactly how they are doing it, but we don’t need to in order to see that there is a positive feedback loop between AI companies, cloud service providers, and compute hardware manufacturers. Likely what they are doing is simply lying about their books, and since the regulators have been bought already, or are simply incompetent, there is no one to say otherwise.
Have you not heard of inflation? It is literally the creation of new money that the government gives to banks to make loans.
that’s… not at all what inflation means… it’s one of many causes of inflation
and the government doesn’t “give” it to banks
the reality is far more complex in both cases, even if sometimes the simplified version of things looks like that
I’m talking monetary inflation, not price inflation.
In the meantime they’re soaking up all the RAM, SSDs, and silicon processing capacity, which makes basically everything with any of those cost a lot more (like the RAM I bought for $99 four years ago that’s now $560). Not to mention the power requirements and costs being passed on to consumers who don’t want it anyway.
They’re also screwing up the environment in ways it won’t recover from.
Which is pouring money into the manufacturers of those things. If you’re convinced the AI companies are going to collapse then just wait a little and you’ll get all those things way cheaper than they were before.
At the moment, most of that “money” is just stock in the other company. And the type of RAM and “GPUs” being manufactured are not ones that normal consumers will use. They’re very specialized for AI en masse.
Another thing is that the major manufacturers being leveraged for that gear have stated that they are not increasing production in the near future because of this. It seems they’re mostly in a “wait and see, it might just be a bubble” mode, as scaling up takes a lot of time and only pays off with continued demand over a long period.
I’d love it if this were going to flood the market with cheaper tech, but that’s not been shown to be the case. And it’s really not worth the environmental impact in either case.
And the type of RAM and “GPUs” being manufactured are not ones that normal consumers will use.
They’re using the same foundries that would make those things. I’m not saying that there’ll be a flood of “used” equipment (though there would indeed be some of that too, other companies could set up data centers much more cheaply), I’m saying that the foundries will switch back to consumer products.
The stock is worth a lot because it can be sold for a lot. If the manufacturers don’t think the AI companies will stick around they should be selling the stock they’re receiving from them. It’s money either way. What do you think they’re doing with that money?
They’ll just declare bankruptcy and get away scot-free
That’s not how bankruptcy works. The investors don’t get their money back.
thanks for pointing that out.
I’m not sure how this benefits the big corporations then. Surely they don’t have a deathwish? And aren’t stupid/incompetent enough to not see that? Else they’d not have gotten this big, right?
AI businesses are putting profits over sanity and safety.
All businesses are. That’s what a business is. A legal entity that seeks to extract as much wealth from people as possible. They put profits before people as a matter of policy.
Limited liability corporations should be outlawed.
That’s not what business is inherently; that’s what capitalism is.
It’s not even strictly what capitalism is about. It’s some stupid bullshit interpretation that came out of the University of Chicago economics department.
Seriously, go look at Adam Smith’s Wealth of Nations. The only mention of the “invisible hand” is so different from what is taught in economics now
… by directing that industry in such a manner as its produce may be of the greatest value, he intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention.
It’s not that we need no regulation. It’s just trying to say that if we set things up correctly, we don’t need to worry about people pursuing SOLELY their own personal gain. Because the market seeks out the “greatest value”, which is not just about money. It’s also the value to society as a whole.
Instead, we got the fucking bullshit from Chicago saying that the only / best way of measuring value is by profit.
Spot on. I wrote an essay on this exact topic
It’s kind of funny how Adam Smith saw the need for moral philosophy to constrain greed in the economics of capitalism yet for most modern capitalists, capitalism is their moral philosophy 🤣
As a legal requirement.
Interestingly, it’s not.
I believe that’s only if you’re publicly traded.
Even then, the legal requirement is about working in the company’s interest, not strictly getting more profit. It’s become shorthand to just say profit, because it’s a simple thing to point at.
Yeah, but it might depend on the company bylaws even if it’s privately owned.
If anybody leaves an AI company with a fat paycheck, promises to “be honest about the real problems,” and then proceeds to regurgitate things the AI company CEOs say: be suspicious.
Exhibit A is Anthropic millionaire Mrinank Sharma, who only mentioned (future) peril from AI and AI-made bioweapons, two fictional scenarios on the short list that Anthropic officially endorses. It’s a list of things that please Anthropic investors.
Real-world stuff like AI psychosis, poisoning people’s air, or generating CSAM doesn’t get a mention from him. There’s no profit in acknowledging those things, so he won’t.
Yep, there is a deep attachment amongst AI boosters to certain doom-and-gloom scenarios that stem from their philosophical “think tanks”, which use imaginary problems (e.g. “AGI ‘misalignment’ destroying the world by turning everything into paperclips”) to lobby and alter our laws to benefit their real bosses: Anthropic, OpenAI, and friends.
Maybe Claude Code learned this from its creators: I’ve noticed that when it says “the real issue is…”, that means it has no clue and is about to barf out a bunch of slop that I’m going to have to revert.
The only time it ever says “the real issue is” and is right is when I’ve just corrected it and told it the real issue, which it then wastes tokens regurgitating back to me. Gods, I miss not having to use this crap at work.
You HAVE to use it?
Yes, our usage is tracked. Managers have statistics broken down by daily usage per agent
Yup. Lots of managers are forcing people to use AI so they can report doing so in their metrics.
Just vibe code the AI. I’m sure it’ll work perfectly.
That’s just every company isn’t it?
Not in my experience. Once their clients’ budgets get cut by funding cuts due to reality and they notice it doesn’t do anything of benefit (on the client’s side), they will be all “client first”.

They are leaving because they are getting much better offers elsewhere.
or they see that things will collapse soon enough and are bailing like rats leaving a sinking ship
Just another day in corporate America. Putting fake profits over employees’ well-being, customer safety, and privacy.
The thing that gets me is they just want to get in on this $2 billion investment that keeps changing hands without changing hands. They really are just taking IOUs from… elsewhere? Because apparently, according to Jensen, that money hasn’t actually changed hands. Investors are just buying into the new Ponzi scheme.
The alternative prediction is that this is in fact sustainable and AI companies will in fact have revenue to keep the bubble inflated for a lot longer, just in the worst way, by extracting the value of human-created reliability and trust from the market:
CEOs have also bought into AI almost to a person, and are using it to replace workers, results be damned. AI can’t do the things they believe it can, but to them, if they can fake satisfying a need with AI for $5, that is preferable to actually satisfying a need with a real employee for $10.
The CEO is happy because his company saved $5 and he’s met his stock option incentive target, the AI companies are happy to pocket that $5 instead of the employee getting $10. Maybe they even raise the customer’s price to $12 as AI rent-seeking starts rising, and both companies get $6 each. Win-win, life will go on, just worse for everyone else.
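A toy version of that arithmetic, with all numbers hypothetical and taken from the comment above:

```python
# All numbers hypothetical, per unit of work.
real_employee = 10     # cost of actually satisfying the need
ai_fake = 5            # cost of faking it with AI today
ai_price_later = 12    # customer's price once the AI vendor starts rent-seeking

print(real_employee - ai_fake)   # 5: the CEO's "savings" toward the incentive target
print(ai_price_later / 2)        # 6.0: each company's cut once the price is raised
```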
The truth about “replacing everyone with AI” is even more mind-boggling. Bezos didn’t “replace” anyone with AI when he laid off all those people. What he did was look at the cost of “salaries” vs the cost of “building an infrastructure” for AI, and decide that gambling on AI was cheaper than paying the workers he employed.
CEOs have also bought into AI almost to a person, and are using it to replace workers, results be damned. AI can’t do the things they believe it can, but to them, if they can fake satisfying a need with AI for $5, that is preferable to actually satisfying a need with a real employee for $10.
Yep, that’s exactly how my entire team and I were laid off. “Automatization”.
I’m sorry. Recently laid off myself, and management avoided directly saying AI was the reason, but other statements (the C-suite talking, in front of me, about whether AI could do other work months before the layoffs) convince me that was the reasoning.
Yeah, for us it was “return to office”. The entire team was remote and had been for the 9 years I worked there.
There’s a huge list of eye-rolls here, but also… what profits?