Everywhere you look, the media is telling you that OpenAI and their ilk are the future, that they’re building “advanced artificial intelligence” that can take “human-like actions,” but when you look at any of this shit for more than two seconds it’s abundantly clear that it absolutely isn’t and absolutely can’t.
Despite the hype, the marketing, the tens of thousands of media articles, the trillions of dollars in market capitalization, none of this feels real, or at least real enough to sustain this miserable, specious bubble. People like Marc Benioff claiming that “today’s CEOs are the last to manage all-human workforces” are doing so to pump up their stocks rather than build anything approaching a real product. These men are constantly lying as a means of sustaining hype, never actually discussing the products they sell in the year 2025, because then they’d have to say “what if a chatbot, a thing you already have, was more expensive?”
↫ Edward Zitron
Look at the data and numbers, as Zitron did for this article, and the conclusions are sobering and harsh for anyone still pushing the “AI” bubble. Products aren’t really getting any better, they’re not making any money because very few people are paying for them, conversion rates are abysmal, the reported user numbers don’t add up, the projections from “AI” companies are batshit insane, the new products they’re releasing are shit, and the media are eating it up because they stand to benefit from the empty promises.
Generative AI is a financial, ecological and social time bomb, and I believe that it’s fundamentally damaging the relationship between the tech industry and society, while also shining a glaring, blinding light on the disconnection between the powerful and regular people. The fact that Sam Altman can ship such mediocre software and get more coverage and attention than every meaningful scientific breakthrough of the last five years combined is a sign that our society is sick, our media is broken, and that the tech industry thinks we’re all fucking morons.
This entire bubble has been inflated by hype, and by outright lies by people like Sam Altman and Dario Amodei, their lies perpetuated by a tech media that’s incapable of writing down what’s happening in front of their faces. Altman and Amodei are raising billions and burning our planet based on the idea that their mediocre cloud software products will somehow wake up and automate our entire lives.
↫ Edward Zitron
In a just world, these 21st century snake oil salesmen would be in prison.
The only purpose of AI as it stands is legalized copyright theft. It just changes things enough to avoid paying royalties and getting permission.
And a lot of the problems it’s “solving” already had working solutions years ago. There are only a few applications where it tends to be legal, moral and useful.
This is basically blockchain 2.0, but at least blockchain didn’t break laws.
If you change a work “enough”, it stops being copyright theft (legalized or otherwise). Remember when everyone around here liked the concept of fair use? I do.
When a human generates a fair-use sci-fi version of Harry Potter set in the future, over however long it takes a human to do that, it’s fine. When a machine does it en masse, in just a few minutes, and generates ten different versions, it’s less fine.
Honestly, it’s weird that so many folks on here don’t apply much in the way of measured judgement. It’s all-or-nothing expression every time. Very disappointing.
If speed is the issue… Here’s a reductio ad absurdum:
It probably took Sumerian authors a whole day to write a clay tablet.
The invention of the dip pen and paper accelerated the creation process exponentially. Shouldn’t fair use then stop applying to dip pen users, since they could generate stories much more quickly?
How about typewriter users? They generate stories even more quickly.
The issue here isn’t speed. It’s the unprecedented concentration of the means of production in the hands of the ultrawealthy, depriving the remaining eight billion of work and options.
Speed isn’t the entire issue. It’s speed and the machine, and really this specific use of the machine, which doesn’t need to be broadened or generalized in order for an evaluation or judgement to apply. It’s fine to just say that using an LLM in this way is unfair.
And what you don’t understand is that I don’t want your “this specific use of the machine” carve-out on fair use, no matter how specific you think it is. Copyright holders already have enough rights and already own enough of culture; don’t propose to give them more.
kurkosdr,
I agree. Publishers have already been moving copyright boundaries in their favor for far too long, and I’d hate to see fair use stripped back even more. Today’s AI isn’t doing more than humans have been doing for centuries already; only the work is more automated.
In other news, I wonder if I should switch my subscription from Claude to SuperGrok. I hear it’s awesome!
If you use Claude for coding, don’t bother with Grok 3 (at least for now). Right now it tends to generate quite buggy code compared to Claude or even ChatGPT.
I really like Claude and use it for coding too, but I’m curious about Grok. It seems to generate text very fast, so the thinking process doesn’t seem to be very painful. We’ll see.
If you’re going to use an AI, at the very least don’t use the AI owned by the nazi.
AI is here to be the Ministry of Truth. And if you don’t think this thing will eventually be put in charge of everything (think courtrooms, juries, etc) you’re not thinking hard enough.
The AI “revolution” is not a real revolution because it’s currently in the hands of the few.
The reason is hardware: right now, if I want to run a large LLM locally at a decent speed, I have to buy a €1,000 GPU. And in order to train one from scratch, I need a data center with several thousand of them, together with a host of employees to help me with the logistics of the data scraping. It’s not surprising, then, that most people end up running these models in “the cloud”, i.e. on computers owned by their billionaire inventors, with all the privacy and security implications this entails.
Hopefully, technological advancement will one day give the common folk true access to AI, like it gave them the personal computer in the 1980s and the Internet in the 1990s. At that point, the real “revolution” will begin, and we’ll start seeing the first truly useful applications.
I agree. Right now AI is in the “mainframe” phase. Eventually we’ll get a consumer AI accelerator at a decent price, which will make it feasible to run models locally.
darkhog,
I agree the technology will keep improving.
However, if you’re just running a pre-trained model, commodity hardware may already be good enough. I’ve been running llama3.1 70b on a Ryzen 9 5950X (a high-end CPU from 4 years ago that I bought used). A CPU won’t be as fast as a GPU, but it still provides responses faster than I could type them (a rough sketch of what this looks like follows at the end of this comment).
If I’m willing to use a smaller model, it can already generate a screenful of text way faster than I can consume it. A good application for this could be a video game with more dynamic quests and characters who are highly interactive. This should be workable today.
As for training AI models, that’s obviously much too intensive for current consumer hardware. Training models remains out of reach for individuals, but even so I think there are good opportunities to pool resources and produce open source models with no hidden variables or proprietary agendas. An open source AI model that can provide source citations would be a killer feature IMHO. This is something the proprietary models lack, in part because their makers don’t want others to reverse engineer the model. A genuine open source model could help solve this.
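For anyone who wants to try this, here is roughly what local CPU inference looks like in practice. It is only a minimal sketch assuming the llama-cpp-python bindings and a quantized GGUF file already on disk; the file name, thread count and prompt are placeholders, not recommendations.

# Minimal sketch of local CPU inference, assuming the llama-cpp-python
# bindings and a quantized GGUF file on disk. The path, thread count and
# prompt below are placeholders, not from any specific setup.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3.1-70b-instruct.Q4_K_M.gguf",  # hypothetical file name
    n_ctx=4096,    # context window
    n_threads=16,  # spread inference across CPU cores (e.g. a Ryzen 9 5950X)
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short, dynamic side quest for an NPC."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])

Nothing in this sketch leaves the machine; the only external dependency is the model file itself.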
I wouldn’t hold my breath: Moore’s law no longer applies, and 2nm-and-below production isn’t getting any cheaper anymore.
dsmogor,
While I agree that CPU/GPU node improvements are running out of steam, I think we should factor in that most consumers haven’t gotten there yet anyway. I’m still running Intel’s 14nm node, which has been good enough for my desktop needs, and when I do upgrade I expect to get quite a performance boost from both smaller nodes and new tensor cores.
It’s not just about the hardware, either: China has demonstrated that model optimization can produce better results with less hardware. So while there may be an end of the road, we’re not there yet, and the technology is still improving for the time being.
Excuse me? AI not getting better? In the span of just a few years we’ve gone from AI that can barely string together two coherent sentences before completely losing the plot (GPT-2) to AI that is so good some people literally fall in love with it despite knowing full well it isn’t real (as the Replika kerfuffle has shown). Not to mention it’s a great help when learning things: thanks to ChatGPT I now know trigonometry, despite completely failing to understand it previously because my math teacher failed to teach me – and trigonometry is quite useful in game development. It has also helped me understand a variety of subjects I previously had problems with.
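To make the “trigonometry is useful in game development” part concrete, here is the sort of thing I mean. It is just a toy sketch made up for illustration; the function and numbers are not from any real project.

# Toy example of the kind of trigonometry that comes up in game development:
# pointing a sprite at a target and stepping it forward. Names are made up.
import math

def step_towards(x, y, target_x, target_y, speed):
    """Return the new (x, y) position and facing angle after one movement step."""
    angle = math.atan2(target_y - y, target_x - x)  # direction to the target, in radians
    new_x = x + math.cos(angle) * speed             # decompose the step into x/y components
    new_y = y + math.sin(angle) * speed
    return new_x, new_y, math.degrees(angle)        # most engines want the facing in degrees

print(step_towards(0.0, 0.0, 10.0, 10.0, 1.0))  # roughly (0.707, 0.707, 45.0)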
But that’s just the thing, right? Most of the really impressive improvements happened with the jump to GPT-4. That was two years ago now, and despite the absolutely ridiculous mountains of cash that have gone toward the industry since, with a bunch of new model releases, no one has made a significant breakthrough in quality since then, including OpenAI.
Most people who use these things still just use GPT-4 for whatever they actually find it useful for.
And this shouldn’t be a surprise. This is the history of pretty much every technology: a significant initial jump in usefulness that can often make us erroneously think we’re on the road to persistent exponential improvement… but then it flattens out along what turns out to be a logarithmic curve, because nothing in real life can grow or improve at an exponential rate for very long. Technology isn’t magic.
The most significant improvement we’ve seen since is probably DeepSeek, which made the compute part much cheaper, undercutting the idea that we need huge datacenters for GPT-4-level quality. Maybe that helps from a profitability perspective, but maybe it will just tank the industry in the end if it can’t justify spending as much as it does now.
And well, something like Replika existed before all this; people were falling in love with it four years ago, from what I can tell from a cursory search.
Book Squirrel,
It isn’t magic, but it is improving, and I think that is the main point. Personally, I am only interested in AI that runs locally, not through corporate service providers. Service providers tend to move in the direction of more advertising and very creepy privacy and tracking practices. I strongly oppose AI becoming the domain of proprietary gatekeepers.
Obviously many of our giant corporations don’t like competition. But as a consumer I think more competition/lower costs/better accessibility are all good things.
So much wow productivity gains everywhere, and I still earn 30% less in real terms than I did before COVID, will probably pay rent and work until I die, no matter how much I try to save.
Big fun.
Sorry… but “closed” AI companies are yesterday’s news.
We have many open, fully open, MIT- or Apache-licensed models. Not only the code, but the weights as well.
What does that mean?
If you have a “better than average” PC, you can currently run models near the level OpenAI charged $15 for last year (GPT-3.5 vs. Llama 3.1 here): https://context.ai/compare/llama3-1-8b-instruct-v1/gpt-3-5-turbo
If you spend a few dollars more, you can build a rig with several (older) GPUs, or a Mac Studio with large amounts of VRAM, and run even better models locally.
And you can even use a very nice UI that mimics ChatGPT, again entirely locally:
https://github.com/open-webui/open-webui
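For the curious, this is roughly how a script talks to such a local setup. It is only a sketch, assuming a local server that exposes an OpenAI-compatible endpoint (llama.cpp’s server and Ollama both can); the port, model tag and prompt are illustrative.

# Sketch of querying a locally hosted open-weights model through an
# OpenAI-compatible endpoint; the port and model tag are illustrative.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local server, nothing leaves the machine
    api_key="not-needed-locally",          # the client insists on a value, the server ignores it
)

response = client.chat.completions.create(
    model="llama3.1:8b",  # whatever model tag the local server has loaded
    messages=[{"role": "user", "content": "Summarise this paragraph in one sentence."}],
)
print(response.choices[0].message.content)

The point is that the client code is the same whether the endpoint is a billion-dollar data center or an old GPU under the desk; only the base_url changes.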
So, the AI revolution is real, and it is here.
But how much value “ClosedAI” can extract from it is an open question.
While I also don’t buy all the AGI Kool-Aid and overinflated statements, I believe the impact generative AI will have on the domain we all care about, software development, is definitely tangible. It has already had a huge impact on the domain you care about, Thom. Astonishingly, it looks like translation and programming have more in common than one could have thought.
dsmogor,
Yeah, it’s not AGI. But then I don’t think AI has to be AGI to significantly disrupt industries. I think a great many jobs are at risk of being displaced or culled by specialized, not generalized, AI. This has me very concerned about the future of jobs. The entire premise of capitalism raising everyone’s boats goes out the window if owners are no longer dependent on human employees. Historically, employees would just have to retrain in new, automation-proof skills… but AI is encroaching on even those jobs. Many of them can, and I believe will, be replaced by (non-AGI) AI.
All too often people rebut this by claiming AI can’t do 100% of the job, but that doesn’t mean AI won’t displace tons of employees even if some employees are still needed to oversee the AI.