And the “copilot” branding. A real copilot? That’s a peer. That’s a certified operator who can fly the bird if you pass out from bad taco bell. They train. They practice. They review checklists with you. GitHub Copilot is more like some guy who played Arma 3 for 200 hours and thinks he can land a 747. He read the manual once. In Mandarin. Backwards. And now he’s shouting over your shoulder, “Let me code that bit real quick, I saw it in a Slashdot comment!”
At that point, you’re not working with a copilot. You’re playing Russian roulette with a loaded dependency graph.
You want to be a real programmer? Use your head. Respect the machine. Or get out of the cockpit.
↫ Jj at Blogmobly
The world has no clue yet that we’re about to enter a period of incredible decline in software quality. “AI” is going to do more damage to this industry than ten Electron frameworks and 100 managers combined.
It’s called marketing. Copilot is a cool name (though a shitty product).
I won’t even respond to Thom’s AI crusade anymore. Let him fight his demons. But the quote above is just hilarious. You really want to be a “programmer”?! That’s like competing over who is the best janitor or lumberjack. Literally competing with billions of Indians, Chinese, and AI on top. Good luck with that.
Nobody really cares about syntax or frameworks. The interesting stuff is about workflows, use cases, and algorithms based on a deep understanding of the business case.
Mr. “real programmer”, please start your company with your superior C inline assembly skills. Let’s see where you are five years from now; I wish you the best of luck.
Oh, your comment reminded me of “The Story of Mel”.
Yeah, “Real Programmers Don’t Use Pascal” 😀
Although solutions like “Fast inverse square root” have nothing but my admiration and respect, in real life I also appreciate portability, maintainability, and documentation, as well as the commercial potential.
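(For anyone who hasn’t seen that trick, a rough sketch of the Quake III bit hack it refers to is below; the function name is just illustrative, and it uses uint32_t and memcpy instead of the original’s long and pointer cast so the sketch stays well-defined in modern C.)

    #include <stdint.h>
    #include <string.h>

    /* The famous "fast inverse square root" hack, shown only to illustrate
     * the kind of bit-level cleverness being discussed. */
    static float fast_rsqrt(float number)
    {
        float x2 = number * 0.5f;
        float y  = number;
        uint32_t i;

        memcpy(&i, &y, sizeof i);       /* reinterpret the float's bits as an integer */
        i = 0x5f3759df - (i >> 1);      /* magic-constant initial guess for 1/sqrt(x) */
        memcpy(&y, &i, sizeof y);
        y = y * (1.5f - (x2 * y * y));  /* one Newton-Raphson refinement step */
        return y;                        /* approximate 1/sqrt(number) */
    }

Brilliant, but exactly the kind of code that is hostile to portability, maintenance, and documentation.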
The only question should always be: what is the best tool for the given problem? If your problem depends on carving hexadecimals into the spinning magnetic disk with a needle, then be my guest and welcome.
But if I can solve my problems successfully with ChatGPT, Claude, and Gemini, and you come and tell me that this was not real programming, then maybe don’t expect to be taken very seriously.
Important lesson in life: don’t tell a successful man what he should do. It will never work out.
The problem is that if you need to solve your problems with ChatGPT et al., you’re probably not much of a programmer, and the resulting code is bad. And don’t expect me to take you seriously if your avatar is a guy in a business suit. People like you are exactly the cause of the shit we’re in right now.
Let’s try something else.
Developers like to be paid for their job, right? I can code, more than I need to. But when a developer asks for three months of development and a fat budget of XXXX, while I can use tools like Lovable to design, dump it to git, go through the code, run it locally at 1/3 the budget, and be faster to market, then that will be my go-to. Any developer who can’t cope with that will be without a job in 2-3 years.
Because this is how it is: money talks, bullshit walks. That’s business, and that’s what is going to pay your salary. Just because you don’t like it doesn’t mean the competition won’t use it.
jalnl,
You are overlooking efficiency. The use of efficient tools doesn’t automatically equate to an inadequate programmer. You wouldn’t shun a carpenter for using a table saw, would you? It can save a lot of time and, as Valkin said, money too. Sure, it can be dangerous in the wrong hands, and I wouldn’t want an unqualified person using one, but for a professional it can improve productivity.
Also, there was no need to go accusing Andreas Reichel of being personally incapable/unprofessional; most of us were professionals before LLMs came around.
Here’s an example: 100% of people reading this manpage will come away without the information needed to use this command correctly. Take a look at the -s parameter: what the hell is that? And it’s not just the manpage; the documentation in the software is equally lacking unless you look at the source code.
https://manpages.debian.org/testing/can-utils/slcand.1.en.html
I asked Copilot for an example of how to use -s, and frankly it did a much better job than the official documentation. The Stack Exchange site had the answer too, but TBH Copilot can provide a more tailored answer much faster.
I’m not here to pitch Copilot and other AI services or to claim they can fully replace programmers, but I do think a lot of people are painting AI & LLMs in the worst possible light and deliberately overlooking use cases where they are proving useful today.
You are most welcome to not like my work and/or my appearance.
I don’t share your sentiment, but I feel very comfortable in my spot, since some people seem to like my work and/or appearance.
I wish you a beautiful and successful day! Cheers.
AI is slop, but it can be useful when it does context replacements, audits for errors, or performs menial plumbing. For everything else, be careful.
ALWAYS be careful, with everything. Ask yourself “can this be right” and “what will happen at the extrema”. Always!
AI is no exception.
Andreas Reichel,
Agreed.
AI is just a tool, and like any tool it can be used well or used poorly. A desire to blame AI for what’s wrong with the industry does seem to be on full display. The software industry already had many problems, but now we have something to vilify for it. If software is bad, it’s because of AI. But it’s just a tool and nothing more. The fact that a hammer doesn’t work well with screws doesn’t mean we should blame the hammer or the screws… you need to know how and when to use your tools.
The issue here is that tool/service vendors have been aggressively upselling their “AI” products to execs. Most of these execs have 5 years of hands-on experience at most, if any. They’ve spent most of their careers managing software projects and have been far removed from the actual work for decades. They have pushed projects to be delivered at an inhumane pace and are ever emboldened to do more. Once a project has been pushed to GA or production go-live, they move on to the next thing. To quote a DevOps engineer on a recent project: “These vendors do not need to convince you that their product works or can replace you, they only need to convince your pointy haired boss.”
adkilla,
Oh there’s no doubt about it. It’s a gold rush for many…and most will fail. Some will stand the test of time. Tech giants are trying to outspend each other to be the monopoly that ends up holding all the cards after the great AI consolidation eventually happens.
That’s been my message on OSNews for a while now. We all have opinions, but it doesn’t matter what we think, at least not that much, because we’re not steering the ship.
Andreas Reichel,
I like low-level system programming and optimization, but it’s clearly a niche for which it can be hard to find widespread demand. Embedded is probably your best bet, though, since embedded resource constraints put more pressure on being lean. On computers and mobile, most of those skills have gone out the window. It’s not so much that low-level programming/optimization can’t yield improvements, but that nobody cares when RAM/disk/CPUs have so much capacity these days that the bloat is just taken for granted and companies don’t really see a point in paying to fix it. When I was in school in the 00s they were saying “don’t bother optimizing if there’s no need to”, and if anything things have gone further in this direction.
Spot-on about “embedded”, but we all know that this is not the normal “solve a business case” programming but a purpose of its own. (In fact, I see embedded as the last domain of real programming, and I am curious what AI has to offer in that niche too. Not that I am educated enough to discuss it; I am just Java/SQL after all.)
Actually, nobody cares about the things you’ve mentioned either. In fact, most devs just use the best framework for the job. Even the data science folks just use stuff like pandas, PyTorch, etc. Do you think they are constantly peeking under the hood to look at algorithms, workflows, etc.? Maybe in a classroom setting, but not in the real world.
You do realize that there are other languages besides C and assembly, right? If you want to eke out the best performance and optimal compute resource usage, you will need to care about these things eventually. I’ve been working with ML engineers on an anomaly detection engine since 2022. They initially started with Python but have now transitioned to Kotlin/Java. If you want to use stuff out of the box to get up and running quickly, sure, just use the easiest tool for the job. But if you want to maximize what Spark, Storm, etc. have to offer, then you’ll realize pretty quickly how you’ve boxed yourself into a corner. Corporations are always looking for ways to cut costs; they are not going to ignore a large compute bill for long.
I am waiting for the delicious moment when Thom links to an “AI”-complaints article that was generated by one.
As evidenced by the one linked to above, he’s already linking to ones that are not very good. ^_^
Yeah… there are plenty of much better articles out there critical of the AI gold rush, e.g. one of my personal favs: https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/
StephenBeDoper,
He’s either playing up the drama or has some severe anger issues. Probably both, haha.
I respect his desire to live in peace away from AI. But from the sounds of it, he hasn’t been successful in getting away, and unfortunately I don’t think it’s going away.
Edit: I skimmed through the blog; it’s the same angry rant tone, a bit much for me.
That’s not what I took away from it; to me it seemed less like an “angry rant” and more like deliberately over-the-top hyperbole for comedic effect. At least, I’m pretty sure that when he promises that “the next person to talk about rolling out AI is going to receive a complimentary chiropractic adjustment in the style of Dr. Bourne”, he’s not actually expressing a sincere desire to break anyone’s neck.
Underneath the hyperbole, he does seem to have a cogent point — namely that the current AI push seems to be in large part a grift, or at least the biggest cheerleaders for it have all of the hallmarks of being grifters.* As opposed to most of the other anti-AI discourse I’ve seen recently, which comes off like buggy-whip manufacturers decrying the invention of the automobile.
*Admittedly, it also holds appeal by matching up with my own impression. AI seems like yet another in a long line of tech bubbles: dot-com, social media, crypto, streaming, etc. The main difference is that the AI bubble seems to be driven by people who are also familiar with the previous bubbles and are trying to push the old limits as far as possible, in terms of “sell at a loss, while crossing our fingers & hoping we’ll find a way to bring costs down before venture capital moves on to the next new shiny thing”/“fake-it-till-you-make-it”.
StephenBeDoper,
Ok so it’s comedic effect.
Yes, there are AI grifters and part of me is conflicted about AI. Anyone who’s looking for bad players and poor use cases is going to find them. Anyone can find bad examples to fit predetermined conclusions, but doing so is not really good science. One can make the math work for oneself mentally by filtering out competing viewpoints, but the real world is yin and yang: one without the other is not balanced.
Maybe I am misunderstanding, but to me “bubble” refers to something destined to die out… yet I don’t think that’s AI’s fate, because it does some tasks more efficiently than humans can, and this is a key ingredient for survival of the fittest. To be clear, it’s a slow change, but ultimately I really think it’s a one-directional process. Future corporations aren’t going to want to pay human labor to replace the computers.
He said “c0ck”
Thom can be pedantic with his AI rants (and his Xorg and Apple rants), but this time he is spot on and the article is spot on, too.
We are approaching the time when programming will finally become a “real” discipline. The only way to ensure quality is to ensure that the provider of “something” is liable for the quality of “something”. And, historically, that only happens when a sad achievement is unlocked: hundreds or thousands of people die.
And while I agree that AI is awful… long-term it may, surprisingly, save lives: if we reach the point where software quality is bad enough to cause actual data loss quickly enough… AI would be outlawed, and many currently allowed practices would be outlawed, too.
I really hoped that software would be able to, somehow, cheat that rule and skip the loss of lives that other major industries have gone through… but AI may be a way to reduce the damage on that path instead: no AI provider would ever dare to push their slop on people if it had to come with insurance against bugs created by AI.
AI outlawed? The owners of the farm want it essentially running it all on their behalf. It’s pretty obvious that’s where things are headed.
AI let me prompt ChatGPT into creating a GUI for configuring PipeWire/PulseAudio sound devices under GNUstep’s SystemPreferences.app, something the GNUstep developers never got around to making in the 20 years the project has existed. Next I’ll make a Wayland display configuration tool, and then one for NetworkManager, and we’ll finally have a decent knockoff of OS X on Linux. This stuff is useful when used properly.
In many ways, the ship containing traditional software quality sank some time ago. Most of these “cloud” things aren’t well vetted (at all).