Software gets more complicated. All of this complexity is there for a reason. But what happened to specializing? When a house is being built, tons of people are involved: architects, civil engineers, plumbers, electricians, bricklayers, interior designers, roofers, surveyors, pavers, you name it. You don’t expect a single person, or even a whole single company, to be able to do all of those.
↫ Vitor M. de Sousa Pereira
I’ve always found that software development gets a ton of special treatment and leeway in quality expectations, and this has allowed the kind of stuff the linked article describes to become the norm. Corporations can demand so much from developers and programmers that expecting quality becomes wholly unreasonable, because there are basically no consequences for delivering a shit product. Bugs, crashes, security issues, lack of documentation, horrid localisation – it’s all par for the course in software, yet we would not tolerate any of it in almost any other type of product.
While I’m sure some of this can be attributed to developers themselves, most of it seems to stem from incompetent managers imposing impossible deadlines downwards and setting unrealistic expectations upwards – you know, kick down, lick up – creating a perfect storm of incompetence. We all know it, we all experience it every day, and we all hate it – but we’ve just accepted it. As consumers, as developers, as regulatory bodies.
It’s too late to fix this now. Software development will forever exist as a sort of no man’s land of quality expectations, free from regulations, warranties, and consumer protections. Imposing them now, after the fact, is never going to be accepted by the industry and won’t ever make it through any country’s lawmaking process – and we all suffer for it, both as users of software and as makers of it.
The problem is that software is – well, soft. It’s not hardware. There have been a lot of attempts to put a hardware process on top of software development, and while it can work, it isn’t necessary: it would add layers of management and process and balloon the cost of everything, generally by multiples. Writing software is also a creative art more than it is engineering, even though engineering is certainly involved (especially at the systems level). That’s not to say there isn’t room for improvement in process and specialization, but a lot of the top-down MBA spreadsheet-management mindset doesn’t work on top of a creative endeavour like writing software.
Yes, sadly software is not hardware.
When you pay bricklayers to lay bricks, electricians to connect the wires in your walls, or roofers to set up the shingles on your roof, they’ve all done it before. They just come and (hopefully) expertly do it again on your house. You pay labor and materials for creating another copy of a physical object.
In software, however, duplication costs are zero. So no one will pay you for something that has been done before! They will pay for something new, because if they want something that already exists, they’ll just license it for a pittance compared to the development costs. So in most cases you have no idea how the system will work until you try, whereas almost any construction engineer will be able to tell you exactly how a wall with given specs will fare.
It’s true that there is constant pressure to cut costs, but that is not the only reason software is so unstable. We simply don’t have the tools and processes to predict how it will work without actually writing it and observing its behaviour.
My original field is mechanical engineering. After finishing university (20 years ago) I worked as a software engineer, starting at a company whose main focus was quite close to my field; later I moved to a company selling telecommunication products and services (both software and hardware).
What @torp is referring to with the examples of bricklayers and electricians is more like deploying already-released software systems for new customers, or perhaps extending the capacity of a data center. Creating software and integrating systems is more like designing a new building that has to serve a particular function in a particular place for a particular community. Seldom does this mean just duplicating an existing building.
I have a friend from my university whose professional career is much closer to our original field. Most of the time he has worked as a contractor refining existing physical manufacturing processes. Usually his ultimate objective is to make manufacturing cheaper or faster, have production lines occupy less space, or eliminate bottlenecks; sometimes old machines are simply worn out and have to be replaced by newer or even different ones.
His work is not much different from that of software engineers: we have to fight bottlenecks (often just exposing other bottlenecks previously hidden behind the original one). We have to migrate to newer frameworks. Seldom do we create something from scratch, so we have our legacy codebases (like existing manufacturing lines).
All these similarities don’t translate into similar expectations from customers. Of course, we programmers have far more opportunities for cheap prototyping, but this also leaves customers with the impression that the processes we work with are not so complex.
My friend’s customers are aware of the complexities he faces in his work. They can simply feel it: they see the conveyor belts, the machine tools, the whole burden of moving things in and out and handling waste. The same things are completely hidden from the eyes of most of our customers.
The really sad thing is this: “no one will pay you for something that has been done before”. That’s complete bullshit. As in: as far from reality as can be. 99% untrue. The whole reason Copilot works is because software engineers are, usually, doing something that was already done hundreds, thousands, maybe a million times before! People like to pretend that their needs are unique and they need a software engineer and couldn’t use a ready-made solution… and then slowly, painstakingly, over many years… they create A POOR COPY OF SOMETHING THAT WAS ALREADY DONE before.
I have no idea how to fix that, though: people LIKE to pretend that they are unique and their needs are unique… and then developers have to redo things that were already done 10, 20, 30 years before with new tools… producing something that’s worse than what was produced 10, 20, 30 years ago… and then everyone pretends the money was well spent, because if they admitted that they paid $1,000,000 to a bunch of developers instead of purchasing a ready-made package for $5,000 or $10,000… they would feel stupid, right?
And so the wheels keep turning and developers produce more awful pretend-new creations… I wonder where this will lead, though. Will people ever understand that they are paying for the exact same thing in a slightly new guise, or not? Probably not: it’s too ego-crushing to admit.
Another “AI evangelist”. Have you actually used those LLMs in a non-trivial application?
Something there isn’t a tutorial for on W3Schools and Geeks4Geeks.
Generated by real business requirements.
Not to mention systems programming, particularly something as complex and precise as space navigation systems, vehicle and heavy-equipment systems, and all the other mission-critical software that lives depend on. I don’t think we will ever achieve a level of competence and precision in AI software engineering that comes close to human systems programmers. As has been said many times, what we call “AI” is currently no smarter than a spell-check program or autocorrect; it’s literally garbage in, garbage out. Feed it bad data and it will vomit bad results, and that simply cannot be relied upon for anything serious. By absorbing all the code on Stack Overflow and similar sites, it’s absorbing just as much bad code as good, and it’s not smart enough to tell the difference… and it’s not smart enough to know that it’s not smart enough!
No idea what you’re talking about. AI (as it exists today) can’t write anything novel, can’t write anything it hasn’t seen a bazillion times; it’s pretty dumb, by human standards (but very knowledgeable!).
A perfect student who got straight As yet couldn’t find work, because no one wants someone who doesn’t think.
Yet AI solves programming tasks pretty decently – which means that software developers are precisely writing “something that has been done before” again and again and again.
As someone who works in a narrow niche (systems programming, an optimizing bytecode JIT translator), I can assure you that AI is pretty much entirely useless when you go a tiny bit outside of primitive CRUD… try to do something novel, something that wasn’t done before… and yet Microsoft asserts that 40% of Copilot-produced code is committed verbatim.
That’s enough to know that this “no one will pay you for something that has been done before” is a myth: people THINK they are paying for something new and novel… but no, they are not getting anything truly new; 90% of the time it’s something that was already done, the 1001st version.
Morgan: nobody is asserting that using AI is good. In fact, the evidence points to the use of AI being a net negative (you can read more here: https://softwarecrisis.dev/letters/ai-and-software-quality/ … I think I got the link from some comment on OSNews, but I don’t remember where). But the mere fact that it provides a short-term gain and can “solve” lots of programming tasks well enough that 40% of Copilot-vomited code is used verbatim, without changes, shows that “nobody would pay to write something that was already written a thousand times before” is a myth. People do write the exact same code again and again, or else AI (as it exists today) would be entirely useless. Yes, AI steals code and regurgitates it (that’s why Google and Microsoft don’t use their own code to train their models… a pretty damning fact to any lawyer, if you think about it), but what it produces is then accepted by developers! That’s the critical point here.
zde,
I’d say it’s the “creative” work that AI currently excels at. For example, it’s gotten a lot harder to distinguish between AI-generated and human-generated songs.
It would not be that hard for AI to create novel works that aren’t based on something else, but the challenge is creating something meaningful for humans. We need something that excites our neurons, which means using recognizable patterns even though they’re less original. This is exactly why AI models need to be trained on human works. It is not the deficiency of AI that people make it out to be; emulating patterns that humans recognize is the whole point. It may not be a conscious act, but human creators do this as well: nearly all human artwork/songs/writing is a rehashing of existing works.
I agree with your assessment that LLMs can struggle at technical problems like coding. They have cracked the problem of interfacing with humans; however, that doesn’t mean LLMs are going to be the best tool for solving technical problems. Consider how bad LLMs are at playing chess, yet other types of AI have far surpassed the best humans. This is because computers are very good at optimizing fitness functions and at adversarial machine learning. This is the kind of AI that I anticipate will ultimately beat us at programming tasks too: some day I expect we’ll have AI that writes very competitive software by way of fitness functions and adversarial machine learning. Solving problems this way takes a lot of resources, but it’s a proven way to surpass human trainers.
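To make the fitness-function idea concrete, here’s a toy sketch (everything in it – the target string, the scoring – is made up for illustration; a real system would score candidates by compiling them and running tests, not by character matching). It hill-climbs by mutating a candidate and keeping any mutation the fitness function scores at least as well:

```python
import random
import string

# Hypothetical goal for illustration only; a real system would instead
# score candidates by compiling and testing them.
TARGET = "print('hello, world')"
CHARSET = string.printable

def fitness(candidate: str) -> int:
    # Score = number of positions that match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    # Replace one randomly chosen character with a random one.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(CHARSET) + candidate[i + 1:]

# Start from random noise and hill-climb: keep any mutation that scores
# at least as well as the current candidate.
candidate = "".join(random.choice(CHARSET) for _ in range(len(TARGET)))
while fitness(candidate) < len(TARGET):
    child = mutate(candidate)
    if fitness(child) >= fitness(candidate):
        candidate = child

print(candidate)  # has converged on TARGET
```

The point isn’t the toy target; it’s that the loop needs no understanding of the problem at all – just a fitness function and a lot of compute.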
I expect that future AI will not rely on LLMs to solve problems by themselves, but instead call on other specialized AI modules. This combination will bring a whole new level of expertise to interactive AI models. This will probably be the moment when AI beats human programmers.
Where did you hear that? I haven’t heard either way, but I’m just curious. I would think it would be advantageous for them to do so, training on their own domain. Maybe they wouldn’t want to share it publicly.
@zde:
I’m not sure why your reply is directed at me; I was replying to @torp, not you. But yes, there are lots of people asserting that using AI is “good”, and most of them have some sort of investment in the future of AI.
I believe you and I are in complete agreement so I’m not sure why you’re being so adversarial in your reply to me.
You’ll be happy to hear about the September 2024 Directive of the European Parliament and of the Council on liability for defective products and repealing Council Directive 85/374/EEC.
https://data.consilium.europa.eu/doc/document/PE-7-2024-INIT/en/pdf
Yeah, that’s the directive that made Google close Android development. So now, instead of the full history of development, we just get occasional opaque blobs dropped once in a while.
How exactly has that helped anyone?
How is this directive responsible when it hasn’t even been implemented yet, and what’s the causal relationship here anyway? Deeply confused.
Lawyers look out for future lawsuits, not for what may happen “here and now”. The directive makes the manufacturer liable for the code from “the moment in time when the product left the control of the manufacturer” – and for anything developed in the open that’s a problem, because any defect in any random piece of code you put out in the open makes you liable from the moment that piece of code is published on AOSP (or becomes available to any third party, really).
The solution is simple: don’t publish anything as individual patches; squash changes into one large blob, like Red Hat does with kernel patches.
This reduces the liability… and makes life hell for people who use AOSP as a base for anything… but that’s not the lawyers’ problem.
That’s false, see article 2.
I think it’s all too easy to jump to the cliché of “managers”. But the reality is that it lies with us consumers. We simply aren’t willing to pay for bugless software, so it’s not economical to try to create it.
Then you have the other, non-commercial model: FOSS. This model has proven that even without financial constraints, people don’t/won’t/can’t put in the time and effort required to build bugless software at any level. Despite decades of FOSS, not a single (complex) project exists that has achieved it (or even has it as a goal?).
Adurbe,
Practically all the software I write is for corporate users rather than the home users I believe you are talking about. In any case, I experience a lot of pressure to keep costs down. I can’t compete with the prices of offshore developers, and I’ve lost many jobs to offshoring. Most businesses only look at the price and are always attracted to cheaper options. While that’s human nature, they don’t seem to place much value on factors such as quality, experience, and even language barriers. I’ve seen first-hand how frequently these lead to failed projects, but they never learn. It’s always about the price. I believe this bodes poorly for the quality of software in general.
It’s a matter of competency and motivation, regardless of proprietary vs FOSS. Motivation can mean different things to different people. If you motivate a competent developer, you’ll get a better outcome than with an incompetent, unmotivated one.
Personally, I feel more competent than I am motivated. If anything, I’ve been demotivated by project managers who explicitly told me to stop finding/fixing vulnerabilities because it’s not money well spent. As a software dev I feel an innate responsibility to make software robust, but if the people paying me don’t agree, then it’s up to me to give them what they want and not what I think they should want :-/
But FOSS, especially personal projects, is all motivation-driven. And yet the level of “bugless quality” is on par with professional/sold software.
I struggle to buy the argument that those working on these projects don’t have the capability, but maybe that’s it: we (collectively) don’t actually know how to build code the way we build hardware, so we keep making the same mistakes.
Adurbe,
I’m not sure how to interpret your point now, because a lot of the FOSS software we all use is created by paid professional developers through both corporate and non-profit money (firefox, AOSP, redhat, ubuntu, etc). While there are unpaid developers whose motivations can be different, I’m not keen on using “FOSS” to describe developer compensation, if that is what you meant.
Both paid software and free software span an extremely wide range of quality. As a general rule, I don’t think we can generalize that one is better than the other, because it depends more on developer competency and motivation. Motivation varies a lot between individuals: someone might be much more motivated to write high-quality code for a personal project, and the same developer who is unmotivated by corporate office work could be more motivated working for a cutting-edge science lab, even if the pay is less, because they feel more passionate about the work.
Personally, I have trouble envisioning this “hardware versus software” dichotomy. Hardware has “bugs” too, especially if you commission new, untested hardware. I bought this wire stripper thinking it would do a better job than trying to expose the conductor with scissors or a knife, which would really help me out for building cables.
ebay.com/itm/302872975256
So far I’ve used it twice, and not only did it have trouble cutting the tough rubber off a cable, it simultaneously severed strands of wire. Maybe it’s not compatible with my wire; maybe I could have bought something more expensive that works better. It’s always hard to say, because sometimes the more expensive products are identical, produced on the same assembly line with different branding applied after the fact. This isn’t just a one-off; there have been many times that hardware fell below my expectations.
I’d put it to you that designing hardware and software may not be so different. In both cases there is a problem to solve and there are constraints that may prevent workers from being able to do a perfect job. The boss may focus on things like keeping costs down. Sometimes poor products make it to market and you need a v2 or v3 to fix the “bugs”.
Edit: Wow this ended up being a long post, sorry if it’s too much, haha.
FOSS has much better quality; there have been many studies. It’s not ideal, but it’s much better.
What FOSS lacks is polish… but that’s understandable: if you don’t need to sell software to Joe Ignoramus, you won’t add various tutorials or change the design every few years to attract new users, etc. And even when users’ complaints are genuine… the developer may not be motivated enough to fix a deficiency that doesn’t affect the developer personally.
Even if you write the most perfect program, you are still running it on top of other people’s code, or hardware, which has a host of its own bugs (not to mention that electrical glitches can introduce bugs of their own).
And if the darkest scenario comes to pass – people die because of problems in your program (regardless of where they stem from) – you are the most likely to take the blame, since yours is the user-facing application.
Something worth thinking about on a plane, or when you have a pacemaker.
yoshi314@gmail.com,
Electrical and hardware bugs are rare and not really something most software devs have to care about. But if you do have to care about them then you probably need specialized redundant hardware.
I don’t consider standard industry practices good enough for life-saving/mission-critical applications. Life-critical systems really need to be held to a different standard. IMHO, keeping things simple is that much more important when software controls a critical system like a medical device or an aircraft component. It can help to break things down into simpler components with strong isolation, as in a microkernel. Operating systems and applications based on monolithic designs are just too dangerous. While the notion of breaking things down into simpler components is taught to all software engineers, these components typically still run in a monolithic kernel or a single process address space, which means a bug in any one of them can crash the entire application/system.
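As a toy sketch of that isolation argument (just the principle demonstrated in user space with made-up names, not an actual microkernel design): run the risky component in its own process, so even a hard crash in it can’t take down the supervisor.

```python
import multiprocessing as mp
import os

def flaky_component() -> None:
    # Hypothetical component; simulate a serious bug with a hard crash
    # that would bring down anything sharing its address space.
    os._exit(1)

def main() -> None:
    # Run the component in its own process (its own address space),
    # so the crash is contained and the supervisor can react.
    worker = mp.Process(target=flaky_component)
    worker.start()
    worker.join()
    if worker.exitcode != 0:
        # In a real system: restart the component, fail over, raise an alarm, etc.
        print(f"component died (exit code {worker.exitcode}); supervisor survives")

if __name__ == "__main__":
    main()
```

A microkernel applies the same idea at the OS level: drivers and services live in separate address spaces, so one faulting component can be restarted instead of bringing the whole system down.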
I think greed is the fundamental issue here. The Swiss Army knife approach comes about because people do not want to share the wealth; they all want the biggest possible slice of the pie.
And of course, those who get the highest wages aren’t the software engineers but the CEOs, as bait to “incentivize” them for “performance”.
Technical qualifications? Muhahahaha!
I read many years ago that there was a lawsuit in the US where some bad software caused serious damage, and the end result was that “software engineering” is not treated the same way as “real physical stuff” engineering.
The world would be a much nicer place (with slower updates and less planned obsolescence) had that lawsuit gone the other way.
And let’s not forget: https://en.wikipedia.org/wiki/Therac-25
First of all, I really feel this talk about everything being buggy and shitty is hugely blown out of proportion and is getting tiring. It’s as if the nuances are being ignored and everything from tiny, insignificant issues to huge, real issues is treated the same. How often do you really hit a bug? How often is it actually a problem, and how often are you just angry because something changed?
The narrative about it all being the bad managers’ fault is way too simplistic, but I can understand why someone inexperienced might think so.
Regarding complexity, we really have two main types in software development: inherent complexity, which is needed because what we are trying to solve requires it, and accidental complexity, which could have been avoided but was created during the software development process. A whole lot of the troubles within software development can be found here. It would be easy to point the finger at developers always wanting to rewrite and overengineer things, which is certainly part of it, but that too would be too simple an explanation.
Some of the problems, in no particular order, are:
From managers/the business:
– Not educating the developers in the business domain
– Not including developers, from the beginning, in talks about features
– Micro managing who works on what instead of empowering the teams
– Not trying to understand the software development process and challenges at all
– Enforcing random deadlines
– Shifting priorities around too often
– Spending way too little effort thinking about what we are building and why, leading to a lot of extra work chasing the wrong goals.
From developers/technical staff:
– Running with things that are new and shiny without really understanding them
– Accepting that they don’t understand the systems or the code they work on, including mindlessly pasting in code from stack overflow or AI
– Thinking quality is something you build after you made it work
– Thinking you are Netflix or Facebook when choosing technologies and architectures.
– Not making an effort to understand the business side
– Not talking about business value when they try to get things prioritized. (everything has business value)
– Working solo instead of as part of a team.
– Worrying about tiny things that don’t matter at all during code review theater, while not at all thinking about the big picture.
Everyone: not working to reduce context switches and lessen cognitive load. Code is written for other humans to read and work with; getting the computer to do what you want is the easy part. “Modern” async workflows with pull-based code reviews that force everyone to jump back and forth between tasks are harmful. For teams that are not super-distributed, I think git has done a lot of damage to how most teams work. Finally, writing things in the wrong languages – developer tooling, debuggers, profilers, instrumentation and monitoring, and ease of builds and deployments matter a lot more than the language itself.
I have been writing software for more than 30 years, and at the end of the day, the hard part is exactly the same today as it was when I started: keeping it simple, stupid; controlling coupling; and building the right thing.
I’m not sure I agree with you, in essence.
I think the problem comes from ABOVE managers. I don’t think any developer or manager wants to deliver a buggy or not-ready product. Probably the problem lives in the C-suite.
You know this new feature that is 30% ready? Well, we are delivering in a month anyway.
You know this new telemetry/ad-selling feature that no customer wants and makes the application respond 30% slower? Well, we are keeping it. And no, you have no time to try to improve it. It is already working.
You know this migration from A to B? Well, finish it already.
Multiply this by each feature in a complex project, and we get to the mess we’re in now. Doing things this way was simply not an option when you couldn’t deliver updates via the Internet, or when software was stored in ROM cartridges. It had to be done well!
But yes, I agree that constant context switch is the devil. Feature bloat is evil. As a software consumer (and really beginner developer), I am very tired of paying top money for software that is much slower than it should have been, or that is unusable out-of-the-box until the first 32 rounds of patches.
And I also hate that we update things all the time. Email has been the same thing for decades… why are we still updating email servers? Fix all the security holes and let’s be done with it already.
My neighbour is an accountant and has worked FULL TIME with Excel for 22 years, and she can’t name a single feature from the last 15 years that makes a difference in her work, while interacting with Excel is slower than it used to be. It surely calculates complex spreadsheets faster, but the UI is much slower now, so she gets less done in a day. So…?
If “constant context switch is the devil”, then we are back to managers being the culprit, because managers DON’T EVEN THINK IT’S A PROBLEM. For them it’s as natural as breathing. This was written many years ago, but nothing has changed since then: https://paulgraham.com/makersschedule.html
Managers go from what they don’t like (an opaque development process, not under their control, run by talented developers who work on their own schedule) to something they do like (a well-behaved agile process with tons of artifacts that help them control it: bug trackers, chats, regular reviews and meetups…). And then… everything is just perfect… only the users of the shitty code complain, for some reason.
After that they try to “improve” the process in the only way they know how: add MORE formality, MORE accountability, MORE disruptions… which, for some crazy and unfathomable reason, doesn’t help.
Eventually, after adding more and more developers and squeezing out the ones who understand how things work (because sane developers look for jobs less infected by managers’ decisions), they arrive at something bloated, barely working, yet… somehow, surviving.
And that’s how we end up with all that mess.
Sadly, kicking out managers and leaving developers in charge doesn’t work either: when left to their own devices, software developers tend to produce beautiful things… without any business value and with no users.
As for Excel… I recommend reading https://hardcoresoftware.learningbyshipping.com/p/080-progress-from-vision-to-beta to understand how crazy the whole thing has become: when Microsoft found out, after the release of Office 2003, that 90% of feature requests were for features already in MS Office… they spent four years redesigning the interface to make them more discoverable. That means it doesn’t just FEEL like Excel was finished 18 years ago; it was ACTUALLY FINISHED 18 years ago! The only reason new releases are made is to be able to provide updates to paying customers!
zde,
Several of your points made me chuckle!
The need to keep selling the same software is quite a big problem for software companies. They want software to never be finished so they always have something to sell you again. Once it’s finished, most consumers don’t want the product to keep changing for no good reason. So many software companies have turned to gimmicks to address the drop in demand from (rational) consumers who don’t care to “upgrade”:
1) Planned obsolescence – you have to upgrade to overcome the problems we introduced.
2) Recurring subscriptions – consumers have to re-buy the same software indefinitely.
3) Forced online dependencies that aren’t technically justified.
4) OEM business models where customers are typically forced to purchase the same software repeatedly.
Yeah, but the story with Steven Sinofsky is a bit different here. I always felt that MS Office 2003 was “finished” (and I really hate the ribbon), but here we have confirmation, FROM THE GUY WHO WAS IN CHARGE, of the fact that MS Office 2003 was, essentially, finished (if 90% of feature requests come from people who simply couldn’t find the features, it mostly means that it’s not possible to help them by adding more features to such software).
Apparently I’m a rare guy who knows that Google exists, and thus MS Office 2007 was designed to make these features accessible to people who have no idea about that wonderful fact… OK.
But what can you do going forward from there? Maybe make it faster, less resource-hungry? That, again, is WHAT PEOPLE WERE ASKING FOR (from that same book)… but it makes no sense from a business perspective; you wouldn’t really be able to sell THAT, because compared to the other bloatware that exists today, MS Office is already surprisingly lean (which is really sad, but I have to admit it’s true).
I really want to applaud the guy: he manages to present all the facts that make you understand how these things are true – without ever allowing himself to fall to the temptation of connecting the dots and writing that explicitly… but that’s why I’m not a business person: it’s really hard for me to see some kind of bullshit without calling it bullshit… Steven Sinofsky does that easily.
zde,
I’m in the same boat.
I haven’t used MS Office in a very long time, though; I moved completely to LibreOffice. By and large, most modern software is not optimized; for better or worse, we’ve grown accustomed to letting faster hardware compensate for the lack of software optimization.
Not really familiar with this guy, but I say call a duck a duck. I think the software industry would be quite a bit different if we didn’t prioritize business agendas and gimmicks.
My grandfather did ALL of that – everything from design to plumbing and electricity. He did not need others. He even cut the wood. Yeah, sure, he bought the nails and screws.
It is my dream that before I die, there will actually be such a thing as a Software engineer. This would mean required certifications and a license, as well as education requirements and continuing education. You would also be accountable to legal requirements and a professional organization.
It is a joke to call the vast majority of software development “software engineering”. If buildings were constructed like modern software, the week before the owners moved in we would be swapping the second and fourth floors of the building.
A “manager” once said to me: “I need coders. I don’t need ‘architects’ or ‘engineers.’ I need coders.” Their comment has stuck with me.
Unfortunately, there are many groups who think otherwise. Companies don’t want to certify their software, and many developers are indifferent or against it.
Responding to the sentiment, a licensed software engineer profession sounds ideal for ensuring accountability and quality, mirroring established fields. Just as a doctor requires licensing, software professionals handling critical systems should demonstrate competence. Speaking of critical systems, navigating the online world requires precision, much like mastering levels in geometry dash. Sharpen your reflexes and explore challenging levels at https://geometrydash-game.one/, a fantastic place for all things Geometry Dash
AI generated spam. Ugh.
The thing I really hate is that, due to poor software quality, there was a time when it was decided it was a great idea to have people specialized in creating and executing tests. Then, when keeping the infrastructure became too complex, specialists in that area were added as infrastructure support.
Nowadays, companies do not want to pay for those roles; they want the developer to do everything: learn not only different programming languages such as TypeScript/Angular, JavaScript, and Rust, but also know test frameworks such as JUnit, Cypress, and Cucumber, and also keep the infrastructure running in Azure, AWS, and others.
THIS IS NUTS! Quality is going downhill from there, and they do not care.