I’ve been programming for 15 years now. Recently our industry’s lack of care for efficiency, simplicity, and excellence started really getting to me, to the point of getting depressed by my own career and IT in general.
Modern cars work, let’s say for the sake of argument, at 98% of what’s physically possible with the current engine design. Modern buildings use just enough material to fulfill their function and stay safe under the given conditions. All planes converged to the optimal size/form/load and basically look the same.
Only in software is it fine if a program runs at 1% or even 0.01% of the possible performance. Everybody just seems to be OK with it. People are often even proud of how inefficient it is, as in “why should we worry, computers are fast enough”.
A bit ranty here and there, but this entire “old man yells at cloud” article is very much music to my ears. Software is bad. We expect software to be bad. We accept that software is bad. We make excuses why software is bad. We tell people it’s okay that software is bad. We say it is inevitable that software is bad.
If any other industry were as lax about quality and performance as the software industry, we’d be up in arms.
This sort of thing happens in other engineering disciplines. Some new material is developed, or some new process, and there is a lax period where the engineering around optimizing costs and efficiency in relation to it is loosened.
But at a certain point things will catch up. If nothing else, the power costs of inefficient code will begin to make a notable difference.
I beg to differ. When a program works and “fits the bill”, provided no one complains, no one will ever take another look at it to further enhance its performance. As hardware progresses (a bit less these days), it usually hides the slowness of the program.
Why would a manager invest more money into the “optimization” of a piece of software just for the sake of it? “When it works, don’t touch it” is the usual motto. That’s why we get old and abandoned pieces of code that still run after years without having been updated in ages.
And no, open source is not the solution, considering the mess the open source community is in, but yeah, at least you can update/optimize it yourself. If you have the budget. And if you aren’t harassed by foss-boys to release your changes for the benefit of the community.
The main problem is that programming languages are too flexible and forgiving, allowing people with little to no logic to glue libraries together, patch an application, and call themselves coders. Nope, doing it right actually requires some skill and talent.
You can also patch various parts together into a car and call it a Bugatti; I wouldn’t drive it. As long as people don’t get a real feel for what coding a complex application involves and what the consequences are, we’ll keep getting blobs of binaries that interact badly with each other.
And with the user.
The interesting thing is that ancient programs (Paint Shop Pro 6, I’m looking at you, old pal) manage to cram so much functionality into 70MB and, even now, work so well on Win10.
Their current product is more than ten times the size and adds about 10% more functionality.
Why did I bother?
Mac
Eric S. Raymond worked on the problem long ago. It’s stupid that we are still milling around it.
Platforms should be not only finite, but small. And build up from there. Only then can quality be upheld. Three answers: abstraction, abstraction & abstraction.
The f_ck with performance. TRex was high performance.
Evolution in Software happens like waves at sea. From unsuspected sides. Back and forth, and with massive extinctions.
Are you sure about that? Hardware engineering has problems all the time, but now they fix them with software. Don’t get me started on civil engineering; it’s almost guaranteed that homes will have some sort of defect sooner or later. What other industries have the challenges that engineering has?
Software engineering has evolved and is kind of mature; it’s just that some people don’t follow standards or best practices, and that gives a bad rep to the entire industry. More critical engineering projects have more controlled processes and regulations to avoid havoc, and that’s OK. A simple mobile app doesn’t have to.
Having said that, most problems would go away if we just stopped using C or other memory-unsafe languages where they’re not required. Yes, good tools can stop some bad practices, in the same way that ABS and traction control can stop some car accidents.
Hi,
I’m sure you didn’t read the same article I did. I read an article that was (in part) about hideous bloated messes (e.g. like Javascript and Java and C# with huge run-times and massive libraries) being used to save the programmer a tiny amount of time (e.g. the time to find/fix a rare bad pointer or a trivial memory leak).
Most of the problems mentioned in the article would go away if we banned all languages except C, because this would force programmers to see what their code actually asks the CPU to do (instead of burying everything under 200 abstractions deliberately designed to prevent the programmer from being able to care).
– Brendan
Brendan,
That’s true. I have mixed feelings about abstractions. They can be very powerful, but they can be over-engineered and if we’re not careful they can lead to a great deal of bloat and dependencies.
Hi,
I think part of the problem is the way code re-use is done (for all languages). Often we’re re-using small pieces (e.g. classes, functions) that have to be “general purpose, optimised for no specific use” when we should be re-using much larger pieces (e.g. an entire spell checker, an entire word-processor front-end, an entire graphics file format converter complete with wizards/dialog boxes, an entire mathematical function evaluator, …) that communicate via open interfaces; so that far more code can be optimised for its specific use, and so that there’s a lot more competition between alternative implementations, because people can replace a fraction of an application (e.g. the user interface for Microsoft Excel on its own) without replacing a whole massive monolithic blob.
– Brendan
Sounds a lot like COM. I like the ideas behind COM: the definition of interfaces via structures of a known format/layout, behind which large functionality can be hidden. Also a lot like Java interfaces.
C’s weakness is clearly the C runtime, which isn’t surprising as it evolved (rather than being designed) under the computing and storage constraints of the 60s/70s.
The first thing I’d add to the C standard is garbage collection, which would take out a whole class of memory errors (leaks, double frees, etc.) It could be entirely optional: use malloc()/free() if you prefer, but add new functions that work with a GC-based allocator instead.
The next thing I’d add is a cord-based string/blob library, so that strings/blobs can be safely used with reduced fear of buffer overruns. (So far this sounds a lot like http://www.hboehm.info/gc/)
Finally, a simple container library that defines a standard map interface, which can be instantiated via tree/hash/vector-based constructors, would provide useful generic functionality that every other language (including C++) already provides, so everyone doesn’t have to half-bake their own implementations, warts and all.
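A rough sketch of what such a map interface might look like in C (all names here are hypothetical, and the linear-scan backing store is only there to make the example self-contained; a real library would offer tree/hash constructors behind the same interface, and error checks are omitted for brevity):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef int (*map_cmp_fn)(const void *a, const void *b);

typedef struct { const void *key; void *value; } map_entry;

typedef struct {
    map_entry *entries;
    size_t     count, capacity;
    map_cmp_fn cmp;
} map_t;

/* One constructor per backing store; a real library would also offer
   map_new_tree()/map_new_hash() returning the same interface. */
static map_t *map_new_vector(map_cmp_fn cmp) {
    map_t *m = calloc(1, sizeof *m);
    m->cmp = cmp;
    return m;
}

static void *map_get(const map_t *m, const void *key) {
    for (size_t i = 0; i < m->count; i++)
        if (m->cmp(m->entries[i].key, key) == 0)
            return m->entries[i].value;
    return NULL;
}

static int map_put(map_t *m, const void *key, void *value) {
    for (size_t i = 0; i < m->count; i++)
        if (m->cmp(m->entries[i].key, key) == 0) {
            m->entries[i].value = value;      /* replace existing entry */
            return 0;
        }
    if (m->count == m->capacity) {
        m->capacity = m->capacity ? m->capacity * 2 : 8;
        m->entries = realloc(m->entries, m->capacity * sizeof *m->entries);
    }
    m->entries[m->count].key = key;
    m->entries[m->count].value = value;
    m->count++;
    return 0;
}

static void map_free(map_t *m) { free(m->entries); free(m); }

static int cmp_str(const void *a, const void *b) { return strcmp(a, b); }

int main(void) {
    map_t *m = map_new_vector(cmp_str);
    map_put(m, "speed", "fast");
    map_put(m, "size", "small");
    printf("speed -> %s\n", (char *)map_get(m, "speed"));   /* prints: speed -> fast */
    map_free(m);
    return 0;
}

The point is that the calling code never knows or cares which data structure sits behind the handle.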
My own toy OS (WIP) adds much of the above in kernel mode, allowing me to not worry about many low-level details. Caches are map structures, and I can change from a hash-based to a tree-based implementation with a single constructor change. I never free memory explicitly; it’s handled by a GC that works with the slab-based allocator.
Just did that in my company: the IoT-oriented OS I created features a completely revamped runtime in C, not C++, with a thread-aware memory allocator, objects (data structures), mutable strings and several other features that come included in what I always expected from an OS. And everything fits in 192 KB. A typical application weighs around 5 KB, since it basically consists of business logic, as the drivers are integrated into the OS.
Is this an OS you can share? Personally (rather than professionally) curious.
Nope, not yet, sadly.
Brendan,
I’ll take a guess that GC is why someone downvoted your post. That’s very controversial, especially for a low level language like C.
Anyways, garbage collection isn’t too difficult except for one fundamental missing feature in C/C++: no reflection. If we had runtime object reflection, it would be trivial to identify the structure of every object on the heap, which is a crucial step in garbage collection. I do think C++ should support reflection.
IMHO the weakest aspect of C programming is not the explicit frees, which honestly don’t bother me, but rather the lack of strings, so I’d definitely put this on top. As C developers we put a lot of effort into string manipulation; string operations that are trivial in other languages can be very tedious in C because there’s a lot of boilerplate code.
Conservative GC gets around that problem by speculatively interpreting integers as pointers, and marking the memory they point to as in use (if applicable). Of course, this is not a problem if there is little overlap between heap-space pointers and regularly used small integers. I can’t see it being a problem in my kernel, as the heap is near the top of the address space, whereas most integers in use are small.
My slab allocator also provides custom per-slab-type mark functions, which can selectively mark only those fields that should be interpreted as pointers, and ignore other fields. But by default, it does the conservative thing of speculatively assuming blocks of memory are simply arrays of pointers, and marking them as such.
The selection of GC roots is very important as well. My kernel doesn’t have much in the way of global data, so the roots are actually pretty easy (all live data is reachable from just a few global variables). The hboehm GC gets around this with a bit of a scattergun approach by not having explicit roots and instead just scanning the whole heap (as I understand it; it’s been a while since I looked at the details).
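To make the conservative-marking idea concrete, here is a minimal, self-contained toy sketch (hypothetical names and a tiny fake heap, not the actual kernel code): every word in a reachable block is speculatively treated as a pointer, and anything that doesn’t fall inside the heap is simply ignored.

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

#define NBLOCKS         4
#define WORDS_PER_BLOCK 8

static uintptr_t heap[NBLOCKS][WORDS_PER_BLOCK];  /* toy heap  */
static bool      marked[NBLOCKS];                 /* mark bits */

/* Does this word point into the toy heap? Return the block index, or -1.
   Small integers fail the range test and are ignored. */
static int block_containing(uintptr_t word) {
    for (int i = 0; i < NBLOCKS; i++) {
        uintptr_t start = (uintptr_t)&heap[i][0];
        uintptr_t end   = (uintptr_t)&heap[i][WORDS_PER_BLOCK];
        if (word >= start && word < end)
            return i;
    }
    return -1;
}

/* Default conservative mark: treat the block as an array of words and
   follow anything that happens to look like a heap pointer. */
static void mark_conservatively(int block) {
    if (block < 0 || marked[block])
        return;
    marked[block] = true;
    for (int i = 0; i < WORDS_PER_BLOCK; i++)
        mark_conservatively(block_containing(heap[block][i]));
}

int main(void) {
    memset(heap, 0, sizeof heap);
    heap[0][0] = (uintptr_t)&heap[2][0];  /* block 0 "points to" block 2 */
    heap[0][1] = 42;                      /* a plain integer, not a pointer */
    heap[2][3] = (uintptr_t)&heap[3][0];  /* block 2 "points to" block 3 */

    mark_conservatively(0);               /* block 0 is the only GC root */

    for (int i = 0; i < NBLOCKS; i++)
        printf("block %d: %s\n", i, marked[i] ? "live" : "garbage");
    return 0;
}

Running it reports blocks 0, 2 and 3 as live and block 1 as garbage; the integer 42 is harmlessly ignored because it doesn’t point into the heap.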
christian,
You use this technique for a kernel? I’ll be honest, I don’t like it. Having to do it manually or having the GC assume every field is a potential pointer are both bad options.
Maybe it’s good enough for your needs, but the fact that it’s probabilistic and cannot eliminate memory leaks 100% of the time would make me go a different direction with it.
Have you considered other possibilities?
I’d prefer pre-processing the source code files to obtain reflection data consistently and automatically. Make no mistake, this is also a kludge compared to having native support for reflection, but it’s certainly better than manually doing it or assuming everything is a pointer.
http://garret.ru/cppreflection/docs/reflect.html
I’m more of a “90% of the effect for 10% of the effort” sort of guy. If a kludge is simple, but safe and without undefined side effects, I’ll take the kludge. Reflection would be great in C/C++, but it looks like a lot of extra work just to determine whether some bag of bits should be treated as a pointer or not.
I guess I just need to get the code into a shareable state and let people decide for themselves. If it’s ever released, GC will be a good differentiating feature. My aim is to make code easier to write and support; I’m a big fan of exception handling as well, so I hope to remove loads of the boilerplate code C currently requires to catch/handle errors and manage object memory lifecycles.
christian,
I concur on the difficulties of doing it in C, which is why I’d consider programming it in dlang or even rust instead.
https://dlang.org/spec/traits.html#allMembers
I’m just as guilty of using C by default as everyone else. It’s a catch-22: I use C primarily because of its popularity, but it’s popular because devs like me use it so much. What we need is a huge industry-wide reboot!
Don’t get me wrong, I completely understand your motive for wanting GC, but making the GC pointer-aware would be a priority for me. Regardless it’s admirable that you are trying something different and it’s cool to see this kernel experimentation happening!
If you’re curious, have a look:
http://thewrongchristian.org.uk:8082/index
I use the cross compiler from:
https://mirrors.edge.kernel.org/pub/tools/crosstool/files/bin/i686/4…
The GC itself is under kernel/thread.c and libk/slab.c.
But be warned, it doesn’t do much yet other than start, do some tests then wait for keyboard interrupts. I’m working at the moment on providing a simple testing shell, so I can test interactively using something other than gdb.
christian,
Thanks for the link, it’ll be interesting to see where you take it!
Hm, on a site where quite a few posters are quite anti-MS? (maybe you’d do better on /. )
Hi,
A kernel holds onto various pieces of data from user-space (data sent to pipes, message data, process names, …). For all of these things there’s a risk that a malicious user-space process provides data deliberately crafted to look like pointers, to prevent the GC from freeing memory and cause a denial-of-service attack (due to the kernel running out of memory).
How much time have you spent trying to fix the security holes caused by your GC, and what is the current probability of a denial of service attack succeeding?
– Brendan
As it’s a toy OS I’m currently in the process of writing, and haven’t actually made it to user mode yet, that’ll be zero time spent on that.
That said, the per-object type marking can easily obviate the threat of DOS from user space data. There is no reason why user provided data need be considered in GC.
Are you trying to mess with those from the http://www.osnews.com/comments/30745 discussion?
LOL
Well, GC ain’t such a burden when it is engineered into the OS, the way I did it. When a thread exits, its whole memory gets released to the system and recycled according to the object types previously allocated. The idle task then collects and frees the oldest allocations, provided no one needs a huge amount of allocations. But at least that allows bad memory usage to be tracked using the integrated profiler.
Hi,
For strings, I think the solution is to stop over-using strings.
For user interfaces, as soon as you start doing anything right (internationalisation) you can’t use strings directly. You end up wanting to send “string ID plus optional values” to some kind of internationalisation service; that service uses the string ID to find a format string for the end user’s locale and then formats the values (time format, date format, thousands separator, …) to insert into the format string, generating the appropriate Unicode data on your behalf (which you’d cache and/or send to the GUI or wherever, without ever bothering to read or modify the resulting Unicode strings).
For configuration and files, everything should be using binary data and not text; so that you can make it more human readable (e.g. utilities, menus, dialog boxes, etc that rely on the internationalisation service) while also avoiding a huge amount of inefficiency and bloat (larger file sizes, parsing, syntax checking, etc).
The only thing that strings should be used for is file systems (file, directory and path names); but (assuming the existence of acceptably internationalised “open/save file” dialog boxes, etc) that’s almost nothing.
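A very rough sketch of the “string ID plus values” idea in C (the message table and all names here are hypothetical, not an existing service; a real one would also handle plural rules, number/date formats, and so on):

#include <stdio.h>

enum msg_id { MSG_FILES_DELETED, MSG_COUNT };

/* Per-locale format strings, looked up by message ID rather than by
   passing literal text around the application. */
static const char *messages_en[MSG_COUNT] = {
    [MSG_FILES_DELETED] = "%d files were deleted by %s",
};
static const char *messages_de[MSG_COUNT] = {
    [MSG_FILES_DELETED] = "%d Dateien wurden von %s gelöscht",
};

/* The application never builds or inspects the final string; it only
   sends the message ID plus the values to substitute. */
static void show_message(const char **locale, enum msg_id id,
                         int count, const char *user) {
    char buf[256];
    snprintf(buf, sizeof buf, locale[id], count, user);
    puts(buf);   /* in a GUI this would go straight to the widget */
}

int main(void) {
    show_message(messages_en, MSG_FILES_DELETED, 3, "alice");
    show_message(messages_de, MSG_FILES_DELETED, 3, "alice");
    return 0;
}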
– Brendan
Why is this downvoted? Wtf people?
Hi,
I didn’t downvote; but…
The article (and other people) are complaining about inefficient software and too much bloat; and then someone suggests “let’s add more bloat (for safety) and make everything less efficient!”.
It’s a bit like going to an SJW meeting and suggesting that it should be run by men; or finding a group of people complaining about taxes being too high and suggesting a new tax; or going to Greenpeace and trying to tell them it’d be nice if we could just dump our trash directly in the ocean.
– Brendan
Sure, but he’s making well-thought-out arguments and not being rude or anything. Disagreeing is not a cause for a downvote.
There’s an ‘inaccurate’ flag that downvotes a post. That’s probably what happened with his post. After all, it’s not accurate to claim that the solution to inefficiency is to pile even more code on top of the problem. That’s one way to deal with SAFETY, not efficiency.
Depends on whether your “efficient” code is pissing away memory through memory leaks, corrupting buffers that have already been freed and reallocated to something else, or generally messing up memory, which is not untypical in a large C program. It is certainly NOT efficient to leak away memory, chewing up swap.
Contrary to Brendan’s assertions, I’m actually a very competent C developer, and have done much debugging of memory issues like leaks, use-after-free and double frees. But it’s a tedious problem, especially in old legacy code, and such a problem catalyzed me out of a job recently (especially painful as it wasn’t my code, I hated the library in question, and had been moaning for years about problems with the library.)
My original point was that if C had a nice robust runtime that handled the common memory errors, had nice string and buffer handling to avoid common buffer overflow errors, and provided a nice efficient collection container API, perhaps more people would use it for application development instead of higher-level, often interpreted languages like Python.
Bloat doesn’t come from providing a few foundation classes and APIs. Bloat comes from layers and layers and layers and layers of abstractions and libraries. I simultaneously laugh and cry when observing some Java stack traces, which can be dozens of functions deep even without recursion.
Hell, even Java with its JVM runs like a trooper once JIT compiled, and GC causes very little trouble or overhead in all the Java applications I’ve developed.
Hi,
I can just imagine it now – a modern 32-core AMD Epyc system with 64 GiB of RAM running your OS, struggling to reach the performance of an 80486 with 16 MiB of RAM running Windows 95; simply because you were too scared to use a raw pointer.
– Brendan
Brendan,
…I don’t understand the aggression here, but I feel like I need to stand up for christian.
Even if you do disagree with him, and I disagree with some of his ideas too, he doesn’t deserve to be spoken down to this way. We all have our bad days, but what’s the point in flinging insults? Don’t we want osnews to be welcoming and inclusive so that we can actually have more discussions on a wide range of topics? These are rare enough as is, I’d like to encourage them more with everyone feeling welcome.
Hi,
From my perspective, “inefficiency to avoid effort” is one of the largest reasons that almost all modern software sucks, and Christian is here trying to promote evil (by convincing people that it’s not just acceptable but actually beneficial).
If you can think of a different way to get people like Christian to fully comprehend my disgust I’m open to suggestions.
– Brendan
Brendan,
Consider it from a different perspective: avoiding effort through abstractions that let us skip it can increase our efficiency. Insisting we keep doing things the hard way is what bogs us down.
I’m usually the one arguing for more efficiency, haha. The point though is to keep an open mind.
Yes… Where did you go, in a discussion under such an article?
Hi,
Let’s start from the start. Why do programmers need to write 200 incompatible text editors from scratch (and then complain about the work involved in reinventing the same wheels over and over, and produce half-baked inefficient messes because there’s little point in continually improving/optimising code that will be discarded when the next fad arrives)?
This insanity is exactly what your “abstractions for tiny things like strings” are designed to help with.
– Brendan
Brendan,
In a previous post you made a point about high level functionality not being reused enough.
http://www.osnews.com/thread?662839
I don’t know what you think that should look like, but it seems your beef isn’t with abstractions, but that the abstractions aren’t high level enough.
If you can back up your disgust with figures identifying the performance impact of GC, I’d listen.
On the other hand, let’s say my kernel ends up spending 10% (pessimistic) of its run time in the GC. Now, let’s say the application is system-call heavy, and I’ve observed that a system-call-heavy application might spend perhaps 30% of its time in the kernel. Well done, your disgust has saved me 3% of my run time, a great improvement. Now, it might be that I in fact spend 1% of my kernel run time in GC, and your anti-GC zealotry has saved me just 0.3% performance. All the while, my code is less safe, has scope for a whole host of memory-related bugs, and I’m less confident of its reliability.
Hi,
Sure, and while we’re having fantasies it might be that Unicorns exist and shit pure gold too
Let’s look at reality here. In the hope of sacrificing performance to increase safety and reduce development time by an insignificant amount, you’ve sacrificed performance to create a perpetual security disaster that is going to continually waste a significant amount of development time to work around; and you’re already trying (and failing) to defend “pure disadvantages with no advantages whatsoever” even though your code is a tiny fraction of what is necessary and doesn’t even support multi-CPU yet (which is something that is going to turn your GC into a nightmare of complexity as soon as you stop wasting almost all CPU time – e.g. 75% of a quad-core chip).
– Brendan
Hey hey hey, calm down there Linus.
Stop being a dickhead.
“Software engineering” is a myth (it is not engineering, never was, and never will be); and hasn’t evolved. Instead, it has degraded into retarded script kiddies gluing together lego bricks without any understanding of the end result and without any way to improve the quality of the end result.
My God, you also just described the state of Manufacturing Engineering in the US. A whole other topic but near and dear to my heart.
I love the smell of overconfident C programmers in the morning
Hogwash. C code is compiled and one can’t see exactly what the processor will work with in the end.
And there is plenty of C code that uses layered design; many large code bases are layered in an ad-hoc manner using complex defines, making them very hard for humans to parse.
The most horrible bugs in history were written in gorgeous C. Safety over speed is what I would choose most of the time.
A car can be made to go faster, with special fuel, but you lose the safety.
Big projects in C are also very hard to understand, and they can be very tricky sometimes. Good C++17 code, if written well, is more expressive and easier to understand.
So far, to me it looks like C++17 is the best option to choose when you want speed and safety.
The real problem is the misuse of C. I don’t even use C for microcontrollers; there too I use at least C++14.
Far too many projects are built on top of C, and plenty of C++ APIs are built on top of C APIs.
The crap is literally everywhere.
A nice story to read:
https://medium.com/message/everything-is-broken-81e5f33a24e1
cipri
Indeed, you are completely right about the script kiddies part. It’s amazing how I work with programmers 20 years younger than myself and see how they build huge monstrosities of overengineered “state-of-the-art” programming technology to do tasks so simple a monkey with a stick could program them by hitting the keyboard randomly. Every single task is treated as a “dauntingly complex engineering project”, and the code is written as if it had to be hung on the wall as a work of art, instead of being written to actually be run to do something useful, and do it properly.
And as keen as they are on “modern technologies”, they barely touch a debugger or a profiler to actually check that it runs, and does the right thing efficiently.
The current quest for the holy grail of “self-protecting code”, with all the unit testing craze, black-box-style frameworks, static analysis, a constellation of ever more restrictive languages, and other automated tools must make us ask: “Why do we so desperately need such tools?” “Why must a machine monitor every single line of code produced?” The answer is easy, and obvious. Nowadays the code is crap: piles over layers over abstractions over structures over architectures of pure crap.
I know finger-pointing at the languages/frameworks/insert_tech_yada_yada_here is the current hot topic of elitist self-indulgence (“my language is better than yours”), but most programmers aren’t able to point the finger at the real source of the problem: themselves. That’s too scary.
A good programmer can write good code in any language, but a bad programmer will write crappy code in all languages.
Don’t blame the tools all the time. Most of the time it’s the hand that holds the tool that’s the problem.
Sobakus,
If we take off our rose-colored glasses we see that our code has always been crap. Windows was crap, Linux was crap, Apple was crap; it was always crap.
Maybe we can allow some exceptions for very small projects under the tight control of perfectionists like Steve Wozniak, but sooner or later complexity will overwhelm even the best programmer’s ability to write perfect code.
This is why static analysis is so useful. Sure, it will catch more errors from bad programmers, but so what? If “good programmers” have so much pride that they refuse to share error-checking tools that help bad programmers write better code, then I for one think this elitist attitude is at least partly to blame for bad code. Why are we shaming tools and high-level abstractions that help us do a better job?
We’ve always written bad code, the thing I think we can agree on is that today’s bad code is so much more bloated and inefficient than in the past because tremendous advances in computer hardware lets us get away with it.
Most bugs would go away, not most problems. Efficiency is still a problem, and it’s quite often far worse in high-level languages.
Take Python for example. It’s not got most of the safety issues that C does, but it’s horrendously inefficient in many cases unless you’re using well written extension modules written in (guess what) C or Fortran.
Even C++ is a reasonably good example of this. Most of its reputation for being slow is because of horribly inefficient functions in the standard library. Avoiding std::string and just using C-style strings gets you a huge performance improvement, because std::string allocates and frees memory all over the place.
C++17’s string_view certainly helps a lot for strings. Iostreams are still a mess though…
C++11 changed the std::string requirements (breaking the old copy-on-write ABI), so implementations now use small string optimizations to avoid allocations. In some implementations you get 22 characters for free, without any allocation.
Other than that, std::string is a container, so you can always just reserve the size and reuse the string when possible.
And std::array is a much better replacement for raw arrays of anything, since there’s exactly zero overhead, and you avoid Linus’s hated problem of array arguments that are really just pointers, tricking people into thinking something different is happening.
I would argue the exact opposite, that software engineering is still extremely immature.
It is really only in the last 20 years or so that software has become ubiquitous and part of almost every kind of product and service. In the same period CPU power and possible application complexity have risen exponentially with Moore’s law. Software development tools and practices have struggled to keep up.
At the same time there is still very little standardization – or maybe really too many overlapping standards (insert obligatory xkcd quote here). That goes for everything from computer languages, operating systems, libraries and protocols to file formats, etc. Often advancements are driven by implementations from big software vendors, who have little incentive to offer interoperability.
Complex software systems are often big monolithic single-vendor beasts with massive budget overruns, painful big-bang commissioning and monopolistic service terms, instead of federated heterogeneous systems evolving naturally across standardized interfaces and a competitive supplier ecosystem.
Let us see how it evolves over the next 20 years, when the exponential growth fades (it will!). With stability comes standardization, competition on quality instead of features, and incentives to utilize the available resources better.
There’s still a marked improvement though – in the more distant past, it was relatively likely that your house would collapse on you…
Article author apparently has never bought a discount coffeemaker, refrigerator, or other appliance.
Most software that people interact with is not remotely comparable to an automobile, which has important safety concerns to prevent people from dying, and frequently costs upwards of $25,000 when new.
And an airplane? Holy shit! $350 million per plane for a modern 747.
You’ll find the software that deals with such safety concerns – fly-by-wire systems in cars and especially planes, software that monitors nuclear reactors, etc – is designed to the same standards that the cars and planes he uses as examples are designed to.
Your phone OS software, the software you demand for free and want several years of updates for, also for free? Yeah. It’s more comparable to a $15 coffeemaker or a $25 vacuum.
And, those are all terrible.
But, there’s a ton of software out there that meets his requirements. You don’t notice it because, well, it doesn’t break. There are lots of well engineered software systems capable of year-long uptimes.
When you buy an $800 phone with $350 of hardware in it, your software is not free. How can people still make the argument that a high-end computer is comparable to a simple device whose design hasn’t changed in decades? Are you just trolling?
What… My coffee maker and vacuum work just fine, thank you. No bluescreens or failures to work. I get coffee and clean floors cheap.
Well, I mean for those price points the level of cleanliness is debatable. It’s not a Dyson, but it removes crumbs from couches. Hardly the software’s fault. And the coffee doesn’t meet the gold cup standard, which could be done with software on more complicated devices, but this one doesn’t have sensors for any of that at $15. I’m not even sure there is any microchip in there…
Besides, cars *do* have software inside, and that software has to abide by international quality standards.
This is partly true.
There’s a huge diversity of software out there. My issue is actually that I’ve worked in mission-critical areas. The software engineering part there is actually really poor.
Part of the problem is the low barrier to entry. To be a doctor, you need a lot of schooling, training, residency… To work on software, it is all informal.
And even if you have the right people, the processes are generally not there to enable high quality software.
I can’t show you any of the code I have worked on, but the Toyota Prius unintended acceleration issue resonated with me:
http://www.safetyresearch.net/blog/articles/toyota-unintended-accel…
Embedded software is just as bad in general. Don’t get me wrong, I’m sure there are some organizations that do it well. Just as I’m sure there is some great web development code.
But this idea that code that runs in cars or airplanes or mining equipment is ‘good’ just doesn’t resonate. It might work in that you don’t see it, but that doesn’t mean it is good. If you could visualize the software running on your car, it would resemble being duct taped together with splinters everywhere. It might ‘work’, but you’d never say it is good.
As to the hit on making software efficient? I’m guilty of this. Just last year my manager asked me to write a program to parse some XML data and custom-process it. Anyways, the database being small (< 2 MB), I just did it with a DOM parser. He was like, why didn’t you use a SAX parser, it’s more efficient. And it is, but I took the easy way out based on my limited requirements.
If I were to put my finger on a personal gripe, it’s just the lack of standardization. How many times must we solve the same damn problem? Each time, we need to rewrite applications. I don’t care what language you use, Win32, VC++, C#, Java, HTML, Swift… I just keep seeing the same pattern over and over. Textbox. Radio button. Layout…
There’s little actually new. Just a whole lot of rewriting.
Though OTOH, most really well-documented “unintended acceleration issues” I’ve heard of (as in, there was for example CCTV footage of the rear of the car, showing whether the brake lights were on; those are hardwired to the brake pedal, no software involved) seemed to turn out to be drivers mistaking the throttle for the brake pedal… (So as it is, the issue seems largely nonexistent in most of the world, where manual transmissions dominate, because drivers there learn to press the clutch together with the brake so as not to stall the engine – so even if they mistake the throttle for the brake, or even if there’s really something wrong with the engine control unit, what happens is simply the engine revving to high RPM while disconnected from the wheels…)
Hm, and the linked article neglects to mention that skid marks from parking brake / rear tires still don’t mean the throttle wasn’t also fully pressed… Also, it paints a picture as if there should be many more issues with Toyota cars than there actually(?) are.
Most software is essentially just reinventing the wheel so it fits another car.
The problem is that phones and other devices, have the economies of scale to actually provide good software… but fail to do so.
First, an internal combustion gasoline engine operates at about 20% thermal efficiency, not 98%. Diesels can do about double that. Claiming that cars are 98% efficient, even “for the sake of argument” is demonstrably wrong. It also underscores the fact that the author has no research or basic data (or even a search result) to back up his claims. But wait, he’s sprinkled about a half dozen XKCD comics throughout his rant. That makes it all legit, right?
The author then goes on to hold software up against automobiles, airplanes, and civil engineering / architecture works.
The difference? The people that design automobiles, aircraft, and buildings/public works are actual professional engineers. A programmer is not an engineer in any legal or commonly accepted way. In the US professional engineers have to have a four year degree, work under a PE for four years, pass two competency exams, and then earn a license from the state in which they want to work. They also have to show certain numbers of professional enrichment hours/credits every year.
Programmers need no formal education, have no required competency exams, and are not licensed by the state. Mistaking the two groups, and the works they create, just because programmers have decided to co-opt the term “engineer” is a logical fallacy. The majority of programmers don’t have the education, work experience, or credentials to create the quality of software the author desires. If he wants that quality of software he will need to pay quite a lot for it. Just like automotive design and R&D, aircraft design and R&D, and quality architecture all cost enormous sums of money.
Comparing the Apollo Guidance Computer from a moon mission to a current PC and coming away with “THINGS ARE ALL SO SLOOOOOOOOOW” is a completely tired and illogical comparison. The AGC was hand built for a very specific set of tasks. It was not a general purpose computing device. If you take a Raspberry Pi and program it in machine language to do 8-10 very specific things it will be *blazingly* fast. We all use general purpose devices. There will always be overhead in having a device that can do practically infinite tasks for us. The convenience of a smartphone comes with many caveats. That’s not necessarily programmer laziness or customer apathy. It’s the result of design challenges and decisions.
I love the author’s FiraCode font. It’s wonderful. I use it every day as I code game development tools in highly inefficient Python. Or is Python efficient because I can rapidly design, develop, deploy, iterate, and support tools? The artists I support don’t find the tools at all inefficient. The number of hours they save in a day, week, year by having a well operating content creation pipeline overshadows by many many orders of magnitude any CPU inefficiencies inherent in the Python runtime.
But the XKCD comics make the rant legit, right?
I could argue that the “car” doesn’t necessarily mean the engine, but the car as a whole, including aerodynamics, grip, etc. Regardless, 98% is ridiculous.
Internal combustion engines, however, have improved in the past decade or two… Most “high efficiency” engines are hitting mid to upper 30’s for thermal efficiency.
In the “You gotta be kidding…” department, the Mercedes AMG F1 engine (1.6L turbo with two hybrid systems) broke 50% on a dyno run. Don’t know if they’ve managed that on track.
Interesting. The most recent article I found was from 2014 where Toyota was claiming to have a 38% efficient gasoline internal combustion engine as an engineering prototype.
The article says “98% of what’s physically possible ”
the 98% statement is too high, but “physically possible”, at least to me, implies taking into account the Carnot cycle, which does physically impose a max efficiency…
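As a rough back-of-the-envelope illustration (ballpark numbers, not from the article): the Carnot limit is efficiency = 1 − T_cold/T_hot, so with a peak combustion temperature around 2300 K and an ambient around 300 K the absolute physical ceiling is about 1 − 300/2300 ≈ 87%, and real engines sit far below even that.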
Hm, but the author doesn’t say 98%, period, but “98% of what’s physically possible with the current engine design” (emphasis mine) – so presumably taking into account, say, thermodynamics…
Wow, this is like looking in the mirror. His post was all of my software development rants rolled up into one large rant compilation, haha
The problem for him and me is that we’ve been reluctantly thrust into this race to the bottom where cutting costs is deemed more important to companies/employers than quality. Most CEOs and investors are only interested in using profits to enrich themselves rather than fixing it. It’s an epidemic and I don’t know if there will be a cure.
Yeah, well, the next generation of devs are lazy as hell. Which is normally a good thing, but they’d rather grab a bloated library that does a trivial function than spend five minutes writing the trivial function. And pat themselves on the back for being so clever for doing an npm install…
While I do agree with many of his points, and have found myself pondering my own inefficient software, the reality of my life is that I don’t always have the time to spend to get it to the point where it is efficient. I have only been writing software for about 6 years, and in that time lots of it has just been thrown away when the next thing came along. The systems that “matter” and are still around and in use get updated every so often, and with each update we are taking out some of those inefficiencies.
gods_design,
Exactly, we should be thinking about the energy footprint of inefficient software! But unfortunately I think the problem may well be ignored long term, we’ll just suffer the opportunity costs is all.
Performance is not a quality metric.
Except when it is. Performance in terms of speed, memory usage, and latency are important in many types of software. It’s not a sole measure of quality, but it is one, and its value varies from user to user.
Yes, you are right. It can be one of many quality metrics.
On the other hand, speed often comes with shortcuts and hacks, which often make the code more buggy and harder to read.
Only because people don’t measure. Most cases of slowness come from doing stupid things, like doing things more often than necessary, or using disk for storage when in-memory storage would do.
That really depends on how you look at it. There’s a lot of software where performance absolutely is a quality metric.
Compression and encryption software are a reasonably trivial example of this. So what if you get an insane compression ratio or have unbreakable encryption? It doesn’t matter one bit unless you can process data fast enough to be useful.
However, it’s often got less trivially obvious impacts too. A few hundred milliseconds of processing time may not sound like much, until you need to process a few billion data points, at which point that few hundred milliseconds translates to more than 3 years of processor time you need.
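As a quick sanity check on that arithmetic: 100 ms per data point times 1 billion data points is 100,000,000 seconds, which is a little over 3 years of processor time.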
In real-world applications even in other engineering situations it’s often used as a quality metric too. If you can do something faster with less resources and the same reliability and safety as your competitor, you have a better quality product, period.
Yeah, saying that it’s not is a bit extreme but on the other hand it’s often not an important metric.
There are other metrics such as how safe the code is, how maintainable it is etc that are often more important than performance.
Back to the article, it’s kinda amazing how someone with 15 years of experience compares the slowness of a text editor with the speed of rendering on a GPU. Hint: one of these takes a lot more of that precious energy than the other.
The whole premise is wrong. Software was worse before. A lot worse. It did a lot less, it was less usable by people who aren’t into computers and it was most certainly a lot less safe and secure.
Is software crap? Oh yeah, you bet it is but it always was and it’s less crap now than it used to be.
Maybe when this guy has been programming for 30 years he’ll see things differently.
Compare Office 2016, which has a huge performance penalty without offering much more than Office 2003, which ran on rather standard hardware. Explain how a quad-core computer struggles to deal with the later Office releases while you could do things pretty well on the former using a computer under the GHz limit.
On what facts do you base your declaration that nothing changed from office 2003 to 2016?
I find a few new things.
https://www.cio.com/article/3003497/office-software/10-microsoft-off…
For example a great one is collaboration.
I guess your use case might still be the same, it doesn’t mean the software is still the same.
Another example is the OS. Our machines today are much more complex than they used to be.
Show me an OS, for example, that crashes as much as they used to.
For development tools, today it is super easy to do remote debugging; how was it in the past?
Server farms are now using more power than small regional cities. The push for software efficiency must be given priority. In any case, sooner or later the giants of the industry are going to realise their energy bills can plummet with some decent software design and practices. Anyone who thinks they can ignore this issue long term is coding themselves into an early grave!
I wonder what will happen if governments start to tax bytes!
Performance, bugs, cost – you can only pick two. Even a chinacrapphone company can produce (or steal IP from another company) quality software; however, your mobile phone is going to cost $10,000 instead of $100.
Except that they’re not the binary aspects everyone makes them out to be; they’re all continuous variables. Any given piece of software can have all three, it just won’t be amazing at all three (unless it’s one of certain very well-written pieces of open-source software, which have zero cost, very few bugs, and generally good performance; see NumPy for an example).
They are from a business perspective. Any “agile” shop is tuned for cheap, fast software not quality. You can’t do quality in one sprint. By the time you’ve refactored it 8 times due to spec changes from product, it’s not fast or even designed well anymore.
The first problem is that there aren’t requirements. You can’t build anything well without a plan.
Everything else comes down to architecting software rather than slapping it together.
I don’t think the arguments against using libraries are correct. Would I want to write a database from scratch for a project? No. Sqlite or a postgres library are perfectly fine to use. Even complaints about Java or C# are wrong because some apps can run at near C/C++ speeds using these frameworks if written properly with decent algorithms and thought about how garbage collection may impact things.
The reality is that electrical engineers reuse standard components all the time. This isn’t something that’s unique to software. Hardware folks are also to blame for Meltdown. Intel cheated for a speed bump and it’s a very costly mistake. This isn’t a software problem. There are good and bad engineers everywhere.
Speed and reliability are also two very different things. You can build a reliable car like, say, a Honda, but it’s not going to win races.
Part of this is the monstrous power overhead of x86 servers. Netflix, iirc, is doing a massive rollout of ARM servers because they’re saving buckets of money on electricity.
Today, I program for myself.
Over the years I have learnt more and more ways to make my programs faster.
One program I have is about 64 times faster than the original coding from two years ago. On the same hardware and without any special tricks that only work with particular hardware or compiler, just all around better coding.
But I was not taught the faster methods in school (college), I was not taught them by people I know who are way more advanced than me, and worse, some of the methods were not even findable on the internet.
They come from a lot of hard work and testing on my part, and I also know if I was working for an employer they would have had me stop at the first working version, because while it took days to run, they would still claim spending more time on the code to improve it would be a waste.
Problems:
(1) Too many systems today have development stopped once something good enough to get the present job done is developed, then years later machines are overloaded as the organization grows. This really shows up when companies merge with other companies and a system suddenly needs to handle 2 to 10 times more data overnight.
(2) The same reason development gets cut short usually is also applied to the documentation. Guess what that means to the programmers trying to fix the overloaded system years later.
(3) Just the raw knowledge that so many computers just don’t have to be so slow grates on my nerves.
It all started with ‘web apps’.
In the meantime a generation of developers has been reared who know nothing else and find it perfectly normal to stack a pile of ‘web technologies’ on top of each other to solve any software problem.
What is the latest JavaScript framework?
Programming is not hard. Every engineer, teacher, lawyer, even a judge, can program. However, in order to become a job, programming is made artificially overly complex.
stargazer,
What do you mean?
The most fun kinds of programming are hard, like when I worked on crypto
It’s this ‘programming is not hard’ mentality that’s the issue though. The problematic software is usually the stuff written by people who don’t fully understand what they’re doing.
Put differently, I challenge you to find someone who is not a programmer or mathematician who can understand what the following C function does after having just learned basic C programming without covering binary math (one of those difficult things that you seem to think isn’t necessary):
void f(int a, int b) {
    a ^= b;
    b ^= a;
    a ^= b;
}
Even many people with a bachelor’s degree in computer science will not understand this immediately, and quite a few won’t understand how it works even after you explain what it does (it swaps the contents of ‘a’ and ‘b’ in place without using temporary storage). This isn’t something that’s artificially complex, it’s a real technique that’s used in real world embedded programming to reduce the memory usage of the code.
Similarly, many people who are relatively new to C++ are absolutely dumbfounded by the fact that you can quite often significantly improve the speed of your code by not using std::string.
Yes, the basic concepts of programming are not hard. Actually writing good code absolutely is hard though, and so is writing correct code.
Not only that, but some idiots use JavaScript linters that warn against any use of bit-twiddling because it scares them.
http://docs.codeanywhere.com/preferences/analysis.html
I can kind of get that though for some people.
Doing bitwise operations when you don’t understand them is generally a bad idea, and that goes double for dynamically typed languages like Javascript because you might end up accidentally mixing types, which can lead to weird results.
I mean, I actively avoid doing bitwise stuff in my own code unless I need it for efficiency or protocol reasons, but part of that is also that I mostly use Python, and therefore have to mask everything when doing it because integers are variable width.
Types… In Javascript?
Bitwise operations are done on numbers. I don’t know what type you want to avoid mixing with numbers, given that the language has, like, string, number, bool, ???
Talking about quality, this function may swap params pushed on the stack but has no effect that the caller will see…
True, but the point was simply to illustrate the triple XOR-equals construct, and for it to even be remotely understood, you need to know what types the variables are (because it only works for matched types), and other than the inability to format code in comments here, the function looks a bit nicer.
ahferroin7,
I’m so used to seeing the xor version that I didn’t think the arithmetic example through, the sign matters haha…
I think in reality this statement is completely backwards. Programming has been made artificially easy so that it could become a job, to the point where a lot of the abstractions created in the name of “easier” are so pervasively used that everyone forgets what was underneath them and misuses them. That is where a lot of the performance gets sucked up.
It’s the guys who understand what is underneath all that abstraction that generally “get” how to write efficient software – and they are almost never engineers, teachers, or lawyers (well, maybe engineers, but you get my point I think).
Anyone can be an engineer, teacher, lawyer or judge. However, in order to be a job these professions are made artificially overly complex.
A few years ago there was a discussion on Reddit with ex-Facebook engineers over the question of why their Android and iOS apps grew considerably with each release.
There was a point at which they had to redesign their own tools because of the app size limits imposed at the time by Google and Apple.
Nevertheless, one of the main conclusions they reached is that as the team grew, the number of duplicate libraries and projects that got compiled from different teams into the production build was so high that it wasn’t even funny.
Regardless of how many times they tried to raise this point to management, they simply had other priorities on their list (like fast delivery, decreasing development time…).
Things like this seem to be put aside simply because “eventually” hardware will be as fast as they need it to be or “someone” from the community will find a way to make things “run more smoothly”.
I’d say it’s mostly a problem of communication between the teams and the priorities the management puts on them at the end of the day.
BTW, if gasoline suddenly dropped to a cent a litre and CO2 emission requirements were dropped, I can guarantee you we’ll sooner or later be driving 2 MPG cars. I mean, that’s almost what happened in the US while the US had its Iranian oilfields secured and pumping out cheap oil.
Another factor is reliability. In the company I work for we had several applications (written in an obscure language without a library management system) sharing the same library files, which meant upgrading a library file upgraded it for every single app. Recently we made each app have its own copy of its libraries. Size ballooned to 10x the original size per app, but every build is reproducible now and the client happily paid the storage cost. We could implement a component store like WinSxS, but the 1 gigabyte in extra cost was minimal for the customer and comes with the EC2 instance anyway, while paying people to reinvent that wheel for an obscure language wasn’t minimal.
This is what the author omitted: everything is more reproducible now and more contained. Windows 95 apps were so small because they dumped shared DLLs in WinDir, hoping they didn’t break another app using the same DLL. I’ve been through Windows 95 all the way to 98SE, and I can attest that everything is more reliable right now.
So, some of today’s bloat is laziness, some of it is not taking ugly shortcuts and not sharing everything to the extreme. Some of it is visual pizzazz.
For whatever it’s worth though, I agree people shouldn’t be allowed to develop commercial software without adequate formal training, and a “software warranty” legal framework should exist.
In my own experience, it’s quite often the laziness part that’s the worst offender for performance issues.
It’s little stuff like Python programmers not using __slots__ when they could, or people not scoping their variables sanely, or choosing Java for portability when they will only ever need to run on one platform. Things like that get overlooked, and then nobody cares to fix them because the software works.
BTW the author is an Apple guy. Software quality has taken a steep dive at Apple, so if he had been living in a world of good software before, I understand the sentiment. On the other hand, I can see a steady improvement in the quality of software on Android…
Speak for yourself…
Change your job… It’s not because you have worked for crappy companies that it’s the case for everybody…
Until programmers start growing up and stop arguing about
– emacs vs vi
– tabs vs spaces
– brace on the same line vs brace on new line
– camelCase vs snake_case vs PascalCase
and stop creating new languages just so they can have
– a gratuitous syntax for print printf println writeln
– a gratuitous syntax for declaring variables like let var decl
– a one size fits all visual-feel* that fits only the limited experience of one programmer
just because learning their first was hard, we can never get anywhere.
It’s amazing how we’re still reinventing language features and development methodologies of the 60s (and no, waterfall development was never anything but a strawman for agilists and rebel scrum).
https://www.youtube.com/watch?v=otAcmD6XEEE
https://www.youtube.com/watch?v=AbgsfeGvg3E
* Can hardly call it syntax, semantics, style or paradigm, since a lot of programming language developments are about solving pet peeves rather than real problems.
It’s key to distinguish between performance and reliability.
SOMETIMES performance matters. For example, users can easily distinguish between the time a typical Windows update consumes versus a Linux update. But often performance doesn’t matter at all: what user can distinguish between two functions when one of them consumes a few hundred milliseconds more?
Reliability is what requires more consideration in many IT shops. It nearly always matters. So many of us are required to pump out code to artificial deadlines with little regard to the reliability and durability of what we produce. And when it doesn’t work, you can be sure the user will notice it!
benali72,
While that’s a fair comment, that’s not usually what software developers like me are talking about. Most of us are referring to webpages that take many megabytes/seconds to load and applications that take too long to load and run like molasses even on modern computers. Basically anything that uses tons of ram & disk space for no discernible reason.
For example, I use GIMP and Inkscape quite a bit, but the slow performance can be excruciating at times; even just opening an image takes too long. The GIMP effects aren’t optimized for SMP or GPUs; they just peg a single core at 100%. And it doesn’t help that many effects are written in Python. I live with it, but the truth is that it’s painfully slow with large images. Realistically our hardware has evolved so far that it is capable of delivering most effects in real time. It is a shame that our software keeps holding back performance.
I have half a mind to write my own graphics editor, one that incorporates my ideas about software efficiency and performance. but then my day job brings me back to reality, haha.
Edit: Perhaps I should go try alternatives again. I completely lost interest in Photoshop after they moved to a subscription fee. If I don’t own a copy, then I don’t want it, end of story. Back in the day I liked Paint Shop Pro, but I stopped doing graphics for several years. Looks like they’re still around; anyone have an opinion?
https://paintshoppro.com/
Discovered PSP with version 9 or so; very usable and very, very fast. It does dual vector/raster editing, multiple layers, great stuff once you know how things work. The big plus of PSP over its competitors is its keyboard shortcuts, which are very Windows-like, and zooming with Ctrl+mouse wheel centered on where you point the mouse.
Had a cracked X2 copy for years, then finally paid for a premium X8 package because, f–k, PSP is worth every penny. But since Corel bought Jasc, they have changed almost the whole UI to be "tablet" compliant, so I still use X2 instead of my X8 copy.
Try Krita, which is a free alternative to PaintTool SAI.
One example outside software would be to look at how often car parts get recalled. These days, it seems like no matter what car you buy, some part will need to be recalled in the next few years. However, it’s never the fault of the company that made the car, but some vendor that supplied a nozzle, tire or something. I think there is a deeper problem that stems from assembly lines.
Sure, everyone tries to test as much as possible, but at some point, every person in the chain says “I’m sure the vendor tested to make sure their products are good.” In most industries, this would be a piece to assemble your product, or a tool or die that creates a part. In software, this translates to some framework, and it doesn’t help that schools are teaching programmers to use frameworks and avoid “reinventing the wheel.” Plus, the issue is compounded by a flood of people who don’t care about quality, but only went into software development to get a bigger paycheck.
I remember when I first started taking programming classes, the teacher would teach us how to do some task (e.g. build a data structure) and then introduce us to a framework that did the same thing. The result was that, when the framework did something I didn't like, I could replace parts of it. Fast forward to college, and the teachers just wanted to tell you how to use a framework, but never wanted to do deep dives into how the frameworks actually worked; if you wanted to know, you had to be motivated to find out outside of school. These days, most software developers have a dogmatic faith in frameworks to solve all their problems; it's almost like, if the framework is using a screwdriver to pound in a nail, then nobody will even look for a hammer.
God, I think it’s impossible to think about this topic without sounding like an old man yelling at clouds… great choice of words Thom.
I have a Toyota. The car maker told me it had a 5-year warranty: I bought it, and that is exactly what I got.
During the warranty period a breakdown in the car was swiftly repaired for free. When the warranty was over, the car maker contacted me for a part recall: again, the faulty part was swiftly replaced for free. Maybe that part was from a supplier, maybe not: but it was not my problem. Toyota was the sole party responsible, and proceeded as it should.
I earn my living as a software developer. I wish this industry, myself included, could write software offering just one tenth of the warranty a car maker offers. Like the author of the article, my feeling is that almost everybody in this industry, me included, is creating crap. "Rush! Rush!" Do not talk about issues like quality or ease of maintenance, or else you will get into trouble with your employer.
I cannot foresee that such a level of warranty will ever be reached, even though software, unlike a car, does not have thousands of moving parts.
dariapra,
I wish I had the same experience. Unfortunately, I think many companies are becoming less committed to honoring their warranties without a fight. I bought a used Honda from a dealership with an extended bumper-to-bumper warranty. A mechanical defect arose whereby the steering column lock would not disengage when turning the key. Not only wouldn't they fix it under warranty, they charged a $130 diagnostic fee. I had to write an angry letter to the commerce department for them to honor the warranty, and they still charged the diagnostic fee. Prior to this experience I honestly believed I would go in, they'd fix it, and I could leave without paying, since that's exactly how they described the extended warranty when they sold it to me: no stressing over the repair bill. I was naive. YMMV.
Software is designed to be released as often as possible with new features. The rate of new releases in software is amazing.
Optimizing takes a really long time, time that could otherwise be spent creating cool new tools for us to use, or iterating to improve something we already have.
Airplanes take almost 20 years to design; with cars, too, the design rarely changes, and then only in small iterative steps.
Also, there are different kinds of software.
Embedded software for life-critical systems is completely different from a consumer application. So are the release schedule and the attention to detail.
ycarel,
And the pages would call this function any time they needed to read a setting. Hopefully people here can see what's wrong with this. Even though the server was fast and the files were cached, this inefficient code added considerable overhead to every request. This is not something anybody with optimization skills would have done, but too many people get by in software without ever gaining those skills, because at every single level there's something reinforcing the idea that software efficiency doesn't matter.
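For readers who missed the earlier description, a hypothetical reconstruction of that kind of anti-pattern (not the original code; the file name and format are made up) looks something like this, along with the obvious fix:

    import json

    # Anti-pattern: every call re-opens and re-parses the settings file, so a
    # page that reads 50 settings parses the file 50 times per request.
    def get_setting_slow(name, path="settings.json"):
        with open(path) as f:
            return json.load(f).get(name)

    # The fix: parse once, keep the result in memory.
    _settings_cache = None

    def get_setting(name, path="settings.json"):
        global _settings_cache
        if _settings_cache is None:
            with open(path) as f:
                _settings_cache = json.load(f)
        return _settings_cache.get(name)

The difference is invisible on a quiet dev box and very visible under load, which is part of why this kind of code survives.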
Oh. My. God.
Why not go back to the Manchester Mark 1 to see how it was originally done?
How many Windows programmers understand the universal Turing machine? Or the “Computable Numbers” paper? Lambda calculus?
I’m tired of hearing people say “the way we had it N years ago was just fine. What is it with kids these days?” If everybody thought the old ways were fine, we’d still be living in caves, and the Internet would consist of smoke signals and pounding clubs on hollow logs.
The key thing about performance problems is that it's often not one brain-dead choice, but 'death by a thousand cuts'. Each individual choice doesn't add much incrementally, but they all add up.
This makes speeding things up again incredibly hard, as each change only makes things slightly faster and you need thousands of them to make a significant difference.
Also, there is a downward spiral. If a high-level bit of functionality – let's say boot time – takes 1 second, then if you make a change that adds another 200 ms it's noticeable, so you are at the very least likely to notice and attempt to minimize the impact.
If, on the other hand, it already takes 30 seconds, then who cares if you add another 200 ms?
This is how things balloon over time.
This has to be balanced with Donald Knuth’s advice – https://en.wikiquote.org/wiki/Donald_Knuth on premature optimisation.
Often micro-optimisations and ‘clever’ tricks for local performance reasons result in something that’s harder to fix at a higher algorithmic level.
I.e., if somebody actually understood, in detail and at all levels, why something like Windows can take an age to update, maybe it would be an easy fix.
There’s a lot of talk here about,
‘If only all software was written by people who understand how the code interacts with the hardware’
Well, I give you GCC and its toolchain: a compiler, presumably worked on by people who understand machine code, whose performance isn't great and used to be truly rubbish.
http://voices.canonical.com/jussi.pakkanen/2012/10/01/building-cc-w…
It's the inherent complexity of the system that makes it hard to reason about and fix.
So yes, you need to pay attention to performance everywhere, but if you do that at the cost of added system complexity, in the end you may end up with something that's very hard to optimize at the macro level.
That’s why ‘software engineering’ is hard – you have to be able to see the trees and leaves *and* the wood.
On what software you choose to use and support. If you aren’t happy with the way a program is designed, find a better one or program your own.
Qt uses some freak-o version of C++ and almost never uses the standard library. GTK doesn't support auto-connecting signal handlers to class methods (only to public functions! HAHA, best practice my ass), so you have to write millions of lines of useless boilerplate to bind public functions to objects, just to write an app in C++ that has some modern features and doesn't make you pull teeth over text and memory handling like C does. Linux software development is crap. Windows and OSX are no better. Some of these projects need to take a long hard look at themselves and start cleaning up these stupid problems. What the hell is the point of offering a WYSIWYG editor, e.g. Glade, if it doesn't help in any way to lower development time because you have to write double the code to use the damn thing? Oh, and before someone replies: try googling for autoconnect signal handlers GTK/GTKMM/C++. The issue has been raised a million times with no resolution. I even raised it on GitLab for Glade, with no response so far.
The modern text editor does indeed do much more than the older ones. I'm using this example since it was mentioned in the article.
The older editors and IDEs used basic bitmap fonts to display small text files with rudimentary syntax highlighting and very limited autocompletion support.
Modern fonts, for example, are much more complex. In fact, fonts are now Turing complete. While that sounds frivolous, it gives us excellent results, including ligatures for coding. For example, I use the great Monoid font, which looks beautiful on hi-res displays: https://larsenwork.com/monoid/
Also, project sizes have grown to much larger proportions. Code bases can easily surpass 100,000 individual source files, yet we still get very fast code completion. Thanks to Clang and similar efforts, the IDE keeps a compiled model constantly synced with the text. This allows instant feedback on code issues and gives much greater tool support.
Other "fancy" stuff, like constant Git status updates, continuous testing, live previews of HTML/Markdown/etc. markup, code analysis, and automated fixes for common mistakes, takes up resources.
Of course some things are frustrating. For example IntelliJ will occasionally pause for a few seconds. They could handle this much better. So there is room for improvement.
However, modern requirements are much larger than in the Windows 95 era. It's not even a contest anymore.
The mistake is to blame developers for this. In reality, the blame is to be laid at the feet of 2 parties – Business Folks and Development Managers. Business users want this, that and the other thing by Thursday. The Software Manager is paid to make it happen quickly. It’s just a given that it will be sloppy and unstable. Lather.Rinse.Repeat.
Another factor, again related to the two groups above, is the "new thang". The new OOP/SOA/MICROSERVICE/SERVERLESS/VIRTUALIZATION/CLOUD will make everything better, so we rewrite the bulky monster. Of course, business and SD managers never realize or admit that the real problem is the actual development process itself, so another big, buggy monster is created in the "new thang". Lather.Rinse.Repeat.
That’s a fancy way to state the adage:
On time.
Under budget.
Correct.
Pick any two.
The author of this article has very cleverly managed to synthesise pure OSNews commentators’ catnip.
And such a forum is bliss because we can share our thoughts.
😀 (note to self: must check if my cat reacts to catnip)
The only solution to this problem is technology hitting the wall.
There are efficient software products, but mostly when they target embedded hardware or gaming consoles.
The current approach to performance issues is to throw more RAM/cores at the problem; the question is whether all problems in computing today will scale like that. If they will, nothing is going to change.
The education industry/sector is also very cavalier about its inability to adapt to modern times. It has resisted optimization for more than half a century.