The Unix philosophy of using compact expert tools that do one thing well and pipelining them together to manipulate data is a great idea and has worked well for the past few decades. This philosophy was outlined in the 1978 Foreword to the Bell System Technical Journal describing the UNIX Time-Sharing System:
[…] Items i and ii are oft repeated, and for good reason. But it is time to take this philosophy to the 21st century by further defining a standard output format for non-interactive use.
↫ Kelly Brazil
This seems like a topic people will have calm opinions about.
This isn’t a new idea/discussion, IIRC we’ve had it on osnews before. Even though plain text can be (and often is) parsed, almost every program ends up having to reinvent similar concepts and tooling over and over again. It’s pretty obvious that programs would benefit from inputting and outputting data structures and not just plain text. It’s too bad this wasn’t part of unix originally. So the challenge is how to get there from here.
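To make the "reinventing similar concepts and tooling" point concrete, here is a sketch of the ad-hoc column splitting every script ends up writing against plain-text tool output (the df-style sample text is illustrative, not real output):

```python
import json

# Typical ad-hoc parsing of plain-text tool output (sample df-style text).
raw = """Filesystem 1K-blocks Used Available Use% Mounted
/dev/sda1 487652 123456 364196 26% /
tmpfs 102400 0 102400 0% /tmp"""

rows = []
for line in raw.splitlines()[1:]:   # skip the header by position
    parts = line.split()            # breaks if any field contains spaces
    rows.append({"fs": parts[0], "used": int(parts[2]), "mount": parts[5]})

# With structured output, the same data would simply arrive ready to use:
structured = json.dumps(rows)
print(structured)
```

Every consumer of this output has to re-derive the column positions, and the parser silently breaks the moment a field gains a space or the tool reorders its columns.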
I agree with the author’s view that something probably would have been done to support it if they had known…
There’s plenty of room for innovation. Programs should have appropriate hints to know when they’re talking to a shell versus another utility so that it all happens transparently without the need for users to tell the programs to change formats. But the problem is that it’s kind of too late. Realistically most software will not be rewritten for decades, if ever. So while I agree with the author that a new ubiquitous standard would help make software better, getting there is the challenge. Until we do, solutions like the author’s “jc” are hacks that add complexity and failure modes in the interim. Maybe it’s a way forward anyway.
I believe the concept has merit, but I have my doubts that software will change for structured IO to become standard. Honestly, a simple binary data structure would be far more efficient than JSON. If 100% of tooling supported it, users would never have to see it. But that’s even less likely to get acceptance.
The concept has more than merit; it is one of the defining features of PowerShell. It would’ve been nice if PS used a standard object format rather than its own PSObject format in order to foster broader compatibility outside of the Windows ecosystem, but of course the idea that it’d ever be ported to Linux would’ve seemed like a fever dream earlier in its life.
PowerShell can be installed on certain Linux distros. Alpine, Debian, RHEL, Ubuntu are the official MS supported distros.
learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-linux?view=powershell-7.4
Of course, a higher level programming language, something above Python, Ruby, or Perl, would serve the same purpose and cover the same bases.
Drumhellar,
+1 on powershell.
At inception, I really liked their object concept. But it was heavily tied to .net (and later native) windows metadata.
Somehow tying it to JSON with a type system might have helped.
Or even using a dynamic version of google’s protobuf.
Nevertheless, powershell stayed only as a very useful windows administration tool, but would not go onto replace bash/zsh on unix systems.
I’m not that familiar with PowerShell, but if I were to design a way forward it would likely be based on introducing a third output stream, next to stdout and stderr (and maybe a new input stream), for structured data. It would offer a backwards compatible way of adding this new way of interacting with applications.
As for the format, something already well understood should serve well (like JSON or YAML), but the format is not the difficult thing; coming up with useful schemas is. Something like a file handle would probably be quite easy.
Of course, translating data from one schema to another would help in situations where there is no mutually understood schema between applications (currently applications like grep and awk seem to be quite capable, but with structured data it could be easier).
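The extra-stream idea can actually be prototyped today: a process can emit its plain text on stdout and mirror structured JSON to an additional file descriptor when the caller has opened one. A minimal sketch, assuming fd 3 as the convention for the structured stream (the number and the `emit` helper are my invention):

```python
import json
import os

def emit(records, structured_fd=3):
    """Write plain text to stdout; mirror structured JSON to fd 3 if open."""
    for r in records:
        print(f"{r['name']}\t{r['size']}")
    try:
        with os.fdopen(os.dup(structured_fd), "w") as f:
            json.dump(records, f)
    except OSError:
        pass  # caller didn't open the structured stream; plain text only

# Demonstrate with an in-process pipe standing in for the shell's plumbing.
read_fd, write_fd = os.pipe()
emit([{"name": "a.txt", "size": 12}], structured_fd=write_fd)
os.close(write_fd)
with os.fdopen(read_fd) as f:
    print(json.load(f))
```

Because unaware callers simply never open the extra descriptor, old consumers keep getting plain text unchanged, which is what makes this route backwards compatible.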
At that point, I would say protobuf is probably the best choice.
It is a generic “struct” definition that transcends programming languages, and network boundaries. Basically a generic value type language, which has compilers for everything under the sun (C, C++, Swift, Java, Python, Go, …)
https://protobuf.dev/
It is already used by Google, Microsoft, Apple, Amazon, and many others. So it is not like this would be an obscure, unknown technology, or controlled by a single entity.
And yes, in addition to native storage protocols, it can also be mapped to others, like JSON:
https://protobuf.dev/programming-guides/proto3/#json
or YAML (via 3rd party):
https://github.com/krzko/proto2yaml
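As a concrete sketch, a shared schema for a directory listing might look like this in proto3 (the message and field names here are purely illustrative, not an existing standard):

```protobuf
syntax = "proto3";

// Hypothetical schema for structured "ls"-style output.
message FileEntry {
  string name = 1;
  uint64 size_bytes = 2;
  string owner = 3;
  int64 mtime_unix = 4;
}

message Listing {
  repeated FileEntry entries = 1;
}
```

Under the canonical proto3 JSON mapping, a `Listing` renders as `{"entries": [{"name": "...", "sizeBytes": "..."}]}` (field names become lowerCamelCase, 64-bit integers become strings), so the binary and JSON views describe the same data.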
Possibility 1, creating a new “Shell” language, which is a higher, higher level programming language. Kind of the way Guix uses Guile for scripting everything, except adding lots of user friendly sugar on top.
This solves the problem of shell being a dual purpose thing quite nicely, now that I think about it. Guile for the machines, and “Shell” for the people.
Possibility 2, create a shell service which abstracts the interactions and mimics an interactive shell, but it’s really sending RPC calls back and forth which simulate an interactive session. “ls -l /some/file | grep now | tail” wouldn’t send that command, but instructions to search “/some/file”.
This is what I’ve come up with when thinking about how SSH could be better, and how server management tools could be better.
These two ideas aren’t necessarily mutually exclusive. :\
Flatland_Spider,
I’m not familiar with either of these tools, but being based on Scheme is pretty neat, because Scheme itself is pretty innovative next to more conventional programming languages.
https://guix.gnu.org/en/blog/2020/guile-3-and-guix/
I think more software should be designed this way. Too often a software implementation and its UI code become tightly coupled, and it doesn’t seem ideal.
I think both of those ideas could yield fruit.
Alfman,
One problem with this scheme is that most programmers do not like async systems processing input that arrives at random times. When you say “RPC”, you automatically open that can of worms.
Of course we can go with “one process -> one rpc call” scheme. But then it becomes no more expressive than using the command line to pass call parameters (as we do today), and use textual input/output exactly like the stdin/stdout pipes. Maybe one more for error stream (stderr).
And… that is why it was very difficult to move beyond “the Unix way”. It is easy to program for (any language, almost any level of developer skill), it is universally available, and it is highly expressive (command line parsers are really good now, and csv/xml/json take care of structured data needs).
Circling back: only PowerShell has had some moderate success with a higher level Unix paradigm, and only with a limited audience.
sukru,
Well, I honestly don’t think it would have been more difficult for unix and its developers to support structured data. Those who had grown up around structured data would be calling that “the unix way” today.
Alfman,
True, but it would be extremely difficult, even impossible, to get them to agree on a standard, or two.
Many to this day, for example, insist on using the thing called emacs, while the superior VIM has been around for decades. 🙂
sukru,
The entirety of POSIX is a standard that we’ve mostly agreed upon, so I don’t think this would have been any different.
Of course people are going to prefer different editors. As a DOS user I preferred wordstar hotkeys, text selection, etc. I picked up VIM rather reluctantly. VIM is my primary console editor these days even though I’m not a big fan of editing modes.
Alfman,
Yes POSIX is a rare good example.
I am usually optimistic, but I might stay pessimistic on this particular issue. I would not be sad if I am proven wrong, though; in fact, I might be glad.
sukru,
I always distinguish between the difficulty of having done it back in the beginning versus doing it now. Developments that could have easily made it in back then have become many orders of magnitude harder to push through today. And it is for this reason that I am not optimistic about progressive changes to get us out of local maxima today without influential backers to promote them.
I did years of this kind of UNIX/Linux data processing. It was never easy and always required clever problem solving. I think the general idea is valid: moving unstructured UN*X data to more structured forms for easier processing. JSON is nice, but XML or YAML might be better.
AndrewZ,
I agree. The ability to pipe data is extremely useful, but keeping it as text, rather than higher level primitives that include arrays and name-value collections, means that we’ve had decades of everyone reinventing the wheel and writing text parsers. Even last week this is exactly what I was doing for work.
Why XML? I prefer JSON to be honest. I’ll grant you YAML is more sophisticated and might be better for meta data purposes, although I’ve never once had the opportunity to work with it.
I feel that human formats aren’t the best choice for machine interchange. Just because everything can be converted to a plain text stream doesn’t mean that it’s technically ideal to do that. IMHO a better approach would be to use efficient machine formats under the hood. And then use tooling to create human translations for front end.
For instance, when you pipe the binary output of a process to a console, it would automatically be translated by the shell into the user’s preferred notation (JSON/XML/YAML/etc). This would kind of give us the best of all worlds. Processes would use efficient binary interchange, and humans would be able to interact with them using ubiquitous and standard shells.
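A rough sketch of the "binary under the hood, human notation at the edge" idea, using Python's binary plist format purely as a stand-in for an efficient wire encoding (a real standard would of course define its own):

```python
import json
import plistlib

# Stand-in for an efficient binary interchange format (binary plist here,
# purely as an illustration; a real standard would define its own encoding).
record = {"iface": "eth0", "rx_bytes": 1048576, "up": True}

wire = plistlib.dumps(record, fmt=plistlib.FMT_BINARY)  # what flows in pipes
assert not wire.startswith(b"<?xml")                    # genuinely binary

# What the shell would do when the pipe ends at a human: decode the binary
# stream and render it in the user's preferred notation (JSON in this sketch).
decoded = plistlib.loads(wire)
print(json.dumps(decoded, indent=2))
```

The key property is that the translation happens once, at the terminal boundary, rather than in every program along the pipeline.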
@Alfman
Love your idea about separating out a standardized output bitstream that can be turned human readable, but I think getting that bitstream standardized is hard enough that we will never see this in our lifetime. Turns out (I think) that ASCII text is about as standardized an output bitstream as we can get.
“Why XML? I prefer JSON to be honest.”
XML allows attributes on a tag, a concept that can’t be translated to JSON (or not standardized at least). Granted, you don’t really need it unless you integrate with another system that uses it.
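The attribute point can be shown in a few lines: XML cleanly separates attributes from content, while any JSON rendering of the same document is a convention (the `@`-prefix used below is one common but non-standard choice):

```python
import json
import xml.etree.ElementTree as ET

xml_doc = '<user id="42" role="admin">alice</user>'
elem = ET.fromstring(xml_doc)

# XML distinguishes attributes from element content...
assert elem.attrib == {"id": "42", "role": "admin"}
assert elem.text == "alice"

# ...JSON has no such distinction; any mapping is a convention. One common
# (but non-standard) convention prefixes attribute names with "@":
as_json = {"user": {"@id": "42", "@role": "admin", "#text": "alice"}}
print(json.dumps(as_json))
```

Two different tools converting the same XML can legitimately produce different JSON, which is exactly the "not standardized" problem.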
“YAML is more sophisticated”
The major reason why I see it being used is that it allows commented lines which is really useful if you are using the format for configuration files.
More useful stuff: https://www.baeldung.com/yaml-json-differeneces (with typo)
Since parsing YAML is more complex than parsing JSON, it will be slower and thus further away from your “efficient binary interchange”.
klodder,
I agree with you that today it would be impossible to standardize. However in the time of Ken Thompson and Dennis Ritchie, it would have become a meaningful standard, as much so as any other programming fundamentals.
Unlike other, superior standards, XML is tedious for both computers and humans. Formats like YAML that are less dense while being more readable are a win-win.
Yeah, I wouldn’t mind something that is equivalent to YAML optimized for machine processing, but ultimately to have value as a standard it has to be ubiquitous, otherwise it becomes an “alternative” rather than a unified abstraction. This is why I think it’s kind of too late to really fix it.
Years ago I studied astrophysics, and the Unix philosophy is strong in that field, as in science and R&D in general, where you have nodes of extreme expertise tuning models and optimising results of difficult problems. That is the strength of those tools. I saw the possibilities for XML in its early days, but I get the perception of it being tedious and overbearing; the workload is significant, so much so that as an individual I dropped its use for various projects under development.
Of course I get that if you’re a corporate with a floor full of staff at your disposal you’ll probably still go XML, but the rest of us won’t, at least not in any reasonable timeframe.
I’m not sure there will ever be one solution that fits all, or even one that fits many for that matter. Whether the work is done parsing text or preparing data, it still needs to be done.
cpcf,
You can express the same things in a plethora of ways. XML is just one of those ways and isn’t necessarily more expressive.
The cool thing about standardizing the abstractions would be that the choice of markup language itself becomes much less important. Person A could set their shell to use XML, person B could set their shell to use YAML. The software they write and use would use the more efficient binary abstractions and wouldn’t really care about human markup languages at all. The code would be more optimal and everyone could be happy using the format of their choice to interact with all their software. I think this would be awesome, but it only works if we embrace higher level data abstractions than plain text.
Sure, but we could be using much more sophisticated tools. You might do things like pipe the output directly into spreadsheets and plot real-time graphs from structured data output without having to parse data first. Once we unlock structured data, there’s a ton of innovation to be had behind that door.
YAML also has better data types, specifically integers, and it’s less visually noisy than JSON. However, YAML being clearspace sensitive is a pain, especially with an unconfigured text editor.
UCL is supposed to solve the same problems as YAML while being clearspace insensitive and slightly less noisy than JSON.
I’d like to use UCL more, but it hasn’t gotten the language library support like YAML has. 🙁
There is a lot of space for advancements in plain text config languages and data serialization formats.
“clearspace”, is that the new term for whitespace? I feel it’s a better name (like allowlisting instead of whitelisting), but this is the first time I’ve seen its use.
@Rrups
It’s my new term. 🙂
In addition, there are entries in the Unicode and ASCII tables which don’t have a printable representation, and non-printable characters in a dark theme are dark.
This is why I jump to a programming language very quickly and don’t write a lot of shell these days.
I still contend C was supposed to be a scripting language, and it got out of hand. When looking at the Unix Philosophy, how small C is, and how Unix was used, it makes quite a bit of sense.
None of those are great options. I would say something binary with loadable schemas. Protocol Buffers, Cap’n Proto, Avro, and Thrift are some binary serialization things to look at.
Plain text is an encoding itself. One that has lots of tools built up to deal with it.
Flatland_Spider,
If by “shell” you mean bash/sh/and relatives, then I never could stand programming in those. Clumsy, round-about syntax, fewer capabilities, hacked-in features, worse performance, significantly worse data abstractions… there’s just no redeeming quality. There’s nothing wrong with “scripting languages”, but just skip the shell entirely and go straight to a real programming language, be it perl/python/go/php/etc!
Woah, that’s quite the allegation 🙂
I kind of like C’s basic syntax, but some of the language aspects like include files and forward declarations were total disasters. Other languages like pascal handle modularity better than C. I like that D lang fixed C’s annoyances while retaining a familiar syntax.
Exactly. Even with those tools, there are so many ways to express data that parsing said data is a chore. The concepts of a collection, array, table, etc. are nearly universal, yet virtually nothing can talk together without massaging the text. Awk/sed/cut/etc are useful tools, but that we have to rely on them so much to get at the data is not good.
Yeah, bash/sh/zsh/ksh or csh/tcsh. Shell scripts are fine for small things and bootstrapping stuff, but it gets ugly and dirty quickly.
Yeah, it’s fun to bust that out at parties. LOL
That’s the thing. The programming language landscape had plenty of flora and fauna at the time C appeared. There were quite a few rather sophisticated languages around, like LISP. LISP is 10 years older than C, and Smalltalk was released about the same time. Fortran is 13 years older than C.
I think that the “unix philosophy” made sense in the 70’s, when Unix was an operating system without memory protection designed for computers with about 64-256Kbytes, and without the current debug tools and scripting languages.
Instead, today I think that the equivalent of the “unix philosophy” would be using DBus to communicate smaller elements, as a component system.
Yeah, Andrew Tanenbaum was correct in microkernels being the future.
” But it is time to take this philosophy to the 21st century by further defining a standard output format for non-interactive use. ”
The point of UN*X is that output does not have to be formatted in any specific way.
Sure, you can define a format if you want, but it doesn’t have to be _the_ “one true format.”
A favourite quote from Usenet days nicely summarizes the idea …
> This is Unix, we shouldn’t be saying “why would you want to do that”,
> we should be saying “sure, you can do that if you want.” 🙂
> — Steve Hayman
The author’s example of having to jump through hoops to parse output from “ifconfig” only highlights how ifconfig’s output does not respect style point (ii): “Expect the output of every program to become the input to another …”.
ponk,
That’s inherently limiting though. It means everyone reinvents the same primitives over and over again. The concept of reuse fundamentally requires us to build higher level primitives.
All too often we have the situation where an outputting tool has a data structure with obvious arrays and fields, and the inputting tool needs that data structure too. It would be so nice to have a common interchange format that reflects this fact, rather than having to work with data formatted for humans and trying to parse that. When we translate structured data into a textual medium, it often becomes ambiguous, inconsistent, and neither expandable nor future proof without breaking compatibility with earlier parsers. In short, we end up with a textual representation that’s needlessly hard to work with. You realize how tedious it is to take machine data into human formats and back again?
For example, if I wanted to take the output of an arbitrary command (say “arp -a”) and pipe it into a data structure or spreadsheet, that should just work without having to mess around with field positions or any other textual formatting byproducts that don’t pertain to the arp data. We could save a lot of the tedium by keeping structured data structured.
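To make the tedium concrete, here is roughly what extracting IP/MAC pairs from `arp -a`-style output looks like today (the sample text is one common output flavor; real output varies by platform and locale, which is exactly the problem):

```python
import re

# One common "arp -a" output flavor. It differs across platforms and
# locales, which is why ad-hoc parsers like this one keep breaking.
sample = """router.lan (192.168.1.1) at aa:bb:cc:dd:ee:ff [ether] on eth0
nas.lan (192.168.1.20) at 11:22:33:44:55:66 [ether] on eth0"""

pattern = re.compile(r"\((?P<ip>[\d.]+)\) at (?P<mac>[0-9a-f:]+)")
entries = [m.groupdict() for m in pattern.finditer(sample)]
print(entries)
```

With structured output, those fields would arrive already named and typed, and none of this regex fragility would exist.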
Ifconfig was just an example, but there are countless examples and the vast majority of unix software has the exact same shortcoming.
@Alfman
We could save a lot of the tedium by keeping structured data structured.
This would be a holy grail to find, having a standardized structured data format is extremely important for pipe-and-filter systems (in OS terms: CLI scripts).
I do try to incorporate a more structured data format in my (Java) APIs, but I think only so much is possible if you see that even the smallest “packages of data” can hardly get standardized, see https://microformats.org/wiki/Main_Page -> and more specifically check out https://microformats.org/wiki/h-adr
-> this is only for an address, and doesn’t really make me feel “wow, we finally got it right!”.
When aiming bigger I guess we can go for “ontology components” -> https://en.wikipedia.org/wiki/Ontology_(information_science) . While this isn’t my area of expertise, I think people at universities spend many many years trying to find that perfect structure only to get to a point later on where we will feel that mixing structures has advantages too (in my mind I compare it to studying psychology, economics or engineering and later on business people say that you need to have a mind that crosses the different fields).
Maybe you have a clearer idea on how to get to that more structured data format? Otherwise we will both have to shake off the disappointment that our IT systems will never get to a perfect state, I guess :-).
klodder,
Well, we don’t often get the opportunity to rebuild such large industries. New platforms can and do offer a lot of innovative solutions to old problems, and I have absolutely no doubt we could innovate here. But my faith in changing the industry’s momentum is a different story. Even if we could do better, there’s so much sunk cost in existing technology that convincing the industry to change is notoriously difficult and expensive.
If you want to talk about the cool things we could do, I’m game 🙂 But I’m under no illusion that it would be anything more than an interesting niche project. It might get looked back on like “plan9” – interesting ideas to update unix, but that’s it.
“If you want to talk about the cool things we could do, I’m game ”
An osnews thread doesn’t feel like the most practical way to have in-depth talks :-P. There are however a lot of thought provoking bits and pieces in this thread, and I try to “give back” where I hope I might demystify something.
klodder,
Yea, wordpress isn’t great for long discussions.
Yeah, I think it would have been fun to be around at the beginning, when these things were being decided and were very malleable. Nowadays things have grown significantly larger than any one person or team; even the giants like microsoft/ibm/apple/google/etc have to exert a ton of force at great cost to budge the needle.
Two ideas I’ve hit on are to make the tools plugins of the larger project. Kind of like how BusyBox has everything integrated and programming languages work on memory directly. That way the data never leaves the application, but the application can present the data as requested while the disk format is some binary format.
The other is pluggable schemas. Kind of like how Protobufs and the Cue language have schemas, and how Logstash or Fluentd transform ingested data for storage. The “|” operator would load the schema and transform the data if possible. I’m not sure how possible this is though, since it would require the transformation process to be rather dynamic.
I should point out, this kind of problem is what Alan Kay was thinking about when he thought up the idea of Object Oriented Programming. The data and program would be a monolithic block, and other programs would interact with it via APIs.
“I’m not sure how possible this is though since it would require the transformation process to be rather dynamic.”
There would need to be some kind of pointers (metadata) on what you can do with the data, and because insights change, there would need to be some kind of versioning in the “protocol”/”data format”. Also, the (structured) data needs to flow between different applications, which means they would need to know how to handle different versions of the structured data.
I think I can come up with problems until my mind gives up, so I might not be the best person to find a solution here :-).
Your OOP remark was interesting, maybe that is already the biggest possible step towards more advanced software interactions.
Flatland_Spider,
Unix could have turned out very differently if it was based on “everything is an object” instead of “everything is a file”.
@Alfman
https://en.wikipedia.org/wiki/IBM_AS/400
-> Object-based design
Unlike the “everything is a file” principle of Unix and its derivatives, on IBM i everything is an object (with built-in persistence and garbage collection).
The AS/400 – iSeries – … is not my area of expertise, but it must be a wonderful system from what I hear about it (it was always 5 years ahead of anything else, cfr. virtualization, being able to migrate from 48 -> 64bit without the shit Windows went through, …).
klodder,
I see that is a literal quote from wikipedia, which is interesting, although wikipedia has little further information about it.
I don’t know anything about it. It’s so hard to be original when there’s half a century of history of computing and counting. Even something that seems novel to us has already been done and forgotten about many years ago. In a couple hundred years it will be even harder to do anything original. If the world hadn’t lost scrolls at the Library of Alexandria to a fire, they probably contained some of the earliest literature about object oriented operating systems, haha.
@Alfman
“I see that is a literal quote from wikipedia, which is interesting although wikipedia has no information about it.”
Yeah, I took that because it was the most concise way of giving a “start” for an investigation. I never worked with the system, but my (retired) father was a high-availability specialist on it for IBM. In Belgium almost all banks use the system, even though it was pronounced dead many times. IBM didn’t do a good job at marketing the machine, and it was renamed a couple of times. Off the top of my head: AS/400 -> eServer iSeries -> … i5 -> System i …
It has a loyal following, and was probably the first machine to have Linux in a virtual partition etc. Anyway, have to go :-).
@klodder
The shell would have to take care of the hints during the parsing phase, kind of like how databases have query planners, and the pipe operator would have to do the heavy lifting of transforming the data by loading the schemas.
@Flatland_Spider
Interesting, I never thought about shell pipes in that way. It sounds a lot like XSLT transforming XML input into different formats. It’s not a way of working that has much of a following anymore, due to the complexity/learning curve I guess, but when it works it’s rather nice (most of the time you should be able to keep these transformations simple).
The only downside I can think of is that you typically shape (“downgrade”) the data stream in a way that it is useful for the next program, which also means you don’t care about preserving the data you won’t be using. When piping data through multiple commands, it might be interesting to preserve certain data as metadata throughout the complete chain. Doable, but needs a lot of people to agree on a kind of standardized payload, and from what I see in different pipe-and-filter framework attempts in the Java world, I’m not convinced that the importance of the messaging format is particularly well known even by the creators of such frameworks.
klodder,
You’re not wrong about this, the choice of structure can be somewhat arbitrary (ie the data is enumerated on this node instead of that node, different names, etc). However I would suggest that the simple fact that data is structured at all brings significant improvement to those who work with the data.
For example, when SQL was conceived, someone might criticize the idea of SQL on the basis that every vendor could end up implementing a different schema, something that would turn out to be true. But I think we can recognize that the existence of different schemas doesn’t in any way nullify the huge benefits of having SQL to frame and query the data. In fact I’ve taken databases from different vendors and created views that join the data between them. Boy was I thankful that SQL makes it so easy to work with said data despite the different schemas!
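The "view that joins different vendors' schemas" point can be sketched in a few lines of SQL via sqlite3 (the table and column names are invented for the example):

```python
import sqlite3

# Two "vendors" expose the same information under different schemas
# (all names here are invented for the example).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE vendor_a_users (uid INTEGER, full_name TEXT)")
db.execute("CREATE TABLE vendor_b_people (person_id INTEGER, name TEXT)")
db.execute("INSERT INTO vendor_a_users VALUES (1, 'Ada')")
db.execute("INSERT INTO vendor_b_people VALUES (2, 'Grace')")

# A view reconciles the schemas, so queries no longer care about the
# differences between the two vendors.
db.execute("""CREATE VIEW all_users (id, name) AS
              SELECT uid, full_name FROM vendor_a_users
              UNION ALL
              SELECT person_id, name FROM vendor_b_people""")

rows = db.execute("SELECT id, name FROM all_users ORDER BY id").fetchall()
print(rows)
```

The schemas differ, yet a thin translation layer makes them queryable as one; that is the kind of tooling structured command output could enable.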
I think that structured data I/O could bring a tremendous amount of benefit to the way we use command line software even if the structures aren’t always identical. Also, if structured data I/O became the de-facto norm for software development, I’m quite confident that we would see more innovation on the tooling front to handle translating between data structures (akin to the “view” in a database).
I think if C had established structured data I/O from the very beginning, every language and framework from that point forward would have supported it as well. It’s one of those things where being late to the game may mean we lost the opportunity to do it. Now it may be too little too late.
@Alfman
Interesting food for thought. It also got me thinking that maybe this is all a balancing act of what we can expect from newcomers in our industry. Shortcuts happen and this level of sophistication almost never gets to see the light of day, but I’m a believer.
Indeed. 🙂 It really only specifies “everything is a file.”
This is a newer complaint of mine. New CLI tools are written for the person, and not the machine. It should be the reverse.
Flatland_Spider,
That’s exactly the issue. Once this precedent was set, it became the norm, and today’s developers grow up with it always having been the norm.
It didn’t have to be this way though. Even the venerable “printf” function could have been implemented to support structured data output…
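A purely hypothetical sketch of what such a printf variant might look like (the name `printf_structured`, its signature, and the `human` hint are all my invention, not a real proposal):

```python
import json
import sys

def printf_structured(fmt, /, human=None, **fields):
    """Hypothetical printf variant: one call site, dual representations."""
    if human is None:
        human = sys.stdout.isatty()  # adapt to wherever output is going
    out = fmt.format(**fields) if human else json.dumps(fields)
    print(out)
    return out

printf_structured("{name}: {size} bytes", name="kernel.log", size=4096)
```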
This could have output structured data rather than just plain text. The user’s shell could print the structure or pipe it directly into more powerful tools like a spreadsheet. Heck that spreadsheet could even be embedded into the terminal OLE style.
I’m just thinking off the cuff here, but there are so many cool directions we could take things in once every application can output structured data by default.
Taking an example from the centralized logging world…
There could be an intermediary program. Let’s say “mapper” or “map”, whose only function is to map input data to structured output data. Kind of like how Fluentd or Logstash ingest data and structure it for storage in a search engine db, or how Kafka is used in multiple stages to enrich data in data pipelines.
“ls -l | mapper ls csv % libreoffice”
It would load the filters or schemas and transform the data. Passing it into a binary pipe, as someone else mentioned.
| is character pipe
% or %> is binary pipe
%! is binary error
%< is binary input
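What the hypothetical "mapper" stage might do internally can be sketched like this: structured records in, the requested interchange format out (the function name, targets, and sample records are all illustrative):

```python
import csv
import io
import json

def mapper(records, target):
    """Sketch of the hypothetical 'mapper' stage: structured records in,
    the requested interchange format out."""
    if target == "json":
        return json.dumps(records)
    if target == "csv":
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=records[0].keys())
        writer.writeheader()
        writer.writerows(records)
        return buf.getvalue()
    raise ValueError(f"no schema for target {target!r}")

# Stand-in for structured "ls -l" output arriving over a binary pipe.
listing = [{"name": "notes.txt", "size": 220}, {"name": "src", "size": 4096}]
print(mapper(listing, "csv"))
```

In the `"ls -l | mapper ls csv % libreoffice"` example, this would be the `csv` branch: the mapper never parses text, it only re-serializes already-structured data.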
Flatland_Spider,
Instead of different pipe operators, I’d rather see one set of pipe operators that work regardless of the kinds of data they are handling. End points could emit & query “hints”. This could work in exactly the same way that unix software can query whether a file descriptor is a TTY or not. Sometimes you may notice that commands will output differently on the console versus when they are piped to a file. This is what’s happening and I think the exact same idea could work perfectly with structured data. Ideally this should not only work locally, but across things like SSH pipes.
You might explicitly tell the shell to set certain “pipe hints” for debugging, but by default the shell would just let the processes set their own hints and they would communicate structured data on their own.
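The hint mechanism already exists in embryonic form: this is the same isatty check tools use today to decide whether to colorize output. A minimal sketch of using it to pick a format:

```python
import io
import sys

def output_mode(stream):
    """Pick a format the way tools already pick colors: is this a TTY?"""
    return "human" if stream.isatty() else "machine"

# A pipe or file reports False, so a program in the middle of a pipeline
# would switch to structured output automatically, with no user flag needed.
print(output_mode(sys.stdout))
print(output_mode(io.StringIO()))  # stand-in for a pipe
```

Extending the same yes/no query into richer negotiated hints (supported formats, schema versions) is the part that would need new plumbing.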
Alfman,
Ideally, yes.
As minimally viable prototype, a binary pipe operator might be the easiest way to demonstrate and test the hypothesis.
There are terrific reasons for having the technical debate; there may be even better reasons for having the philosophical “Should We?” debate! In this regard I have to side with inertia. I’ve had a number of major projects scuppered by inertia: despite new solutions or ideas being obviously technically better, more efficient, even more profitable, inertia won the day!
cpcf,
As with many legacy shortcomings, going with the grain tends to be easier, even when it is technically worse.
As an example, we can look at 1500 byte ethernet packets. It’s easy to show that small packets increase routing overhead, bottlenecks, and costs (much less work gets done, and fewer bytes are transferred, per packet), especially in the context of streams that can easily surpass a hundred megabytes these days. Switching to jumbo packets is obvious, yet it breaks all manner of internet connectivity/hardware/VPN etc. And so legacy decisions are cemented into the fabric of our technology whether or not they actually aged that well. Should we do better? Yes we should. Will we do better though? It’s tough when compatibility is deemed a higher priority than innovation.
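The back-of-envelope numbers behind the jumbo-frame argument, assuming a 40-byte TCP/IPv4 header per packet (options and Ethernet framing ignored, so these are rough figures):

```python
# Back-of-envelope framing overhead for a 100 MB transfer, assuming a
# 40-byte TCP/IPv4 header per packet (options/Ethernet framing ignored).
HEADERS = 40
transfer = 100 * 1024 * 1024  # bytes of payload to move

for mtu in (1500, 9000):
    payload = mtu - HEADERS
    packets = -(-transfer // payload)  # ceiling division
    overhead = packets * HEADERS
    print(f"MTU {mtu}: {packets} packets, "
          f"{overhead / transfer:.1%} header overhead")
```

Roughly six times fewer packets at MTU 9000 means six times fewer per-packet routing decisions, which is where the overhead argument comes from.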
One of the many things IPv6 failed at. :sigh:
Flatland_Spider,
Unfortunately, not fixing packet sizes comes nowhere near the top of my issues with ipv6.
Allowing an MTU of up to 9000B to be negotiated could have been a simple win though.
It’s unfortunate. What were the projects?
Not specifically software projects, mostly engineering solutions for advanced manufacturing / additive manufacturing. Even in the early days these technologies were hijacked by de facto standards mostly driven by pre-existing supply chains.
cpcf,
The question with "new and shiny" is always a difficult one. If the US, for example, had strict standards like the EU, CDMA would not have been utilized by Qualcomm / Verizon, or at least not that early. While our friends on the other side of the pond still had to deal with GSM, they too eventually caught up with 3G.
So breaking things is useful.
On the other hand, if we had not stopped the WinAmp-style custom user interface craze of the late 90s and early 2000s, even basic usage of computer systems would be really messy today. Even Windows itself jumped in with themes and non-standard windowing overrides (which also broke TAB navigation and accessibility, among other things).
So, it is never possible to see in advance which things will be beneficial, which have only marginal benefits, and which are fads that should die out. Given that, the current "democratized" but slow progress is probably the best we can do.
sukru,
It’s also tricky to determine because network effects can play an even bigger role than merit. If a small developer works on feature X, it’s not likely to get much attention. But if that same feature is promoted by an Apple or a Google, they can put it in front of millions of users and developers. As an example of this, Borland unquestionably had superior tools to Microsoft in the 90s, but Microsoft unquestionably had more influence than Borland. Influence won over quality. The point being, it’s not enough to judge a technology by its intrinsic benefits; it’s also necessary to factor in the environmental conditions that can impede or propel it in the market.
Alfman,
(In reverse order)
Borland had excellent tools… in the DOS era. But arguably, their Windows tools were bloated and buggy, not to mention 3rd party components being incompatible between releases. I, too, loved their platform, but had to give up, especially around the Visual C++ 5.0/6.0(?) era.
(Same for Novell, Lotus, and many others which chased the wrong things and lost their customers).
In general, it is not A or B, but usually A and B adopting something together that steers the larger community. For instance, everyone* switched to clang / vscode / protobuf, but not enough people switched to tensorflow. This means there is some hope for switching from the Intel architecture monopoly to ARM on the desktop (as Microsoft is also doing that).
As for network effects, yes that is true. But I think there was a saying about needing a 10x improvement. If I am, say, using YouTube to host my videos, and someone comes along saying Vimeo, for example, offers twice as good a service, it is very unlikely I would want to switch.
The inertia makes moving away from “good enough” very difficult. And this might be a good thing in the long run.
sukru,
I used Borland’s DOS tools extensively. Windows-wise I’ve only used Delphi, and I’d still take it over Visual C++. I find Visual C++ to be bloated to this day. The client I primarily use it for has a legacy code base, and I dread every time we upgrade Visual Studio: the project gets slower and more bloated. I just want to use the old compiler, but they’re a Microsoft partner and they want everyone on the “latest and greatest” VS, even when it performs worse. The problem may be made worse because they’re not upgrading developer machines.
I don’t think I’m familiar with the saying. But I will say many content creators I used to follow have left YouTube. It used to be heralded as a democratizing platform, but it’s become less and less viable for the little guys to make a living on there.
I agree about it being very difficult to change direction, although I don’t see why that’s a good thing in the long run. In the short and medium term, we can save effort by sticking with what we’ve got. We might even climb to a “local optimum”, but there are long-term opportunity costs of not having switched to something better sooner. These add up over time.