Rejecting an engrained practice of bullshitting does not come easily. Frameworkism preaches that the way to improve user experiences is to adopt more (or different) tooling from the framework’s ecosystem. This provides adherents with something to do that looks plausibly like engineering, except it isn’t. It can even become a totalising commitment; solutions to user problems outside the framework’s expanded cinematic universe are unavailable to the frameworkist. Non-idiomatic patterns that unlock significant wins for users are bugs to be squashed. And without data or evidence to counterbalance bullshit artists’ assertions, who’s to say they’re wrong? Orthodoxy unmoored from measurements of user outcomes predictably spins into abstruse absurdities. Heresy, eventually, is perceived to carry heavy sanctions.
It’s all nonsense.
↫ Alex Russell
I’m not a developer, but any application built with frameworks like React that I’ve ever used tends to be an absolute trainwreck when it comes to performance, usability, consistency, and platform integration. When someone claims to have an application available for a platform I use, but it’s using React or Electron or whatever, they’re lying in my eyes – what they really have is a website running in a window frame, which may or may not even be a native window frame. Developing using these tools indicates to me a lack of care, a lack of respect for the users of your product.
I am militantly native. I’d rather use a less functional application than a Chrome web application cosplaying as a real application, and I will most likely not even consider using your service if all you have is a website-in-a-box. If you don’t respect me, I see no need to respect you. If you want an application on a specific platform, use that platform’s native tools and APIs to build it. Anything else tells me all I need to know about how much you truly care about the product you’re building.
Well, this is a double-edged sword.
When Linux wasn’t so mainstream, I’d be happy to have a web application that allowed me to access a service I needed. Now, as you’ve shown in the past you’re perfectly aware of, the browser has become a very convenient way to reach an immense variety of users with much less effort. And users of less trendy OSes are generally happy not to be left by the wayside. Which brought us Electron apps.
I prefer open APIs and native protocols as well, but let’s not forget why browser-based GUIs have gotten so popular with developers, who aren’t actually evil monsters plotting their next move to make their users more miserable. They just do what they can with their limited resources to satisfy the largest number of them. Not necessarily always in the best way, but I’m pretty sure they’re trying.
It’s all a big balance, I guess.
I don’t think React is a bad technology by itself (yes, non-native, but nothing web-based is going to look native, even if it’s faster than some “native apps” made in Python or Qt’s QML). In fact, many native frameworks are taking some React learnings into account (SwiftUI, Jetpack Compose, etc.). I think there are two issues.

First, it’s the most popular framework out there, so you’re going to find lots of bad or mediocre developers using React, just as happened in the past with jQuery, PHP, Visual Basic, etc. Basically, if your technology is the most popular, you’re going to find many newcomers and people who don’t care much using it, because they don’t look for other alternatives. Usually, better developers look for alternatives, and often use them instead, so there’s a bias.

Second, React is being used in applications that collect lots of information. So the UI itself is not the problem; it’s that these apps are doing far more than many native apps, because they want to collect a lot of data, and in the web ecosystem that’s easy. This slows things down.
You wanna reconsider that generalization?
Thanks to all the heavy lifting being done in C++, I’ve easily written PyQt QWidget things which blow the pants off C-based ones. (E.g., even before I threw in some threads and confirmed that QImage unlocks the GIL while working, my first attempt at a loader for GQView/Geeqie collections blew the pants off Geeqie at thumbnail-loading performance.)
…and that’s before you throw in how easy PyO3 and Maturin make it to write your own heavy-lifting bits in Rust so Python remains just a glue language akin to QML but for QWidget.
I am a developer, and I use React a lot (not exclusively). While I generally agree with the assertion that there’s too much emphasis on picking the right tools (and it comes from technical management, where they have an entire realm of bullshit called “devops” which is really just configuring applications), the idea that using frameworks necessarily makes applications “absolute trainwrecks” is the worst know-nothing take I’ve ever read on this blog. There is a good reason React took over the world, and it’s not busy work (though it has enough downsides at this point that I think that particular space is ready for a change).
Also, your desire for application consistency on a platform is, as always, unreasonable. No one else wants that – not developers, not users. It’s not a goal anyone is chasing.
As someone who’s done webdev and who’s had to deal with flaky Internet connections on many occasions, I’ll say that frameworks are fragile junk compared to the graceful degradation you get from using HTML and CSS as designed. On the web, frameworks exist so that managers can pay “full stack” developers – who are really just backend developers trying to make themselves more valuable – instead of people who actually know how to use HTML and CSS. That’s why they keep reinventing client-side templating in JavaScript.
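For what it’s worth, here’s a minimal sketch of the graceful degradation being described: a plain HTML form that posts on its own, with a script that only upgrades it when JavaScript is actually running. The form id and the /subscribe endpoint are made up for illustration.

```ts
// Progressive enhancement: the plain <form action="/subscribe" method="post">
// in the HTML works with no JS at all; this script only upgrades it when it
// actually runs. The form id and /subscribe endpoint are hypothetical.
const form = document.querySelector<HTMLFormElement>("#subscribe-form");

if (form) {
  form.addEventListener("submit", async (event) => {
    event.preventDefault(); // only intercepted when JS is alive and well
    try {
      const response = await fetch(form.action, {
        method: form.method,
        body: new FormData(form),
      });
      if (!response.ok) throw new Error(`HTTP ${response.status}`);
      form.replaceChildren(document.createTextNode("Subscribed!"));
    } catch {
      form.submit(); // degrade to a classic full-page form submission
    }
  });
}
```

If the script never loads – flaky connection, blocked JS, broken bundle – the form still submits the old-fashioned way.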
As someone who’s developed and regularly uses desktop applications, I can say that this web-tech cruft suffers from the Things You Should Never Do, Part I problem. Devs reinvent whatever tiny slice of functionality they personally use, in a buggy way, and then call it done. It’s nothing specific to web tech – just a general “we reinvented bugs that were solved by the ’90s” problem that SwiftUI also suffers from.
(It took years for anything Chrome/Chromium-based to not have broken X11 SELECTION clipboard support.)
…but, if the oft-copied Apple can’t even keep their checkboxes square, I think there’s no hope.
It’s impossible to argue that React is a “full stack” or remotely backend framework – it’s barely a complete library. Maybe you could argue Next.js is full stack, but that’s more backend-for-frontend.
Yeah, it’s true that a static HTML page with some CSS loads faster and degrades better than any modern “SPA” or similar acronym, but it’s also extremely limited in capability, and you can’t scale hand-written static HTML to millions of pages. You could use something like PHP on the backend, with some templating, and mix in some JS snippets on the front end, but this is a well-worn 30-year-old highway, and it has its own challenges.
React at the end of the day is an overhyped templating library with a built-in state manager, but those things are very well implemented, despite the frustration some have managing them. It solved – well – a number of sticky, difficult-to-scale real-world problems that the previous patterns never really got a handle on.
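To make “templating library with a built-in state manager” concrete, here’s about the smallest possible React component – the JSX is the template, useState is the state manager, and changing the state re-renders the template. (A hypothetical Counter, not code from any app discussed here; assumes a JSX-enabled TypeScript setup.)

```tsx
import { useState } from "react";

// The JSX below is the template; useState is the built-in state manager.
// Update the state and React re-renders the template for you.
export function Counter() {
  const [count, setCount] = useState(0);

  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}
```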
I’m fine with saying that it’s time for something new to replace React. I’m not so fine with pretending it (and similar tools like Svelte, Vue, etc.) didn’t solve some real world problems. It definitely did, and still does.
Actually, I think you’ll find most users *do* want consistency between applications on a platform, not least because it reduces the cognitive load when switching between applications. Imagine if each application had its own clipboard-copy shortcut rather than CTRL+C.
Most users want their apps to do whatever it is they turned them on for, and couldn’t care less whether they follow the Windows (ugly, limited) UX. In fact, I’d argue, from the UX/design side of things, that most users definitely don’t want their apps to look like Microsoft Office. If Thom had his way, all apps would have that ribbon thing that infected Office and Explorer. No. Most users don’t want that.
If users did want that, then we’d see a lot more of those types of apps winning the competition, and they don’t.
I agree with the Captain here: it only matters to users that they can get their tasks accomplished. They do not care about the minutiae that people like Thom or John Gruber do. You can point to terrible interfaces and everyone will hate them, but they aren’t an argument that every font must be the same, every radio button must be the exact same control, etc. Keeping everything exact is a recipe for a good UI, but also a path to the pain of worshipping at the altar of consistency instead of focusing on user experience. Google used to create a billion versions of controls, all slightly different in appearance. Some oddballs delighted in highlighting every one and castigating Google over the discrepancies. Real users did not notice or care.
“If you don’t respect me, I see no need to respect you. If you want an application on a specific platform, use that platform’s native tools and APIs to build it. Anything else tells me all I need to know about how much you truly care about the product you’re building.”
I’m sure developers are shaking in their boots, overcome with fear that they will lose all 0.01% of users that feel this way.
Linux would be a barren wasteland of a desktop OS, for the vast majority of users, if all you had were the meagre supply of mostly not that great desktop applications in native toolkits. I say this as someone who has been using Linux for 2 decades.
Saying you don’t respect someone because they choose (or are told to use) a certain framework is just plain weird.
Honestly, these days, given that I’ve got 64GiB of RAM, I’m likely to choose an Electron application over a GTK+ application as long as it doesn’t become a habit.
At least then I’ll get native server-side window decorations on my KDE desktop and a lack of buggy drop shadows on the context menu without having to fight to get the gtk-nocsd LD_PRELOAD hack to apply to Flatpak’d apps and possibly facing having my forced SSDs wrapped around a headerbar. (Both Electron and GTK+ can have native KDE Open/Save dialogs as long as they’re sufficiently new and aren’t explicitly using GtkFileChooser for its extensibility.)
Native Windows app dev here. I sure love native apps and couldn’t agree more.
Yet, it’s getting harder and harder to continue to maintain and extend our Windows application as a small business. While the product is great, the new web applications from our competitors work on any device. Our customers are professionals using locked-down user accounts and installing updates (even with a simple setup) isn’t easily possible for everyone due to missing rights. So more and more customers are asking us for a web application.
We’re too small to develop for Windows and Android and iOS and the web. So we’ll probably start with a web app companion which over the years will take over the lead.
And yes, it will be React, because of all the update traumas we had with our Angular projects.
I would suggest maybe trying something like React Native for desktop, which can get you a mix of the two worlds: a somewhat native rendering surface (native rendering, yes, though not with native widgets) plus extensions to get actual native widgets. You can program the application bundle in JS and actually update it on the fly without requiring the full application to be updated, provided the system libs in that larger app package are stable. And you can tailor the UX for each supported platform, and even maintain a branch with some kind of web support (rough sketch below).
I said I would suggest such a thing, because apparently React is a hated monster, and using it will make you a “very bad person” (VBP). Good luck out there, you devil!
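Here’s roughly what that per-platform tailoring looks like – a minimal sketch, assuming the react-native-windows / react-native-macos forks (where Platform.OS can also be “windows” or “macos”); the component name and style values are made up for illustration.

```tsx
import { Platform, Pressable, StyleSheet, Text } from "react-native";

// One JS bundle, tailored per platform via Platform.select. The "windows"
// and "macos" keys assume the react-native-windows / react-native-macos
// forks; the padding/radius values are purely illustrative.
const styles = StyleSheet.create({
  button: {
    padding: Platform.select({ windows: 4, macos: 6, default: 12 }),
    borderRadius: Platform.select({ macos: 6, default: 2 }),
  },
});

// A hypothetical button that renders natively everywhere from the same JS.
export function NativeishButton(props: { label: string; onPress: () => void }) {
  return (
    <Pressable style={styles.button} onPress={props.onPress}>
      <Text>{props.label}</Text>
    </Pressable>
  );
}
```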
BTW there’s hope. What a delight to finally use Fantastical also on my work Windows PC rather than only on my Mac!
At first I thought they must have recreated it in Electron, but no, it’s native.
Thom expresses the frustration end users have with applications nowadays. Heck, I don’t even like the term “app”, because it is closely related to the concept of lazy programming. I don’t know anything about C++ or ASM, but I know as a consumer that programs created the old way were fast, responsive, and resource-friendly. Modern apps are cyber Rube Goldberg machines, and we are the morons who keep letting this happen by using them – including me.
We have seen the efforts of different groups bringing modern apps to Windows XP, like that weird, interesting port of Discord that looks native to the OS and is responsive. Why can’t developers do that?
Discord specifically… the Linux “native” app is not so great. Just running the website via ungoogled-chromium gives a better experience.
I think *the culture surrounding React* is as much of a problem (and many have pointed this out). For instance, Slack famously shipped for a long time with an unoptimized debug build of their React app; on the other hand, Vivaldi actually uses React for its UI (though, personally, I think Vivaldi could be a bit faster, and React is a strange choice for a browser UI).
As for native, though, I think being *a good platform citizen* is as important as using native frameworks and code. For instance, VS Code has in the past been a pretty good Mac app by following a lot of the conventions (I think they’ve slipped in recent years, but that’s more a choice to be consistent across platforms than true to one). VS Code also runs pretty well, but from what I gather that required an enormous effort on the team’s part – nothing you can reasonably expect anyone without Microsoft-calibre resources to pull off. A more recent example is Obsidian, which is a pretty damn good app despite being web-based (and loved in the usually snobby macOS community for a reason). OK, maybe some are able to pull it off, but I think that’s mostly down to Obsidian being far simpler (a note-taking app). It’s worth noting, though, that neither of these Electron apps to my knowledge uses any off-the-shelf web framework; it’s more custom, hand-made code.
And don’t get me wrong: I’d love more native apps, but I’m willing to settle for non-native ones as long as they respect the platform they’re running on and are efficiently coded. In fact, all else being equal, I *vastly* prefer native apps (when I mained Linux for a while I was a huge GTK snob simply because I wanted consistency). That said, in some cases… the non-native ones are simply better by some metrics that matter enough for me to use them instead of native alternatives.
AI is going to fix this. I recently started using ChatGPT to create Linux-native applications using C++/GTKMM/GStreamer/OpenGL/SDL2. It works. You need to know how to integrate generated code into your applications, but you can also upload your code and have the AI modify it and feed it back to itself. It’s extremely powerful. Six months ago I was convinced that AI was a dead end which would never be appropriate for programming. Now I’ve gone the other way. I think we are going to watch the death of a lot of proprietary software shortly. AI is able to use open source frameworks to generate actually useful code.

I was able to make a native GTK-based wifi heatmapper in a few hours – something Linux has lacked for years. I knew how to create GTK windows/buttons/menus etc., but there was no good tutorial on creating a drawing area with draggable/placeable GUI elements (e.g., for laying out AP locations). ChatGPT understood the problem and gave a solution quickly. I am also working on a live show cueing application, and again AI has been an immense help, letting me skip having to learn GStreamer and focus more on the UI/feature set.

I think once the memory capabilities get bigger it’s going to explode. I’m expecting we’ll be able to feed in GIMP 2.10 and Blender 2 plugins and get modernised versions for Blender 4.x and GIMP 3.0. That will revolutionise both programs. There are a LOT of plugins for both Blender and GIMP which over the years have been left to rot due to lack of maintenance.
Darkmage,
I agree with you. There have been deficiencies and many people are eager to put down AI as not being very capable at handling complex problems, but the thing is that AI is just going to keep improving. Clearly many people aren’t ready to accept this, but a lot of creative jobs are already highly threatened and coding jobs where we take a written specification and translate it into code aren’t too far away either. AI keeps making progress. Barring a disruptive worldwide event that cuts off chip supply chains, AI is the future.
While I might have given some the impression I’m 100% gung ho about AI, I’m not one to have blind optimism in it. Naturally there are new opportunities for those managing/working with AI, but I do expect it to result in a net loss of jobs on the whole. It seems obvious to me that the benefits of AI are going to go to shareholders while the burdens of it are going to be felt most by the working classes who are displaced.
Ideally the use of AI would be accompanied by social programs to protect people who are displaced so that AI can transform the world for the better for everyone. However this is the exact opposite of what’s happening: politicians keep giving corporations all the tax breaks while slashing social programs leaving workers more dependent on corporations. Alas, this is what ultimately turns AI into a dystopia (not the terminator style takeover, but economic dystopia): Corporate shareholders get rewarded on both sides: lower taxes for themselves with fewer workers to pay. Under capitalism, it is the working class who will inevitably face the hardship of an AI transformation.
I’m still on the fence. I see a lot of people looking at the current state of OpenAI and thinking that they see something like progress. I’m not sure I see it. Yes, it’s slightly better (they basically used the LLM to produce some guardrails and checks) – but if you look at the curve, it’s not improving that quickly, or that much any more. There are some tools that have figured out how to mildly compensate for the inability for these things to produce reliable output – sometimes, probably much of the time, maybe even most of the time (I’m skeptical).
I can’t tell if it’s really approaching some inflection point where it’s actually useful, or if it’s just improved enough to capture some of the previous skeptics in to its hype bubble. This is my gut. I just don’t see how something so non-deterministic can scale much further than it already has, and every time (literally EVERY time) someone tries to show me how cool it is, and how much progress it’s made, it still blows up. The last thing I need on a tight deadline is an unreliable partner. Show it to me when it’s reliable. It’s not there yet. I’m (still) skeptical it ever will be. And this is before we get to realities like “LLM generated code has 40% more bugs” which is a problem no one has a solution for. (And at the end of the day it will only ever be good at generating solutions to already solved problems – I speak of LLMs here.)
I will say though, that a lot of people have been betting their companies and their futures on this crap. If it doesn’t work out, it’s going to lead to a blood bath. At the same time, if it does work out – it’s going to lead to a blood bath. I guess management wins?
CaptainN-,
1) Job displacements are already taking shape…not in the future but today. Hopefully he doesn’t mind being used as an example, but I think Thom Holwerda might consider himself in this boat and he’s not alone.
https://www.theverge.com/2024/1/8/24030420/duolingo-laid-off-10-percent-of-its-contractors-because-of-ai
2) Skeptics try to portray the worst case scenario as a limit for AI’s broader uses, but this isn’t fair even for human employees. A generically trained human will naturally lack skills and knowledge for a specific job, but so what? Just because a “generic human” isn’t qualified doesn’t mean A) all humans are unqualified or B) the human can’t do better with specific training. In the same sense, it’s obvious that generic AI isn’t optimal for specific jobs. But this is not an inherent limitation of AI and companies are going to start training AI using their own quality training data…training that is not publicly available.
3) Double standards for AI and humans. Anything short of perfection is seen as a failure for AI without acknowledging the fact that all humans make mistakes on a regular basis. “To err is human”. AI benefits from supervision just as regular employees do. All the while AI is improving over time.
Here’s the thing: if you’re dead set on showing that AI is a fail, then you’re already ‘right’ from the get-go, because you’ll find a reason to draw that conclusion for yourself. But that doesn’t mean your conclusion is applicable to others, especially companies that have a different objective than you do. Companies who view AI as a means to improve profits may find that it works for them regardless of our opinions here. For better or worse, they are the ones who will be making the decisions to use AI, not us.
I don’t mean that LLM “AI” has no use – it’s particularly good at processing natural language in a variety of ways. I mean to say that it’s not particularly useful for solving novel coding problems, or even being particularly good at reliably solving older problems. It’s not like I don’t use it for the occasional quick start for some algorithm I once knew, but it’s still only about 50/50 whether the output is any kind of useful. It’s not getting better. Some of the tools around it are getting more robust, and that can have the appearance of “progress” – I’m just not so sure.
But yeah, translation – it’s pretty good for that. Summaries, pulling out keywords and phrases, or converting a natural language query to some more structured format. LLMs are super useful for those types of tasks. I’m just not convinced they are as useful as people seem to think for replacing thinking work (not that AI in general isn’t coming for that work.)
Yann LeCun has already heralded the plateau of LLM-paradigm gains and called for new approaches.
Devs have always been on the libertarian forefront of things, opting for B2B contracts and fewer taxes and safety nets. Now they are going to pay for that as a working class.
Not so fast – the AI copyright debacle is far from solved, and if AI-generated code turns out to be massively derived from GPL sources, you could get yourself in big trouble.
In the current global business environment, business gets what it wants in terms of laws. If they want to be able to copyright LLM generated plagiarized content, they’ll be able to do so. Common sense or historical norms have nothing to do with it. Power has everything to do with it – and they have all the power.
The React ecosystem is probably one of the worst I have ever seen, as a developer and as a user. It’s problematic in so many ways, and I don’t see any reasonable advantage over pure HTML + CSS + vanilla JavaScript. Quite the opposite: it pushes lots of bloat to the user side and requires way more processing to load simple things. The syntax and the code flow are 100% bullshit. It’s no surprise it was developed by Facebook (now Meta), a company known for producing trash software.
A couple of months ago I wrote an article about how web crap has taken control. It has heaps of data, not just subjective rant, so I think it’s worth reading: https://medium.com/@fulalas/web-crap-has-taken-control-71c459df6e62
Do everything in HTML + CSS + JS and you are back in the 2000s. React greatly simplifies programming and allows you to focus on logic rather than presentation, saving you from manipulating HTML directly. On the other hand, many of today’s apps can be solved with a web app. I have made several apps that solve my problems, and making them online is much better than making them native – for example, a practice-management system or a club-management system. It can be used from anywhere, even from a cell phone, with a single codebase and a single app for everyone. React or some other framework is much better than going back to HTML + CSS + JS, and Next.js is fast, really fast. (Sorry for the rough translation.)
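As a concrete illustration of “saving you from manipulating HTML directly”, here’s a sketch of the same list rendered both ways – hand-rolled DOM calls versus a declarative React component. The MemberList example is hypothetical and assumes a JSX-enabled TypeScript setup.

```tsx
import * as React from "react"; // needed for JSX under the classic transform

// Vanilla JS: you imperatively build and patch the DOM yourself.
export function renderMembersVanilla(list: HTMLUListElement, members: string[]) {
  list.replaceChildren(); // clear stale rows by hand
  for (const name of members) {
    const li = document.createElement("li");
    li.textContent = name;
    list.appendChild(li);
  }
}

// React: declare what the list should look like for a given state and let
// the library work out the DOM updates when `members` changes.
export function MemberList({ members }: { members: string[] }) {
  return (
    <ul>
      {members.map((name) => (
        <li key={name}>{name}</li>
      ))}
    </ul>
  );
}
```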
Footnote #2 is EXACTLY why I try to stay as far away as possible from GUI – and especially web – frontend development.
You might say JavaScript interfaces are the Flash of modern times.
At least JavaScript is more standardized and less proprietary, but in terms of the way it gets used, it essentially took over for Flash. All of the widely criticized aspects of Flash UI websites are now built right into the browser. It’s up for debate whether this is a good thing overall, but I lament how antifeatures built in JavaScript are harder to block these days. I’ve been looking for years for a way to stop JavaScript execution and/or inhibit JS events in Firefox after a page has loaded, to put an end to pesky scripts interfering with users.
Old Flash dev here.
As an interesting footnote, Flash had a programming language at its core called ActionScript, which was based on the same ECMAScript standard that JavaScript is based on (and if memory serves, the runtime started as a fork of Mozilla’s SpiderMonkey – I can’t find a reference for that, though). That standard is numbered: 3.0, 4.0, 5.0, etc. Back in the day, when Flash was necessary because IE sat on the browser space like a 350 lb gorilla, JavaScript was at ECMAScript 3.0. Flash (ActionScript 3.0) was at ECMAScript 4.0 (kinda – not 1:1, but it implemented a portion of ES4), and included extensions like type notation, which folks around JavaScript are now talking about adding (and which you can get in TypeScript). It had other neat things about how properties are scoped, and class support, and things like that, long before JavaScript would gain those important features.
When Adobe decided to throw in the towel on Flash (which is more what happened – less that Jobs killed it, though that helped), and Microsoft finally lost enough market share with IE to pull their head out of their ass, the browser makers finally started working together on a standard (sort of reminds one of AMD and Intel now burying the hatchet, if you think about it). They started a project called “Harmony” to just kind of clean things up and get the browsers to at least implement the same core concepts. Harmony eventually became ECMAScript 5.0, and many of the features of ECMAScript 4.0 were abandoned in that reset. But it’s not true that ES4 was never implemented – it was, in Flash (mostly/partly)!
Today, many of the great features that made ActionScript so much more pleasant to work with than the JavaScript of the time have been implemented in JavaScript, including things like Proxy, classes, and a host of things we only dreamed about in ActionScript, like async/await. Many of the other features which made Flash such a powerhouse also have analogues in browser tech now (video, 3D, canvas, WebAssembly, etc.), but it took years to catch up.
For me, it’s more that I don’t want to have to deal with browsers – especially browser compatibility, and how so much just assumes Chrome, as if Firefox (or the few others) isn’t worth even trying to support, even when they sometimes support the standards better. Or don’t want to support “standards” that Google is trying to force on everyone.
Also, “make it look good” or “make it fast and have all these animations” and crap like that…nope. I don’t want to have to deal with any of that.
It’s incredibly easy to support all the browsers at this point. Most of the time, everything just works, unless you are doing something completely bonkers. It’s not 2006 any more. Things have settled quite a lot.
I am a developer, and I do like a Java+Spring backend with an Angular or React frontend; it is a good way to build systems that are really cloud-based, as you do not have to worry about the operating system of either the server or the client.
That said, I do not like how the web frameworks use a LOT of JavaScript even where it is not needed, e.g., for building the actual HTML and CSS.
Then you can’t complain that there are only two major OSes per form factor (for example, Windows and macOS on the desktop, Android and iOS on smartphones and tablets) and that escaping that duopoly is nigh impossible for the average user. Hint: it’s native apps (even for things that shouldn’t be native) causing this – companies can only maintain native apps for so many OSes.
I had the same opinion until I realized a simple fact: what is a UI library? It’s mostly a layout engine that lets you place widgets (components) in a semi-automated way that adapts to screen/window size. The fact of the matter is that modern browser layout engines (read: Chromium) nowadays blow the pants off any custom UI library in terms of both versatility and performance, for the simple fact that millions of dollars have been invested (by Google, Apple, Mozilla, MS) for them to excel in exactly this area.
The same can be said of developer tools like debuggers and profilers.
The days when HTML-based UIs were a laughing stock (which is when the opinions of experienced senior desktop app devs were shaped) are long gone.
What I really hate about native apps is that their icons fill up everything. To launch one you need to remember where you put it among hundreds of other apps; basically, you need a librarian. Deep linking is also a feature I need: I can go back in the browser history, even search it, and continue the work. Apps simply can’t compete.