KDE developer Nate Graham has penned a post detailing some of the things the KDE project is working on that should come to full fruition next year. There are quite a few things here, but the biggest one is probably KDE’s maturing support for Wayland.
I’ll be honest: before 2020 the Plasma Wayland session felt like a mess to me. Nothing worked properly. But all of this changed in 2020: suddenly things started working properly. I expect the trend of serious, concentrated Wayland work to continue in 2021, and finally make Plasma Wayland session usable for an increasing number of people’s production workflows.
That’s good news, and I hope the move to Wayland fixes my biggest issue with Linux on laptops: playing video is a massive assault on your battery and fans.
Congrats to the KDE/Plasma team. I really like the humility and the pragmatic openness of quite a few influential members in their community.
I wouldn’t expect it to make any difference. The problem is most likely that the graphics driver can’t decode some video formats. (https://wiki.archlinux.org/index.php/Hardware_video_acceleration#Comparison_tables) In my experience my older laptops stay pretty cool with Youtube and VLC playing MP4, but they absolutely burn up with WebRTC video conferences and Zoom, which apparently use a video codec that my Intel graphics driver for Linux doesn’t support.
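The codec point above can be made concrete with a tiny sketch. The codec names are real, but the “supported” set below is a hypothetical example for an older Intel iGPU, not a datasheet; check `vainfo` on your own machine for the actual list of hardware decode profiles.

```python
# Illustrative sketch: why local MP4 playback can stay cool while a video
# call runs hot. Hypothetical hardware-decode set for an older Intel iGPU.
HW_DECODE = {"h264", "mpeg2", "vc1"}

# Typical codec choices per application (illustrative)
APP_CODECS = {
    "vlc_mp4": "h264",      # local MP4 files are usually H.264
    "webrtc_call": "vp8",   # WebRTC conferences commonly negotiate VP8/VP9
}

def decode_path(app: str) -> str:
    """Return 'hardware' if the codec is in the GPU's decode set, else 'software'."""
    codec = APP_CODECS[app]
    return "hardware" if codec in HW_DECODE else "software"
```

When the codec falls off the hardware list, decoding lands on the CPU, which is exactly the hot-laptop symptom described above.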
In general I can’t say I share your optimism for Wayland. After 20 years of running Linux on the desktop I’ve probably become a bit of a luddite, but Wayland really feels like it’s throwing out the baby with the bathwater. I don’t doubt it will be usable 99% of the time eventually, but that last 1% will be a bunch of nasty edge cases. It seems to be following the modern trend of protecting users from theoretical threats and security flaws while getting in the way of the user’s legitimate activities. Yes, I understand that spyware could *theoretically* be logging my keystrokes and copying the output of all my windows, but the hard fact is that I *don’t* have any spyware on my system, and I just want screensharing to work when I need it without jumping through any hoops.
My sense is that Wayland is meddling in issues which should be decided by the OS and userland designers, not some lower-level interface. Things like absolute versus relative positioning should not be hardcoded into Wayland. On security: Wayland is the tail wagging the dog. Designing security in is a higher-level task that Wayland could contribute to, but Wayland is certainly not the right thing to manage it. I personally think Wayland is jumping the gun on these issues, and it needs “higher-level management” to take responsibility for security and set up a working committee. In fact they may have to anyway, because there isn’t a single mainstream OS designed with security in mind from the start.
On the codec issue: This can easily be solved by bundling new codecs that take priority over the codecs supplied with the driver. For a long time Windows had a mechanism where users could meddle with codec priority. Today almost nobody needs to do this (and I couldn’t find it now if I went looking for it) because of newer driver models (which are generally very stable and backed by a good driver development kit that sometimes does the heavy lifting for IHVs) and generally better support from IHVs. Linux tends not to have this, due in part to the threat of breaking compatibility and some people refusing to sign NDAs.
I have no idea why people keep falling for proprietary communications software when things like WebRTC and SIP exist. There’s no need for walled gardens, or different standards for the sake of different standards. But there is always some bright spark wanting to reinvent the wheel, or some hedge fund wanting to own the entire market, Microsoft or Uber style, not to mention meddlers at the NSA and GCHQ and backdoor deals with Microsoft to buy out and compromise things like Skype.
Ughh, yes, I totally agree that proprietary offerings like Zoom and Teams are pretty awful, and also unnecessary. I actually implemented Big Blue Button and Jitsi Meet for the group I work with at the start of COVID, but the majority eventually voted to move to Zoom… Oh well. At least Zoom has a Linux version. But on a completely pragmatic level, I have to say that on Linux I don’t notice any difference in graphics/thermal performance between Zoom and the open source WebRTC apps. That is to say, they both run terribly hot (to say nothing of the battery life), because neither supports hardware video acceleration, at least not on my slightly older Intel processor generation. Teams for Linux actually does appear to support hardware video acceleration, so it’s noticeably better in that sense, but the overall product is an abomination that Microsoft just threw over the fence to make a show of its love for Linux. Apparently they don’t actually love Linux *users*, because Teams for Linux is not even close to feature parity with the Windows version. I honestly don’t understand how an Electron-wrapped web app can be so drastically inferior and different from one platform to another, but of course Microsoft found a way to mess that up with Teams. It’s also a worrisome example of how Chrome is becoming the new Internet Explorer 6. The WebRTC features for Teams only work with Chromium-based browsers, not on Firefox, despite Firefox actually pioneering and championing the WebRTC standard.
It’s frustrating how Microsoft of all companies don’t get portability layers. It should just be a #include. Yes, I know things can be a bit more complicated, which is why you use abstraction layers too. Boom. Done. Then there are people who only test on their narrowly defined system, so it only works with one product, not a range of products following a standard. None of this is new. They should be deeply embarrassed with themselves.
It’s not just developers like Microsoft who hate other developers or user bases, but government and other service providers who seem to hate their clients/customers. One thing I have noticed is that some people with a job title always insist on you using their chosen protocol no matter what the usability or security implications are. The other thing is administrative defaults (like in Teams) set so that neither they nor you can invite additional non-corporate guests into the conference, even to fulfill a safeguarding role, which in some cases may be a legal requirement. Then there are people using Zoom just because that was the last big thing they heard about in the media.
This is exactly the reason for almost all “normal” computer users. I try to do my part and inform people that alternatives exist and invite them to meetings on the open WebRTC platforms whenever it depends on me to do so. But there’s still a tendency toward defaulting to a commercial offering and assuming it must be better because it costs money.
@rahim123
Very true. To make sense of a complex and changing world, people’s behaviour can be “sticky”. People can also take mental shortcuts when assessing authority and reliability and value – this is recognised as underpinning areas of interest within contract law. But then you also have marketing. Commercial companies spend a lot on mindshare and basically flood everywhere with adverts or give journalists something to talk about. They also spend money on reducing friction at the point of sale.
Back to KDE and Wayland. I think Wayland has some developer and end user friction issues and they are not listening. Fix this and any desktop on top of Wayland becomes more of a proposition.
rahim123,
Yeah, this seems to be happening everywhere. As an industry we are replacing free P2P options with centralized services that rely on third-party providers like Webex, Zoom, etc. I’m reminded of MS NetMeeting back in the late 90s, which did video conferencing and desktop sharing over a modem or basic DSL. Granted it was a proprietary Windows program, but nevertheless it was a good example of P2P working without the need for subscriptions, and it worked very well. The software industry has turned its back on P2P, which, despite its great potential, is overlooked in favor of centralized subscription models.
There are those of us who push alternatives but alas network effects often strip us of the element of choice. 🙁
Yes, I agree. That’s one of the many problems with Wayland. There’s no reason why a window system can’t have per-program privileges and only allow certain clients to manipulate global state. The window system in the OS I’m writing will do exactly that (since my OS will have a radical file-oriented architecture, all security will reduce to file security, and the window system itself won’t have to do anything other than split up its interfaces into different files). Leaving out such functionality is inexcusable in my opinion. Even Linux hasn’t been immune to the trend of creeping architectural authoritarianism despite being free in terms of licensing.
It would be nice if an authoritative expert on security (by expertise, not vested interest, job title, or bias) spoke up on the architectural issues versus Wayland. I think the Wayland people are pulling a Gnome 3: only listening to the people they want to hear and cherrypicking. Not only that, but their view needs to be articulated in a readable way, not buried in “herd memory” scattered across a bazillion different youtubes and blogs only familiar to the people running the Wayland show. As things stand I think Wayland’s approach on these issues is more office politics than science.
andreww591,
Yeah, there needs to be a permission for these sorts of things. The problem is, by rejecting such capabilities up front, it makes Wayland unusable for certain use cases and poses barriers to adoption. We’ve already seen this quite explicitly:
https://ubuntu.com/blog/bionic-beaver-18-04-lts-to-use-xorg-by-default
So as I see it they’ll eventually have to add this whether they want to or not, so I suspect it will get there. But unfortunately it may end up getting hacked in when it should have been planned for from the start. Oh well.
Yes, for better or worse linux development is extremely authoritarian. From the kernel down through many (but not all) distros there’s little pretense of democracy there. People can and do fork it of course, but distros have their own agendas and don’t ask or care about user needs. The CentOS debacle is a good example.
What I see happening with Wayland is that even though the goal was a cleaner, more straightforward design, all the capabilities that X has that Wayland lacks are going to be hacked in, and the end result will be a system just as convoluted as X, just differently so.
X will have been reinvented, only just as poorly, with much of your software breaking along the way at some point.
What is the reason for manipulating global state? X Windows used that in the past to do things like making screen savers work and remote desktops/screen sharing. I understand it was frustrating that many Wayland compositors lacked those features, but they’ve been added back using APIs that allow for finer-grained permissions rather than the free-for-all that previously existed. I think that’s progress rather than an architectural failing.
I think a window system should provide some kind of standard API for programs to manipulate specific parts of global state (e.g. screenshots, positioning of windows of other clients, etc.), subject to fine-grained permissions. With Wayland it seems that instead of the obvious idea of splitting up the all-or-nothing security of X11 into separate objects with individual access control, they decided to keep the all-or-nothing model and just make “all” a lot smaller, as far as the core protocol is concerned. I guess it is a bit harder to split up permissions in a window system based on sockets and anonymous shared memory than in one on a pure file-oriented OS with per-process permissions as its primary security model.
It seems that the APIs to manipulate global state under Wayland are not part of the Wayland protocol itself and are completely different between different Wayland implementations, leading to various applications that only work on specific desktops (which usually isn’t much of a problem on X). Of course, as is typical for Linux people, they are now trying to fix the lack of standardization for such features by adding yet more hacky middleware layers that will probably be replaced in a few years. I definitely think it’s an architectural failing that they didn’t make that kind of functionality part of the core protocol.
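The “separate objects with individual access control” idea above can be sketched in a few lines. This is a hypothetical toy API invented for illustration, not any real window-system interface: each piece of global state becomes its own object with its own access list, instead of one all-or-nothing privilege.

```python
# Toy sketch of per-object access control (hypothetical, for illustration).
class GlobalStateObject:
    def __init__(self, name):
        self.name = name
        self.allowed_clients = set()   # per-object ACL

    def grant(self, client):
        self.allowed_clients.add(client)

    def access(self, client):
        if client not in self.allowed_clients:
            raise PermissionError(f"{client} may not touch {self.name}")
        return f"{client} accessed {self.name}"

# "Global state" split into separately-permissioned objects
screenshot = GlobalStateObject("screenshot")
window_positions = GlobalStateObject("window_positions")

# A screenshot tool gets only what it needs, nothing else
screenshot.grant("flameshot")
```

Under this model a client granted screenshot access still cannot reposition other clients’ windows, which is the fine-grained split the comment argues the core protocol could have standardized.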
andreww591,
Those are good points. Having one API working across window managers is a big deal for application compatibility. By refusing to handle certain features like window positioning, screen capturing, screen sharing, etc., Wayland may be passing the buck to desktop managers. The issue with this, of course, is that it creates API fragmentation for these features, and whether things work or not could depend on which window manager is running. Consequently applications will end up either requiring specific desktops or requiring more code bloat and developer effort to handle all the APIs. It wouldn’t be the end of the world, but it’s easy to see how it could turn into a source of frustration for future users and developers. Some will probably see it as a missed opportunity to provide a consolidated API.
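The fragmentation cost described above looks something like this in application code. The two D-Bus interface names are the real screenshot interfaces exposed by GNOME Shell and KWin, but the dispatch function itself is invented for illustration:

```python
# Sketch: without one standard protocol, an app needs a backend per desktop.
def screenshot_kde():
    return "via KWin's org.kde.KWin.ScreenShot2 D-Bus interface"

def screenshot_gnome():
    return "via GNOME Shell's org.gnome.Shell.Screenshot D-Bus interface"

BACKENDS = {
    "KDE": screenshot_kde,
    "GNOME": screenshot_gnome,
}

def take_screenshot(desktop: str) -> str:
    """Each new desktop means another branch a developer must write and test."""
    try:
        return BACKENDS[desktop]()
    except KeyError:
        raise NotImplementedError(f"no screenshot backend for {desktop}")
```

An app that ships only these two backends simply fails on any other compositor, which is the “requiring specific desktops” outcome described above.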
Yeah, there’s a disconnect between those focusing on ideology versus those focusing on pragmatism. The ideologists don’t want to hear about pragmatism, haha.
For video conferences, the issue could be the screen overlays and all those fancy gadgets that are transposed over the video chats.
Unlike plain old video streaming, there are a lot of things that are done locally. Previously you would at most have some subtitles and ad links (the controls were usually on a separate div). Now many video streams are combined, and all those effects are added in post-processing. And that does not scale well to older laptops. (Mine can start cooking breakfast after a 30-minute meeting.)
Interesting, that makes sense. What OS are you running on the older laptop that gets hot?
@sukru
All of those operations should be cheap even on a modest GPU. You’ve basically got a load of bits being blitted across the bus then a few basic operations to scale and position them on the screen which are culled where necessary to avoid overdraw. As long as the PCI bus doesn’t get saturated I don’t know what the problem is.
Can you check your GPU and CPU loads next time you run that app just to narrow down where the issue is?
You can also trace which graphics API calls or code paths are hogging resources. It could be a dud driver, but driver issues are actually pretty rare. It’s usually bad code causing problems.
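The “shouldn’t saturate the bus” claim above checks out on the back of an envelope. The numbers below are illustrative assumptions (uncompressed RGBA frames, nominal PCIe 3.0 x16 bandwidth of roughly 16 GB/s), not measurements:

```python
# Back-of-the-envelope bandwidth for blitting video frames across the bus.
def stream_bandwidth_gbps(width, height, fps, bytes_per_pixel=4):
    """Uncompressed frame traffic for one video stream, in gigabytes/second."""
    return width * height * bytes_per_pixel * fps / 1e9

# One uncompressed 1080p30 RGBA stream: ~0.25 GB/s
one_stream = stream_bandwidth_gbps(1920, 1080, 30)

# Even a 16-participant gallery view (~4 GB/s) stays well under the
# ~16 GB/s of PCIe 3.0 x16, supporting the point that raw transfer is
# rarely the bottleneck.
gallery = 16 * one_stream
```

So if the machine still cooks, the cost is more plausibly in per-frame processing than in moving the pixels around.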
@HollyB
Instead of plain bit operations they usually do hardware-accelerated blending (i.e., desktop-composition style), and some of the effects (like background blurs) are really computationally intensive.
These are known issues, on all platforms:
Zoom: https://nerdschalk.com/how-to-fix-high-gpu-usage-issue-in-zoom/, https://devforum.zoom.us/t/very-high-cpu-load-audio-video-problems-for-web-sdk/7937/7
Meet: https://www.reddit.com/r/chrome/comments/g6i1ij/chrome_on_mac_has_very_high_cpu_usage_in_meets/, https://www.howtogeek.com/412738/how-to-turn-hardware-acceleration-on-and-off-in-chrome/
Teams: https://techcommunity.microsoft.com/t5/microsoft-teams/microsoft-teams-disable-hardware-acceleration/m-p/301765, https://www.itexperience.net/fix-performance-issues-teams-high-cpu-memory-usage/
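To see why a background blur is so much heavier than plain playback, here is a rough cost sketch. A naive Gaussian blur touches k×k neighbours per pixel, so the work grows quadratically with kernel size; the figures are illustrative, not benchmarks:

```python
# Approximate per-second cost of a naive k x k blur over a video feed.
def blur_ops_per_second(width, height, fps, kernel=15):
    """Approximate multiply-adds per second for a naive k x k blur pass."""
    return width * height * kernel * kernel * fps

# Blurring a 720p camera feed at 30 fps with a 15x15 kernel:
# roughly 6.2 billion multiply-adds per second, every second of the call.
ops = blur_ops_per_second(1280, 720, 30, kernel=15)
```

A separable blur cuts k×k down to 2k passes, and a GPU eats this for breakfast, but on machines that fall back to software compositing this is exactly the kind of per-frame work that heats up an older laptop.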
FWIW, Thom mentioned on Twitter it fixed the issue. Poor Wayland support is the reason why I’ve abandoned KDE and moved to Gnome/Sway. Screen sharing works pretty well in GNOME Wayland these days. I have no reason to use anything non-Wayland these days; it’s faster, smoother, more secure. I don’t have any spyware on my system either, but dear god, SolarWinds didn’t have any either, until they did. You don’t wait until your house is on fire to buy a smoke detector, do you?
I understand your point. It’s definitely a calculated risk. It’s one of the reasons why I exclusively use Linux, because of the very real risk of spyware on Windows. And I don’t use my Android devices for anything important or access any important accounts with them. But the architecture of Xorg is still nothing more than a theoretical risk (after all it’s been around for how many decades?), whereas the probability of not being able to get my work done due to screensharing and clipboard issues with Wayland is much more real.
But to your point, does pretty much any kind of screensharing app just work under Wayland? WebRTC, proprietary Electron apps, proprietary remote desktop access? Do they actually share the contents of all windows, including Xwayland? What about screenshot apps, does pretty much any one work, and do they capture 100% of the screen content? And what about clipboard manager apps?
https://www.giac.org/paper/gcih/571/x11-forwarding-ssh-considered-harmful/104780
“But the architecture of Xorg is still nothing more than a theoretical risk”
rahim123 No, X11 is not a theoretical risk but a full-blown one, with write-up after write-up on how different exploits were done by taking advantage of X11 weaknesses. X11 was not a theoretical risk in 2004 and it is still not a theoretical risk. XWayland by default has a stack of features turned off that have all historically caused major X11 security problems. Yes, two of the things you have to turn off to fix the X11 protocol are the sections X11 WMs interface with and the sections X11 compositors interface with.
rahim123 It surprises most people that there are Linux worms out there that do exploit X11 forwarding to get from system to system. X11 needs to be treated as security-flawed, and using X11 is a serious risk that should not be taken lightly.
Yes, Xorg has been around for decades, and before Xorg existed, worms exploiting X11 forwarding already existed. X11’s security faults are old, well documented, and regularly exploited against anyone who is overly trusting with X11 on a network.
@rahim123
Please give a citation from a genuine security expert (not a Wayland job title or fanboi) on the security issues, and a genuine reason to justify Wayland’s “tail wagging the dog” policies. Quoting badly designed systems as examples does not count.
Sorry to be blunt but I’ve only ever read handwaving or snakeoil when looking for an answer.
HollyB, the answer is simpler than you think.
https://en.wikipedia.org/wiki/Evaluation_Assurance_Level
Does the X11 protocol technically pass the EAL4 requirements?
A lot of what Wayland is doing is the tail of a very big dog.
https://en.wikipedia.org/wiki/Multilevel_security
This is the big point: you cannot properly implement multilevel security in X11. Windows and OS X can, in fact. With Wayland’s design it is possible.
What I’m looking for is the architectural security case for Wayland hardcoding things it shouldn’t have any business hardcoding. Any faults with X11 don’t justify this. That’s the problem.
Yes, with Wayland better security models may be implemented than X11’s Swiss cheese, but this doesn’t mean Wayland should butt its nose into bigger architectural decisions. In any case, even assuming Wayland’s hardcoding is correct, there’s still the lack of architectural context. Hardcoding a kludge for issues which should be managed elsewhere is shifting a problem at the cost of causing another problem. So now you have two problems instead of one, when you should have zero.
I think Wayland is guilty of overreach, and of the left hand not knowing what the right hand is doing. That’s why the issue needs an independent security perspective, not a job title on the Wayland project, to look into this. Then you have toe-treading responses with “we know we’re wrong but it’s our decision and we’re defending it” mentalities. Now we need an expert on psychology and organisations to pipe up. Then that will set the security expert off, which means we will need a third expert in systems theory to explain how organisations have their own integrity issues which need fixing because of these conflicts.
https://en.wikipedia.org/wiki/Multilevel_security
“What I’m looking for is the architectural security case for Wayland hardcoding things it shouldn’t have any business hardcoding. ”
That is starting the wrong way. Can you implement multilevel security properly without doing what the Wayland protocol does? The answer is no, you cannot. On Windows and Mac OS there is basically a hidden protocol in the compositors. You notice this hidden protocol when you look into how Windows and Mac OS implement things. On Windows, look at UAC windows and old-application scaling.
Yes, win32 absolute positioning has not been truly absolute since Windows Vista. Really, a win32 application is running like Xwayland on top of Wayland inside Windows.
Yes, the way the Windows Vista compositor does it is what makes multilevel security and application window scaling actually work under Windows.
Really, the question with Wayland vs Windows/Mac OS is whether you show the security reality to application developers or not. Windows/Mac OS decided to hide it behind a layer, where the Wayland protocol says: here are the limitations of the direct path.
@oiaohm
No I’m not. It’s a global view, i.e. “a decision in the round”, before you start cutting code and go up an alley that requires hacks and workarounds as well as cheesing a million people off. What happens next is really something else: a mix of business decisions and technical decisions where the underlying architecture is designed with all the factors in mind. Wayland just decided to go “we are doing this” and dropped anchor, which ripped the bottom out of a lot of people’s boats. Then there is a political after-the-event decision, “wah wah security”, to justify a preconceived end point. So instead of a clean, well designed and documented architecture that’s easy to make sense of (I have never seen a chart laying out the architectural model), you have “herd memory” and everyone arguing points from their individual perspective. It’s simply too much work and friction to chase up, and I’m not watching dozens of youtubes or arguing the point with random people who are not top-level decision makers.
Basically, you haven’t answered the question.
https://lwn.net/Articles/517375/
>>Wayland just decided to go “we are doing this” and dropped anchor which ripped the bottom out of a lot of peoples boats. Then there is a political the after event decision “Wah Wah” security to justify a preconceived end point.
Sorry, no, this is 2012 talk, but I could go back to before Wayland existed. Kristian Høgsberg, who started Wayland, is one of many people who, before Wayland, were trying to get multilevel security to work with X11.
>>Any faults with X.11 doesn’t justify this.
This point of yours is wrong. Wayland’s design is based on the earlier debates about what was required to fix X11 so multilevel security could work. The result was basically: write a new protocol, as there is no way forward without ripping the bottom out of a lot of people’s boats.
Really, making multilevel security work with X11 requires running multiple virtual machines. When each program is running in a virtual machine, it does not have real screen positions either. If you use Xephyr to sandbox under X11, you also don’t have absolute positions any more; this was a feature argued for by security people.
HollyB, the big thing here is that Wayland does not come from nowhere. There is over a decade of security debates in the X11 world before Wayland existed, arguing over every point and coming to agreement on what would be the best way to do security.
What Wayland hardcodes for security are features that were prototyped with QubesOS using virtual machines and XSELinux. Yes, these beasts were also world-breaking.
HollyB, the big catch here is that the features you see on high-security desktops, but not on your general desktop, are what Wayland is shoving straight in your face.
**I have never seen a chart laying out the architectural model.
https://wayland.freedesktop.org/architecture.html The reality is there is not very much to the architectural model chart. Most likely you have seen the Wayland model and thought that’s not enough, that cannot be it, when that is it.
https://en.wikipedia.org/wiki/File:Wayland_display_server_protocol.svg
This was done by a developer taking a system-wide view, adding the system-provided parts to the Wayland architecture model. Notice how much is EGL; all those green EGL links are not Wayland.
Yes, that 2012 debate is important, because that is when they said that for security DMABUF needs to be used for Wayland protocol transfers to prevent security issues. EGLStreams from Nvidia still does not implement DMABUF, so it still has the 2012 security fault. That fault does show itself: a Wayland compositor cannot restart and leave applications running, like it should, when using Nvidia.
The relative positioning that Wayland mandates is there for security. It was forced by virtual machines and Xephyr in the past, out of security need.
A lot of the problems Wayland is having to solve were unsolved problems in the virtual-machine and Xephyr attempts at X11 secure desktops.
Really, here is a fun one. Wayland client-side decorations happen to be part performance and part security. There were debates in 2006 about the idea that applications could be fully encrypted in memory; so how would they display their output to the compositor?
Fun, right? The Wayland model technically supports an application encrypted in memory passing an encrypted buffer to the compositor, with that buffer only being finally decoded by the computer screen. This explains why the Wayland model does not have Wayland doing very much.
oiaohm,
You keep obsessing over relative positioning, which is mostly a semantic debate few care about. Most of us know that X11 is an ancient code base and that there is merit in replacing it with something much cleaner and more modern. That’s not the problem at all. The problem is when the community working on a new solution is comprised of individuals with exceptionally elitist attitudes inflating their opinion over everyone else’s. Sure, it’s fine for them, but when tasked with making a general purpose solution for the broader community it becomes particularly important to work with the community. And to be blunt, linux often falls short here. I say this as someone who really wants linux to be good. In many ways it is, and wayland is mostly good too… but it could be better if they had worked with the community to address our collective needs from the outset, rather than ending up with more convoluted workarounds after the fact. Alas, it is what it is.
I really wish everyone could be more respectful of differing viewpoints rather than being so authoritarian about only their opinions being right, but I know that I may as well be pissing in the wind, haha 🙂
Unpopular opinion puffin: Wayland is proof that the FOSS community can’t make a true competitor to WDDM even when given a clean sheet to work with. On a desktop or laptop system, I’d rather have WDDM with the NT kernel than having X.org or Wayland with the Linux kernel.
RAM is cheap nowadays. For example, last week I updated my dad’s cheap laptop from 2GB to 4GB for just 15 euros, inclusive of 24% VAT. Meanwhile, battery energy is expensive and scarce (and adds weight to the device) and CPU power is also getting scarce (with the recent trend for ultra-low-voltage and fanless designs). Looks like Windows has done the right thing by sacrificing the abundant resource (RAM).
You missed it: when Microsoft changed to WDDM with Windows Vista, Nvidia had to be pulled along kicking and screaming. MacOS dropping Nvidia support was Nvidia refusing to change. Same story here:
https://en.wikipedia.org/wiki/Direct_Rendering_Infrastructure
The Linux WDDM is called the Direct Rendering Infrastructure. Yes, Kristian Høgsberg, the developer of Wayland, also wrote version 2.0 of the Direct Rendering Infrastructure in 2008, before Wayland was released. Version 3.0 came out in 2013. It turns out the Linux world does not change this much, yet at their core, Nvidia’s binary blob drivers are still DRI 1.0 from 1998.
How does Microsoft pull Nvidia kicking and screaming, kurkosdr? Since Vista there have been 16 different versions of WDDM, and each new version is required for new DirectX features to work. So if Nvidia does not release a new WDDM driver supporting what Microsoft wants, their graphics cards will not support the latest games. This is kind of the downside of open standards like OpenGL and Vulkan: for Linux, this lever is not an option.
“CPU power is also getting scarce (with the recent trend for ultra-low-voltage and fanless designs).”
Not really. The M1 from Apple shows that low-voltage and fanless does not have to equal a lack of CPU power. It’s horrible, really, watching an M1 last twice as long in benchmarks as its x86 competitors on the same size battery while performing the same.
One of WDDM’s first objectives was to unify scheduling between vendors. DRI 3.0 was also meant to do the same thing.
“Looks like Windows has done the right thing by sacrificing the abundant resource (RAM).”
To be correct, RAM is not an abundant resource at the moment. There is kind of a hard wall for emergency hibernation at 16GB of RAM, caused by storage transfer rates and how little notice you get that the computer you have sleeping is about to sleep its way out of battery. The more RAM, the less time a computer can sit with hardware in sleep mode and the OS sitting in RAM. Microsoft’s choice to treat RAM as an abundant resource is wrong. Modern RAM is not the most power-effective beast.
https://www.electronicsweekly.com/news/research-news/uk-iii-v-memory-saves-power-dram-flash-2020-01/
This new prototype non-volatile memory, if it pans out, will make RAM a truly abundant resource due to being non-volatile, but it does not exist in production yet.
**pointing out that his memory will not need the reconstructive write that is necessary after reading a ‘1’ from DRAM. Nor will it need DRAM’s periodic refresh.**
kurkosdr, it is really easy to forget that current RAM designs are real power vampires: sitting in sleep mode sucking down power with the refresh, and when you read a 1 from DRAM you have to burn power to put it back. Double the RAM in a laptop and you halve the sleep time.
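The “double the RAM, halve the sleep time” claim is just proportionality, which a tiny calculation makes visible. The per-gigabyte self-refresh power below is an assumed round number for illustration, not a datasheet value:

```python
# Illustrative arithmetic: suspend-to-RAM time vs installed RAM.
def suspend_hours(battery_reserve_wh, ram_gb, mw_per_gb=20.0):
    """Hours a suspended-to-RAM laptop lasts on a given energy reserve.

    mw_per_gb is an assumed self-refresh power per gigabyte (illustrative).
    """
    watts = ram_gb * mw_per_gb / 1000.0
    return battery_reserve_wh / watts

# With a 5 Wh reserve kept aside for suspend:
t8 = suspend_hours(5, 8)     # 8 GB of RAM
t16 = suspend_hours(5, 16)   # 16 GB of RAM: exactly half the suspend time
```

Since self-refresh power scales with the amount of RAM being kept alive, doubling the RAM doubles the drain and halves the time before the machine must wake up and hibernate.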
Power management is a complex problem. The fact that Linux graphical environments are starting to put cgroups around applications helps.
@oiaohm
WDDM has lots of backwards compatibility. If an IHV doesn’t want to release new features by issuing a new driver that’s something else and has nothing to do with WDDM. It’s not as if Microsoft break the entire driver model with each iteration to support new features many of which have nothing to do with graphics APIs.
Yet according to you Linux is the pinnacle of architecture stability.
Yes really. One new system out of nowhere which 95% of general computer end users will never use.
Yet according to you one swallow makes a summer.
Honestly I can do without the fanboism…
**WDDM has lots of backwards compatibility. If an IHV doesn’t want to release new features by issuing a new driver that’s something else and has nothing to do with WDDM. It’s not as if Microsoft break the entire driver model with each iteration to support new features many of which have nothing to do with graphics APIs.
Except this comes at a price, and the worst part is it’s a lot larger than you think.
https://borncity.com/win/2020/06/13/windows-10-version-2004-graphics-issues-with-multi-monitor-and-f-lux/
This is one of the most recent, but it has happened many times. Not breaking the entire driver model when making a functionality change leads to the above issues: nice random-ish breakages of applications, and application developers having to maintain a growing list of quirks. Yes, Microsoft does force application developers to release new applications by bringing the house down on their users from time to time.
This comes down to a question: who do you bring the house down on, application developers or driver developers? Pick one.
**Yet according to you Linux is the pinnacle of architecture stability.
It depends on your measure of stability.
https://www.kernel.org/doc/Documentation/process/stable-api-nonsense.rst
I guess you read this.
**Kernel interfaces are cleaned up over time. If there is no one using a current interface, it is deleted.
This turns out to be an important difference. A lot of issues happen with Windows when people load an old driver on a new version of Windows, because the backwards-compatibility code is now broken and no one noticed. No one knows until some poor user is using their system and it goes belly up; with the Microsoft driver model, this includes WDDM.
This is the difference: in the Linux world, an interface that was not confirmed to be functioning correctly will have been removed from the kernel, breaking the driver. Think about when you want the failure: when you are installing the driver (Linux), or when you are doing something critical that must be right (Windows)? Please note the Windows choice has resulted in different medical machines killing people. This is why there are different definitions of stability.
Stability to install the driver (Windows) or stability when using the driver (Linux): pick one. And it truly is pick one. Stability to install the driver means you will maintain a larger and larger code base supporting more backwards-compatibility stuff, resulting in more lines of code and more bugs.
**Yes really. One new system out of nowhere which 95% of general computer end users will never use.
Except it was not the first one. ARM-based Chromebooks and other things have been around for quite some time. There has been a need to question how much power CPUs and GPUs use.
Remember, Linux has embedded uses. There are some phones, for example, where running the ARM CPU four times harder, using zram to compress data in RAM, is more power-effective than adding more RAM, so it results in longer battery life with less RAM.
@oiaohm
Do you know what? I don’t actually care. I have an old iGPU and an old external GPU I can run over an ExpressCard adapter on Windows 10, and I’ve never had a problem. Beyond a minimal point I’m simply not interested in Reddit chatter or the byzantine inner workings and politics of any system, whether Windows or Linux, because most of it is wild speculation or office politics. My stuff works and that’s all I need to know. I let the job titles and engineers sort it out.