Consider these memory requirements for Fedora Core 2, as specified by Red Hat: minimum for graphical, 192MB; recommended for graphical, 256MB. Does that ring any alarm bells with you? 192MB minimum? I’ve been running Linux for five years (and am a huge supporter), and have plenty of experience with Windows, Mac OS X and others. And those numbers are shocking — severely so. No other general-purpose OS in existence has such high requirements. Linux is getting very fat.
I appreciate that there are other distros; however, this is symptomatic of what’s happening to Linux in general. The other mainstream desktop distros are nearly as demanding (even if not quite as much as Fedora; Arch Linux or Slackware, for example, will run GNOME in 128MB, but not very comfortably once you load two or three apps at the same time), desktops and apps are bloating beyond control, and it’s starting to put Linux in a troublesome situation. Allow me to elaborate.
A worrying tale
Recently, a friend of mine expressed an interest in running Linux on his machine. Sick and tired of endless spyware and viruses, he wanted a way out — so I gave him a copy of Mandrake 10.0 Official. A couple of days later, he got back to me with the sad news I was prepared for: it’s just too slow. His box, a 600 MHz, 128MB RAM system, ran Windows XP happily, but with Mandrake it was considerably slower. Not only did it take longer to boot up, it crawled when running several major apps (Mozilla, OpenOffice.org and Evolution on top of KDE) and suffered more desktop glitches and bugs.
Sigh. What could I do? I knew from my own experience that XP with Office and IE is snappier and lighter on memory than GNOME/KDE with OOo and Moz/Firefox, so I couldn’t deny the problem. I couldn’t tell him to switch to Fluxbox, Dillo and AbiWord, as those apps wouldn’t provide him with what he needs. And I couldn’t tell him to grudgingly install Slackware, Debian or Gentoo; they may run a bit faster, but they’re not really suitable for newcomers.
Now, I’m not saying that modern desktop distros should work on a 286 with 1MB of RAM, or anything like that. I’m just being realistic — they should still run decently on hardware that’s a mere three years old, like my friend’s machine. If he has to buy more RAM, upgrade his CPU or even buy a whole new PC just to run desktop Linux adequately, how are we any better than Microsoft?
Gone are the days when we could advocate Linux as a fast and light OS that gives old machines a new boost. BeOS on an ancient box is still faster than Linux on the latest kit. And to me, this is very sad. We need REAL reasons to suggest Linux over Windows, and they’re slowly being eroded — bit by bit. Linux used to be massively more stable than Windows, but XP was a great improvement and meanwhile we have highly bug-ridden Mandrake and Fedora releases. XP also shortened boot time considerably, whereas with Linux it’s just getting longer and longer and longer…
Computers getting faster?
At this rate, Linux could soon face major challenges from the upcoming hobby/community OSes. There’s Syllable, OpenBeOS, SkyOS, ReactOS and MenuetOS — all of which are orders of magnitude lighter and faster than modern Linux distros, and make a fast machine actually feel FAST. Sure, they’re still in the early stages of development, but they’re already putting the emphasis on performance and elegant design. More speed means more productivity.
To some people running 3 GHz, 1GB RAM boxes, this argument may not seem like an issue at present; however, things will change. A 200 MHz box used to be more than adequate for a spiffy Linux desktop, and now it’s almost unusable (unless you’re willing to dump most apps and spend hours tweaking and hacking). Back then, we Linux users were drooling over the prospect of multi-GHz chips, expecting lightning-fast app startup and super-smooth running. But no, instead, we’re still waiting as the disk thrashes and windows stutter to redraw and boot times grow.
So when people talk about 10 GHz CPUs with so much hope and optimism, I cringe. We WON’T have the lightning-fast apps. We won’t have near-instant startup. We thought this would happen when chips hit 100 MHz, and 500 MHz, and 1 GHz, and 3 GHz, and Linux is just bloating itself out to fill it. You see, computers aren’t getting any faster. CPUs, hard drives and RAM may be improving, but the machines themselves are pretty much static. Why should a 1 GHz box with Fedora be so much slower than a 7 MHz Amiga? Sure, the PC does more – a lot more – but not over 1000 times more (taking into account RAM and HD power too). It doesn’t make you 1000 times more productive.
It’s a very sad state of affairs. Linux was supposed to be the liberating OS, disruptive technology that would change the playing field for computing. It was supposed to breathe new life into PCs and give third-world countries new opportunities. It was supposed to avoid the Microsoftian upgrade treadmill; instead, it’s rushing after Moore’s Law. Such a shame.
Denying ourselves a chance
But let’s think about some of the real-world implications of Linux’s bloat. Around the world, in thousands of companies, are millions upon millions of Win98 and WinNT4 systems. These boxes are being prepared for retirement as Microsoft ends support for those OSes, and this should be a wonderful opportunity for Linux. Imagine if Linux vendors and advocates could go into businesses and say: “Don’t throw out those Win98 and NT4 boxes, and don’t spend vast amounts of money on Win2k/XP. Put Linux on instead and save time and money!”
But that opportunity has been destroyed. The average Win98 and NT4 box has 32MB or 64MB of RAM and a CPU in the 300-500 MHz range — in other words, entirely unsuitable for modern desktop Linux distros. This gigantic market, so full of potential to spread Linux adoption and curb the Microsoft monopoly, has been eliminated by the massive bloat.
This should really get people thinking: a huge market we can’t enter.
The possibility of stressing Linux’s price benefits, stability and security, all gone. Instead, businesses are now forced to buy new boxes if they are even considering Linux, and if you’re splashing out that much you may as well stick with what you know OS-wise. Companies would LOVE to maintain their current hardware investment with a secure, supported OS, but that possibility has been ruined.
Impractical solutions
Now, at this point many of you will be saying “but there are alternatives”. And yes, you’re right to say that, and yes, there are. But two difficulties remain: firstly, why should we have to hack init scripts, change WMs to something minimal, and throw out our most featureful apps? Why should newcomers have to go through this trouble just to get an OS that gives them some real performance boost over Windows?
Sure, you can just about get by with IceWM, Dillo, AbiWord, Sylpheed et al. But let’s face it, they don’t rival Windows software in the same way as GNOME/KDE, Moz/Konq, OpenOffice.org and Evolution. It’s hard to get newcomers using Linux with those limited and basic tools; new Linux converts need powerful software that matches up to Windows. Linux novices will get the idea that serious apps which rival Windows software are far too bloated to use effectively.
Secondly, why should users have to install Slackware, Debian or Gentoo just to get adequate speed? Those distros are primarily targeted at experienced users — the kind of people who know how to tweak for performance anyway. The distros geared towards newcomers don’t pay any attention to speed, and it’s giving a lot of people a very bad impression. Spend an hour or two browsing first-timer Linux forums on the Net; you’ll be dismayed by the number of posts asking why it takes so long to boot, why it’s slower to run, why it’s always swapping. Especially when they’ve been told that Linux is better than Windows.
So telling newcomers to ditch their powerful apps, move to spartan desktops, install tougher distros and hack startup scripts isn’t the cure. In fact, it proves just how bad the problem is getting.
Conclusion
So what can be done? We need to put a serious emphasis on elegant design, careful coding and making the most of RAM, not throwing in hurried features just because we can. Open source coders need to appreciate that not everyone has 3 GHz boxes with 1G RAM — and that the few who do want to get their money’s worth from their hardware investment. Typically, open source hackers, being interested in tech, have very powerful boxes; as a result, they never experience their apps running on moderate systems.
This has been particularly noticeable in GNOME development. On my box, extracting a long tar file under GNOME-Terminal is a disaster — and reaffirms the problem. When extracting, GNOME-Terminal uses around 70% of the CPU just to draw the text, leaving only 30% for the extraction itself. That’s pitifully poor. Metacity is hellishly slow over networked X, and, curiously, these two offending apps were both written by the same guy (Havoc Pennington). He may have talent in writing a lot of code quickly, but it’s not good code. We need programmers who appreciate performance, elegant design and low overheads.
We need to understand that there are millions and millions of PCs out there which could (and should) be running Linux, but can’t because of the obscene memory requirements. We need to admit that many home users are being turned away because Linux offers no performance boost over XP and its apps, and in most cases it’s even worse.
We’re digging a big hole here — a hole from which there may be no easy escape. Linux needs as many tangible benefits over Windows as possible, and we’re losing them.
Losing performance, losing stability, losing things to advocate.
I look forward to reading your comments.
About the author
Bob Marr is a sysadmin and tech writer, and has used Linux for five years. Currently, his favorite distribution is Arch Linux.
If you would like to see your thoughts or experiences with technology published, please consider writing an article for OSNews.
It’s pretty amazing that something as heavily used as Gnome terminal could be so sluggish, even in comparison to the already slow Gnome platform. And these guys are using straight C. Just think if they went to C# or Java. Oh, the horror.
…unless they already know Unix and/or Linux.
Have you been paying attention at all? The GNOME terminal performance issues occur only with certain drivers that have very poor RENDER acceleration. If a better X is developed – as is happening very quickly now – those problems will be gone. Many key components in GNOME – metacity, pango, etc. – rely heavily on the RENDER extension of X, as they should. It has nothing to do with the language or the code. It’s the architecture, which unfortunately, until now, has not developed to support the technologies in GNOME.
> Also there is no reason why a user cant upgrade his hardware anymore.
No, there is.
My Toshiba Satellite 4090 can’t take more than 128MB of RAM…
It’s a nice notebook and it rocks.
I’ve never had a hardware issue with it.
Yes, it’s old (1999), but the LCD is still very fine and it works fine with NT4.
When I try to use Thunderbird and Firefox under SuSE 9.1, my fan runs often, continuously I’d say; under NT4, rarely.
What am I supposed to do?
Keep using the unsupported NT4, or send the machine to the trash because modern GNU/Linux distros can’t run on it at an acceptable speed?
99% of the dual-boot rigs out there get better fps in windows than in linux. Linux has closed the gap, but windows invariably runs games better.
I call major bullshit on you. If we discount crap like ATi drivers (the drivers just plain suck; they suck on Windows, but they suck even worse in Linux), and talk about properly ported games (Quake and UTx engines, rather than Wine(X) emulation), it’s pretty common to get 10% to 30% better fps in Linux.
Hell, my Windows drive has the bare minimum of hardware drivers and DirectX on it, just for playing games, and yet my overly messed-with, half-broken, more than slightly borked Linux drive still gets better frames in UT2K4 and ET.
Just curious – what graphics driver are you using?
Just the standard SiS driver that comes with XFree86 4.3.0 (XFree86 4.4/X.org 6.7.0 isn’t in the ports tree yet), and I have 16MB dedicated to it (it’s a crappy onboard video card). Even in all its crappiness, it shouldn’t be *that* crappy when it comes to graphical processing.
This is an important call to arms for those who are able to make those lightweight boxes run again – on Linux. We know it is possible, but we need those LiveCDs to quickly install that GNU/Linux environment with the needed (Open?)Office, browser, email and media player apps. Possibly with XFCE, or IceWM?
But this article is not an attack on ‘Linux’, and there is no need to defend it. We do want those old machines (so many of them are still around) to be usable under Linux in an easy-to-install way. It’s a challenge for those who can; the author is hitting the nail on the head here, so let’s listen.
It’s not “graphical processing” in general that is the issue – rather, the question is whether that SiS driver has decent RENDER acceleration. I’m not personally certain about that specific driver. Maybe someone here has more info on it? If not, hopefully the soon-to-be-released x.org improvements will help you out. The current situation with Pango and RENDER does irk me, too, but it is being worked on. Thankfully, with my Nvidia card and the nv drivers, I get very good performance.
Dude, if you’re going to flame somebody at least be informed.
Gnome-terminal is not slow; VTE, the terminal emulator widget itself, is slow, and it was written by Nalin at Red Hat, not Havoc. And VTE is mostly slow because it’s doing a lot of work processing Unicode (Pango, written by Owen Taylor) and rendering text to the screen using XRENDER (written by Keith Packard).
So, really you just make yourself sound like you’ve done no research at all with such a comment.
About system requirements: people have the code. People know where the bottlenecks are. They aren’t getting fixed fast because most people don’t seem to care; it’s fast enough for them, and those who want it to go faster are bitching about it on OSNews instead of writing the damn code.
It’s not that I don’t want to upgrade the memory in my machine; I am unable to find RAM for it. My machine requires 100MHz SDRAM and it’s not available in the market. If I have to upgrade one component, I have to upgrade my whole machine. I think this is what the author of this article is trying to convey. We need applications which give better performance on a low-spec machine like mine.
Windows 98 (1998): min req 32MB RAM, right?
Windows XP (2001): min req 128MB RAM, right?
Fedora Core (2004): min req 256MB RAM, right?
You mean Fedora has higher requirements than XP, 4 years later? Say it isn’t so!
Anyone who has run XP on a 300MHz machine with 128MB of RAM knows that claim is full of it. You can open apps, sure, but if you use it for more than e-mail and a browser you’re not going to like it. Everyone knows 5-600MHz and 256MB RAM is the recommended requirement for a mostly full-functional XP system.
Microsoft Office’s min requirements are just as high as Fedora’s; if Windows bundled Office the way Linux does, then Linux would still run on par with XP, which was made four years ago.
Anyway, the point is that by Windows standards Fedora could double XP’s specs every 4 years, and it hasn’t even done that yet. There is no story here unless you went to sleep in 2001 and just woke up.
Summary of the arguments presented in this thread:
– My X-year-old computer with Y MB RAM is slow with the latest Z Linux distribution.
where 3 < X < 6,
and 64 < Y < 256,
and Z is an element of the set of full-fledged Linux distributions like Fedora, Mandrake, SuSE, you name it.
The meaningless conclusion is: “Linux is getting very fat”.
How the author jumps from his anecdotal evidence to his meaningless conclusion is clearly fuel for a long thread, seeing as this thread is growing fast…
Here ya go:
http://www.zipzoomfly.com/jsp/ProductList.jsp?ThirdCategoryCode=011…
I believe that all PC machines sold in the last 5 years can install at least 256MB RAM and that currently costs no more than $60 (less than the cost of pretty well any commercial XP piece of software).
Hence, what this article is about is that someone is trying to run the full Linux desktop (GNOME/KDE + several large apps) on a machine that’s either more than 5 years old or came with 128MB RAM, and they won’t spend a small amount of money on a RAM upgrade.
In other words, the article is an utter waste of space. You cannot *buy* a PC now with under 256MB RAM (and 512MB RAM is becoming the norm) – clearly, if you insist on running Linux on less than that, then you will have to adjust what you run (don’t run GNOME/KDE – pick a lightweight window manager – and try some of the more lightweight package alternatives as well, e.g. AbiWord instead of OpenOffice and so on).
Apparently, this author thinks that apps shouldn’t be large and shouldn’t require more than a few MB of RAM to work with. And yet the “baseline PC” is improving rapidly – more hard drive space, more RAM and much faster CPUs. You might think this encourages bloatware, but in fact it just improves the user experience and allows more complex packages to be made available that need more resources to run.
IMHO, things are starting to catch up with users in this community for userland applications. A friend always tells me that just because computers are getting so much faster doesn’t mean we should ignore the memory usage of an application. These issues are still very important.
Also, I think part of the problem is developers using the wrong tool for the job. For example, OpenOffice was mentioned a lot here. OpenOffice is a Java-based application, and I’m sorry to say it, but Java is *not* a good language for GUI things. Don’t get me wrong, it is a good and powerful language, but it is not suited for these things. People will think I’m trolling or whatnot, but I do know what I am talking about, and when I set up a system I take into consideration what type of application it is and how it was written.
The projects that do pay attention to these finer points are the ones that are getting through just fine even now.
I have to say, it saddens me when an application gets tied too closely to one of the desktop environments, because all that means to me is bloat.
I’m not 100% sure about this, but I believe the Java parts of OO.org are optional. The version with Fedora Core 2, for instance, does not use Java at all, as far as I know.
I like the comparison of current computers with the Amiga, but as you said, a well-tuned Linux distro runs fast, just as the tuned Workbench ran fast on its specialised hardware.
The only device in which I recognize the Amiga’s usability speed is my Palm PDA 🙂
Computers are a lot more complicated than before: we have AGP, PCI, USB(2), FireWire, ATA, Serial ATA, SCSI, WiFi…
We are used to processing 1GB files, and instead of the 880KB floppies on my Amiga, I use 4.5GB DVD-Rs to store my data. That’s over 5000 times more, but a P4 3GHz is *only* about 500 times faster (if you count in MHz, which is not a good measure).
So, OK, Linux is slower than the Amiga Workbench, but it does a lot more.
Anyway, I use Gentoo on an AMD XP 3200+ with 1GB RAM. KDE and Gnome are very usable, but booting is slow. When I switch to the beloved Window Maker, things are way faster and memory usage is down.
Did you ever try to play DivX on Windows with an old AMD K6 3D at 300MHz with 32 or 64MB? With Movix on the same hardware, it works: no need to tune, no need to install, just burn the CD.
Linux + Mplayer is the lightest thing that can be run on an old PC for playing movies.
The *free* Linux OS can be used as a desktop OS, a generic server OS, a GRID OS, a home multimedia OS (Movix and so on).
It has the potential to be very fast. Your article is right about KDE, GNOME, OOo and the distros using those tools, but I disagree about Linux *THE* OS.
Maybe the devs need to make a choice here: quickly provide applications, or provide fast applications on a slower cycle.
This is not a Linux particularity; it’s true for the whole software industry. It is a major issue in the software and hardware world, and that’s why most people are “afraid” of electronics in cars or washing machines.
But if we do not want Linux to follow the BeOS way, we need working applications ASAP more than hyper-tuned applications tomorrow. Even if it’s not optimal, the code can be tuned later; we can already see this in the latest KDE release.
I like to tune code at work; maybe it is time for me and other code tuners to help the Linux community.
“Perfection is reached not when there is no longer anything to add, but when there is no longer anything to take away”.
With this quote Brian hits the nail on the head. I personally like Linux because I can tweak it. Most of the people the ‘big’ Linux desktops/distros are meant for don’t even know what “tweaking” implies. They should not have to, because tweaking something to perfection should be the Sein of the coders/hackers, not the users.
Stability, Usability, Speed and Security aren’t just slogans, they are what makes an OS or piece of software better than good. I think these four will forever be on the horizon, but shouldn’t we at least try to attain them?
Someone here wrote: “I would be happy if most developers went into a feature freeze for 6 months and just optimize the shit out of their apps.” I couldn’t agree more. Kill your darlings, dear hackers! Slim and trim down, think again, find The Beautiful Way*. A laurel Crown to her/him who creates the slickest, leanest piece of code! Good luck and happy hacking!
*) “When I am working on a problem I never think about beauty. I only think about how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong.” — Buckminster Fuller
P.S. As to comparisons between XP/win98, linux, BeOS or whatever, not to mention the specs flying around in this thread: who cares? Arguments from example are almost never useful, because one can always find a counter-example. Your teachers should have told you that.
Hi there,
I run FC2 on my PIII 550MHz, 512MB RAM, GeForce2 without any problem.
I know I’m a patient person, so I don’t mind waiting a few seconds for my apps to open, and I understand this might be an issue. But let’s face it: running a 2004 distro on a 2000 computer needs a hardware boost. If I were to run FC2 on the original 128MB RAM, TNT2 and horribly slow Seagate HD, I would probably have the same bad experience.
Oh, and I must say also that I make heavy use of gdesklets and bulls**t like that, so I guess my system could be much faster!
Win2000 on this same machine actually runs slower, even slower than KDE 3.2 booted from Knoppix!
It’s just a matter of personal experience, but sure some Linux apps are just getting out of hand. Anyway, I still like it a lot more than Windows.
Why swap is important:
Swap is the virtual memory on secondary storage where applications and modules go when they are sleeping, to free up more main memory as necessary (simplistic, but it covers the basic points). Swap effectively increases your available memory at the expense of extra IO operations to page data in and out of main memory.
Windows uses swap; Linux distros use swap.
The actual physical memory required to run an application is the total max active memory required at any time during the life of that application. Mostly, this represents the maximum memory required to load the application and its associated libraries.
An example: FC2 with KDE, Gnome, and Xorg plus basic system services can run in 512MB of main memory with no swap. The largest application is Xorg itself, presently consuming approximately 30-50MB in its image and data. So after starting Xorg, let’s say approximately 40MB of main memory is used; Xorg tends to reside in main memory, as its libraries are shared and always in use by whatever applications may be running. On a 128MB system, this leaves approximately 88MB of main memory for applications. Of course, a slice of this is used for caching (something most distros do to increase system performance); this cache shrinks as less and less memory is available until it doesn’t exist, a hit on performance naturally, but nothing unexpected. It’s used for caching all sorts of IO, libraries, etc. If this is 20MB on a 128MB system after loading X, then approximately 68MB of main memory remains. That doesn’t include the kernel, so we’ll take 2MB off for that: 66MB.
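To make that arithmetic easier to follow, here is the same back-of-envelope sum as a tiny Python sketch; every figure is one of my rough 2004-era assumptions from the paragraph above, not a measurement:

```python
# Rough memory budget for a hypothetical 128MB desktop box.
# All figures are assumptions carried over from the text above.
total_ram = 128   # MB in the box
xorg      = 40    # MB resident for Xorg and its always-in-use libraries
cache     = 20    # MB the kernel keeps for IO/library caching
kernel    = 2     # MB for the kernel itself

free_for_apps = total_ram - xorg - cache - kernel
print("roughly %d MB left for applications" % free_for_apps)  # -> 66
```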
At any given time, most of the applications and services on a computer are sleeping. Linux distros completely swap these out to disk when more main memory is required; in general, the less active a process or library, the more likely it is to be swapped out.
What this means is that you can load any application whose image and data do not exceed 66MB, at any time.
Most applications in modern distros use shared objects and dynamic linking, meaning that if a library is already loaded, it can be used by any other application that requires it, without loading a new instance of that library. This really optimizes memory usage. Windows uses DLLs for dynamic linking, something similar, but I’m not sure whether they are shared; anyway, I’m concentrating on Linux distros at the moment, to explain system performance.
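As a quick way to see that sharing for yourself (Linux-specific, and assuming a mounted /proc filesystem), a few lines of Python can list every shared object currently mapped into the running process:

```python
# List the shared objects (.so files) mapped into this process by
# parsing /proc/self/maps (Linux-specific). The read-only pages of
# each library can be shared with every other process that uses it.
shared = set()
with open("/proc/self/maps") as maps:
    for line in maps:
        fields = line.split()
        if len(fields) >= 6 and ".so" in fields[-1]:
            shared.add(fields[-1])
for lib in sorted(shared):
    print(lib)
```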
When a large amount of memory is required for an application, a lot of pages need to be swapped out to make room for it. If the application requires so much memory that the libraries it uses won’t fit in as well, something known as “thrashing” occurs: the application itself is paged out, the library call is made, then the application is paged in and the library paged out, the application makes another library call, gets paged out again, and so on. This is very detrimental to system performance, as you are probably aware.
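Thrashing is easiest to see with a toy page-replacement model. The sketch below (plain Python, LRU eviction, invented numbers) shows the cliff: give the working set exactly as many frames as it needs and almost nothing faults; take one frame away and every single reference faults:

```python
from collections import OrderedDict

def page_faults(refs, frames):
    """Count page faults for a reference string under LRU replacement."""
    resident = OrderedDict()  # page -> None, ordered by recency of use
    faults = 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)  # mark as recently used
        else:
            faults += 1
            if len(resident) >= frames:
                resident.popitem(last=False)  # evict least recently used
            resident[page] = None
    return faults

# A working set of 5 pages, touched round-robin 1000 times:
refs = list(range(5)) * 1000

print(page_faults(refs, 5))  # 5 faults: cold start only
print(page_faults(refs, 4))  # 5000 faults: every access misses -- thrashing
```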
Thrashing can only be prevented by choosing applications with smaller maximum memory requirements, or by increasing available main memory. Low-end machines will often thrash on 3D games, due to the size of texture and sound files and complex scene hierarchies.
This is why dynamic libraries are such a good thing, as static linking increases the actual image size of the application.
There are 2 main bottlenecks in any IO operation:
1) The speed that the cpu can transfer data over the system bus
and
2) The speed that the IO device can process the data
Possible fixes for 1) – faster cpu, faster bus, direct memory access
Possible fixes for 2) – faster IO devices
DMA basically means that the device is told what memory needs to be transferred and can access that memory itself, freeing the CPU up for other work; otherwise the CPU has to manually move the data from main memory to the device. PCI video cards, for example, do not to my knowledge support DMA; AGP cards do (AGP = Accelerated Graphics Port, and basically means DMA for video, with a higher bus speed). Likewise, IO operations for hard drives without bus-mastering DMA will be inherently slower than for those with it.
If you are going to do a large amount of paging, you should optimize your paging system by having fast, efficient drives with DMA access, and a good bus speed on your motherboard (newer motherboards are obviously better).
What it comes down to is that in any OS, memory is not the only factor in system speed; in fact, if you have 128MB you can run pretty much any major application without slowdown, provided you aren’t running multiple large applications (this goes for ANY OS, not just Windows or Linux distros). Other things that play a part are the size of your swap partition, the speed of your system bus, drives, CPU, and main memory, and whether you are using DMA-enabled devices or not. These really can play a crucial part in overall system performance.
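And rather than guessing at those numbers, you can ask the kernel directly; a minimal sketch of a /proc/meminfo reader (Linux-specific; the field names are the standard ones the kernel reports, values in kB):

```python
# Print the headline memory and swap figures from /proc/meminfo (Linux).
info = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            info[key] = int(fields[0])  # values are reported in kB
for key in ("MemTotal", "MemFree", "Cached", "SwapTotal", "SwapFree"):
    print("%-10s %8d kB" % (key, info.get(key, 0)))
```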
FC2 with Xorg, KDE, Gnome, a handful of system services, Konqueror, Sylpheed, Kopete, OOWriter, konsole, sshd, and a number of other services WILL run without any swap at all in just 512MB of memory. I did this for weeks before I realised that my swap partition wasn’t being mounted.
What this means is that all those applications fitted together in just 512MB of memory total. At any given time, only 2 or 3 applications or services will actually be awake; the majority are sleeping (inactive, pending being woken by some event). If those applications and associated libraries were paged out to swap, FC2 would run nicely on 128MB of RAM with about 400MB of swap. There would be some slowdown in switching between applications if they were paged out, but that’s to be expected.
The reason Gentoo is so slick is that, as a source-based distribution, you can compile everything as dynamic/shared without worrying about what libraries you have on your system.
Once upon a time I thought Linux people paid more attention to the number of CPUs it could run on rather than to optimization. When I installed Fedora, I reversed that.
I believe the great problem can be summed up in a word: fragmentation. Linux has great potential, given that it’s free and anyone can work on it or discover & create apps. But taking a part from one place, another part from another place, and so on is giving a huge weight to the distros. That reign of chaos can be a serious problem for Linux and can bring it down.
I’m running SuSE 9.1 Prof on a Compaq Presario 2700 (256MB RAM) and sometimes it gets really slow, especially when compiling or tar-gzipping, so I sadly agree with the article. Don’t tell me “hey, SuSE sucks, use xxxx”, because distro wars are wars of the poor, extremely dangerous to the entire Linux community.
My personal hope is that the major distros will concentrate on putting the various apps all together, linking them well, tweaking them, just to build a solid OS with integrated, solid apps.
“I’m not 100% sure about this, but I believe the Java parts of OO.org are optional. The version with Fedora Core 2, for instance, does not use Java at all, as far as I know.”
It’s a recommended dependency, but I’ve never tried it myself. Then again, the point is that OO isn’t the only one, just a good example. Another is Eclipse (the development tool). Yada, yada.
The new wave of tools emerging won’t help the speed issue, those being C# and Mono.
I still maintain that it is essential for developers to use the right tool for the right job. But, then there is the problem of knowing the tools that exist!
Using the right tool for the right job is very important, I agree. But I disagree with your example of Mono and C#. I’ve been very impressed with the responsiveness of mono-based apps – I’m primarily thinking of Muine and MonoDevelop. Load times are fine, responsiveness is good – nothing like OO.org.
“My personal hope is that the major distros will concentrate on putting the various apps all together, linking them well, tweaking them, just to build a solid OS with integrated, solid apps.”
This is a good statement, but I have to say again that it is a difficult process. There are those who do work on these things, but it takes time and there are so many applications. It is not trivial to put all the pieces together.
When people say that Linux (distros or kernel) is kept together by hacks and patches, I cringe and only wish if they really knew how things get done within most of the community. [Not directed at anyone, just a general comment].
The Mono devs haven’t even begun to work on optimization. I’ve heard that they expect significant performance boosts shortly after the 1.0 release, when they hunker down and optimize everything. Apparently there is a lot of room for improvement still.
“I’ve heard that they expect significant performance boosts shortly after the 1.0 release, when they hunker down and optimize everything”
In comparison to what, though? I have dealt with it only a little, so I can’t offer any hard data, but I hope that you keep an open mind and make sure that it doesn’t take you in the wrong direction. I know I will be. And I love to test out the new tools!
If a better X is developed – as is happening very quickly now – those problems will be gone.
That was good for a laugh. But KDE never seems to have these problems.
But only… one day… in the future… some years from now… if only we had a better X. Any more comedic relief?
You said,
The GNOME terminal performance issues occur only with certain drivers that have very poor RENDER acceleration. If a better X is developed – as is happening very quickly now – those problems will be gone. Many key components in GNOME – metacity, pango, etc. – rely heavily on the RENDER extension of X, as they should. It has nothing to do with the language or the code. It’s the architecture, which unfortunately, until now, has not developed to support the technologies in GNOME.
What hardware config would give me better speed?
“But I disagree with your example of Mono and C#. I’ve been very impressed with the responsiveness of mono-based apps – I’m primarily thinking of Muine and MonoDevelop. Load times are fine, responsiveness is good – nothing like OO.org.”
Doh, working backwards here… sorry. Last time I used it I was not impressed, and I will say that I have not heard good things as of yet. Again, I’ll wait until the day I can judge for myself, and hopefully I do gain another option for my development platform.
In comparison to what already seems like a very nice platform for GNOME application development to me. As I said, I’m very impressed with the responsiveness and stability of apps like Muine and MonoDevelop. If you’re referring to the patent issues… I think we will make it through with the ECMA core just fine. The cool part of Mono – for developers like me at least – is the Mono-specific stuff, not ASP, Windows.Forms, and friends.
Well, either you’re the typical fanboy that will lie through his teeth to defend Linux with his last dying breath or you’re about the only person in the world with a dual-boot rig that has games that get better fps in linux. Take your pick.
I don’t play commercial games, so I don’t really have experience with this, but do you have any numbers to back up what you’re saying? If not, maybe you shouldn’t be so rude.
Firstly, Red Hat is not Linux; i.e., it is a Linux distro, not the Linux distro.
Now, I’m not saying that modern desktop distros should work on a 286 with 1MB of RAM, or anything like that. I’m just being realistic — they should still run decently on hardware that’s a mere three years old, like my friend’s machine. (from the article)
My machine is very nearly three years old, was mid-range at best, and is twice the spec of your friend’s machine (256MB RAM, 1.4GHz Athlon). IMHO 256MB of RAM has been the norm on new machines for about 4 years (except for bargain-basement machines, which are never good value for money).
So when people talk about 10 GHz CPUs with so much hope and optimism, I cringe. We WON’T have the lightning-fast apps. We won’t have near-instant startup. We thought this would happen when chips hit 100 MHz, and 500 MHz, and 1 GHz, and 3 GHz, and Linux is just bloating itself out to fill it. You see, computers aren’t getting any faster. CPUs, hard drives and RAM may be improving, but the machines themselves are pretty much static.
That is the age-old paradox. Users and developers generally don’t want faster software; they want more features. Additionally, if people never needed to upgrade, they never would, hardware sales would drop, and either the pace of development would slow or prices would increase (or both).
IMHO fattening software could eventually enable the fast startup and instantaneous response the author wants: it encourages the hardware to fatten along with it, and then the software can be put on a diet.
I don’t play commercial games, but I don’t have any experience….
Thanks for the laugh again there brad. I mean you don’t play any commercial games, but you thought to chime in anyway.
Well, I have played Quake I, II, III and UT2003, all on both Linux and Windows, and invariably they run faster on Windows. Most of the time it’s not even close, with the closest being Quake III. Of course I know that either Edward has seriously screwed-up Windows drivers or he’s just plain lying, because the UT engines always run faster in Windows with proper drivers on the same rig. UT is especially optimized for Windows.
Anyway, you and Edward can go ahead and defend linux, Gnome whatever all you want. I’ll continue to laugh.
If you ask me, Linux is more prone to bloat, as an application developer has a wide range of APIs to help him do what he wants to do. Also, there is no one desktop to produce applications for. So, when a good application is created it can use interfaces such as ncurses, qt, gtk or custom. Let’s look at my installation. The apps I use most are:
firefox (custom interface + gtk)
thunderbird (as above)
dselect (ncurses)
xmms (gtk)
mplayer (gtk)
centericq (gtk)
xpdf (gtk)
blender (custom?)
worker (custom)
You can see that on Windows or Mac OS X all versions of these applications were created using the standard APIs for those platforms, but there is no standard for Linux yet, and there may never be. There are so many choices, but then that’s what makes it so great.
I recently purchased a new machine and I don’t really like the bloat of GNOME (most things can be done with a good shell), but I knew Firefox, Thunderbird and the games I’d now be able to play with a good gfx card might take up a bit of RAM. So I bought 1GB. And I’m very happy.
BTW, I didn’t RTFA.
Oh, so you’ve been basing this off the experience you’ve had on your box alone? What kind of graphics card do you have? I don’t play commercial games, but I’ve done some OpenGL development. I’m chiming in because you seem to be arrogantly applying your personal experience to disparage other people on the forum – and you always seem to leave out actual helpful info like your hardware/software during these “tests.” Just thought I’d double check to see if you had any basis for your rudeness.
It takes about 10 seconds to start OpenOffice and Mozilla on either of my computers. I prefer the leaner, faster TextMaker word processor instead; it is full-featured and starts up in only about a second. The same company also sells the PlanMaker spreadsheet, but neither is free. For my browser, I use Mozilla Firefox instead of Mozilla because it is much leaner and faster. Firefox does not include an email program, so I use a separate e-mail program.
Slackware 9.1 recommends 64MB of RAM for the X Window System. That is not bad, but Slackware does come with a wide choice of kernels, window managers and other options. I wonder if they expect the user to make lightweight choices when using more minimal hardware? So anyway, Slack would not be the best choice for a newbie unless perhaps a more experienced user installed it for them.
Vector Linux and Gentoo Linux probably also have more minimal hardware requirements, but I have never used either. What should we recommend for a newbie with an older computer? I hope that bloat is not a problem with all distros and all Linux applications. The Linux community needs to find or create a good, leaner, faster distro for those who need it.
Lumbergh:
Drivers are whatever the latest reference VIA, SBLive, and nVidia drivers were as of about a month ago. DirectX is also the latest (9.0b IIRC). If those are ‘screwed up’ then I have even less respect for Windows than I did before.
Given that people on irc.enterthegame.com#linux report similar results to mine (Linux faster than Windows for native games with Q3A/UTx engines) almost all the time, I’m going to guess that Mandrake/Red Hat/SuSE etc. have default configurations that aren’t receptive to playing games. Given that I run Debian, and most people who report such values are running Debian or Slackware (with a handful of FreeBSD and clueful Gentoo users as well), this wouldn’t surprise me.
kaiwai: The SiS driver ‘works’, but it’s never going to be great. Unless SiS changed their tune very recently, they don’t and won’t release the info needed to write drivers. From what I’ve gathered, it’s a miracle of reverse engineering that you can use even vaguely accelerated X with SiS chipsets at all. Sorry. See the following link for more info:
http://www.winischhofer.net/linuxsisvga.shtml
Brad, I’m with you on this one.
Personal experience on my side: 3d games run about 8-13 fps faster in linux than windows.
Of course I’m using a 2.6 series kernel with preempting, plenty of swap, and 512mb of ram. All my HDD’s are on Ultra DMA 100 and have the correct cables too. Card is a geforce 2 gts pro. Soundcard is emu10k/Sb Live 5.1
TMK the only major 3D game that was developed on Windows is UT/Unreal, which was developed with VC++. Quake I, II and III were all developed under Linux and ported to Windows. I believe UT2k4 was also developed under Linux and ported to Windows. BF1942 was developed on Windows, ported to Linux I think.
But anyway, what defines the speed of the game generally is not whether you play in windows or linux, but whether you have good drivers, a decent supported card and a good motherboard. It’s all about bottlenecks and streamlining.
IME Linux memory management is a lot more efficient, and the memory caching runs faster. I wouldn’t recommend running 3D apps with less than 512MB of RAM on either Linux or Windows, due to textures and sound files.
Lemmingburgh seems to me to be a simple, run-of-the-mill troll.
Mandrake 10.0 is slow, though not unbearably. I have a 550MHz box with 128MB of RAM. I can run KDE 3.2.2 with a dozen apps open, and several servers. Granted, it can take a minute to load OpenOffice, but it is very usable.
When I ran Mandrake 10 it did seem slow. My solution? Use the latest Knoppix to do a hard drive install of Debian, configure it, and make sure Synaptic is installed as well. Very easy to do.
It’s not just me, it’s about everybody else out there that plays games except for the fanboys that think that linux is the holy grail. But since you don’t play commercial games, you’re the expert on how well they play on windows vs. linux, so by all means chime in again.
But hey, you’re the graphics expert with your OpenGL experience. Maybe you should be working on that non-existent X server that needs to catch up with the Gnome guys’ work.
Or better yet, work on the replacement for the disaster that is Bonobo.
I’m sure Edward appreciates you sticking up for him, though. Fanboys must stick together.
Oh, forgot to mention that I’m using NVidia’s binary driver and GLX modules, not the distro-supplied ones.
1GB swap partition.
I run both windows XP and Linux (2.6 + KDE etc) on many PCs:
PII 266MHz 256MB
Crusoe 800MHz 128MB
Via 600MHz 256MB
AthlonXP 2500+ 512MB
P4 3GHz 256MB
Linux is pretty slow on lower-end PCs, but it is not painful: I know exactly that it takes X seconds to start app Y, and Z seconds to boot; I can live with that. Windows can be very responsive, but also extremely slow, for reasons that are totally unknown to me. (It’s not fragmentation, nor an antivirus, nor fonts.)
Also, with Linux, as you get better hardware, you get better performance proportionally. I cannot figure out why, but Windows does get slow even on high-end PCs.
Also, you can easily use linux as a server in text mode on anything.
Man, all I asked for was some numbers and info about what hardware and drivers you’re using. Two linux users have provided that info now, which makes them seem a lot more credible to me. We still don’t know what version of Linux or Windows you’re using either. Just calm down a bit and maybe we can figure out why you’re getting different results than these other people. Bonobo seems a little off-topic at this point, but so you know – the XServer work is well under way and a lot of good fixes are already in x.org CVS. Driver support does suck for some graphics cards right now in Linux – the best drivers coming from NVidia, in my opinion. Things are improving though. Just calm down a bit.
*sigh* Some ppl in here complain about programs being poorly written. How easy do they think optimizing is? While I can understand what they’re trying to say, it’s just not like that. For ppl who haven’t tried programming to some extent, the word ‘optimizing’ is more or less analogous to someone just magically pushing an “Optimize program” button, after which it’s superfast and uses -5MB of RAM.
It’s just not like that. As jbmadsen pointed out in #60 (or thereabouts), he hit the problem 110% correctly. Today we use higher-level languages to build applications faster. We do all the exciting parts of adding functionality, being creative and innovative, and leave the more mundane jobs to the compiler/IDE. So efficiency naturally suffers for the sake of productivity. I know one(!) guy who has a fairly decent understanding of assembly-level coding, and he can indeed write small, efficient programs. The sad thing is that he might end up taking 10, or maybe 100x, as much time to write what another can write using a higher-level language.
You also have to remember most ppl writing these apps do it for _fun_. Not for you, your grandmother or your dog. Try telling a guy he is writing bad code and then informing him he should optimize it so it runs better, when the code was written by him mostly for himself and then released to the public so _you_ can use it freely.
Simply, if you don’t like where linux/gnu/apps are going, write your own damn code, optimize existing code, or use DOS. There is no stopping evolution in this; it’ll progress like it has always done. So either join the ride, do something actively about it (instead of just b*tching), or use/do something else.
Hmm, are you talking about games that have been ported to Linux, or just ones that are Win9x/XP-based and have to be run under WineX or the like?
See, I know StarCraft runs brilliantly under Linux using WineX, Wine, or the StarCraft fork of the Wine codebase. I also know there is no problem whatsoever with any of the UT/UT2k4/Quake I-II-III etc. games (they run fine for me, as previously expressed, running about 7-13 fps faster).
I have to boot Windows XP to play The Sims though, as it uses solely Windows APIs and DLLs that aren’t quite complete in Wine/WineX yet. I bet Windows doesn’t play Tux Racer real well though.
Technically, the Linux kernel’s memory management and scheduling seem to be superior to those of Windows XP, but then, they’re also a lot newer.
Quake I, II and III were all developed under Linux and ported to Windows. I believe UT2k4 was also developed under Linux and ported to Windows
Well, since you are clueless about how these games were developed, I guess we should definitely believe you to be credible on your performance claims.
Carmack and Sweeney both use VC++ and have for a long time: since at least Quake II for Carmack, who has never developed on Linux, and since forever for Sweeney. They were not “ported” to Windows from Linux. In fact, porting to Linux is always a money loser, and the only reason the Quake or Unreal series were ported was for the good karma.
You fanboys can keep up your linux myths though. I’ll continue to laugh.
I bet Windows doesn’t play Tux Racer real well though.
Haha, TuxRacer, the jewel of linux gaming. Good one.
This is from my own machine. Perceive it as you’d like. The Windows installations were tweaked moderately for performance.
Machine 1:
P4 2.8E (3.265GHz OC on stock HS)/512MB PC3200/Maxtor 160GB 8MB Cache/ATI Radeon 9000 Pro 128
Machine 2:
Cel 2.2E/256MB PC2700/Maxtor 120GB 8MB Cache/ATI Radeon 9000 Pro 128
Machine 3:
P3 1Ghz/512MB/40GB HDD/nVidia GeForce2Go 16MB VRAM
Boot Time (OS: Machine 1 / 2 / 3)[s]:
Boots to usable GUI (logins are skipped on WinXP)
WinXPPro(JPN): 19 / 25 / 35
WinXPPro(EN): 14 / 22 / X
BeOS5PEMax3.0: X / 16 / 17
*NIX OS Boot Time (Boot to shell + startx)
FC1: 24 + 11(GNOME/Bluecurve default) / 26 + 10(GNOME/Bluecurve default) / X
Slackware9.1: 30 + 4(fluxbox) / X / 46 + 6(fluxbox)
FreeBSD4.10: X / X / 34 + 6(fluxbox)
FreeBSD4.9: X / X / 33 + 6(fluxbox)
There. Come on, flame me.
Optimizing is not as hard as you claim it to be, especially not for extremely bloated programs.
It’s hard to optimize toward a specific goal, but it’s quite easy to optimize a program to be, say, about 20% faster. When you have written your code without optimization in mind, you should be able to get that 20% just by rechecking your code for bottlenecks and then optimizing those. The more you optimize the code, the harder it gets to optimize it further.
So optimizing code for the first time is very easy and should result in a huge performance gain, whereas of course it is hard to squeeze more performance from already heavily optimized code.
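For what it’s worth, that first easy pass is mostly just “measure, then fix the top of the list”. A minimal sketch using Python’s standard cProfile module; the deliberately slow function is a made-up example, not anyone’s real code:

```python
import cProfile
import pstats

def slow_concat(n):
    # Deliberately naive: repeated string concatenation in a loop is the
    # kind of hotspot a first profiling pass tends to surface.
    s = ""
    for i in range(n):
        s += str(i)
    return s

# Profile the function, then print the five most expensive entries.
# A first optimization round is usually just acting on a report like this.
cProfile.run("slow_concat(200000)", "prof.out")
pstats.Stats("prof.out").sort_stats("cumulative").print_stats(5)
```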
I agree wholeheartedly with the author. It is silly when astoundingly fast and big hardware is not enough to do simple tasks such as word processing or extracting tar files without wasting resources. Remember 5 years ago: how big were machines then? Tiny compared to today, and the OSes ran better. WinNT4 deserves a lot of credit for its ability to run out of a few hundred MB on disk and 32/64MB of RAM.
Most software bundled into Linux distros is bloatware. I used to be able to just load up a light window manager and use the tools I wanted. But alas, the fat is seeping down into the lower-level tools now… kernel, modified standard commands, etc. This is why I prefer BSD now, after many years of Linux distros: you are guaranteed a base install which is fully functional and is usually within a few hundred MB on disk, if not less.
Keep it small, keep it focused, do it well. Let the users build big things out of small tools; do not make them try to break big tools into smaller ones.
inflagranti, you are correct. But the point is that, usually, OPTIMIZING IS NOT FUN. I, as a free software user, have no right to dictate where hobby developers should spend their free time.
The recent release of the GNUstep LiveCD uses Morphix hardware auto-detection, and it contains lightweight apps that are good for low-end computers. WMaker is a pretty and fast X environment; it’s just very different from Microsoft-style GUIs, so one needs some time to learn it. Once you become familiar with it, the GNUstep/Debian combination is, IMO, one of the best available solutions for low-end machines — just what this article is calling for.
My main box is a dual PIII with 384MiB of ram and SCSI disks. I bought that box in 2000 and haven’t felt the need to upgrade it yet.
Before my SGI-branded Sony monitor died, I used my trusty Indy as an X terminal to connect to my box. WindowMaker ran fine, KDE ran fine. Gnome 2.x was completely unusable. There’s something in the way GTK 2.x and Nautilus do their stuff that prevents them from being used comfortably over a network connection.
Now I have a 17″ monitor and use my PC with a local display. I’ve had Gnome 2.6 on both FreeBSD and NetBSD, and finally gave up and returned to WindowMaker. It takes 1/10th of the time to load and I didn’t use any gnome app anyway, so it’s not a big loss. I’m running Arch Linux now since I wanted my 3d acceleration back and didn’t want to compile programs (moz and OO.o for example).
GTK 1.x was fast, was incredibly fast. GTK 2.x is awfully slow. Hopefully it will be getting better, but it makes me wonder if I should shop for alternative toolkits, even though I do love the GTK+ API.
One thing I’d like to comment on is why the reviewer thinks Evolution is any better than Sylpheed. I know it has the calendar thing, but for basic e-mail/news use, Sylpheed is way faster and works very well.
Yup, that’s the major annoyance. That’s the reason why I have 1/4GB of RAM on a 200MHz Cyrix and 1/2GB of RAM on a 733MHz VIA C3, and use Woody almost everywhere (plus one remaining heavily modified ZipSlack installation).
🙂
Yes linux distributions are getting fatter, but so is modern PC hardware. If getting fatter means taking advantage of this ever-increasing-in-power modern hardware, then does this necessarily have to be seen as a bad thing?
My specs are P4 3.2 ghz, 1 Gig of ram, ATI 9600 Pro.
I’m running Gentoo and XP Pro.
Now, on games up to UT2k4 (Quake I, II, III, UT2k3) the difference is meaningless in the sense that the game is running fast enough that you don’t care, but invariably every one of those games runs faster on Windows than Linux, especially something like UT2k3. The Unreal engines have historically been “let’s get the Windows version done right and then maybe worry about Linux/Mac”, whereas the Quake engines have historically had cross-platform in mind. Mouse control was always somewhat of a problem under X until recently without a ton of tweaking of X parameters, and even then it never quite felt right.
Note, phonetic is totally wrong about what platforms the Quake and Unreal engines were initially developed on. They’ve never been developed on Linux first. Sweeney has always developed on Windows, and actually Carmack did use NeXT machines at one time (I think maybe for Doom and possibly Quake I), but from Quake II onward he’s developed using VC++.
I’ve played some of the older quake engines on older machines and the difference between the windows and linux version was much larger, but linux has improved in recent years.
Listen, linux can be a good gaming machine, but when I see people saying that they’re getting better fps on their linux partition than their windows partition I’m going to get suspicious because it starts reeking of fanboyism.
Are you sure bootup is a good measurement? Linux doesn’t care about fast boot as much because it’s not supposed to be turned off that much. It waits for things like the network; it doesn’t skip them and load them in the background while something else takes the CPU cycles.
The network usually takes about 8-10 seconds for me, sometimes more, and it has next to nothing to do with CPU speed.
Windows has a fast bootup because after a system lockup or a newly installed program we’re told to reboot, resulting in us silently talking to ourselves about how much Windows sucks until we see our desktop again.
but sticking to the subject,
1) The Linux kernel is not especially fat.
2) Some everything-but-the-kitchen-sink GNU/Linux 2004-vintage distributions could be called fat. They are designed and marketed as such.
3) Not happy with a fat 2004 Linux distro on your 4-year-old Duron with 128MB RAM? Good, stick to a slim one.
Pass on, people, nothing to see or read here…
Windows XP does *NOT* feel good in less than 256MB of RAM. I’ve been using Gnome in 256MB of RAM and it was not _that_ bad. Now, if you’re running Evolution and/or OpenOffice… OpenOffice easily takes 60MB of RAM. That’s half of the RAM on a 128MB box. Yes, Office is *much* better and faster… and the Windows window manager is really light.
And yes, Gnome and KDE are bloated. That’s no surprise: xchat usually takes 12MB of RSS on my box, whereas mIRC through Wine eats much less RAM. X doesn’t take too much memory, IMHO; 23MB of RSS right now is not that much for the beast it is. I guess it could get better. IceWM right now is eating 5MB of RSS, less than Fluxbox once you *really* start using it.
IMHO this can be fixed with a bit of tuning. It’d be worth it to get a set of kernel sysctls tuned for the desktop.
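One concrete example of the kind of knob meant here (assuming a 2.6 kernel that exposes it): vm.swappiness controls how eagerly the kernel pages idle application memory out in favor of cache. Reading it is harmless; a tiny sketch:

```python
# Read the vm.swappiness sysctl via procfs (Linux 2.6+). 0 means "avoid
# swapping applications out"; 100 means "swap eagerly". Desktop-tuned
# setups often lower it so interactive apps stay resident.
with open("/proc/sys/vm/swappiness") as f:
    print("vm.swappiness =", f.read().strip())
```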
I had a friend who wanted to load Linux on an old Pentium MMX machine with 48MB RAM. Windows 98 ran perfectly contentedly on it, but Debian with XFCE4 was too much for it. I dual-boot Debian (with Gnome) and Windows on my Athlon XP 2400+ with 448MB RAM (after 64MB is taken out for graphics) and it goes blow for blow with Windows, if not performing faster, but the minimum requirements keep creeping up. It seems strange to me. It doesn’t feel any bit slower than Windows on my machine; in fact, most of the time it feels faster, yet on an old machine (a machine that can capably run Windows) it won’t even run. I just don’t get it. Code that is too inefficient for an old computer should run slowly on my computer too. I’m sure the answer is something like ‘it uses more resources to make it faster’ or something.
In the end, I have to disagree with the author. I don’t think the hobby OSes are catching up to GNU/Linux. It takes a lot more than what they have to make an OS that competes with GNU/Linux. Sure, they’re fast, but they don’t do nearly as much as GNU/Linux does. That makes a huge difference. The question is whether they will stay this fast. More importantly maybe, are they faster than GNU/Linux was at that stage in its development?
Wow, what a reasonable post. Thank you. I would like to point out, though, that ATI driver support isn’t that great. It’s quite possible that the guys with NVidia cards could be getting better performance in Linux. Anyway, let’s bury that argument because no one seems to have any kind of authoritative benchmarking – I know I’ve seen some around somewhere. But, well, it’s 6:30AM and I haven’t gone to sleep yet. I’m done with my work. This thread is out of control. Good night/morning.
XP is soooo slow that… we've installed it on three machines on our network, all P4s with 256MB RAM. The first week it runs fast; after that it becomes really slow, even the startup time is too long. I mean, after it shows the desktop and begins loading the start menu etc., it takes too long, and no need to tell you how slow it gets after applying an SP or other patches.
So I just installed Win2003, and guess what: with a few tweaking steps (performance settings, disabling unneeded services etc.) it really runs faster than XP.
But like all Windows systems, I have to reboot it every 24-48 hours… you know why: it gets tooo slow.
GTK 1.x was fast, incredibly fast. GTK 2.x is awfully slow. Hopefully it will get better, but it makes me wonder if I should shop for alternative toolkits, even though I do love the GTK+ API.
It's quite painful to run GTK 2.x apps on older hardware. Apologists will say that it's X's problem, but IMO that's no excuse to say, "hey, in 5 years X and the hardware will finally catch up with us".
These are RatHeadish products.
Just check requirements for RH 9.0.
Then compare them with those for Slackware 😉
Hmm… just poking about in the history of Quake. You are right on one thing, and wrong on another.
Yes, Quake (all incarnations, as far as I can tell) was ported to Linux, from DOS at first and then from Windows. So you were right there; thanks for the correction.
However, Quake is not specifically or exclusively optimized for Windows (in any of its incarnations). Carmack, and Id in general, develop in pure ANSI C/C++ and assembler. Carmack in particular refuses to use Microsoft's DirectX offerings, instead coding his own rendering routines pretty much from scratch (he's a technological purist).
It's important to note that while Id games are not, as far as I can tell, developed ON a Linux platform, they ARE developed FOR Linux platforms.
This is a trend that is increasing, and all I can say to you, ya immature sod, is SO NER!
Yes, I'm a Linux fanboy, and damn proud of it. I'm proud of the motivation and principles behind FOSS, and I support it wholeheartedly. Unlike the average Windows user, I keep myself aware of the developments and intents of the proprietary software world. It's not most proprietary software companies that worry me, just a few; unfortunately, Windows fanboy, Microsoft is one of them.
Likewise, I support AMD over Intel, because I am aware of the relationship between Intel and Microsoft and the plans for the future. Things are going to get a lot worse, before they can start to get better.
That makes me an AMD fanboy too.
Sure, I'm not saying everything about Linux is better, nor that everything about AMD is superior to Intel. I'm a fanboy just the same. On the other hand, I hate and despise Microsoft, who have done nothing for the good of anyone; who continually and deliberately engage in dirty, underhanded, unethical and downright immoral activity for their sole benefit, and in the process have harmed developers, families and nations.
Yup, I'm a Linux fanboy, and sometimes that means making sacrifices to support my principles. Don't let that worry you, though; at least you get to play more commercial games while supporting an evil and detestable software giant. I'm sure all the extra wasted hours soothe your troubled or nonexistent conscience just fine.
Right, and what advantage are you talking about, then?
BTW: we *all* remember how *fat* GNOME 2 got (in terms of RSS, i.e. real RAM used) compared with 1.4. I'm wondering how a Debian woody default install (GNOME 1.4) feels compared with typical current distros.
“Just for the record, I have just booted into KDE, and started only aMSN, Firefox, konsole and kdict. Memory footprint:
774680 TOTAL
263724 USED
17096 BUFFERED
124584 CACHED
That’s 260MB used already.
”
Yes, but about 140MB of that memory is buffers and cache, not allocated to processes. Your processes occupy roughly 120MB of RAM.
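That's exactly what the "-/+ buffers/cache" line of free reports, by the way. Rounding your numbers to MB, the output would look roughly like this:

             total   used   free  shared  buffers  cached
Mem:           756    257    499       0       16     121
-/+ buffers/cache:    120    636

The 120 on the second line is what the processes actually occupy; the rest is reclaimable cache.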
A couple of things:
First off, Carmack does use DirectInput and DirectSound for Win32. Download the Quake I or Quake II code and see how he just abstracts out the OS-specific calls. He's always used OpenGL for video hardware acceleration.
As for fanboyism: all I can say is that when I get involved in social activism, it's in something a lot more meaningful than software. The problem with you people is that you take software way too seriously. If you want to be an activist, be an activist in something that really counts.
Anyway, this is way off topic and it’s very early on this part of the planet.
Brad Griffith is right. The ATI driver offerings for Linux really are complete pants when it comes to 3D. My brother has a laptop with a 'supported' chipset, and despite six months of on-and-off fiddling by both of us (he's just as much a Linux 'fanboy' as me), we've completely failed to get anything even approaching decent OpenGL performance out of it. Crack-attack is playable, but that's about it.
I also played all those games under XP and Linux, and I disagree with you.
It was a 50/50 split as to which had the better framerate. Like you said, UT3 is better on Windows, but what about the rest?
And the machine I tested on had one of those ATI cards… you know, the ones Linux support is supposed to be rubbish with?
But indeed, please refrain from spreading FUD unless you are prepared to back up what you say with proof.
BTW, what about Enemy Territory? Meet me there and I will kick yer ass into next week.
My Dell Inspiron laptop has only 128MB RAM, and FreeBSD 5.2.1 runs like a champ. I have the heaviest possible install of KDE 3.2.2, with the aRts sound daemon, and many services running in the background, since this is a development machine: CUPS, Apache/PHP, PostgreSQL, Sendmail, plus standard system services. And I am running with ACPI for power-down and battery notification.
Now, I will admit that I have tweaked performance a little bit, with the following startup parameters:
sysctl kern.ipc.shmmax=67108864
sysctl kern.ipc.shmall=32768
And, I have reduced the standard number of ttys from 7 to 3.
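(For what it's worth, if anyone wants to replicate this so it survives a reboot: on FreeBSD those settings can go in /etc/sysctl.conf,

kern.ipc.shmmax=67108864
kern.ipc.shmall=32768

and the spare virtual consoles can be switched from "on" to "off" in /etc/ttys.)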
But that's it. I am running the standard base kernel, not recompiled for efficiency, and all the cool desktop stuff. Very rarely do I get a window lock-up, and I think that is because of the buggy IBM display driver (apparently a newer XFree86 release is supposed to fix that, but packages are not yet available).
I will say though, that Slackware properly configured ran almost as well on this laptop. I wouldn’t even start to try Fedora, Mandrake, or Suse on this system, though. Nor would I even begin to run Gnome… no thanks.
Yes, I would like more RAM, and sometimes I run WindowMaker if I want more performance for a specific task (such as the Gimp), but overall it is quite usable; I often run Mozilla + many Konsole sessions + Kate + Gaim + XMMS + various other programs concurrently with no problem at all.
I do not know if someone already posted something comparable.
Windows XP was designed and optimized in 2000/2001 for the computers of those years and earlier. Fedora Core 2 was released in 2004. The computers of 2003 and 2004 usually have more memory and faster CPUs. In my opinion it is OK for developers to optimize their software for this hardware.
I was at an MS conference recently (shame on me, as a FreeBSD and Linux user/developer owning a PowerBook with Mac OS).
They gave an introduction to Longhorn. For best performance and all the details, Longhorn will need a 128MB 3D-accelerated graphics card, just for desktop use!
Longhorn will be released in 2006 (or probably later), so that seems to be a fair assumption about the consumer hardware widely in use by then.
To me it is one of the important benefits of Linux/FreeBSD and open source in general that I can have an up-to-date operating system and applications any time.
Firstly, I was a bit amazed that the writer was surprised a modern-day OS required at least 192MB of RAM.
256MB is the default for any XP machine, and you will need it when you run more than 2-3 apps at the same time. We order PCs with 512MB now (Windows software consists of resource hogs too).
Quote: "His box, an 600 MHz 128MB RAM system, ran Windows XP happily"
I cannot quite imagine that, because if you start e.g. Word on a machine like that, XP starts swapping like hell on the hard drive.
What makes a machine snappy:
– CPU
– enough RAM
– a speedy hard drive
A PC with an older CPU can feel pretty snappy when it contains a fast hard drive and enough RAM; a new PC, low on RAM and with a slow (5400RPM) hard drive, can feel like a 486.
RAM modules and hard drives are dirt cheap these days. Barebones systems aren't expensive either.
If you need performance, stick to older OSes and software; otherwise buy a faster box. In the case of Linux you can always try one of those slimmed-down distros.
Besides that, it's up to the user to decide what's fast and what's slow. Some don't even notice their machine is as slow as hell 🙂
Nah, boot time is not everything, but it is something to refer to. All systems were configured to achieve at least fast GUI responses in normal use (nothing bigger than Firefox + OOo running side by side, and a little extra).
One point I should make is that FC1, with the default BlueCurve/GNOME and no boot-time optimizations, takes a century to boot, and I'm not even close to being impressed by the GUI speed. My FreeBSD/Fluxbox setup works just fine and bloody fast on the 1GHz laptop. I'll stick to it for as long as it lasts. I'm really not that sensitive to GUI consistency, as long as I don't find it too counter-productive. And I've been in love with FreeBSD for some time now (almost two years).
I think the desktop-oriented hobby OSes will make a big change in computing in the near future. Free is not everything; the same applies to open source.
This is fast approaching 200 Posts.
http://www.osnews.com/comment.php?news_id=7324&offset=30&rows=45#24…
Anyway, remember this excellent OSNews interview.
http://www.osnews.com/story.php?news_id=5215
Havoc says better profiling tools will help speed things up.
Well, KDE used Valgrind; what about GNOME?
OpenOffice is a Java-based application, and I'm sorry to say it, but Java is *not* a good language for GUI things.
No, it isn't. If you check the sources, you'll see it is C++.
And may I add that my WinXP machine sees 30-40 days of uptime with no performance hit, except for one thing: Explorer sometimes eats up memory, to 100MB+. The solution is to kill Explorer and restart it. I'm really happy with it, since I use it casually and for gaming (mainly Lineage2). It also runs Apache/PHP/MySQL and Mercury Mail Server in the background all the time, and I still have an average of 400MB of free physical memory with those running. WinXP is stable and also flexible. I like Win2003 better, but can't afford it.
I’ll just pray that BeOS will once again become feasible.
Choosing one toolkit (GTK or Qt) and sticking as much as you can to applications that use it helps. I'm not surprised the author says the combination of KDE/Mozilla/OOo eats memory: they all use different toolkits, so more libraries have to be loaded into memory. I only occasionally use GTK apps (Gimp and Sodipodi) and try to use as many KDE (Qt) applications as possible. I've got 256MB of RAM and have no performance problems whatsoever.
Blaming Linux is useless, because the source of the problem is the bloated DEs.
I love BSD and I love Linux. Real "geeks" like myself, and thousands of others, can use BSD or Linux without a GUI. I think the CLI is great, and talk about low memory footprints.
🙂
Troy
I would ask that everyone take into consideration their demands before getting too worked up over RAM requirements.
Talk lately has for the most part been about features. Everyone wants features. They want translucent windows. They want wicked visualizations. They want desktop integration. They want hardware integration. They want little graphical thingies on the desktop that tell them how much hard disk they have left.
The developer community is, for the most part, pursuing these features that everyone wants. But that takes a lot of work, and it means directing programmer focus onto those tasks. So many times you have to take your pick between needing a faster computer and having cooler features. Or, of course, you can personally open up the source code and optimize it yourself.
>troy banther
Nice one! But watching movies in ASCII is no fun (well, not always )
>Alan
I think they should audit and optimize first, then add features. And the suggestion to have a periodic feature-freeze and optimize/cleanup is really cool.
Whether or not Linux is getting fat isn't the real issue here. Speedups are a good thing anyway, and need to be a priority.
There is indeed a good reason why I stick to Fluxbox. The bloated DEs' loading time is longer than LOTR. Firefox is damn slow to load too. It's a shame.
You can watch movies in the CLI with full graphics.
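(With mplayer, for instance, assuming it was built with the right output drivers: "mplayer -vo fbdev movie.avi" plays with real graphics on the Linux framebuffer console, and "mplayer -vo aa movie.avi" gives you the ASCII-art version.)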
Gentoo with GNOME 2.6, BMP, Nicotine and Epiphany (10 tabs) takes 166MB here. It runs very smoothly, although there's no comparison to BeOS.
But I totally agree with the author: this is all getting way too heavy, and I will have big fun installing GNU/Linux on my brother's Celeron 433 with 96MB RAM (on an i810 motherboard, meaning 96MB shared between the OS, graphics and sound). I installed Mandrake 9 on a similar system (64MB) back when it was new (Mandrake 9 of course, not the computer), and it was crap slow: windows didn't redraw, some apps failed to start, and you couldn't even get continuous sound playback.
I love BeOS, and still use it when I only want to listen to some music or watch videos. This is what you call snappy. And if you don't believe me, do a stress test: open 10 videos at the same time, then start moving windows around. If it weren't lacking a decent word processor, I would already have migrated all my family and friends who want a computer that "just works".
And I will have to try FreeBSD. I installed 5.1 a year ago or so, but X refused to work on it, so I deleted it.
Hmm, I'm an activist for many things; when I'm not studying, I'm mostly working to raise funds for Kidsmart, a program run by the bluelight council to educate and protect young children.
If you think software is innocuous, you should think again, and learn about what is going on and how it will affect YOU in the not-too-distant future.
Anyway, most of this is moot. I gave my experiences, you called me a fanboy. You were right; still, you meant it in a derogatory sense, which makes you the one in the wrong. Just as deriding Islam in favour of Christianity, or Christianity in favour of Islam, is wrong. I'm a Christian, but everyone has to find their own path. Again, this is OT, but I hate trolls like you who mosey in, start a fight, then disclaim all responsibility.
FYI, I picked up that Linux was the development platform from a textbook on 3D programming, which actually was a waste of time. I didn't just jump to that conclusion.
Your claim that every game runs better on Windows has been refuted by three people now, and no one I can see has stood up to support you. Perhaps they run better for you, for whatever reason.
But then, I don't even know why I bother responding to you; you are so obviously a troll.
I think that being afraid of another open OS (like BeOS) taking the place of Linux is foolish. The fact is that the Linux 2.6 kernel is a VERY fast thing. What makes a Linux box slow is X and Qt/GTK. If other systems like BeOS are to be faster, they have to use their own GUI and develop their own GUI apps. What we need now is to make X faster, and then take a closer look at Qt and GTK.
But I think this has already begun: kernel 2.6 is MUCH faster than 2.4, and so is KDE 3.2 over KDE 3.1, so I guess we're on the right track here.
“Typically, open source hackers, being interested in tech, have very powerful boxes; as a result, they never experience their apps running on moderate systems.”
Bullllllllsh*t… without any hard numbers, my guess would be the exact OPPOSITE. When I read developers' blogs and they mention system specs, they are often between 600MHz and 1GHz. Why? Because there are very few games that push someone to upgrade to the bleeding edge.
Oh, and the complaints that XP works on a system but FC2 doesn't: you are out to lunch, my friend. XP is, what, 3 or 4 years old? FC2 is weeks old.
Get a grip.
How about this: have a performance preference at the OS level that apps reference to optimize for the specific hardware, i.e. numbers like 1 for sub-100MHz Pentiums, 2 for <250MHz PIIs, 3 for <600MHz PIIs… and these numbers determine the default settings for apps (OS/DE/heavy-duty apps etc.). Then we would have common ground for comparisons, or at least reasonable performance by default. A rough sketch follows below.
Pardon my English.
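Something like this wrapper, say. Purely a sketch: the file /etc/performance-class and the app name are made up, and the real mechanism would need to be a standard the DEs agree on.

#!/bin/sh
# Hypothetical: the distro writes a performance class at install time
# (1 = sub-100MHz, 2 = <250MHz, 3 = <600MHz, and so on; file name invented).
CLASS=`cat /etc/performance-class 2>/dev/null`
[ -z "$CLASS" ] && CLASS=3
if [ "$CLASS" -le 2 ]; then
    # low-end box: default the eye candy to off
    exec someapp --no-animations --no-thumbnails "$@"
else
    exec someapp "$@"
fi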
Finally, someone grasps the issues at hand. I've been saying and thinking what this article says for years now, and it's refreshing to see that there are other sober people out there.
If anything, usage seems to be going down for some things.
My PowerComputing PowerBase PPC 603e Mac clone with 48MB of RAM could run KDE 1.1 very well. KDE 2.0 was dog slow. KDE 3.2 is also slow, but the applications are usable.
Wow, I was just out for a couple of hours at university… I left when there were like 12 comments, and now I've just got back and it's at 190 :S.
But anyway, I think the author has a very valid point. Of course Linux runs great on older systems, but hey, I get a snappier interface from my Windows Longhorn Build 4074, and that's alpha stuff! It's just a damn shame KDE/GNOME eat resources like hell. And of course I could install Xfce (love it, really), but all the major distros are GNOME- and/or KDE-based, and they all want the latest stuff.
Speed is no problem on my main machine. It is a problem on my laptop, though (PII 366 w/ 64MB RAM). I am forced to use Windows ME (*evil laughter*; as y'all know I ain't anti-MS, but ME was just, well, a joke, a bad joke), because MDK just won't run on it. And since it's "just" my laptop, I ain't gonna put hours and hours of tweaking into it.
Off-topic: anyone can recommend me a Xfce based distro that would perform snappy on this laptop?
Good, accurate article.
I think your article is excellent and tries to open some minds.
One piece of propaganda out there is that Linux runs on very, very old hardware. But all of the more business-centric distros are huge resource eaters: Red Hat, SuSE, MDK, Sun's JDS, …
And if open source doesn't produce more efficient code, is it safe to say that this code is at least more secure than Windows?
I remember comfortably running Red Hat 3.0.3 (kernel 1.2.13) with X (fvwm) and using LaTeX to write papers.