While Microsoft’s Security Essentials has been very well received because of its small footprint and unobtrusive nature, it didn’t always rank among the very top when it came to its detection rates. Overall, I’d still say it’s one of the best antivirus tools. Now, with version 2.0, Microsoft has improved the detection mechanisms, but of course, it’ll take some tests before we can see how effective they are.
The major new feature here is improved heuristic scanning, which should improve detection rates, but possibly at the cost of more false positives. Contrary to what most other sites are reporting, Security Essentials did not rely solely on definitions in version 1.0; it has always had heuristic scanning, it’s just that version 2 has a better engine.
Another major addition is network inspection. This feature makes use of the Windows Filtering Platform introduced with Windows Vista, which is also included in Windows 7 (but not in XP – but then again, XP is an old turd and you need to move on). I’m sure the following explanation makes a lot of sense to those in the know (I only vaguely get it):
Windows Filtering Platform is a new architecture in Windows Vista and Windows Server 2008 that enables independent software vendors (ISVs) to filter and modify TCP/IP packets, monitor or authorize connections, filter Internet Protocol security (IPsec)-protected traffic, and filter remote procedure calls (RPCs). Filtering and modifying TCP/IP packets provides unprecedented access to the TCP/IP packet processing path. In this path, you can examine or modify outgoing and incoming packets before additional processing occurs. By accessing the TCP/IP processing path at different layers, you can more easily create firewalls, antivirus software, diagnostic software, and other types of applications and services.
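For the technically inclined, here is a rough idea of what "filtering TCP/IP packets" through WFP looks like from user mode. This is my own toy sketch, not Microsoft's sample code; the engine and filter calls are the documented fwpmu.h API, everything else (names, error handling) is purely illustrative. It adds a filter that blocks outbound IPv4 connections at the ALE connect layer, then removes it again.

/* Sketch: block outbound IPv4 connections with a user-mode WFP filter.
   Build against fwpuclnt.lib; needs administrator rights to run. */
#include <windows.h>
#include <fwpmu.h>
#include <stdio.h>

#pragma comment(lib, "fwpuclnt.lib")

int main(void)
{
    HANDLE engine = NULL;
    FWPM_FILTER0 filter = {0};
    UINT64 filterId = 0;
    DWORD rc;

    /* Open a session to the Base Filtering Engine. */
    rc = FwpmEngineOpen0(NULL, RPC_C_AUTHN_WINNT, NULL, NULL, &engine);
    if (rc != ERROR_SUCCESS) { printf("engine open failed: %lu\n", rc); return 1; }

    /* No conditions, so the filter matches everything at the ALE
       connect layer; the action is simply to block. */
    filter.displayData.name   = L"Example: block outbound IPv4";
    filter.layerKey           = FWPM_LAYER_ALE_AUTH_CONNECT_V4;
    filter.action.type        = FWP_ACTION_BLOCK;
    filter.weight.type        = FWP_EMPTY;   /* let the engine pick a weight */
    filter.numFilterConditions = 0;

    rc = FwpmFilterAdd0(engine, &filter, NULL, &filterId);
    printf("FwpmFilterAdd0 returned %lu\n", rc);

    /* The filter stays until removed (or the BFE service restarts),
       so clean it up before exiting. */
    if (rc == ERROR_SUCCESS)
        FwpmFilterDeleteById0(engine, filterId);

    FwpmEngineClose0(engine);
    return 0;
}

An antivirus product would of course register callouts and inspect traffic rather than flatly blocking it, but the add/inspect/remove pattern against the engine is the same idea.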
Windows Firewall has also been integrated into MSE, but said integration doesn’t amount to more than being able to tweak the Firewall from inside MSE. MSE 2.0 also integrates with Internet Explorer so that it can block malicious scripts.
This new version is supposed to appear inside MSE and/or Windows Update, but several people are reporting that’s not yet the case. You can always download the new version from Microsoft’s download centre.
I rely on MSE more and more, and on 3rd party malware detectors less and less. But I would still like to see a report that ranks the best.
See AV-Test and AV-Comparatives.
I’ve never understood why. A big part of the reason why it’s necessary is because of Microsoft’s OS design. I just don’t trust them not to make the same errors twice.
There is a reason why Fortune 500 companies with large accounting departments full of smart people hire outside accounting firms to audit them. I don’t know why computer security should be any different.
Get a job repairing computers for the public and you’ll find it doesn’t matter how many security design features they added, as over 90% (well, close to 100% really) of the users that get viruses install them manually.
Yeah, I understand that. Why on earth would you want a Microsoft designed solution to that problem? You need a third party to objectively look at the problem with no possible conflict of interest.
I think it’s a brilliant move. Microsoft is large enough to host autonomous divisions inside it.
The extra deluxe benefit is that there’s (hopefully) some communication between the MSE department and the others, in a way not possible or desired by other companies that thrive on the fact that Windows is shit and ubiquitous.
If Microsoft was a ten man show I’d agree with you, but not now.
It doesn’t matter who designed it. Take the Gawker website password leaks, where they didn’t properly secure the passwords: 99% of security issues that actually cause damage stem from user error and ignorance of security, even if it’s just the classic trick of calling people up and getting them to give out their passwords.
I don’t understand your argument. I’m saying there is a clear potential for a conflict of interest in having a Microsoft Anti malware program protect a Microsoft operating system. Yes anyone can make mistakes.
There are two potential problems with having the same company do both:
1) If Company A makes mistake 1, they are likely to make the same mistake over and over again.
2) Company A may purposefully not protect the system from a class of vulnerabilities that are, ahem, abused for other reasons by their large customers. (There were some serious security bugs in Windows 95 that were preserved in later versions, through ME, for the purpose of “compatibility”.)
Company B on the other hand, is likely to still make mistakes, but different ones than Company A. So, hopefully, a small percentage of vulnerabilities will get through Company B and Company A.
Make sense?
I don’t think your argument makes sense. Microsoft has a vested interest in ensuring that users have the best overall experience using their computers from a performance, reliability, and safety perspective. Windows is, after all, in competition with Linux-based OSes and Mac OS X.
Anti-virus companies definitely don’t have the same holistic interests. They are more interested in getting you to buy subscriptions to their products and in keeping you convinced of whatever security they provide. Thus they have an interest in being more intrusive (more popups, balloons, icons, etc.) and in sacrificing performance and reliability for marginally better protection (if it’s better at all).
It’s not like the AV products are changing the fundamental security model of the OS (and they cannot legitimately do so other than through the limited hooks the OS provides), so they are not ‘plugging any holes’ in the OS that Microsoft would have missed. At best they are providing a blacklisting service at a relatively high cost.
As far as I know (which is not that far), MSE is committed to using only publicly specified interfaces of the operating system, so it operates more within the design of the OS than some of the other AV products do.
If you don’t agree with me, you aren’t paranoid enough about security.
Microsoft Security Essentials does not pass the smell test. Better than nothing? Yes. Good enough? No.
Well, the problem is that the third-party solutions are much worse than Microsoft’s offering and quite often utterly crap.
The only AV solution I’ve ever heard anyone say anything good about is ESET.
Uhhhh… I hate to break the news to ya, but it IS third party. MSFT just bought Giant’s antispyware/antivirus and brought the developers onboard. It is the same guys doing the same job they were doing before; they just get paid by MSFT instead of selling little shiny discs.
I still don’t really see that 7 is all that much better than XP. And for VMs, XP is lighter weight than W7 in disk, CPU, and memory usage. I intend to keep using it as long as my applications support it.
MSE, however, does seem to be the most resource-efficient AV product I’ve used.
Unless you have a PIII or earlier, it makes ZERO sense to run XP for hardware reasons. Windows 7 makes MUCH better use of modern hardware features like the video card and multicore processors, while also being about a million times more secure. It’s also a lot more stable.
Even back when XP was all that Microsoft had to offer I already found it a steaming pile of crap – with 7 out the door, the stench has only gotten worse.
Sorry but last time I looked, XP beat Windows 7 in multi-core performance unless you’re using 32 cores.
XP is much more secure nowadays, after the service packs, and it actually seems like the hackers are turning to Windows 7 and 64-bit systems now. Microsoft made simple mistakes with XP, like not even enabling the firewall by default at XP’s release (remember Blaster?).
Simple security issues were why XP was so bad, and those were properly fixed in SP2.
Try to get rid of an infection of HDD Plus on a WinXP box, and then let’s see how fast you change your tune.
I didn’t say XP was more secure than Windows 7, I said it was more secure than when XP was released.
Not sure why my comment got voted down because it’s the truth if you read it properly, since I compared XP from release to SP2.
I will agree with 7 being more secure out of the box, but can’t agree that it is more stable. I used XP for about 8 years, and could count on two hands the number of times it blue screened on me, and most of those were due to faulty hardware. I’m not saying that 7 is LESS stable, but it’s really hard to get any more stable than XP was for me. I also had zero security issues with XP, but I’m one of those individuals that actually used common sense, so perhaps I am in the minority.
As for 7 itself…. meh. I hate the new task bar and the classic win32 theme got switched back on in a hurry. And the whole aero snap thing proved to be a pain in the ass with multiple monitors, so that got turned off too. So externally, it’s basically like XP with a slightly improved start menu, and a file manager that’s even worse than before. Except that they moved everything around for no apparent reason, so I have to hunt for things in the control panel when I used to know instantly where they were.
That might be because starting with XP the default is to reboot instead of showing blue screens – having entire offices “blue” in the morning was just too frightening to corporate customers, while seeing the login screen is somewhat expected (and users forget often enough that they didn’t log out the evening before).
See http://www.tunexp.com/tips/maintain_your_computer/disabling_blue_sc…
Or faulty drivers. I have a (hacked-up) video driver on Win7 that crashes regularly – the only symptom is that the screen hangs for 15 seconds or so, the driver is reloaded, and a tooltip says that driver gobbledegook had to be restarted. Applications keep on running.
On XP that would be a sudden reboot.
It never spontaneously rebooted either. Actually, there’s an option where you can tell it to show the blue screen instead of rebooting, and I’ve always had that turned on.
Granted, it will blue screen more than Vista/7 with bad drivers, but the solution to that is don’t install bad drivers lol
7 also restarts immediately after a bluescreen by default.
(here: 7/64 ult)
Just out of curiosity, how did you manage to BSOD Windows 7? I have run it regularly for a bit more than 6 months now, and though DWM/Aero is an awfully buggy piece of sh*t, I never managed to crash the rest.
Same way as any operating system – memory corruption in kernel space, either through failing hardware, overheating, or installing drivers that aren’t reeeeally compatible.
The days of BSODing through opening a malicious JPEG are over (arguably), but there are still lots of very common non-userspace ways to get a BSOD. My last desktop would BSOD whenever I accidentally bumped it – turns out I was missing a few standoffs when I assembled it and it was shorting the motherboard to the case.
I had a BSOD in my first week of Vista trying to install an unsigned driver that wasn’t as compatible as I thought it was.
EDIT: You can of course see whether or not it restarts by default after a BSOD by checking the setting (Win+Pause > Advanced System Settings > Startup and Recovery > System Failure).
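For reference, the same setting lives in the registry under CrashControl, so you can check it without clicking through the dialogs. A small sketch of my own (not from the poster) that reads the value; 1 means the machine auto-restarts after a bugcheck, 0 means it stays on the blue screen.

/* Sketch: read the "automatically restart on system failure" setting.
   The value is HKLM\SYSTEM\CurrentControlSet\Control\CrashControl,
   DWORD "AutoReboot" (1 = reboot after a BSOD, 0 = stay on the blue screen). */
#include <windows.h>
#include <stdio.h>

#pragma comment(lib, "advapi32.lib")

int main(void)
{
    DWORD autoReboot = 0;
    DWORD size = sizeof(autoReboot);
    LSTATUS rc = RegGetValueW(
        HKEY_LOCAL_MACHINE,
        L"SYSTEM\\CurrentControlSet\\Control\\CrashControl",
        L"AutoReboot",
        RRF_RT_REG_DWORD,   /* only accept a DWORD value */
        NULL,
        &autoReboot,
        &size);

    if (rc == ERROR_SUCCESS)
        printf("AutoReboot = %lu (%s)\n", autoReboot,
               autoReboot ? "restarts after a crash" : "shows the blue screen");
    else
        printf("RegGetValueW failed: %ld\n", rc);

    return 0;
}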
Not much can be done about failing hardware, indeed, no matter which OS you run, but why does overheating corrupt memory too? I thought it only resulted in the motherboard’s firmware (BIOS, EFI, or whatever else) abruptly turning the computer off.
BIOS/EFI/etc. can only power the system off if it catches the situation fast enough, and even then there are several ‘ifs’; for example, not all CPUs handle heat equally well. One PIII could only handle 80 degrees Celsius, whereas another from the same batch could handle up to 100. If the motherboard/BIOS/et al. is configured to take 90 degrees as the overheating point, the first one would already fail before reaching that point and POOF; system crash.
Oh, and of course, if the CPU is overheating, the instructions it’s executing could return wrong values and thus write wrong data to memory, or the registers holding the write address could have their values corrupted by the overheating and thus the data would be written in the wrong place. And this is only CPU overheating; memory can overheat too, and both can and most likely will result in memory corruption.
Isn’t the motherboard firmware supposed to be set up for the right maximum CPU temperature, with a good margin? Or isn’t the CPU supposed to work at the maximum temperature permitted by most motherboards?
It is because the memory is so densely packed nowadays. Look up “cosmic rays and memory” and prepare to have your mind blown. You’d be surprised how many “soft errors” are caused even by cosmic rays interacting with the memory cells as they pass through the planet.
Google did a study on memory errors and found that on 8GB of RAM like I have you are looking at tens of thousands of “soft errors” every. single. month. and it is just the robust design of the software and OS that keeps that from being catastrophic. That is also why on ANY machine where the results will be mission critical or risk lives ECC memory is used, as it will auto correct a good 90%+ of those soft errors and rerun any calculations it can’t fix.
But since your bullet in the latest FPS isn’t really affected by these errors, and the OS will take care of most of the problems, it simply isn’t worth the extra expense in day-to-day life. But that is one of the reasons I recommend AMD boards, as you can drop ECC memory into most of them, which means if you do use it for work like CAD where errors really matter then it doesn’t cost an arm and a leg for an ECC board like with Intel. But check it out, Google it. You’d be amazed how many errors you are probably having at this minute that the OS just deals with.
Oh, but I believe you… I study physics, and once you start to use precise CCD sensors, cosmic rays are nearly guaranteed to corrupt the measured data noticeably once measurements last more than one minute
You are not alone. This is my experience too. Although I can count zero hands how many times my XP system has hard crashed on me. Of course I have had apps crash but never the OS. I build my own systems and have for years and that could be part of it. I use common sense, which a lot of users lack and I do not leave it running 24/7 although sometimes weeks at a time. I can say the same for Vista. So clearly the people that say that XP is unstable and a steaming pile of garbage should look at the junk they are running it on. XP is picky with configurations, from my experience. I repair computers full time and have seen all sorts of hardware errors and hard crashes, but they were usually from faulty hardware or drivers or misbehaving applications that had improper hooks into the OS. (Can you say Norton or McAfee?)
I like 7 and it is growing on me more, but to say that it is faster than XP on the same hardware really depends on the hardware, and based on my experience I would strongly disagree with that. My 2007-ish Core 2 Duo runs noticeably faster in XP than in Vista or 7. I have somewhat modern hardware, although not the current latest.
I like 7 as much as you, Thom, but for the 64-bit edition a first-generation A64 X2 with 2 GB is the bare minimum.
And? This is 2010. For all intents and purposes I can’t buy a system that does not meet those requirements. The minimum requirements are pretty low even for a laptop from within the last 3 years, never mind a desktop.
I have been using Win7 64 bit with 1 gig of RAM since it came out more or less and have not suffered any issues.
I wouldn’t say that, I’d say that just like XP before it you have to turn off the crap they leave on to lower support costs (like Homegroup). My oldest has Windows 7 HP X64 on his early Pentium D 805, and before that it was a 3.6GHz P4. On both it had a grand total of 2GB of RAM and ran quite well, although he still prefers his WinXP X64 system builders I gave him last year as it plays his MMOs a little faster.
I myself have run both XP X64 and Win 7 X64 on a 3.06GHz P4 with 1.5GB of RAM and it ran smooth as butter. I have found on certain math problems (such as WinRAR parity encoding) that having 64bit pipelines does help speed things up somewhat. That said if the machine is more than 5 years old there really isn’t a point in running Windows 7, as the old 478 P4s and early Athlons just are too slow, especially if they have the standard craptastic IGP.
On the other hand if your machine is newer than 4 years old, or has a Pentium D or better, it really seems kind of silly to risk your machine by running a decade old XP. Windows 7 has UAC, ASLR, Low Rights mode for both IE and Chromium based browsers, you can even add Structured Exception Handling Overwrite Protection with a simple patch. Not only is the security better but the whole OS just handles better with Readyboost, Superfetch, bread crumbs, the libraries, etc. Why anyone would want to go back is beyond me.
I haven’t noticed much difference in stability; maybe I’m lucky, but XP wasn’t all that bad. I guess I hesitate to make that comparison even, since I ran (and still run) XP on a much larger number and variety of machines than I run W7 on.
To each his/her own, though. I don’t like the dumbed-down GUI in W7 that takes more mouse clicks to find many things than it did in XP; I don’t like that my wifi doesn’t reconnect coming out of suspend about 70% of the time (could be a driver issue, but Linux doesn’t suffer from the problem; I don’t have XP on the W7 machine to compare). And W7 is bigger, a bloat which, for single- or limited-purpose VMs, does not make sense to carry.
As for security, I ran my XP in limited-user mode, without AV, for a couple years with no problems. I had non-computer-savvy friends run it in LU mode (with AV) also for long periods with no problems. And Stuxnet et al showed that W7 isn’t immune to threats. I’m not saying that W7 isn’t more secure than XP, but *for me* the difference isn’t noticeable, and does not outweigh the extra weight in some of the scenarios I run Windows in.
I will say that the wireless network connects on both my wife’s Dell Inspiron 1525 and my HP Pavilion coming out of both sleep and hibernation without any issues. I would definitely look at a possible driver issue.
More clicks? Ever heard of jump lists?
I have a 1.7GHz P4 machine (with that great PC-800 RDRAM in all its expensive, scarce glory) that would have to disagree with this.
In fact, after installing the Creative and nVidia drivers and a couple other what I would call “essential” programs on a freshly-installed copy of Windows XP SP2, even XP tends to run like ass on it. I never dared to install anti-virus software, because that would really kill the usability (plus, I believe no anti-virus software works as it claims to for someone who knows something about what they’re doing in the first place).
Certainly if I have a P4 with such a low amount of RAM, there are PIII machines out there with similar (or even less) amounts of memory. And imagine this… this machine came with *gasp* 128MB RAM. It was ordered with 256, but for some reason came with only 128, but that was “corrected” by shelling out even more money very soon after receiving the machine.
Uh? Since when does reducing battery life by using the GPU to draw every single tiny thing count as making much better use of hardware?
Check the peak power consumption of a top-notch CPU, then that of a top-notch video card. Add the years of head start that CPUs have in terms of power-saving features. GPUs are for games and other heavy-duty work; they should not be turned on for trivial things.
Are you seriously always going to comment without research, Neolander?
GPU vs. CPU for desktop power usage: the sad part is that a correctly power-managed GPU will beat the CPU in most cases.
The problem is that peak power consumption does not have to happen on properly power-managed GPUs. In fact, a lot of laptops will overheat if they are held at the peak power consumption of the GPU or CPU, so both have to do power management pretty well.
In most cases, graphical operations take fewer cycles on a GPU’s processing units, so the GPU can return to power saving more often.
Now, are there GPUs that are power hogs? Yes, there are. Nvidia really does need to work on its low-power mode instead of doing the dual solution with Intel, i.e. the Intel GPU for low-power mode and the Nvidia GPU turned on for high performance, for the simple reason that Nvidia cannot segment its chip effectively at this stage. Are there GPUs designed to operate effectively, using power management to turn on only the parts of the chip that are needed? ATI and Intel.
The biggest problem with comparing a GPU to a CPU is that a GPU can process at least 16 times as much as a CPU in the same time frame, yet its peak power requirement is not 16 times that of a CPU.
It’s a bit like the studies into turning your CPU clock speed down to save power. In fact, on most CPUs it turns out that running as fast as you can and then sleeping for as long as you can is the most power-effective way to run a CPU, and it’s the same for a GPU: hit hard, hit fast, get the job done, then rest and repeat.
Now, the Windows 7 issue is different: you cannot expect to do way more work and still get a power saving. If Windows 7 were rendering an XP-like interface using GPU assist, power effectiveness would be well ahead of XP, unless the GPU lacks proper power management. The reason is time: the GPU would be at peak for so much less time than the CPU would be at peak that you come out ahead.
The hard bit here: peak power consumption means nothing; it’s the average in use that matters, taking suspends and the like into account.
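To put rough numbers on that point (the wattages below are invented, purely illustrative figures, only the 16× throughput claim comes from the post above): what matters is energy, i.e. power multiplied by the time spent active, not peak power.

\[ E = P_{\text{active}} \times t_{\text{active}} \]
\[ E_{\text{CPU}} = 15\,\text{W} \times 16\,\text{ms} = 240\,\text{mJ}, \qquad E_{\text{GPU}} = 45\,\text{W} \times 1\,\text{ms} = 45\,\text{mJ} \]

So even at three times the peak wattage, the GPU path can come out ahead on energy if it really finishes the same frame of work 16 times faster and then idles (idle power ignored for simplicity). The flip side, argued elsewhere in this thread, is that if the effects pile up until the GPU is busy for the whole frame anyway, the higher wattage becomes a pure loss.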
Yes, what you say is true if your video card is Nvidia, but everything else you said is false. Should we not be yelling at Nvidia for better-quality hardware?
Many issues with what you said:
-You keep invoking the mythical “properly power managed GPU”, even when it turns out that NVidia chips, which are the norm on the vast majority of high-end laptops, do not belong to this category. You claim that Intel and AMD GPUs are lighter on battery than raw CPU power, but you do not provide sources for this.
-You also conveniently forget that when GPUs are doing their thing, CPUs are not totally idle either. So the total power consumption of the computer is that of the GPU PLUS that of the CPU. Sure, the CPU is not doing all the calculations by itself, and is thus much less active, but as you say yourself, power consumption does not grow linearly with activity.
-The next problem is that when you give computing power to a corporate developer, he’s always going to use all of it and ask for more. It’s a fact of life. If Windows 7 had a software-rendered UI, its developers would be forced to keep their UI rendering threads light on resources. But now that they can use the GPU… Well… You get unreadable transparent controls and annoying animations everywhere as a default setting (though we can thankfully bypass that). The net result is that Windows Vista lasts 20% shorter on battery (source: http://www.mydigitallife.info/2008/03/22/lenovo-vista-battery-life-… ), and Windows 7, which introduces no major change, is probably just as terrible, though I did not find a similar test for it.
Because they are common sources for people doing embedded development: the Nvidia, ATI and Intel power consumption sheets and power management methods. And to be correct, you are wrong about high-end laptops. Most high-end Nvidia laptops these days are Intel/Nvidia hybrids that don’t run well with Linux (http://www.nvidia.com/object/io_1221136906708.html), and it has been that way since 2008. For Windows 7 screen rendering, the Intel chip has enough muscle to do it without blowing the power budget. Basically, if you are talking about an Nvidia GPU, you are saying you want your laptop battery to be able to go bye-bye at a moment’s notice.
This was done purely because of Nvidia’s power-sucking bad habits. Nvidia does produce one GPU that handles power usage properly, in the Nvidia Tegra, but that does not work with Windows at all since the CPU is ARM.
Incorrect on all counts, in fact. While the GPU is doing its screen prep work, the CPU can be in a completely suspended state. It’s only sometimes GPU plus CPU.
I did not forget the power the CPU consumes. For the CPU to perform the same task as a power-effective GPU takes a lot more time. CPUs are not designed to mess around with images the way GPUs are; GPU operations are more suited to the task. So even allowing for the CPU running, GPU + CPU still works out to less, as long as you are comparing the same load to the same load. Now, if someone expects the system to do more, they are going to get less battery life.
ATI and Intel video chipsets’ power consumption does grow almost linearly with activity, due to segmentation in the die design, so only enough of the chip to do the job has to be turned on at any one time. Compare the big Nvidia problem: the massive single on/off switch approach to power management.
Now, the last bit: that is the true power vampire. Though with multi-core CPU systems, even with software rendering, some interfaces were getting heavy.
You get unreadable transparent controls and annoying animations everywhere as a default setting
Even when you turn those animations off, Windows 7 still performs a GPU no-op for them, basically double-processing a buffer without need. So you don’t get as much power saving as you should from turning them off. That is one vampire.
So some of Windows 7’s power draw is pure bad coding.
Anyone who has followed Linux and some of the power hunters will know this: http://www.pubbs.net/200901/fedora/30923-rfc-disabling-blinking-cur… Yes, a blinking cursor, even if the CPU is rendering it, wakes the GPU up and burns 2 watts of extra power. So it really does not take much to completely blow away all the power savings from using the CPU and GPU together. Also remember there are programs out there, like Blender, that by default use the GPU for their rendering and are particularly bad for battery life on Nvidia/Intel hybrid laptops, yet are fine on ATI laptops. That is not a Windows 7 thing; for apps with OpenGL-rendered interfaces, on XP or Windows 7 alike, Nvidia performs badly for battery life.
Indeed, totally forgot about Optimus when checking which GPU the average high-end laptop sported, though it was a big selling point when I bought mine. My bad.
Okay, I give you the benefit of the doubt for the moment. But I need some experimental data before I believe that totally. We’ll see how long I can make this notebook last on software rendering vs. Intel GPU-accelerated rendering once I’m able to test both on a task similar to UI rendering with no background process tainting the results; that should be a good test.
Indeed, but then we go back to my initial point : even high-end multicore CPUs have a much lower peak wattage than GPUs.
Why does it matter? Any good UI layer should have a worst-case refresh time stated somewhere in its scope statement. Supposing it’s 1/30 of a second, it means that on supported hardware, the CPU/GPU shouldn’t work on refreshing the UI for more than 1/30 of a second before being done, and therefore going back to its idle state.
Even if our developer is a power vampire, he must respect that criterion or he’ll be fired. His refresh routine must be completed in a 30th of a second. But during that 30th of a second, he can waste much more power using a GPU than using a CPU.
u_u Horrors like this should never be sold as a finished product.
I think poor design is a major culprit too. After all, desktop OSs wouldn’t take more than a few seconds to boot on a modern computer if they were designed properly.
Indeed, this is why power saving should be tested just as carefully as other critical aspects of operating system software in my opinion.
Well, I don’t blame Blender for doing that, because it is a power vampire by its very nature. No one in his right mind would do 3D modeling and rendering on a battery-powered computer AND expect good battery life and optimal performance at the same time. Since these are full of CPU- and GPU-bound tasks, they naturally suck a lot of power, and hardware power management features can do nothing to prevent that from happening.
If I’m not mistaken, what can be optimized from the point of view of power management are only the (numerous) tasks of a desktop computer where part or all of the hardware is left idle most of the time, and those where there are precise performance goals (e.g. real-time audio synthesis must output 44100 frames per second (or whatever else the sampling rate of the audio hardware is), or the playback will be choppy).
Can you explain why exactly? Not sure that I followed you on this one.
Yes, the GPU has higher peak watts, but there is a big problem: how many processing cores does your CPU have? Most modern-day GPUs have hundreds, going into the thousands. Good designs like ATI’s can turn each of those on and off one by one. And then there is the sheer data throughput the GPU handles.
http://www.youtube.com/watch?v=sVkDx_4GP5M shows the completion-time difference. Basically, in that 30th of a second, a developer using the CPU might not even make it, given how many times slower a CPU is than a GPU.
It’s also surprising how power-effective each GPU processing core is. If GPU cores consumed as much power as an x86 core for the data processed, there is no way you could power them at all; you would be looking at 10- to 20-thousand-watt power supplies just for the video card, i.e. wire-melting territory.
AMD is working on a CPU/GPU hybrid for supercomputers because of the power-effectiveness features. Basically, bang vs. buck: if you make an x86 CPU core and GPU cores that consume exactly the same amount of power at peak, the GPU cores can process more data in the same time frame. Way more.
It’s the difference between the ATI and Nvidia chip designs. Blender triggers the Nvidia/Intel hybrid to switch over from the Intel to the Nvidia chip, at which point basically all the Nvidia cores turn on to respond to the processing, so power ouch. Whereas with ATI, unless Blender tries to do something heavy with the GPU, only a few GPU cores are turned on, and only when they are needed.
Basically, way better power management in the ATI chip, and it shows.
Okay, so this revolves around the two facts that GPUs are much more powerful than CPUs (which is well known), and that GPUs consume less power per core.
What I pointed out, though, was not in contradiction with that; it was rather the sad remark that if you give a GPU to a developer, you must expect him to use all of it. No matter how he manages to do that: no one would have imagined the amount of translucency, visual effects, and blur that Microsoft put in their UI before they did it.
And as far as I know, in most computers sold today (though maybe only in the higher-end, don’t know about intel chips), GPUs consume more power under heavy load than CPUs.
Therefore, giving UI devs access to a GPU implicitly means reducing the battery life of all computers with that UI in the long term.
Why doesn’t Blender try to use the Intel chip first? Is it always insufficient, or is it a flaw in Blender’s way of doing things? Why should Blender have to care at all about the switch from one GPU to another; isn’t this hybrid graphics thing supposed to be perfectly invisible to the user and his programs?
Besides, I thought that running tasks at full speed in order to have the GPU go back to sleep as soon as possible was more power-efficient. Are there exceptions to this rule?
I really hate these “MS did it first” myths. A number of the visual effects, including blur, were done in http://www.enlightenment.org/ back before 2000, doing more than Windows 7 does without any GPU assist at all, and that was not lightweight. Even the 3D effects were done first in the open source camp, before Vista started development. Now, the reason the open source interfaces are not ahead of Windows 7 in effects all round: open source found out that its video card drivers for FreeBSD, Linux and other open source OSes could not hack it and were failing. There were also signs of major extra power usage if things were not done wisely.
Now notice that the first thing Vista did was redesign the driver stack. Linux put out a new design for its video stack in 2006, and at this stage there is still not one closed source driver written for it. That is the pure cause of the Linux graphical glitches: the old driver design is flawed.
So yes, open source developers have more years under their belts doing these GPU-assisted interfaces than MS, without breaking the power management bank. It’s like KDE 4: it uses less power with the GPU on than off on ATI chipsets, and more power on than off on Nvidia chips. The pattern repeats and repeats.
Basically there is a problem, one that your point of view allows to be swept under the carpet.
This is not 100 percent correct; not all UI devs are foolish with GPU usage. The big important thing: under most normal usage the UI very rarely reaches 10 percent of the GPU. Now, if the GPU is power-effective, 10 percent of the GPU is way less power draw than the CPU, and even at that 10 percent usage the tasks will complete in less time, letting the CPU and GPU be idle more often, unless the user decides to stack more tasks on because the system is so responsive. More work, of course, equals less run time.
First big problem: the application is told it has all the features of the Nvidia chip, but not told “hang on, if you use these features you will suck the power life out of this machine”. Invisible is bad; application developers cannot code for power effectiveness. Android applications have proven this as well, and Android is transparent about things like this.
Not exactly an exception. It’s more a case of using a V8 engine to mow a small house lawn: massive overkill for the job.
The issue is that Blender does operations the Intel chipset cannot do, so it triggers the Nvidia GPU to wake up. The problem is that Nvidia wakes the complete thing up when only 3 to 4 GPU processing units will be requested, because the Nvidia logic presumes it has been woken because a big game/application is coming.
With ATI, on the other hand, all operations are done on the same GPU cores, so there is never an operation that, as in the Intel/Nvidia case, causes a massive fire-up. It also uses only the number of cores needed, and even so, when the ATI chip has nothing to do, the GPU can still go fully suspended.
I.e. ATI can be using just the CPU + 3 to 4 GPU cores for the Blender interface, where the Intel/Nvidia mess ends up with the CPU + 1 Intel core + most of the Nvidia GPU.
Simply having more GPU units doesn’t equal better power usage; it normally equals worse, unless you can selectively turn on just the number you require.
The power management design inside the Nvidia GPU is letting everything down, and a hack has been done to attempt to fix the issue. As with all hacks it has major downsides and should never have been done.
You want full speed with everything required to do the job, and no more. Intel/Nvidia has the big problem of allocating more than should be required.
Now, a nice feature at times could be GPU core limiting for particular applications when you are on battery, since some applications, when they have fewer GPU cores, reduce the quality they attempt, reducing the amount of processing they require and so reducing power usage. That is a sane way to try to correctly manage this problem: control the resources applications can have. Don’t allow the GUI designer to be greedier when you don’t have the power to spare.
Linux is heading down the path of per-application resource control, since its developers have already been there and seen the issues. Hopefully MS, the usual copycat, will get there too.
You’re right, “invisible” was not the right term for what I had in mind. Let’s try to explain it better:
The goal of operating system software, including its power management components, is to make the exact nature of the hardware invisible to the user and its applications unless they have to know about it. If applications want to do power management by hand and mess with hardware-specific stuff, it’s good, but they shouldn’t *have* to do that, and there should be a good default behavior when they don’t. This is part of what makes good OSs so hard to design.
When writing an OpenGL game, as an example, you want to know what the GPU is up to, not who made it and what its internals are. You can optionally support vendor-specific features for better performance at the cost of code complexity, but your game should first run properly on all hardware from today and tomorrow.
The abstraction provided by NVidia for hybrid graphics is to blame here, because software which just uses OpenGL normally will break it quickly, as you pointed out.
A better implementation would be like this (a rough sketch follows the list):
-Initially, Blender only sees an Intel GPU, unless the Blender devs decided to introduce Optimus support in their code
-The NVidia driver monitors Blender’s performance, to see if it’s sufficient
-If not, the NVidia driver powers up the NVidia GPU, and silently makes the switch to it
-Performance improves, everyone is happy
-When the demand for power decreases, the NVidia driver switches back to the Intel GPU
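Something like this, in plain C (every function and number here is hypothetical; nothing corresponds to a real driver API, it just stands in for whatever the vendor driver could measure and do internally):

/* Hypothetical sketch of the policy in the list above. */
#include <stdbool.h>
#include <stdio.h>

static bool dgpu_on = false;

static void switch_to_dgpu(void) { dgpu_on = true;  puts("powering up discrete GPU"); }
static void switch_to_igp(void)  { dgpu_on = false; puts("back to integrated GPU");   }

int main(void)
{
    /* Pretend samples of how much of its frame budget the integrated GPU
       is using (invented numbers standing in for real measurements). */
    const double load[] = { 0.40, 0.60, 0.95, 0.97, 0.80, 0.50, 0.20, 0.15 };

    /* Hysteresis thresholds so the driver doesn't flap between GPUs. */
    const double up_threshold   = 0.90;  /* integrated GPU can no longer keep up        */
    const double down_threshold = 0.30;  /* workload would fit the integrated GPU again */

    for (unsigned i = 0; i < sizeof load / sizeof load[0]; i++) {
        if (!dgpu_on && load[i] > up_threshold)
            switch_to_dgpu();            /* the application never notices the switch */
        else if (dgpu_on && load[i] < down_threshold)
            switch_to_igp();             /* demand dropped, go back to saving power  */
    }
    return 0;
}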
A problem with that is GPU feature detection. Most current software detects features once during startup and assumes they will remain the same all the time. This will cause some more subtle problems, and I need an example to explain it.
Let’s go back in time and imagine that Intel chips support shaders 1.0 and NVidia chips support shaders v3.0. I run Far Cry. Far Cry detects an Intel chip, thinks “crappy GPU around”, and either fails to start or displays extremely ugly rendering.
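That “probe once at startup” pattern looks roughly like this (a deliberately naive sketch of mine, using plain OpenGL calls and GLUT only to obtain a context; the capability decision is invented):

/* Naive "detect the GPU once at startup" pattern. Whatever is answered
   here is assumed to stay true for the whole session, which is exactly
   what breaks with hybrid graphics. Link with GLUT and OpenGL. */
#include <stdio.h>
#include <string.h>
#include <GL/glut.h>

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutCreateWindow("gpu probe");   /* creates a current OpenGL context */

    const char *vendor   = (const char *)glGetString(GL_VENDOR);
    const char *renderer = (const char *)glGetString(GL_RENDERER);
    const char *version  = (const char *)glGetString(GL_VERSION);

    printf("GPU: %s / %s, GL %s\n", vendor, renderer, version);

    /* Crude capability decision made exactly once; if a hybrid driver
       later moves the work to a different GPU, this never gets
       re-evaluated, which is the subtle problem described above. */
    int fancy_shaders_ok = (strstr(renderer, "Intel") == NULL);
    printf("enabling fancy shaders: %s\n", fancy_shaders_ok ? "yes" : "no");

    return 0;
}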
Then there is the problem of user preferences. Does the user want to play games with full performance or full battery life?
We could imagine that when the user is in “performance” power management mode, the NVidia chip would always be on, while in “power saving” mode the Intel chip would be on by default, with the system described above in use for demanding applications.
This is part of the problem: I am talking to a person without a heavy background in embedded and mobile devices. Android, Symbian, QNX all treat their applications differently from how a lot of people would expect. Device state answers like “feature is available but is turned off”, “feature is available and on/costless to turn on” or “device does not have feature” are answers you can get. The application can then consider whether what is turned on will do its task, and whether what is turned off would help. Applications can contain different code paths using different APIs, all depending on what is turned on and what is available. In a lot of cases this doesn’t add any more code, since the application would have needed the code anyhow to cope with different hardware combinations.
In some cases they can even queue up tasks to wait for particular parts to be turned on.
Even MeeGo and normal Linux distributions are starting to head down this interactive path, since it provides so much power saving and even faster startups. There can be a power cost to turning hardware on and off each time, so if you can queue up a list of things and reduce the number of times it has to be turned on and off, you get a power saving on devices like this. But to queue, you have to know whether the application critically needs it; if it really is critical, start the device.
There is no particular advantage to hiding power management from applications. Of course, on those three OSes applications can choose to disregard power management, but users normally stop using those applications because their devices don’t last as long.
Of course, well-designed power management information APIs for applications are really hardware neutral, like reporting a device with all network access systems turned off. The best part is that on some of these platforms the application can send a message back to the OS, and if it’s something power-heavy the user can be asked for a yes or no answer, so they don’t blow the runtime without good reason.
The embedded world learnt long ago that power management has to be cooperative, or it just will not work effectively. That’s the limit of batteries: be effective, or have your device fail on the user. Desktops and servers still have to learn this.
It would be nice if you did not have to use vendor-specific features to work around each vendor’s OpenGL implementation bugs. Yes, OpenGL is partly at fault for not exposing what is real hardware and what is software and so on. Basically, hybrid graphics are not really supported by the OpenGL design, or by DirectX for that matter.
Simple fact: an idea like this does not work. For some operations, with rendering and the like, it will be worthwhile for Blender to fire up the Nvidia chip.
A lot of people don’t know there are API differences in OpenGL between cards. So Blender, knowing from the start that the interface has to be done to Intel specs, could be altered not to cause the power draw.
Yet it still has to be told that Nvidia features are available if required.
The simplest solution is not to have a hybrid: take the ATI approach and improve the GPU’s power effectiveness.
The hard solution is to expand OpenGL and DirectX to support these hybrids in a more informed way, more closely replicating embedded and mobile world norms.
Indeed, but we’re talking about the embedded world. In that world, 3 MHz CPUs do still exist, virtual memory and multitasking are sometimes considered as bloat beyond the affordable level, a large part of OSs is coded in assembly and optimized by hand for performance reasons, and there are often no standard hardware features besides the ability to execute code with a (limited) instruction set.
Thankfully, the desktop has gone beyond that dark age by now.
No. Applications which are interested in it should have access to it.
Yes, but again we’re talking about the laptops, which have a much bigger battery. Automated power management can lead to fairly good results. Sure, if applications did manual power management, the laptop could last (much) longer. But that would make coding applications much more complex, too, and this cost in complexity would be paid somewhere else.
Consider the average PC game. It might be possible, by extreme code optimization for power management, to have a laptop last 3 hours instead of 2 when playing this game. Sure, that’s nice.
But if the price of the game remains equal, the development time spent on it remains equal too. For each hour of development time spent working on power management, there would be one less hour of development time spent on gameplay, graphics, etc… Result: the game would be crappy in another way. The problem is, gamers tolerate crappy battery life more than they tolerate crappy games. Consequence: game developers have no time to spend on power management, so the OS should do that for them in the best way it can.
Another argument for automatic power management is that if we start to ask developers to care about everything and do the job of the OS, we’re going back to the dark ages where only gurus with years of education could remotely think about writing software. The democratization of software development is in my opinion one of the biggest achievements of recent computer history, and I really don’t want computing to go back to the corporate era.
It can be done by having an extensive standard API (a bit like what Apple does with OSX and iOS) which DOES care about power management. The more developers use this API directly, the more effective it is, since it silently collects a lot of data about the applications which the power management subsystem can use.
Then vendor implementations should be fixed, or if they do this on purpose OpenGL implementations should not be left to them. The whole purpose of a graphics API is not to care about which hardware you run it on, if it does matter your API can go to the trash.
Indeed. But Nvidia have heavy competition from Intel in the low-performance market and from AMD in the mid-performance market, so they must keep focused on the high-performance market, which is the opposite direction of power savings.
Software vendors would not care about it, for reasons like the one given above. Bad idea.
Sorry, but on my (early) Pentium 4 (Willamette, 1.6 GHz), it’s not that suitable. Windows Vista or 7 doesn’t fit; I guess that machine is better off with XP.
It has 512 MB of RAM, but installing more is difficult, since it uses PC800 RDRAM, a rare type of memory, and the mainboard is picky about its modules. Also, having to replace the video card just to run the OS in a normal way seems ridiculous to me (I am no gamer whatsoever, so I don’t need 3D stuff).
In my opinion, to run an operating system like Windows Vista or Windows 7, “Pentium III or earlier” doesn’t work. I think you need a recent Pentium 4 or newer with a shitload of RAM and a modern 3D card.
My media centre is a 2002 PIV 2.8 GHz, 2 GB of RAM, and a GeForce 6200. You could get all that for like 30 USD off eBay, and it does full HD without a hitch.
I also run Windows 7 comfortably on a dual-core 1.6 GHz Intel Atom with 2 GB of RAM and the crappiest of crappy Intel 945GC chipsets (no dedicated video card). My now-dead netbook, the same spec but with the single-core first-gen Atom, also ran Windows 7 just fine (incl. all the graphics stuff).
Hi Andre! If you have an XP Pro license you might want to look into “TinyXP Rev 09”, as I have found that it works MUCH better than standard XP on low-RAM systems such as yours. In fact, if you choose “bare-without IE and OE” (it provides Firefox and you can of course use Chrome as well) you are looking at just 50MB! on the desktop. I tested it on a 400MHz P3 with 384MB of RAM, and it was actually snappy; on a system like yours it would fly!
That said, if you keep an eye on Newegg and TigerDirect you can find MB+CPU kits pretty cheap, so maybe you ought to think about getting a cheapo AMD dual core. Sadly you can get a dual core PLUS MB for less than a decent amount of RDRAM. Add 1 GB of DDR2 and you can easily get out with less than $90 invested, and you would have nicer EVERYTHING: nicer CPU, nicer GPU, faster RAM, etc.
Nevermind, forgot I already posted in this topic…
ASLR alone is enough of a reason to upgrade.
But also:
UAC
Virtual registry
Strong account separation
XP should be dumped for security reasons just like IE6.
None of those things will prevent malicious code from getting onto a Windows 7 machine where the author has deliberately included malware wrapped within a trojan-horse-style application, and the user (who is given no means to vet the code or have it vetted by someone else who did not write the code) has then proceeded to install it, consciously clicking “allow” on the UAC prompt.
Yes indeed. If one wants to be assured that there will be no malware compromises in the future, the best approach is to make the replacement system open source.
Indeed. In the future, all new software systems should be open source by law. And maybe GPLv3, I haven’t decided that yet.
There is no need to make it law. All that is needed is to allow it to be offered to people, so that they can buy it in stores.
Cheaper system, better functionality, no malware. Side-by-side in the store with Windows systems, let consumers choose.
Debatable. A Windows license is like EUR 10. Cheaper can also be seen as crappy.
Utter nonsense. Call me when Linux has working graphics and audio stacks. Until then, it’s a toy on the desktop.
…and no Photoshop, no Microsoft Office, no games. It’s a trade-off.
They’ll pick Windows (or maybe a Mac). Not Linux.
For your EUR 10, with a Windows license, you don’t get Photoshop, Microsoft Office, or games. It’s a trade-off.
Ring, ring … hello Thom? Works fine!
Why the FUD? Kickbacks?
Actually, I can get GIMP, darktable, Krita, digiKam and others, LibreOffice and some games, for zero cost. I can cover a good deal of what would cost over 1000 EUR extra on Windows, for nothing. More than good enough for the vast majority of users, except perhaps gamers. This is not a tradeoff, this is a slam dunk win for FOSS applications.
Gamers can buy a console.
That is a matter of advertising. If enough ordinary people don’t have ready cash, decide to go with inexpensive Linux, and then find out that they didn’t actually miss out on a single thing except malware … word would spread soon enough.
If, after all these years of me doing OSNews, you still dare to accuse me of being paid by Microsoft, there’s clearly something wrong with you.
I think that if you were to look at our posting histories, you’d see if there’s anyone being paid to comment here, it’s you.
No worries, Thom. When he brought up kickbacks, a very personal accusation that is really hard to prove either way, he basically admitted he’s rambling nonsense.
This troll has been around too long, and with Windows fixing its flaws and catching up in stability and security over the years, he’s running out of silly “arguments”.
Sorry, that was not reasonable.
However, I simply can’t for the life of me figure out any other reason why you would try to present to your readers that they needed to use a 1000 EUR program like Photoshop when there are these days great FOSS applications that would fit their needs wonderfully.
I really can’t. What kind of service are you doing for people?
Because I have yet to encounter any Photoshop user who would take something like GIMP over Photoshop, or OpenOffice over Microsoft Office.
Even just myself – when I load up OpenOffice, I’m presented with this quaint, old-fashioned menu-driven interface where I can’t find shit. All my documents look garbled up, stuff people send to me is all garbled up, and stuff I make using OOO is all garbled up to those who open it. It’s a mess – I know this is in part due to Microsoft’s closed formats, but that doesn’t change the fact that for anyone doing serious work with document editing (i.e., make a living with it, like I do), OOO is simply not suitable. If it can’t even render documents right – what’s the bloody point?
As a freelance translator, I cannot, in any way, use anything other than Microsoft Office to work on my customers’ documents, and I’m pretty sure the same thing applies to people using Photoshop.
Whenever I read your comments about FOSS this and GNU that, I’m always left wondering: has this guy (or girl) ever done any serious work with any of the tools he recommends? Or is he just drumming up the bullet points from their websites without really knowing what’s going on or the kinds of trials people in the real world encounter when switching to things like OOO?
This doesn’t answer the question. There are indeed many use cases where expensive, professional software is required, of that there is no doubt whatsoever. Indeed, I routinely do such work myself, and I am fully aware of such “use cases”.
Even so, for the significant majority of uses cases, ordinary desktop users are simply not that sophisticated or demanding. Aunt Tilly, wandering into a computer store looking to “get into this computer thingy because she wanted to do Internet banking, print her happy snaps from her digital camera, do some email and go on facebook” would have absolutely no chance of ever keening for some feature or ability that she couldn’t find perfectly satisfied by FOSS software. FOSS software applications are perfectly adequate, functional and performant for the significant majority of ordinary users. More than adequate … they are perfectly fine and fantastic value, and as a bonus they carry no risk of being trojan horses carrying disguised malware. Aunt Tilly is actually better served, because her Internet banking is not a risk for her.
I really, really cannot see any point in pretending otherwise. I’d go further … I’d claim that to pretend that ordinary people really need to use MS Office and Photoshop is to do them a severe disservice.
Incidentally, I have set up in my time quite a number of users to use Linux on desktops or laptops, in each case being perfectly satisfied with a system with full desktop software applications costing them less than a quarter of what they would have spent otherwise, had they gone to a typical computer store.
Even if they want to play the odd game on their system, even this is becoming not too difficult a request to accommodate these days.
http://www.phoronix.com/scan.php?page=news_item&px=ODkzOA
http://www.phoronix.com/scan.php?page=article&item=unigine_oilrush_…
http://www.phoronix.com/scan.php?page=news_item&px=ODkzNg
http://www.phoronix.com/scan.php?page=news_item&px=ODkzMw
Caveat: Aunt Tilly is better served for Internet banking by using FOSS solutions as long as there aren’t vested interests (who paradoxically themselves use Linux) getting in her way.
http://linuxlock.blogspot.com/2010/12/bank-of-america-rep-responds-…
Once again, I can’t see any legitimate excuse for Bank Of America to insist that Aunt Tilly uses a desktop solution that puts her Internet banking at risk (through the possibility of Aunt Tilly’s desktop getting compromised via malware trojans), when there exists a perfectly viable and inexpensive desktop solution for Aunt Tilly that carries no risk of malware for her, while that selfsame Bank Of America is perfectly fine with Aunt Tilly running Linux via her Android phone or tablet.
One is led to suspect that there is something very illegitimate going on here.
Hi Mr. Holwerda! I’m afraid this is something I’ve run into quite a lot; I call it “FOSSie Delusion Syndrome” or FDS. You see, as a shop owner that actually sells PCs, and after talking to both other small shop owners as well as regional managers of large shops like Best Buy and Walmart, we have come to the conclusion that NOBODY WANTS LINUX… full stop.
Oh, we’ve all tried selling it; after all, in the cutthroat world of PC sales anything that can lower cost means you can undercut the other guy and maybe gain some share. But a funny thing, that: we found that with Linux the price actually WENT UP. Why? Because we all were looking at about an 80% return rate, as well as the increased after-sale support costs eating up our profits.
Now people with FDS tend to get a little… well, pissy, and start screaming SHILL! if you break their little fantasy world view. They truly believe that if Linux was just on shelves people would run away from the bad old “M$” and it would all be hearts and kittens and RMS dancing in a field of posies. They seem to forget the world is NOT full of IT nerds and is in fact full of people that just want their software and hardware to work.
Working retail I’ve actually gone around to the local shops with a little pen and done the math, and you are looking at MAYBE 35% on a given day of the devices being sold at B&M stores actually working in Linux. That new USB AOL printer? Nope. That iPod granny gave you for Xmas? Not a chance. And of the current software on sale about 1 in 100 might work in wine if you fiddled enough.
So I’m sorry you got called a shill Mr Holwerda, but take heart, those of us in retail get it all the time from those with FDS. They just won’t accept folks aren’t willing to throw away the time and money invested in say Quickbooks or PaintShop Pro for some mess like the Gimp or GNUCash, and they sure aren’t gonna toss their nice Apple PMP because the new ones won’t work without iTunes.
They can scream about us retailers not offering FOSS OSes, but we ALL have tried it at one point and found them MORE expensive to deal with than Windows. Remember that by law anything returned has to be sold as used, often at a loss when it comes to PCs, so it really doesn’t take many returns for Linux to become a money pit. BTW, on my worst week I think I saw MAYBE 5% returns on Windows, and a good 50% of those were just clueless users I was able to teach and convince to keep the machine. BIG difference between 2.5% and 80%.
lol @ suggesting that the open source replacements for things like Photoshop and Office are anything more than laughable, unstable, ugly toys.
http://krita.org/features
http://krita.org/screenshots
Stable, functional and certainly not ugly.
http://www.digikam.org/drupal/about/overview
http://www.digikam.org/drupal/about/features9x
Stable, functional and certainly not ugly.
(Between them, these two applications have Photoshop covered).
http://arstechnica.com/open-source/reviews/2010/02/hands-on-new-sin…
A bit more old-school, but very functional and the new Photoshop-like interface is just about to be released as a stable version.
http://www.documentfoundation.org/download/
Nothing whatsoever wrong with this for over 90% of use cases. Very powerful, and it does cross-platform interoperability way, way better than MS Office does.
Here is another up-and-coming contender:
http://www.calligra-suite.org/
(this one does miss out on stability, however, I grant you. Perhaps it is best to wait for the 2.3 version which is currently a release candidate to become a final release).
http://www.calligra-suite.org/words/
(however there is nothing ugly about it, and its UI is ribbon-like and makes great use of widescreen monitors)
Anyway, LOL at your unsupported criticism being so 2005-ish. Welcome to 2010, quite a lot has changed since you have been away.
I even work for a place that’s open source friendly and if I suggested that they switch from MS Office to OpenOffice or Calligra, I’d be laughed out of the room.
While it may be absolutely 100% accurate, this point (anecdote actually, but let's not quibble) does absolutely nothing to counter the observation that for the vast majority of users, and the way that they use an Office suite, Calligra Office would be perfectly functional, performant and attractive.
No no and no. I regularly use MS Office and I gave Calligra a try on Linux the other day and I was thoroughly disappointed. It may have a few checkbox features that are competitive with MS Office, but it is so lacking in polish and detail and all those extra little features that make MS Office so powerful that it’s barely worth a second look. I know for a fact I couldn’t do the work I do with Office using Calligra. I know for a really good fact that the folks I work with who do financial stuff would find Calligra’s idea of a spreadsheet program to be nearly worthless for what they do. MS probably has more people working full-time writing the (shitty) documentation for MS Office than Calligra/KOffice has ever had working full-time or equivalent. I’d rather use a suite backed by a company that has people who can actually fix bugs and deliver useful features. You can talk shit about MS Office all you want, but it is miles above the competition. Miles. I still hate it, but I’d rather use it than KOffice or OpenOffice for anything other than the basics.
Fair enough. LibreOffice does have a lot more polish than Calligra. And MS Office is indeed a very good application suite with only a few major weaknesses … MS Office absolutely sucks at interoperability, it is expensive, it has an "upgrade treadmill", and it has a sole-source supplier.
Horses for courses … but the point remains that there are indeed perfectly good FOSS alternatives to an Office suite for 90% of users and use cases. The vast majority of users of an office suite would never run into any problem at all and would be perfectly able to function with Calligra Office or LibreOffice.
Sure there is the security benefit but it comes with a massive trade-off. Not only is there a loss in hardware/software compatibility but learning a new system is considered a cost to ordinary consumers.
Currently the best way for Linux to achieve any mainstream adoption is to not compete with Windows or OSX directly and instead be sold on ultra-portables, where consumers are not expecting a full system. Windows 7 isn't expensive enough for consumers to consider a Linux alternative on a desktop or laptop. Linux had a chance to gain some traction when MS botched the Vista release, but it wasn't ready either, and you'll just have to live with the results.
There’s an idea! Have Linux be something good on its own instead of trying to beat Microsoft at its own game. Glad to see some people are finally realizing this. Unfortunately, we have a pile of developers who would like to make what amounts to FreeWindows with a Unix command line.
BTW Thom, watch the video:
http://www.phoronix.com/scan.php?page=news_item&px=ODkzOQ
It works fine, Thom. Don't try to pull the wool over your readers' eyes.
More video of another game here:
http://www.zeroballistics.com/videos.php?um=2&lm=2&PHPSESSID=998u5t…
Edited 2010-12-20 10:34 UTC
*sigh*.
Sure, it works. But it's miles behind Aero. An application crash can bring down the entire graphics stack, causing you to lose ALL your data. It's layer upon layer upon layer of crap – heck, even Red Hat's X.org team has admitted as much, and yet here you are, claiming to know better than the very people actually working on X?
On Windows, I can install graphics drivers without the need for a restart. Most graphics driver crashes are gracefully recovered from (all you see is a screen flicker). It has graceful degradation when it encounters an old application that is incompatible with the new stack.
X.org developers themselves have had it up to HERE with X, and are now beginning to plan the move away from it, for the very same reasons I've been saying for years. Yet, here you are, claiming that X.org is just fine?
No, I agree with you, there is considerable scope for improvement. Perhaps that improvement is coming with Gallium3D drivers and the Wayland compositor/display server, and better middle layers such as newer accelerated versions of Cairo and Qt, but nevertheless … it works right now.
However, by no means does this mean that the OS is "a toy". It is the dominant OS in most spheres of IT, and it does work and is perfectly usable right now on the desktop for the significant majority of use cases, without a single hint of trouble, and with uptimes as long as you like.
Why pretend otherwise?
10 EUR? Are you serious? If so, can someone point me to where I can get that? I’m currently in the process of choosing the components for a desktop system I will build so that cheap license would be welcome.
Become an OEM, ship millions of systems, and then, yes, a Windows license will only set you back about 10 EUR a pop.
Edited 2010-12-20 15:34 UTC
Too bad! I’m feeling stupid now as I really thought that license price would be available to me.
Now, that’s where you’re wrong. Consumers have proven that they will systematically choose the more expensive, less functional, malware-ridden system. We’re just not smart enough. That’s why we need to force open-source. Just to make sure, for the greater good.
So what? They have stopped a long list of drive-by and email attacks. That recent drive-by Firefox exploit was using JavaScript to filter out W7 and Vista users since it would have triggered UAC. Downplay those technologies all you want, but they have a clear record of being effective at improving security.
Agreed. The technologies you mention do indeed improve security a great deal on Windows systems. Without them, Windows systems could easily be compromised through casual Internet use no matter what the end user did. With them, the only still-common means which remains is for the user to be persuaded to download and install a trojan horse.
Nevertheless, even with these technologies in place, trojan horse attacks of Windows systems are still common. If a user has up-to-date anti-malware installed, such as MSE, then there is still a small failure rate for detection of malware within the trojan horse … for MSE this failure rate is about 1.6%.
The only known, effective way to overcome this remaining security hole is to require that the source code of applications is visible to anyone and everyone. Some applications of course don't need that (they can be trusted without the source code being visible), but other applications cannot be trusted, and they contain deliberate code intended to compromise the end user's system.
In the Windows ecosystem, with closed-source applications being the norm, there is no way for ordinary users to tell the difference. As a consequence, an estimated 50% of ordinary users' Windows systems are compromised. That is massive.
The article’s about MSE, the debates about XP vs Windows 7. Seems that XP has still many fans over Windows 7 and a lot of people are going to be upset when it is no longer supported (and can’t be activated?(interpret as a question)).
Personally I don’t think XP is really fit for purpose horribly insecure etc, and i don’t think MSE can patch that. Windows 7 doesn’t seem to me to be universally loved (I know Thom rates it, but why?). I find it a vast improvement of Vista, but thats about as nice to use as a dose of the pox. I find win 7 a bit slow and clunky, wireless networking is fairly horrid as is printing, some driver issues (I have some older hardware) and some compatibility issues with older programs. Windows 7 is OK but not great I can see why some folk want to stay with XP – but their probably wrong to.
After using Windows 7 at home and at my old job for quite a while … going back to XP in my current job feels like I've gone back to using CDE in the University Sun Ray Lab.
There are lots of nice little UI improvements (Windows key + left/right arrow I couldn’t live without).
I don’t know why everyone seems to hate 7’s Wireless networking, works a lot better with 7 than it ever did in XP, but then again I got a Centrino Laptop and so everything on here is intel (including wireless), I have had problems with cheap wireless dongles, but tbh that is to be expected.
“it didn’t always rank among the very top” ?
Nah, it has always ranked in last place!
http://www.chip.de/artikel/Microsoft-Security-Essentials-Test_43525… Last place on an actual test in June 2010. I don’t think that it’s far better with this new version.
If you care about detection rate and speed you won’t trust MSE.
I can’t read German but the article you sited seems to go counter to this,
http://www.theregister.co.uk/2009/10/01/ms_security_essentials_revi…
“MSSE was able to detect 536,535 samples what’s a very good detection score of 98.44 per cent.”
“In case of the ad-/spyware testset, MSSE detected 12,935 out of 14,222 samples what’s a detection score of 90.95 per cent.”
However the article does say that it falls short in dynamic detection
“We have then tested the dynamic (behavior-based) detection with a few recently released malware samples which are not yet detected by heuristics, signatures or the “in the cloud” features. We found no effective “dynamic detection” features in place. None of the samples were detected based on their (suspicious) behavior. However, other AV-only offerings doesn’t include dynamic detection features either, in most cases they are only available in the Internet Security Suites editions of the products.”
However, I think the important bit is at the end of that quote: it falls short in this area if you compare it to the fully paid versions of other products, as opposed to free ones.
That is pretty good, considering that two million new pieces of malware for Windows have emerged in just the last 12 months. If a Windows system with MSE encounters a threat on average once every three days, then with a little bit of luck it could last up to a year perhaps before it fell to a threat that got past MSE.
Oh dear.
Yes, and how much of that malware targets out-of-date systems, or use social engineering?
Most of it would target out-of-date systems, or use social engineering. The idea of anti-virus, BTW, is to detect malware once it is on a system, however it got on to the system. Once malware is on the system, anti-virus updates after-the-fact probably won't work. The only chance is if the anti-malware system detects the malware prior to or on installation, or on first access. After that, most likely, game over.
<sarcasm>BTW, the Windows systems are still compromised, even if the users did commit the horrendous errors of being out-of-date or of installing something that looked useful but which they had no chance of vetting.
Compromised systems are almost always due to evil end users.
Apparently, current estimates put it at about 50% of Windows end users who are evil in this way. Shocking.
</sarcasm>
Edited 2010-12-18 13:27 UTC
“The idea of anti-virus, BTW, is to detect malware once it is on a system, however it got on to the system.”
Sorry but that’s not correct. I don’t know about other products, but Antivir warns before a suspicious js file is loaded by the browser; it warns before a suspicious file is written to the HDD. So no, it’s not always medicine after death.
Oh here we go, Anti-Windows again. You are like a broken record.
Firstly,
Where is this “encounters a threat on average once every three days” you pulled out of thin air?!
Otherwise your claim of “it could last up to a year perhaps before it fell to a threat that got past MSE.” is total rubbish.
Secondly,
Common sense is more important than running Anti-Virus software. Anti-Virus will always be “last line of defense”.
Actually, what would be most useful would be visibility of the source code. People who could understand source code, and who did not write the software, can get to see what is in the software source code.
That alone would eliminate the vast majority of malware.
I made no claim, I made a calculation:
MSE detection rate: 98.44 per cent.
If a Windows system with MSE encounters a threat on average once every three days, then with a little bit of luck it could last up to a year perhaps before it fell to a threat that got past MSE.
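For what it's worth, here is the arithmetic behind that sentence spelled out as a small Python sketch. It is purely illustrative: the 98.44% figure is the detection score quoted above, and the one-threat-every-three-days encounter rate is my own hypothetical assumption, not a measured fact.

    # Hypothetical survival calculation, not a measurement.
    detection_rate = 0.9844            # per-encounter detection score quoted above
    encounters_per_year = 365 // 3     # assumed: one threat every three days (~121/year)

    # Chance that every single encounter in a year is caught by MSE:
    survive_year = detection_rate ** encounters_per_year
    print("Chance of a clean year: {:.1%}".format(survive_year))
    # Prints roughly 15%, i.e. with "a little bit of luck" a system might last
    # a year at that encounter rate, but most such systems would not.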
Now, how often a given Windows system encounters a threat is entirely debatable.
However, I have seen a number of Windows systems used frequently (perhaps daily) as Internet clients which have had MSE installed, and which have gone down to malware in less than a year.
This is just an anecdote (just as my calculation is just a calculation), but there it is.
Edited 2010-12-18 13:34 UTC
Is there any evidence of this?
Development practices are far more important than whether someone can or cannot see the code. Being able to freely view the code does not make everyone instantly magically understand it, nor does it make it magically more secure.
You stated it as if it were a fact … which now you are backtracking and saying it wasn’t.
You have no hard numbers; your calculation means absolutely nothing, as I stated before.
Which means nothing, since they are anecdotes. Go and push your agenda another time please.
Sure there is. Wherever there is solid integrity between an open source project's development environment (where anyone and everyone can see the source code) and the eventual installation on an end user's machine, there has never been a single solitary case of malware getting on to the end user's machine. It is simply too hard to hide something malicious.
Hardly. There are malware authors whose active, prime intent is to write malware. The only way that this can happen is that the end user victims (and everyone else for that matter) are not allowed to see the source code. It doesn’t matter one whit what “development practices” the malware author employs, he or she is still writing malware. In point of fact, distributing that malware as closed source is just about the ONLY requirement of the development practices the malware author should use.
Not everyone has to understand it, just one person somewhere (who did not write the code) has to understand it. Visibility of the source code doesn’t make code more secure, but it does make sure it is honest code, and that there is no deliberate malicious intent written in to it.
I did no such thing. I neither stated it was a fact (in fact, the very first word I wrote was IF), nor do I backtrack from anything. My original calculation and the sentence in which I expressed it still stands, unmodified.
It isn’t my problem if you are unable to read and understand what was written.
It means exactly what it said. A system with a 98.4% success rate is also a system with a 1.6% failure rate. My calculation just put that into perspective, and serves only to illustrate what it could possibly mean when the system was the sole anti-malware provision on an ordinary person’s Windows machine.
You do what you want, and I will continue to do what I want (which is to speak the truth in the interests of ordinary people), and we should get on just fine.
Edited 2010-12-19 08:58 UTC
Again no evidence. You state it as fact and there isn't a single shred of proof. Again you are mixing up "malware being shipped" with malware getting onto a system. They are different; stop pretending they are the same.
Malware != Closed Source Application. They are not the same. What are you on about??
Being closed source does not make the code automatically evil. It just means that developers have put a considerable amount of work into something and wish to keep their code theirs, since they want to make money from it, which is how the barter system has worked since the dawn of humanity.
Also as I point out time and time again, being able to understand how a large application works is usually impossible for an outsider (of the original development team), unless they invest quite a lot of time into it (few are willing to do this).
Honest Code != Secure Code.
Your calculation is still rubbish, since you made up the frequency of threats in a particular time period.
I understood the intent of the post; whether or not you put an "IF" in front doesn't matter, it is a minor point. The tone of a post is almost as important as its content.
It doesn't illustrate anything, since you pulled the "threat every three days" frequency out of thin air and still haven't provided any evidence.
In reality the number of threats someone encounters is totally dependent on their browsing and downloading habits, and will vary considerably from person to person.
You keep on pushing that Open source is the only way and closed source is automatically evil (it can be). This simply isn’t true.
Different developers and companies have different business models and having open code may or may not be the right choice for them, it depends on the business and their revenue model.
It works for Red Hat, sure, but it wouldn't work for other businesses like my previous employer.
Edited 2010-12-19 18:43 UTC
No, of course not, that is not the point.
The point I am making is that being evil, malicious code requires that it is closed source.
That is a world of difference. It is like the difference between the statements: “all fish live in the water“, and “everything that lives in the water is a fish“. These two are not the same statement at all, and one statement is essentially true, whereas the other is complete nonsense.
I am NOT saying that being closed source is automatically evil (that is purely your invention, and it is a strawman argument).
What I am saying is that being evil malware means that it is necessarily closed source. That is my position, and in order to knock down my argument, that is the point which you actually have to tackle.
Edited 2010-12-19 23:14 UTC
So??
Doesn’t mean that because “evil code” can be distributed if the license is closed source, doesn’t mean that all source code should be open.
A few questions right back at you
1. If there was available to you an open source application for a download manager, such as
http://fatrat.dolezel.info/
http://aria2.sourceforge.net/
… and a closed source one in which malware might be hidden
http://www.bearshare.com/free-music-program.html
http://www.emsisoft.com/en/malware/Adware.Win32.BearShare_Download_…
… which would you choose for yourself?
2. Did you realise that Microsoft shows its code to some parties in order to dispel any question that the code might contain something malicious?
http://www.microsoft.com/resources/sharedsource/gsp.mspx
http://www.lockergnome.com/theoracle/2010/07/09/windows-source-code…
http://vista.blorge.com/2010/12/06/chinese-government-may-have-acce…
So I put to you the opposite question:
3. Why shouldn’t source code be made available for at least some independant parties to examine?
Having some independant parties examine the source code of applications is the best means available (and the only one so far known to work) to ensure the honest intent behind the software is not malicious. Why shouldn’t it be mandatory?
4. If other industries have examiners to ensure public health and safety, why shouldn’t something similar be the case for the software industry?
http://www.fda.gov/
5. If some industries have independent audit bodies to give customers/consumers assurances of fairness, why shouldn't the software industry?
http://www.kpmg.com/AU/en/Pages/default.aspx
Edited 2010-12-20 09:26 UTC
1) Whichever is good enough to suit my needs. If the free one suits my needs I will use that. If the free one isn't good enough, I will look at paying for the alternative (assuming it is better/suits my needs). If I think it is too expensive or I can't afford it, I will probably look into making do without.
Same with my development environment, Visual Studio is the best for my needs, and I am willing to pay for it. If it was too expensive, I would consider using something else such as Eclipse.
2) No I wasn’t, and all that proves is that they have that arrangement with some particular customers e.g. a Government, because they have some specific requirements that are completely for their workstations and servers than I do.
3, 4 & 5 Because unless a software system is doing real time computer e.g. controlling an plane, nuclear reactor etc. It isn’t a matter of public health or safety.
These systems do not run a desktop operating system, so the point is completely moot.
It isn’t a matter of life or death if my excel spreadsheet get infected or Outlook crashes when syncing with the exchange server.
As they aren’t in the same realm, they hardly need to be treated the same as a car, electrical appliance or gas boiler.
You might choose BearShare? Seriously? Are you nuts?
Are you aware that BearShare is a malware vehicle?
http://www.emsisoft.com/en/malware/Adware.Win32.BearShare_Download_…
Is it really so impossible for you to admit that there is merit in open source software? To admit that there is merit in transparency, and in people being allowed to know what is in products they are using?
Sorry, but with the rest of your post you are starting to get really silly, now. Give it up as one of life’s little lessons … just because you thought another person was wrong doesn’t mean that they were wrong.
Edited 2010-12-20 11:19 UTC
I didn’t really look at them, I just told you the reasoning I would use the one that is the best … it a download manager, which I don’t really use since we have 20mb/s broadband here. However I would at least do a google whether it was a known malware carrier.
No, I use quite a lot of open source software all the time. WinMerge, Paint.NET, Tortoise SVN, Vim, WinSCP etc etc.
However I don’t believe it the be all and the end all, unlike you.
How are they silly?
The requirements for a critical system are completely different to those of a desktop OS, and the development process is fundamentally different; they cannot be considered the same.
Apparently you are the one who does not understand what “requirements elicitation” and “specification” mean.
I know when I am wrong and in this case I am not. You are the open source zealot not I.
What on earth are you on about?
If software is developed to a purchaser’s specification, then the purchaser has paid for the development effort, and hence gets the source code. They paid for it, they get it.
Sheesh!
What is so precious to your mind about source code? Why shouldn't people (users) be able to look at it? What is there to hide if the developer is honestly trying to make a product that is good for the user? If other mundane intangible products (such as monetary transactions) are subject to independent quality inspections and audits, why shouldn't software also be? What possible reason is there for an exception (especially considering that the software industry is rife with malware of all kinds)?
What on earth is wrong with you? Swallowed too much Kool-Aid, perhaps?
Edited 2010-12-20 11:54 UTC
Depends ultimately on what the agreement is. When I buy a license for Windows I don't expect to have the source code as well. I don't have that agreement with Microsoft; I expect I would have to pay a lot more money.
However, as your link pointed out, governments may well have that agreement.
There is a difference between open sourcing and being able to look at the code.
Again it depends on the agreement. For bespoke software projects, I would let someone look at and even have the source (however they would not be allowed to give it to someone else).
If it was a more generalised piece of software (text editor, operating system, web browser), only I would be able to look at the source.
If an electrical item goes wrong it can burn down a house and kill everyone in it. If my gas boiler is faulty it can produce carbon monoxide that could kill me. If my desktop operating system crashes, I lose some work and maybe my emails don't get sent. Surely you can see the difference?
I ask the same of you.
When you buy a license for Windows, you haven’t written the specification for it nor have you let a contract to Microsoft to write it for you.
BTW, just saying “I want Windows” is not specifying anything. That is merely placing an order for an off-the-shelf product. That is entirely different to specifying software.
Writing a specification for software is making a document which says:
the software must do this, and
the software must do that, and
the software must also be like so …
and so on. Then you pay someone to develop it for you. They do so, then test it to your satisfaction, then they give you the source code to complete the contract.
That is specifying software.
If they are paying you for your time to develop bespoke software they have specified, the software is not actually yours, its theirs. You don’t get to say what they may or may not do with it.
Here is the law that applies:
http://en.wikipedia.org/wiki/Work_for_hire
Sorry … not if you are paid to develop that particular text editor. The source code is not yours, it belongs to whomever is paying for your time.
Doesn’t apply if you are doing work for hire.
If you are writing your own application with your own funds, you can of course do whatever you want with the source code … in this situation it is your code. You cannot simply expect me however to implicitly trust you if you are not willing to let anyone else at all see your source code. Why should I trust you? You might be a malware author out to hurt my interests, for all I know. You are especially suspicious since you won’t let anyone at all see your code.
Doesn’t answer my point at all. Intangible products, such as monetary transactions, presumably won’t kill anyone either … yet they are subject to scrutiny and quality audits.
here is a firm whose very business is mostly financial audit:
http://en.wikipedia.org/wiki/Kpmg
So, if intangible products such as financial transaction work are subject to quality audit, why shouldn’t work such as writing software also be subject to such audits? This is the question. This is an especially pertinent question because the entire industry of writing software is clearly full of malware and other types of fraud, mistrust and plain ordinary rip-off.
Where intangible products such as financial transactions were full of fraud, the solution was to make the transactions transparent and auditable. The solution even has a name … it is called bookkeeping. Auditors (such as KPMG) get to inspect the books.
So, once again I ask you … why shouldn’t transparency apply to the writing of software? Without transparency, it is clearly not possible for me as a customer to trust you as an author. Transparency would benefit us both. So, where is the problem with it? Why shouldn’t it apply to software in some form?
Edited 2010-12-20 13:09 UTC
OK, what does your doctor use to store your medical records? In a lot of places it's Windows, and those machines have MS Office and other programs on them.
So yes, failures of Windows can kill people. Just as effectively as carbon monoxide or being electrocuted.
Really, what is wrong with you, not wanting this stuff to have some level of required quality? Yes, if you want to place a computer in an aircraft where it can endanger lives, it has to be inspected heavily. Yet a system in a medical practitioner's office that can kill does not have to be. Or worse, in some countries, a system running a full hospital.
Yes, I can understand why a home machine may not be important. The problem is that OSes which have only had home-machine-level quality inspections are being used for possibly life-taking purposes without a second thought.
I am not saying it shouldn’t.
As I have been saying all along it depends entirely on the arrangement and the needs.
Lemur 2 is trying to compare the needs of say the US government with the needs of me (a web developer).
They are simply not comparable and this is why Microsoft will offer a different arrangement for the US Government than for me.
Also, the US Government is more likely running Windows in a completely different environment and configuration than I am (and I suspect a different version).
Again it depends on the need; a doctor in a hospital or clinic may be very different from, say, a GP.
When I was working on medical IT systems (a web-based patient booking system), the machines were ridiculously locked down (no internet access, set point versions of software, etc.) with a standard configuration.
All software we deployed went through at least 4 levels of user testing before deployment.
It may be Windows XP, Office and IE6, but everything is still extensively tested and locked down.
Actually, I am not talking about the needs of developers at all, I am talking about the needs of users.
Users need assurance that the code they are going to use has no malicious intent towards them. This applies just as directly if the user is the US government as it does if the user is Aunt Tilly. Each and every user has a right to expect quality in the products that they use, and that those same products contain nothing that is against their best interests.
If you can't see that, there is something wrong with you. If you can't see it from a quality and consumer's-right-to-know perspective (such as applies with food and the FDA), then look at it purely from a copyright perspective.
Copyright protects source code, just as it does the words in a book. An author of a book publishes the entirety of his/her work for all to read … it is effectively published in "source code" form.
The very fact that everyone can read it makes it harder for another party to steal. If everyone can read the text in every book, and another author tries to steal some part of another’s earlier work, then when the second author publishes they will get sued for copyright infringement.
So, once again, everyone is served by the transparency, and no-one except those with bad intent (malware authors, code stealers) is really served by secrecy.
Talk as much as you like.
I am a user as well a developer … my needs are different from other users but nonetheless I am still a user.
You do not have the same rights as anyone else to the source code because you use it. That is an agreement between you and the company that is providing the software, as I have previously stated again and again.
Governments are bigger customers of Microsoft than I am, and thus Microsoft will bend over backwards for them, whereas for me, who buys a single-user license … they won't. This is to be expected; it is called reality.
If they didn't hold up their end of the terms of the £100 I paid for a Windows 7 license … then I would be upset. I knew what I was getting into before I bought it.
The GPL, BSD and MIT licenses are still agreements between two parties. They are just different agreements.
Apparently, from your argument, because those licenses are open that kind of agreement is okay, but because other licenses are closed such an agreement is now not okay and everything should be open … talk about a double standard.
Apparently something is wrong with me because I don't agree with you and I consistently provide arguments which you neither like nor have a decent argument against. I am sure you will make up some criterion by which I am some sort of idiot … I really do not care at this point in the argument.
And how is the original author of the code supposed to make cash money (i.e. a living) if everyone uses his code … oh wait a sec he has the credit … doesn’t pay the rent.
If they release the code as is, someone will nick it and read it and reimplement it … so the original author of the code ends up having his ideas nicked and people profiting from his hard work…
What a massive incentive to reveal your code. NOT!
What on earth are you on about? You make no sense at all. I can’t figure out what you think my point is, but you aren’t even close to anything I said. I can’t see any point in trying to shake down your strawman arguments ranting against things I have never said.
OK, now for this argument, which is a new tack for you.
Point 1: Most programmers just write code and sell their labour for a living. Most programmers don’t actually sell code.
Point 2: Book authors publish their works for everyone to read, yet they can still sell books. In fact, if someone tries to literally copy their work, the original author has an open and shut claim of copyright infringement against the copier. This is just as true for published source code as it is for books.
Point 3: Why shouldn’t other parties be allowed to re-implement code functionality? This is allowed for books … anyone can write their own new stories about young male wizards, as long as they don’t literally copy word for word the text of Harry Potter books. If they don’t copy text directly, but compose a new story (even one with a similar theme), then the new authors have done their own work. What is wrong with that?
If the new story is better, why should people be forever stuck with just the old story? If the original story is better, then the new story won’t impact the sales of the old story, if anything it might even enhance/renew public interest in the old story.
Why should it be different with software?
Point 4: There are plenty of businesses with open source software business models that work just fine. Here are a few examples:
Google releases Android 2.3 “Gingerbread” source code
http://www.h-online.com/open/news/item/Google-releases-Android-2-3-…
Red Hat to Become First Ever Billion Dollar Open Source Company in 2011
http://www.techdrivein.com/2010/12/red-hat-to-become-first-ever-bil…
Open source: IBM’s deadly weapon
http://www.zdnet.com/news/open-source-ibms-deadly-weapon/296366
AMD steps in to help Intel and Nokia with Meego
http://www.theinquirer.net/inquirer/news/1898111/amd-steps-help-int…
Edited 2010-12-21 01:54 UTC
I directly countered your points. Don't pretend not to understand. It is tiring and quite sad.
1) But some of them do.
2 & 3) Software isn’t a book. Software is an engineered system (not necessarily engineered well). You cannot make the comparison you are making. It is invalid.
4) For their business model it suits them, doesn’t mean it is right for everyone.
All those companies are also big players. They are in an entirely different situation than a 5 or 6 person software house.
Edited 2010-12-21 11:29 UTC
No, you did not. You went off on some lunatic tangent that had absolutely nothing to do with anything I said.
So? Some of them also sell open source code!
How does throwing in a single word “engineered” make any difference? You think it is easy to write a book or compose a song, that anyone can do it? You think that software engineering is a sacrosanct activity in some way just because it can be labelled as engineering? I happen to be an engineer myself, and I can’t see how what I do is any more deserving than what someone else (with different skillsets to mine) does.
Malware isn’t right for almost everybody. I can’t see why everybody must put up with malware simply for the sake of a wrong-headed buisness model (closed source software).
A 5 or 6 person software house probably writes bespoke software anyway, in which it is obliged to give the source code over to its customer. Where is the problem?
Edited 2010-12-22 09:54 UTC
Simple fact: for an aircraft control system, not being able to inspect the OS core for defects is not tolerable. So Windows is not used on them.
It's nice to say everything else is extensively tested. It's like saying "let's build a house on quicksand": we tested everything else about the house, and we wonder why it sank and had problems.
Locking to particular versions means fixes cannot get into the system effectively, either.
Four levels of testing. What was missing was a complete software audit at source level using automated tools, which can find leaks and other one-in-a-million errors that the levels of testing you are talking about would miss and that could result in deaths.
It does not matter how many levels of testing you claim to have done. If you have not done a full source-level audit, you have not done a proper job on something that is life-critical.
Yes, part of those insane lockdown steps comes from not being able to audit properly at source level, too.
The problem here is that the need to make properly secure and tested systems does not change between the US government and a single web developer working on a hospital system. Both are areas where a flaw could kill someone. Both are areas where the full source code should be on hand to allow code paths to be checked.
But people like you, Lucas_Maximus, try to claim that by doing enough testing without the source you are going to do a job equal to having the source. Simple fact: you cannot. Simple question: what price is a human life? In your eyes, apparently low enough that you will not require the full source code to audit correctly, so flaws will be missed that would have been detectable with the full source.
Actually, what would be most useful would be visibility of the source code. People who could understand source code, and who did not write the software, can get to see what is in the software source code.
That alone would eliminate the vast majority of malware.
I really doubt that. For example, I do programming every now and then, and even though the source code to most of my favorite programs is available I never actually go through the code, and even if I did I still wouldn't understand it. For example, to check a messenger client's code you'd actually have to understand the whole freaking API; just looking at the code isn't going to do any good.
It simply doesn’t work like “you have source-code therefore the application is secure!” There can for example be a deliberately made vulnerability that needs very specific methods to take advantage of. That would be impossible to find for someone who doesn’t know the API inside-out.
Having the source code goes a long way. Also, in part, checking the code does not require knowledge of what the code is planning to do. See http://en.wikipedia.org/wiki/List_of_tools_for_static_code_analysis and https://developer.mozilla.org/en/Dehydra.
Now, due to the improvements in static code analysis, there is really no excuse for any program to have a buffer overflow defect. Yet many closed source and open source programs still do. Even more complex defects are now findable.
Open source allows you to go back and audit what you are using with automated tools, if you are worried about security. With closed source you have to depend on the maker of the program doing the right thing.
Even without the source, attackers can still, in time, locate the flaws anyhow.
Something else: some of the coded-in privilege breaches hidden away in APIs are dug out by modern-day static analysis tools. So the claim that it is impossible to find them without knowing the API inside out is false in some cases today. In the future it will most likely become more false.
One of the things that digs out some of the backdoors that static analysis misses is API usage monitoring looking for cold paths. This is only effective if you have the source code.
The question is, with static analysis and cold-path monitoring (both of which really need source code to perform well), will backdoors avoid showing themselves as a possible problem? I.e. cold paths are either code bloat or a security issue; either way they are something that should be investigated, since both can be a sign that the program is poorly coded.
Open source allows you to do rough quality assessments using third-party tools. Closed source kinda does not.
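To make that concrete, here is a toy Python sketch of the kind of source-level check I mean: it flags functions that are defined in a file but never referenced anywhere in it, a very crude stand-in for the "cold path" idea above. It is nowhere near a real static analyser (the tools linked above do vastly more); it just shows that this whole class of audit cannot be run at all unless the source code is in front of you.

    # Toy "cold path" check: list functions defined but never referenced.
    # Crude illustration only; real static analysers are far more thorough.
    import ast
    import sys

    def unreferenced_functions(path):
        tree = ast.parse(open(path, encoding="utf-8").read())
        defined = {n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}
        used = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
        used |= {n.attr for n in ast.walk(tree) if isinstance(n, ast.Attribute)}
        return sorted(defined - used)

    if __name__ == "__main__":
        for name in unreferenced_functions(sys.argv[1]):
            print("never referenced:", name)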
Edited 2010-12-19 01:37 UTC
Having well-written documentation is far better. Being able to view the code != documentation, and never will be.
That is my main point. Being able to view the code is all good and well. However, having adequate documentation is far more useful. I don't have to go look through code trying to work out what it does; I have a document telling me what it is supposed to do.
If it does not work as the documentation says it does, I know there is something wrong. Looking at the code, this is not immediately obvious.
Sigh!
Imagine a malware author writing a trojan horse application. He/she may choose an easy method and merely tack his/her malicious code onto a perfectly legitimate existing application, or he/she may write a new application (such as a download manager, or similar functionality missing from Windows out of the box) from scratch.
The very best approach is for this malware author to write a perfectly good, attractive application, which makes every appearance of doing what its well-written documentation says it does. To be attractive, it should be free to download, have no ads or nags, and it should work well.
In addition, however, the malware author has tacked on to the installer program a separate key-logger or a rootkit.
No amount of good documentation of the download manager program is going to defend you against the keylogger or the rootkit.
Edited 2010-12-19 22:46 UTC
Sigh, let's make up imaginary scenarios. An application cannot be legitimate if it has malware bundled with it. If it is legitimate software, it does not have malware bundled.
So your imaginary scenario has a fundamental problem.
I said:
What? Boy, are you ever confused.
There are millions of examples of malware peddlers taking a legitimate application and writing a new installer bundle for it which has malware included.
Here is an easy way to find such a set of examples:
Try Google for “crack” “key” and “excel” as keywords.
http://www.google.com/search?q=crack+key+excel
I got “About 1,460,000 results“.
Not bad, hey!
You can use just about any game, or really any legitimate Windows application of any kind, and easily come up with nearly as many hits. Malware fun for all!
Strangely enough, it even works for “OpenOffice”!
http://www.google.com/search?q=crack+key+openoffice
How crazy is that?
PS: please everybody … don’t click on any of the search result links from the above searches if you are using Windows. Be warned, OK?
Edited 2010-12-20 09:48 UTC
Oh come on, the Trojan/Crack is not the same as the application is it?
Most users would not be aware of any difference. Give them a link to “download excel for free” and they think … you beauty! A lot of trojan malware offerings look like very legitimate websites … how can a user tell?
The only thing that can save them after that is if their installed anti-malware does detect the malware in the trojan package after they have downloaded it and tried to install it.
And remember, anti-malware typically has a 1.5% failure rate, or worse.
Edited 2010-12-20 11:08 UTC
Still isn’t a problem of a legitiment program is it?
No, it is a problem for users to be able to tell if a given download is legitimate. It might look and behave entirely like a legitimate program to a user, even though it has actually been tampered with.
That, after all, is the entire point of malware.
The whole trojan horse scenario comes about because users routinely expect that no-one but the author is allowed to know what is in a software package for download. There is no system in place to allow it to be audited by the end user.
So, what is your point? Are you trying to say that there actually is legitimate closed-source software? of course there is. No argument.
But there is also a vast array of trojan malware disguised as legitimate closed-source software. The end user has no means to tell them apart.
Why is this even a question? What on earth is your point?
:facepalm:
Okay, so the end user is so dumb they will download and install everything, but will be clever enough to check whether the software has been audited?
Edited 2010-12-20 12:43 UTC
I totally agree and I would have upped you if I hadn’t already posted.
The thing is, it doesn’t have to be you, yourself who goes through the code. All it takes is that there are people somewhere who could understand source code, and who did not write the software, can get to see what is in the software source code.
So imagine that was the global “paradigm”. We could only have code where anyone and everyone could read the source code … everything else was somehow the same as today apart from just that one thing.
In that imaginary world, it would be many, many times more difficult to write code with malware in it.
Your system would be far, far less threatened by malware, even if you never in your life read a single, solitary line of code yourself.
Uh huh…explain the SIX YEAR OLD X server bug that was passed through nearly a dozen kernel revs with NOBODY noticing it? Not even once? Your ENTIRE premise hinges on a logical fallacy, the “if the code is out there SOMEBODY must be checking it” which the X server bug drove a stake right through the heart of.
The simple fact is all OS-level code is COMPLEX… full stop. Even the guys who work on it every single day can't figure out the complex interactions between their code and the rest of the OS, which is why we have patches. Pretending that “ohhh source code is magic protection!” is just a complete fallacy. There is good and bad in BOTH.
Everyone being able to see source code doesn’t ensure that obscure bugs are spotted. It would be good if it were so, but it just isn’t. There are some bugs that are very obscure.
However, what transparency of the source code does ensure is that there aren’t included a couple of hundred lines which have malicious functions such as “launch a daemon to capture all keystrokes and send a copy of them to denizen.hackersIP.net”.
Please argue against the actual points being raised, and not some strawman points that you just make up only to tear down. Sorry, but strawman argument attacks don’t work.
Edited 2010-12-20 22:04 UTC
You missed the critical part. Six years is not the critical part here. In the last four years, new tech for auditing code has appeared. This is why the six-year-old bug and others were found. The oldest one found in the audit of Linux kernel code related to X11 went not 6 but 10 years without being noticed.
Please be aware the new code-auditing tech needs source code to test against. The Linux kernel has, on average, a very low defect rate versus the amount of code it contains. This keeps on improving, even with 10-year-old bugs being dug out.
Tech changes with time. What we could automatically audit for 4 years ago is quite primitive compared to what we can today. 10 years ago, what we could automatically audit for was stone-aged. We are really only starting to enter IT auditing's equivalent of the industrial revolution. So things will keep on improving.
Source code means the latest auditing tech can be used against it even if the original maker is no more. There are a lot of games and other things out there where the source code has been lost for good since the company died, made before the current advances in auditing tech. So they are also not fixable in an affordable way if an issue turns up.
The advantage of source code is not just security. It's also the means to correct a security flaw that you would otherwise not be able to.
Yes, that’s the ticket. Every computing platform should be so fragmented that the only way to spread malware is as source code with build instructions.
That approach works wonders for the FLOSS world!
WTF?
Making the source code visible does not prevent simultaneous distribution of binary executables for the purposes of installation on end users machines.
What in heavens name are you on about?
Despite your apparent attempt at sarcasm, if the source code is visible and it can be shown that the binary distribution packages can be made from that source code, then there can be no malware in the binary packages.
Indeed, exactly as you point out, this system does work very well for the FOSS world. It has a history of over ten years distributing thousands of software packages to millions of end users without any malware.
Edited 2010-12-21 05:48 UTC
WTF?
Making the source code visible does not prevent simultaneous distribution of binary executables for the purposes of installation on end users machines.
Ding-ding-ding-ding! Congratulations, you just inadvertently pointed out the MASSIVE flaw in your own half-baked “solution” to malware. There’s not a single reason for malware authors to distribute their source code, and every reason not to.
Right, brilliant plan. We’ll somehow convince all malware authors to publicly release their source code – and when they distribute binaries, we’ll just take their word that the binaries were built from that source with no modification. Because malware authors are widely known for being trustworthy.
Not that UNIX OSes have ever needed malware to have rampant security vulnerabilities, sendmail anyone? And where do you think rootkits originated?
Der. That is exactly the whole point, genius.
If a programmer IS prepared to show and/or distribute the source code, then that program is NOT going to be malware.
Where were you when they passed out the brains?
All we need to do is install only applications where the authors ARE prepared to show their source code, so that anyone and everyone CAN, if they have the skills and inclination, satisfy themselves that the source code that is posted makes the binary that is being distributed. It only takes one person somewhere to check, and it only takes one person to raise an alarm if the source code does not make the same binary as the one being distributed.
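As a very rough sketch of what that one person's check could look like (in Python, with hypothetical placeholder file names): rebuild the package yourself from the published source, then compare checksums with the binary being handed out. In practice this only works where builds are made reproducible, which takes deliberate effort, but the principle is exactly the one described above.

    # Hypothetical check: does the distributed binary match a rebuild from the
    # published source? File names are placeholders, not real packages.
    import hashlib

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    rebuilt = sha256_of("my-rebuild/downloader.bin")   # built locally from posted source
    shipped = sha256_of("downloads/downloader.bin")    # binary offered for download

    if rebuilt == shipped:
        print("OK: the distributed binary matches the published source")
    else:
        print("ALARM: the distributed binary was NOT built from the published source")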
Any program being distributed where this cannot be done is suspect, in that it COULD harbour malware. Don’t install such programs.
Having transparent source code does not guarantee that there is no security vulnerability, but it does guarantee that there is no malware.
Having closed source applications where the source code is visible only to the original author and nobody else does not guarantee that there is no security vulnerability and it also allows that there might be malware.
Which of these outcomes is better for users, do you imagine?
As closed source malware for UNIX.
Edited 2010-12-22 09:36 UTC
Users make mistakes; many know next to nothing about security, etc. It would be nice if they had common sense, but many don't. They need some help, maybe lots of help; that's the point of MSE, isn't it? And let's not absolve MS of all responsibility.
As for MSE: I just looked it up on http://www.virusbtn.com/. It has been tested 4 times and failed once to detect a virus found in the wild (Avast failed none of these tests); the best-performing AV has been tested 65 times and failed 3 times.
Edited 2010-12-18 13:55 UTC
“MSSE was able to detect 536,535 samples what’s a very good detection score of 98.44 per cent.”
> That is pretty good
98.44% SOUNDS like a lot, but it isn't very good if you compare it to the other tested FREE antivirus tools. In my (unfortunately German) linked article MSE comes to 96 percent.
avast 97.87%
avg 98.66%
avira 99.41%
panda cloud 99.88%
So in comparison MSE had the worst detection rate… besides the other bad points:
– no email scan
– bad on-demand & on-access speed
– problematic update interval
– mandatory registration with Microsoft's SpyNet community
I'm not anti-MS; I bought a couple of MS products, but MSE is lousy.
Sure, the others scored higher, but that may or may not be important depending on the sample. Does it include DOS boot sector viruses from 1991? Who gives a shit if MSE does not detect viruses that no longer exist in the wild and cannot even spread these days.
MSE is free, doesn’t nag or throw pop-ups all the time and is well integrated into windows.
No E-Mail scan? Mail is scanned by the server already.
…brings to the table.
I found 1.x to be sufficient and unobtrusive. Hopefully 2.0 comes with some interesting improvements.
One other comment: people talk about memory usage for operating systems… as if it being 2010 were a good excuse for an OS to require a ton of RAM. The only reason an OS would be chewing up a lot of RAM is because it is trying to be faster by caching data and/or apps/processes. The other reason is if it starts a myriad of processes. I *still* don't understand why operating systems with UIs like X-Windows (VMS, unices) or even built-in ones like BeOS, Mac OS 9, etc., got away with using minimal RAM, but that cannot be accomplished today. I guess I'd have to see the internals for all those OSes, but… it just seems like they aren't TRYING to make operating systems memory-efficient anymore.
Very simple. Optimising code is tedious and very difficult. Hence:
Open source: why optimise when I can dish out frivolous features which are much more fun to work on?
Proprietary: why optimise when it’s much easier to advertise with frivolous features?
Very simple, really.
The first step of optimizing is not to make things worse, and it’s not exactly tedious and difficult.
If OS manufacturers were simply not so quick about adding new features and took more time to design them, everything would magically run much faster.
But well, as you said, why should they do that when it’s easier to sell shiny features than to sell clean, reliable, and fast code ?