Despite all the anti-malware roadblocks built into Windows Vista, a senior Microsoft official is lowering security expectations, warning that viruses, password-stealing Trojans and rootkits will continue to thrive as malware authors adapt to the new operating system. "There is no guarantee that malware can't hijack the elevation process or compromise an elevated application," Mark Russinovich said after providing a blow-by-blow description of how UAC works in tandem with Internet Explorer's Protected Mode to limit the damage from malicious files. Even in a standard-user world, he stressed, malware can still read all the user's data, can still hide with user-mode rootkits, and can still control which applications (such as anti-virus scanners) the user can access.
Especially with Vista's UAC, since everybody is disabling it because it's more annoying than malware itself.
Please don't speak about something you've never tried!!!
UAC is the best Windows Vista feature and UAC is much better than Linux’s sudo
Do you actually KNOW what sudo is?
Please. If you have been reading around you will see that this is not the first time people in MS have backed down from UAC.
UAC is just like DRM, it just keeps the honest people honest. Anyone who wants to get around it can.
There has been more than one person in MS who has said UAC is not even a security feature??? LOL!
http://www.networkworld.com/news/2007/021407-microsoft-uac-not-a-se…
http://talkback.zdnet.com/5208-10533-0.html?forumID=1&threadID=3161…
MS fan boys fall for the slick marketing every time.
UAC is just like DRM, it just keeps the honest people honest. Anyone who wants to get around it can.
Actually, I’m pretty sure it keeps the honest people busy trying to get around the flaws in yet another buggy Microsoft implementation of a technology others might just be able to get right.
Please, list how UAC is buggy and poorly done.
Everyone says it, but you all look like a bunch of idiots, as you never say WHY.
Yes, bury my comment instead of pointing anything out.
Again, give an example of how UAC is buggy.
You don’t need to ask me again, since I hadn’t had a chance to reply, and no I didn’t vote your comment down.
Linux asks for a password – real security – whereas UAC just asks any tom, dick or harry who happens to be passing to “cancel or allow”.
3rd time now in this article alone (btw, you really show you don’t know what UAC actually is)…
UAC does require a password when running as a standard user.
UAC is nothing like sudo. Well, it'll ask you for a password like sudo will, but sudo is completely non-obtrusive. The only time it'll ask for a password is when you are actually trying to do some administrative things, and it will also keep it cached for five minutes (you can configure the time on that).
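For what it's worth, both the cache time and which commands need a password are just lines in /etc/sudoers (edited with visudo). A rough sketch, with the user name made up:

    # keep the password cached for 15 minutes instead of the default 5
    Defaults timestamp_timeout=15
    # let "alice" (hypothetical user) run package installs as root without retyping her password
    alice ALL=(root) NOPASSWD: /usr/bin/aptitude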
On the other hand, with UAC enabled, if you want to do something as simple as create a new folder outside of your home folder, you have to click continue on two dialog boxes and enter the administrator password twice! That is just overkill for everyone.
If you don't believe me, open up your Program Files directory, then right click and select "New Folder". It'll ask you to confirm, then ask for the Administrator password, then prompt you to rename the folder as always, but then it'll ask you to enter the Administrator password a SECOND time.
This is just plain overkill and is badly implemented.
Given the conflicting opinions I’ve seen, I suspect UAC’s main offense is that it exists.
Prior to Windows Vista, Windows never asked for permission. Now it’s asking, and I’m betting ANY sort of UAC would irritate people who were used to the old behavior.
Now, maybe they have taken it too far; maybe they haven’t. I haven’t had any first-hand experience with Vista yet so I can’t say for sure.
So the lesson is to continue to be cautious with what you do in life because the majority of the other people out there would love to take advantage of your mistakes; and a few of them know how to.
It's always an interesting trade-off with security: convenience versus privacy. The important question will be whether or not malicious programs need events that we can clearly blame on the users. For example, visiting "installviruses.com" is not an event we should blame on users; clicking "OK, install" at "installviruses.com" is.
You obviously want programs to be able to easily read your data, otherwise work could get too restrictive to be productive. But not every application: you don't want your browser accessing your drive except under explicit circumstances.
I still believe UAC may be one of the better things Microsoft has tried to do along the lines of Windows security, but I haven’t researched it (and may not) enough to know if they’ve really done a proper job of it.
One thing that seems to be universally missing in UAC systems is a useful method of telling the user exactly which application is requesting the authentication. OS X has an easy way to show it (which they don’t actually use) by checking the menu bar. Vista could color the windows associated with that process. Gnome could probably do something similar.
Please don't speak about something you don't know!!!!
Agreed… I just don't care for antivirus or antispyware software, etc. If you are using Windows, use a LOT of caution and you will be fine. If you are using OS X, use a LESSER amount of caution because of its Unix roots. On Linux, use a bit of caution or NONE and you would still be fine.
Been using the same install of XP on my laptop for about a year now… tweaked… nLited… services disabled… using IE 6 as a browser… no antispyware or antivirus. Whether you believe me or not I don't care, but zero issues. All I do is a CCleaner run once a month after Windows Update, a defrag at that time, and one reboot after an offline defrag. That's it… the rest of the time I am gaming, watching movies, doing my work, etc., no problems. XP and Vista CAN be made awesome for your own needs; you just need a bit of patience, a bit of care, a bit of knowledge, and a lack of desire to go to questionable websites for pr0n. If you do the above you will be fine.
I just wish there was some transparency to the install process.
I tried to install that 3D model tool from Google (forget the name) on my Mac, and rather than having Ye Olde “Drag n Drop” install, it wanted root access.
Now, of course, I have no idea why it wanted it, and I basically refused to continue the install. It made no sense to me why a 3D modeling program would need root, and I, unfortunately, place Google in the realm of many other vendors in terms of them installing crap on my system I don't want (no, I don't want Google Desktop or Google Toolbar or any other Google thing plunging its tentacles into the depths of my system).
Now, of course, if I had some reasonable explanation as to WHY they wanted root to install a 3D program, I may well have let it through.
But the problem with Windows has long been its requirement that almost everything needed admin to install (or even just to run), so as a user you pretty much had to blindly say "OK" and give every installer admin access (if you hadn't just given up and run in admin mode already anyway).
On the Mac, though, I'm suspicious of any program that needs root to be installed. It's a bad habit to get into, and Mac developers shouldn't "just do it" "just in case".
http://blogs.zdnet.com/security/?p=181
Maybe if the developers out there stopped using the 140,000 system calls which Microsoft has marked unsafe, Microsoft wouldn't need to keep them in there.
The problem sits squarely with those third-party vendors who refuse to get with the times, strip their code of unsafe calls, and start working with Microsoft to eventually weed out the real cause of insecurity in Windows – lazy third parties.
As for UAC: it is only there as a warning, "I'm about to do something with your computer, do you want me to continue?" – that is better than nothing. If there are people being infected by malware, they've chosen to ignore the warnings, they've chosen not to set a password, and they've chosen a cavalier attitude to installing any old thing from any old site.
If MS had made a better base, perhaps there wouldn't be the third-party problem there is today.
I have to agree with you. MS needs to break backwards compatibility. I mean, what IS there anyway in terms of software that one needs to run? OK, Photoshop… if MS says it is bringing out a new OS which will be all new with no backwards compatibility, do you not think it could convince Adobe to make a version of Photoshop to work with that new OS? It ain't rocket science. It is MS's reluctance to break backwards compatibility. They could have done it with Vista and it would have been MUCH better off for it, but they dropped the ball as usual…
Well, the thing is, they have the marketing muscle and are in a lot better position than Apple was when it brought out Mac OS X. The fact that these companies don't have Linux versions of their applications makes the situation even easier.
Strip out this backwards compatibility, then simply drag through the mud those companies who refuse to provide free updates and base their next versions around these superior, secure API changes – basically make the market and consumers aware that these companies are refusing to update because they put their short-term profits ahead of doing things to benefit their end users.
Microsoft will come off as the concerned company that has moved mountains to make its system secure, and those companies whose software refuses to run will look like money-hungry organisations who don't have the interests of the end user at heart.
As for corporates, they'll eventually move, even if it is six months later than they would today; but that depends on Microsoft providing all the tools necessary: free upgrades, documentation for companies to port their internally written applications, and replacements for software from third parties who refuse to maintain it.
Hence the reason I don't sit in the same camp as those who think that Windows is so broken that the whole thing needs to be thrown out and started again. Technologically, Windows has a better footing than most operating systems; what holds it back is the layers of crap they've kept holding onto for the sake of compatibility.
Years and years ago, they might have had a valid reason, but now they have a valid reason to get rid of it – for security reasons.
Breaking backwards compatibility breaks everything that Microsoft really has. It is not in their interest to let ISVs look around at alternative platforms. It is also strongly in Microsoft’s interests for ISVs to keep maintaining their current code rather than rewriting it (they might make it more portable!).
All in all, Windows is doing okay. Backwards compatibility exerts a huge testing burden on Microsoft and forces them to keep certain things the same, but for the most part they can implement new stuff on the side in a way that’s orthogonal to the old applications. In the next version of Windows, I bet we’ll see massive changes to the rendering engine that will run alongside the current GDI without really interoperating with it.
What are the alternatives? Mac OS X, which routinely breaks compatibility? The *NIX world, where open-source developers revel in the idea of breaking compatibility on a regular basis for the sake of a superior design or approach to a problem?
The way you make it out, it would require massive rewrites. These 'insecure calls' have been known about for in excess of six years; if there are companies still using them, I'd say the problem lies with those companies.
If it were me running Microsoft, I would find out which vendors would make the changes and bring about compatibility and which wouldn't – and simply bring out software to replace those titles which have refused to 'play ball'.
Oh, and there is a benefit to *NIX in the long run: applications would no longer be using outdated and unmaintained calls, meaning the Wine project's job should be a damn sight easier, there being less to implement for compatibility. When 140,000 calls are ripped from the stack, it will make implementing Win32 on *NIX a damn sight easier.
You don’t need to guess; just look at where WDDM is going to in regards to managing GPU resources, for example.
The problem isn't with GDI but with the API calls themselves which Microsoft has deemed insecure.
Exactly. And I’d go even further: they need to make a new OS from scratch (how about stealing some ideas from *nix land?)
On top of that they could put a compatibility layer which doesn't affect the rest of the OS, very much like Linux is doing with Wine (except that Wine doesn't own the code, MS does).
Vista is a downgrade, not an upgrade.
I don’t value security that much. I can make my Windows box secure, but I realize that I’m in the 10% minority that can do that. The rest of the people are clueless about computers and would probably have the same problems on Linux or OS X. It doesn’t matter if they run as root or not. Malware still has access to the user’s data.
Windows shines in other areas. The two things that kept me on Windows are:
– hardware support
– applications
Unfortunately, this covers just about everything an OS should do. Let me just say that Linux’s hardware support is incredible if you consider a big chunk of it is reverse engineered. It’s sad that there isn’t support from the hardware makers, but I blame the people behind Linux for this. Offer the manufacturers a stable environment. A BINARY driver created 5 years ago should work on all distributions and it should work just by clicking next>browse>next>finished.
It's the same with applications. Offer a sane directory layout where every application is contained in its own directory and the system is contained in its own folder; make it possible for a BINARY application written 5-10 years ago to work on all distributions with a simple installation, or without the need for one at all – just drag the package to an install icon, for example, and it just works. Next, make sure all apps work regardless of desktop environment and still look consistent with it.
Also, desktop Linux should be that well done, that even if you shipped a system without a terminal application and without the possibility to get to the CLI you would still be able to install and do EVERYTHING.
It's that easy. Take backwards compatibility more seriously and make a stable API for everything, and Linux will have 90% market share in 10 years instead of Windows.
I can make my Windows box secure, but I realize that I’m in the 10% minority that can do that. The rest of the people are clueless about computers and would probably have the same problems on Linux or OS X.
I doubt it, unless they all changed to a system which encouraged/forced you to run as root by default. Assuming there are any that still do that.
And root DOES make a difference. Try deleting all the files in / as a normal user and you’ll see.
It doesn’t matter if they run as root or not.
Care to back up that assertion?
It’s the same with applications. Offer a sane directory layout
Yeah, because Windows has that too. And the Registry makes perfect sense.
Offer the manufacturers a stable environment. A BINARY driver created 5 years ago should work on all distributions
Funny, they seem to do just fine writing drivers for Windows without that insurance – the only reason Windows has been using the same driver architecture for five years is because Vista is late.
Also, desktop Linux should be that well done, that even if you shipped a system without a terminal application and without the possibility to get to the CLI you would still be able to install and do EVERYTHING.
You can't even do that with Windows. Seriously. Try telling the GUI to rename 500 files called *.txt to the extension .bak without performing the action yourself five hundred times. Or try programming without typing a single line of code. There's a reason Microsoft wrote PowerShell: because the people who said you need both a GUI and a CLI won the argument. Those who said you needed only one or the other were wrong.
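To be fair to the CLI side of that point, the 500-file rename is roughly a one-liner on either platform (a rough sketch, run in the folder in question):

    ren *.txt *.bak                                       (cmd.exe)
    for f in *.txt; do mv "$f" "${f%.txt}.bak"; done      (a Unix shell)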
I don’t value security that much.
Clearly. Note to the uninitiated: That is not a good thing.
I’d like to take some exceptions to what you’ve said, twenex. You should work a bit on your certitude here: you CAN rename multiple files at once with the Windows GUI. Check this out: http://support.microsoft.com/kb/320167. I just tested it on Vista, and it works better than it would have on XP because of the enhancement to how renaming works with multiple filetype extensions.
It’s not that discoverable, but a quick Google search for “Windows renaming files” is the ticket to getting it done. It’s also undoable within explorer, which is a nice touch.
I think it’s true that you need a CLI if you’re an administrator of some sort or want to script something up. On the other hand, Mac users and Windows users to a large extent have been able to live entirely without the command line. At worst, you have to edit some registry keys (but this is only if you’re doing something advanced and sysadmin-like). I think this has been good for the desktop deployment of the Mac and Windows. Using a GUI is more or less discoverable and does not force me to memorize inane details like the specific incantations a particular program’s ‘.conf’ file requires.
I think moving away from text files will greatly boost Linux’s viability on the desktop. Put an API around making configuration changes and just make some easy config browsers that use this API (for the geeks, admins, and for features that don’t fit within the UI). Standardize the whole mess and config becomes less of a hassle. What do you get from this?? The Registry!
I’d like to take some exceptions to what you’ve said, twenex. You should work a bit on your certitude here: you CAN rename multiple files at once with the Windows GUI. Check this out: http://support.microsoft.com/kb/320167. I just tested it on Vista, and it works better than it would have on XP because of the enhancement to how renaming works with multiple filetype extensions.
OK, maybe I was wrong there, but that’s really only scratching the surface of what scripting and (especially) programming can do, both of which involve at least some typing.
I think it’s true that you need a CLI if you’re an administrator of some sort or want to script something up. On the other hand, Mac users and Windows users to a large extent have been able to live entirely without the command line. At worst, you have to edit some registry keys (but this is only if you’re doing something advanced and sysadmin-like). I think this has been good for the desktop deployment of the Mac and Windows. Using a GUI is more or less discoverable and does not force me to memorize inane details like the specific incantations a particular program’s ‘.conf’ file requires.
1. Windows and Mac are based on the idea that if you have a personal computer, you don’t need a sysadmin: Not true – you become your own.
2. I’m not sure what you mean by “discoverability” but if you mean that it’s easier to find out what to do in the GUI than at the command line, I’d say that’s true for Windows and Mac OS < 9 but not for UNIX/Linux/OS X.
I think moving away from text files will greatly boost Linux’s viability on the desktop. Put an API around making configuration changes and just make some easy config browsers that use this API (for the geeks, admins, and for features that don’t fit within the UI). Standardize the whole mess and config becomes less of a hassle. What do you get from this?? The Registry!
Oh, please. Do you not know that it is easier for a computer to convert a text file into binary than for a rich man to enter H... sorry, wrong movie... than for a person to convert a binary file back into text? The Registry is a good example of everything that's wrong with Microsoft's approach, not least because loading a gigantic binary file all at once into the computer's memory is one source of Windows' gargantuan memory consumption and reboot-every-time-a-system-change-is-made problems. I would agree that clicking a box is easier than editing a file full of text, but I would NOT agree that it is necessary to get rid of the latter to be able to do the former.
And root DOES make a difference. Try deleting all the files in / as a normal user and you’ll see.
No, I can't delete all the files in /, but I can delete all the files in ~, which is where the files that matter are.
And root DOES make a difference. Try deleting all the files in / as a normal user and you’ll see.
Why would a malware writer want to delete all files in /? Doing so would be totally pointless. First of all, as someone else pointed out, all the files I really care about are stored in ~.
Secondly, let's look at what most malware does. It either sends spam, launches DDoS attacks or sends someone your personal information. None of that needs root access.
OK, then we need to look at WHY no software on Linux exists to do that. And don’t give me the “nobody uses Linux” crap — Linux runs more mission-critical systems than it runs desktops, not to mention the Internet (Linux and UNIX). If you were a cracker, what would you rather compromise: national security or Kelly, aged 5’s, pictures?
Installing malware is a different beast from hacking servers. The two have nothing in common. Malware in almost all its forms requires both some form of user interaction to get installed and a user who then doesn't notice the abnormal behaviour, making it pointless for attacking servers. Although it is quite common to use malware-infested Windows machines as a launchpad for attacking Linux servers. And for what it's worth, there is plenty of point-and-click automated Linux server hacking software out there if you know where to look.
Linux desktop users are on the whole more savvy than Windows users. They are far less likely to double-click that binary attachment from an unknown user. Thus it's hardly worthwhile writing malware for Linux. If you need another 1000 boxes for your botnet it's simply easier to get them on the Windows side.
If you were a cracker, what would you rather compromise: national security or Kelly, aged 5’s, pictures?
Depends on my goal. If I want a machine to relay spam through for personal profit, I’d go for the softest target possible. A sysadmin for a server responsible for national security is far more likely to notice someone trying to take over his server than Kelly, aged 5. These people don’t want what’s on your machine they want your bandwidth and, to a lesser degree, your storage.
This comment is quite off-topic. The article isn’t about Linux at all.
BTW, maintaining binary compatibility with 10 year-old software is a mistake, IMO. It is the reason Vista is so bloated today. I think Apple made a much smarter decision by breaking compatibility.
As far as Linux is concerned, breaking compatibility is usually trivial, since most of the software is free. Also, Linux is actually quite backward compatible if your programs have statically linked libraries.
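As a rough illustration of that statically-linked case (assuming gcc and a trivial hello.c), the binary ends up depending only on the kernel's syscall interface rather than on whatever library versions a given distribution ships:

    gcc -static -o hello hello.c
    ./hello        # should still run years later on a different distro, barring kernel ABI changes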
I also completely disagree that backward compatibility (which isn’t much an issue, as I’ve mentioned) and a stable API for everything would give Linux 90% market share. There are other factors at play here, such as marketing, consumer inertia and just plain old personal preferences.
In fact, I don’t think another OS will ever hold as big a market share as Windows has held for the past 10 years…and that’s a good thing!
BTW, maintaining binary compatibility with 10 year-old software is a mistake, IMO. It is the reason Vista is so bloated today. I think Apple made a much smarter decision by breaking compatibility.
Maintaining binary compatibility with 10-year old software is a mistake, IF the design of that software was a mistake in the first place. AFAIK PDP-11 Unix-in-C didn’t make any concessions to PDP-7 assembly Unix, for example.
The fact that the design and programming practices of the world’s most arrogant software corporation can’t hold a candle to people who admitted to BUGS in the manual pages and said “Never mind do it right – just do it!” is a sad indictment of where monopolies – legal or otherwise – lead you.
That backwards compatibility drives the decision of most people using it. That’s because software developers drive the need for backwards compatibility. If your entire system is distributed with all the software you need, or it’s painlessly portable for some other reason (I wish), then of course you can turn it into something different.
Windows is the 800lb gorilla precisely because the old and feeble programs that people like hardware manufacturers, employers, and developers of ancient software have thrown at users have, for some part, functioned.
If you really wanted something to change, you’d want to stop the rhetoric about monopolies; you’d want users to figure out what they need and who can provide it. Eventually, they might know enough to drive development of things like drivers that work, maybe even open-source drivers (it would make the hardware more desirable, everything else the same), and perhaps cross-platform productivity software that makes them feel at home.
The comment above about a Win32 VM akin to OS X's Classic environment was spot on. Users want and sometimes even need it. Work to fill the needs instead of weeping in the hope of some disgusting destruction to satisfy your hate.
That backwards compatibility drives the decision of most people using it. That’s because software developers drive the need for backwards compatibility. If your entire system is distributed with all the software you need, or it’s painlessly portable for some other reason (I wish), then of course you can turn it into something different.
I didn’t say that software compatibility is a bad thing, only that it’s a bad thing if you make a rod for your own back. According to some sources (Computer Shopper, UK) even Microsoft is now realizing that Apple did backwards compatibility between OS 9 and X right, and are doing it the Apple way for the next version of Windows.
If you really wanted something to change, you’d want to stop the rhetoric about monopolies; you’d want users to figure out what they need and who can provide it. Eventually, they might know enough to drive development of things like drivers that work, maybe even open-source drivers (it would make the hardware more desirable, everything else the same), and perhaps cross-platform productivity software that makes them feel at home.
OK, on this point you obviously have no idea what you are on about, because this is exactly what projects like KDE and Project Utopia have been doing.
I don’t value security that much. I can make my Windows box secure, but I realize that I’m in the 10% minority that can do that. The rest of the people are clueless about computers and would probably have the same problems on Linux or OS X. It doesn’t matter if they run as root or not. Malware still has access to the user’s data.
Any argument about Linux giving access to user data is BS. Obviously any system has to give you access to your own data or it would be worthless as a PC. Even if you lock yourself out of your own data it is still at risk of hardware failure. Hard drives are some of the most common hardware failures out there. Protecting your data is not only about limiting access to it but making sure you have backups in a secure place. User data is irrelevant to system security. User data integrity will always be up to the user.
A BINARY driver created 5 years ago should work on all distributions and it should work just by clicking next>browse>next>finished.
No, it shouldn't. Linux could have gone that route if they'd wanted to, but what is the point of holding Linux back and not updating interfaces within the kernel just so some people who DO NOT want to contribute to the kernel can still insert their code? The whole idea of binary interfaces in the kernel destroys everything that Linux is and the whole open development process that makes it so great. We wouldn't have half the features we have today if we were worried about binary driver compatibility.
Offer a sane directory layout where every application is contained in its own directory and the system is contained in its own folder; make it possible for a BINARY application written 5-10 years ago to work on all distributions with a simple installation, or without the need for one at all – just drag the package to an install icon, for example, and it just works. Next, make sure all apps work regardless of desktop environment and still look consistent with it
Windows doesn't offer a sane directory layout. In fact it doesn't even offer sane directory names. Documents and Settings? Are you serious? Thank god they did away with that in Vista. Windows is completely inconsistent with itself, never mind third-party applications. You seem to be talking about how OS X works for the most part, but all of that hasn't helped it gain much desktop market share.
Also, desktop Linux should be that well done, that even if you shipped a system without a terminal application and without the possibility to get to the CLI you would still be able to install and do EVERYTHING.
Not possible. The terminal is infinitely more flexible than a gui.
It's that easy. Take backwards compatibility more seriously and make a stable API for everything, and Linux will have 90% market share in 10 years instead of Windows.
No, it isn't. Backwards compatibility is the main reason why Windows has so many problems. Adding these problems to Linux isn't a solution, it's just another problem. Consistency is a non-issue because Linux is more consistent than Windows in every way. Directory layout is also a non-issue since all you really need to know is where your home directory is and possibly where the /etc directory is. How many Windows users know where their system files are and where their programs reside?
How is binary software of any use in FOSS systems? You should remember that GNU/Linux and other Unices don’t run on just one architecture.
As a matter of fact I am recompiling Slackware for the Chinese Loongson processor that is MIPSEL compatible. This wouldn’t be possible with closed source software and binary drivers.
When I am finished I expect everything to work fine except closed source architecture specific binaries. If Adobe and other companies want to stay relevant they’d better port at least their player software to this processor or better yet make it open source or else they’ll be irrelevant to millions or even billions of people over the course of the next few years.
I know there is Windows NT 4.0 for little-endian MIPS, but it is next to useless because it’s so old and doesn’t have applications. Maybe Reactos could be ported to this processor to satisfy Windows aficionados.
I am a Linux user but I simply don't use sudo (I usually don't handle root, and if I do it is normally from a virtual console outside X with a separate login; and yes, I know I am paranoid).
Can anyone explain (or better, point me to a review) how sudo and UAC compare to each other? Design flaws, features, all that nice stuff.
And please, trolls, use your time with better stuff than just saying “This sucks” or “That rules”. We have just too much trolling already without you actually TRYING to Troll.
They work the same.
The problem lies in how apps are installed, throwing crap in places they shouldn't, and as such any time you want to install something, etc… you get a UAC prompt.
Don’t forget that logging in as root, using su, and using sudo require the use of the right password, whereas UAC only requires you to know the difference between “Cancel” and “Allow”.
Don’t forget that logging in as root, using su, and using sudo require the use of the right password, whereas UAC only requires you to know the difference between “Cancel” and “Allow”.
And you wonder why Linux is struggling to make it on the desktop…
This is only on the default account, which is a pseudo-admin account.
If you do as it recommends and create a standard user account, then it will ask you for the administrator password.
They are nothing like the same.
sudo is a command that lets you call other commands as the superuser, hence SU"peruser"DO… sudo.
I can sudo rxvt, which will give me a full terminal session as root, and all commands typed in the terminal go in as root's commands…
Or I can type sudo aptitude install mc
this will run the installer to install mc file manager as the root user, then drop me back to normal user at the end.
UAC, on the other hand, is HAL 9000 asking, "Are you sure you want to do that, Dave?"
SUDO and UAC are not comparable at all… APPLES AND ORANGES ONCE AGAIN.
Windows fanboys need to get a bit of a clue sometimes, CPUGuy especially :p
You’re missing the main original point of sudo. Sudo can limit which users are allowed to run which commands as root. The administrator can give certain users access to certain programs, and only those programs.
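For the curious, that per-command granularity is just more /etc/sudoers syntax; a made-up example (user and paths are hypothetical):

    # "backupuser" may run exactly these commands as root, and nothing else
    backupuser ALL=(root) /usr/bin/rsync, /sbin/shutdown -h now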
I did not miss the point, I skipped it.
We are trying to explain the basic differences here for Windows users who think sudo = uac…
I don’t think bringing group policies into it will make it easier for them to understand.
Especially when someone raises the question about the difference between being in root's group and being in the sudo group, lol
You call ME a fanboy even though it is YOU who have no clue.
You want a terminal that is running under super-user rights?
Start>Run>runas /user:administrator cmd
You can put whatever command you want in there, or go way beyond just the standard /user switch.
Or, right click whatever you want, hit run as admin.
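For example, you can elevate a single program rather than a whole shell (the path is just an illustration, and this assumes an account actually named "administrator" is enabled):

    runas /user:administrator "notepad C:\Windows\System32\drivers\etc\hosts"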
sudo is used to elevate rights; UAC is used to elevate rights. It's just that UAC is a lot more powerful, turning admin accounts into pseudo-admin accounts, which is what does the allow/disallow, or, with a standard user, requesting an admin password.
UAC and sudo are not at all alike. Calling UAC more powerful than sudo is a joke, considering UAC simply grants you full rights for one particular action right now, whereas sudo can grant you a wide range of rights for X amount of time for Y numbers of actions.
No, UAC grants you admin rights for that entire application that you opened, and then for every other application that spawns from it, EXACTLY like sudo.
Sudo does not grant the rights to the application; they are granted to the user instead. UAC and sudo have some of the same goals, but they are implemented differently.
Therefore UAC != sudo. They are trying to do some of the same things, but they are not closely related.
Like what I said here….
“We are trying to explain the basic differences here for Windows users who think sudo = uac…
I don’t think bringing group policies into it will make it easier for them to understand.
Especially when someone raises the question about the difference between being in root's group and being in the sudo group, lol"
The difference is that sudo will temporarily move your account into a different grouping.
The ROOT user can set things like rm * -f to be disallowed for sudo users. He can also disallow sudo users from installing anything…
UAC under Windows will let the user do ANYTHING if he has the administrator password.
If you have the admin password you can do anything anyway…. what’s your point?
Point is….
I might be root on my own machine, I know the root password.
I can set up sudo so that my normal account cannot do anything silly like open up mc and delete all the files in certain directories.
Root can do it… but sudo cannot.. even though as sudo, I still have the root password.
I can install software as root, but limit sudo accounts to not install software.
I can give my friends sudo accounts, and let them install/remove anything, but limit their abilities to create other accounts.
My friends, even though having sudo accounts to do superuser commands, will not know the root password.
My friends cannot rm * -f /
as they are not root
sudo users should be disabled from things like fdisk
UAC, on the other hand, will let someone run things like fdisk when they give the admin password. Silly MS mistake.
Sudo commits a specific action as the root user. It’s just a way to do temporary login to do some action. I think sudo also has the feature that it leaves the door open for a configurable amount of time so that subsequent requests for privilege go through without requiring password authentication.
UAC has a component that works very similarly to sudo, in that it takes authentication to elevate the user to admin rights. The authentication does not always take the form of a password request… if you are already an admin user, you just click “Continue” to acknowledge that the administrative action is really being done on your behalf.
Let me back up here a bit. An important part of UAC is that even “administrative” users launch processes with a “Restricted Token.” The security token is a thing granted to users which has a series of bits which specify which “privileges” you get in the OS. It’s kind of like a capabilities system (see wikipedia), but doesn’t have the sophisticated revocation system of experimental capabilities OSes. The security token also includes a list of what groups you are part of (one of which is Administrators). A process inherits the token from whoever launches it, and when the OS wants to do an access check, it applies the most permissive access mask in the token to the ACLs on whatever object you wish to access (files, kernel objects, window stations, etc) to see what rights are allowed.
Under UAC, even admin shells (like Explorer) naturally get tokens with special privileges removed: the applications launched from this shell only have the rights of a standard user. You go through the elevation dialog box to get true root-level access (which involves restarting the process that requests the elevation… a thing you might notice if you look carefully).
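A quick way to see the restricted token for yourself, assuming I've remembered the commands right, is to compare a plain command prompt with one started via "Run as administrator":

    whoami /priv       (plain prompt: only a handful of privileges, most of them disabled)
    whoami /priv       (elevated prompt: the full administrative set, e.g. SeDebugPrivilege)
    whoami /groups     (also shows the integrity level: Medium versus High)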
To avoid breaking apps that write to Program Files or other non-kosher locations, a piece of UAC virtualizes writes to those folders for standard-user processes and redirects them to a per-user folder.
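For what it's worth, those redirected writes typically end up under the per-user VirtualStore folder, so if an old app's settings seem to have vanished from Program Files you can look there:

    dir "%LOCALAPPDATA%\VirtualStore\Program Files"     (where a legacy app's redirected writes land, per user)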
Last, but most important, UAC takes some steps to avoid having its own dialog boxes spoofed or manipulated by other applications (any app can send a message to any other app on the same desktop regardless of the different tokens each process might bear). It achieves this by showing its prompts on a separate, inaccessible desktop (the secure desktop).
The problem is that once UAC has done its business and started a new process with an administrative token, that process is still running on the user’s desktop. You have two apps that are effectively running as two different users on the same desktop, so you can get “shatter” style attacks from a lower privileged app against a high-priv one. It’s fairly trivial to elevate a piece of malware by just squatting and waiting for an insecure high-priv application to need an elevation. This is why someone said, accurately, that UAC “keeps honest people honest.” I’m not sure if X Windows takes any steps to prevent this same kind of attack either, so I’d say your paranoia is well-founded on linux as well.
Word for the paranoid: if you really care, log off or use Fast User Switching for elevating securely (different user sessions get different desktops across which shatter won’t work). Also, don’t leave elevated applications running while doing potentially insecure things because some lower-priv app might bite them.
The user's own area is a big problem: while you can limit what the user can do to the rest of the system, you can't limit what the user can do in his own area.
And that's where all the interesting stuff, from an ID theft perspective, is stored.
For it to work you more or less have to protect the user from him or her self, and just about the only way you can do that is by hardwiring everything so that you know that the only input signals really do come from the person sitting in front of the keyboard.
The moment the signals jump into software, anything goes…
Just break all backwards compatibility, then allow users a VM’d version of the older OS for using apps they can’t live without until new versions come out, similar to what Apple did with Classic Mode when OSX came out?
With dual-cores and VT on the core level, I don’t see why that’s not a good solution.
Because for the user there will be no added value to a virtual machine on a Windows host if it can run just as well on Linux, Mac OS X, BSD or Solaris.
How many people will choose Windows as their host operating system? The majority may choose Windows but I assure you many users will go to alternatives as they will not be worse off than with the new Windows version.
Promising to keep backward compatibility in Windows itself prevents the user from looking for alternatives while the virtual machine approach might incite them to do so.
When I did a fresh install of Vista on my girlfriend's mom's computer, which three people use…
Mom, step-dad, young daughter.
The step-dad never uses the computer, nor is he ever home.
The mom doesn't care enough to "raise" the kid, and gives the 12-year-old the admin password to 'make the UAC go away'.
This is why Windows will never "be secure": because Americans are too lazy to raise kids or spend some time. (And yes, I'm American; even worse, I live in California, land of step-moms driving SUVs.)
“Malware will thrive, even with Vista’s UAC”
Well durrr. You get any desktop OS and leave it in the hands of a novice to install and operate and you can guarantee that at least 1 in 10 users will screw their system up by doing something careless.
Windows may not be the /best/ OS in the world in terms of security, but then no desktop system can be 100% protected against the pure stupidity of some inept users.
That such an arrogant company is not punished even when it is forced to admit a mistake?
The North East (England) User Group’s webpage says: “If your OS is not your most reliable employee, fire it.”
Windows is so unreliable it should be banned from working anywhere that gives a sh*t about productivity levels exceeding those of tranquilized corpses in straitjackets.
Normalize the war on Iraq, then why not Normalize Windows Insecurities?!!
MS want us to believe that if their software is highly breachable and insecure then it is OK, and we have to carry on and feed that lazy fat cow from our meadows.
MS must break compatibility and choose Unix in its new future OS.
Funny, I hadn’t realized you wrote something so similar just in the post above mine