The first publicly available test release of Vista SP1, Release Candidate 1, was released a few days ago. “The Windows Vista Service Pack 1 Release Candidate is now available to the public. In addition to previously released updates, SP1 contains changes focused on addressing specific reliability and performance issues, supporting new types of hardware, and adding support for several new technologies. SP1 also addresses some management, deployment, and support challenges.” Ars reports that it finally enables hotpatching support in Vista.
“Hotpatching is a process in which Windows components are updated while still in use by a running process. As you can imagine, this handy feature eliminates the need to reboot,”
No shit; now where does this déjà vu feeling come from?
Without realizing it, we’ve had this feature in our midst for years on various *nix platforms.
Yeah, the “file in use” thing is one of the biggest misfeatures of Windows. Especially when Windows decides, seemingly at random, that a file I’m trying to delete is actually in use and I can’t delete it (even though it’s just some random video file I haven’t played in months). Unfortunately they still haven’t fixed it properly, instead opting to create a hacky workaround for updates.
The *nix update process is so much more logical. If a component is updated, apt will ask you whether it should restart the corresponding service, instead of making you reboot the whole computer (which, given that no version of Windows has any kind of session support, can be a big pain).
The reason is that Windows places an implicit lock (think of a mandatory flock()) on the files of running programs. The problem is that sometimes it doesn’t unlock them properly. Linux took a different approach: files in use can be deleted, and when the file is next loaded from disk it will simply be the new one. This can lead to some odd behaviour, but it completely eliminates the issue Windows has with files not being unlocked properly.
Regardless, Linux has fuser -k; doing the same thing in Windows is pretty complicated, even though it is needed a lot more because of the implicit locks.
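To illustrate the Linux behaviour described above, here’s a minimal sketch (the file name is just an example): deleting a file that a process still has open succeeds, and the process keeps reading the old data until it closes its descriptor.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("somefile.dat", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* unlink() removes the directory entry immediately - no "file in use" error */
    if (unlink("somefile.dat") < 0) { perror("unlink"); return 1; }

    /* ...but the open descriptor still refers to the old inode */
    char buf[64];
    ssize_t n = read(fd, buf, sizeof buf);
    printf("still read %zd bytes after deletion\n", n);

    close(fd);   /* only now is the storage actually released */
    return 0;
}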
Windows does have a very clean API that achieves the same thing.
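If that means the Restart Manager API that shipped with Vista (I’m guessing that’s the API being referred to), a rough, untested sketch of listing which processes hold a file open (roughly what fuser does) would look like this; link against Rstrtmgr.lib:

#include <windows.h>
#include <restartmanager.h>
#include <stdio.h>

int wmain(int argc, wchar_t **argv)
{
    DWORD session;
    WCHAR key[CCH_RM_SESSION_KEY + 1] = { 0 };
    if (argc < 2 || RmStartSession(&session, 0, key) != ERROR_SUCCESS)
        return 1;

    /* register the file we are interested in */
    LPCWSTR files[] = { argv[1] };
    RmRegisterResources(session, 1, files, 0, NULL, 0, NULL);

    /* ask which processes are holding it open */
    UINT needed = 0, count = 16;
    RM_PROCESS_INFO info[16];
    DWORD reasons = RmRebootReasonNone;
    if (RmGetList(session, &needed, &count, info, &reasons) == ERROR_SUCCESS) {
        for (UINT i = 0; i < count; i++)
            wprintf(L"%lu\t%ls\n", info[i].Process.dwProcessId, info[i].strAppName);
    }

    RmEndSession(session);
    return 0;
}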
It’s fixed in Vista. That was the first thing I noticed when trying Vista…
Not on mine….
Unix systems gladly replace system libraries that are in use, and just hope that no problems happen because two different versions of the same library are in use simultaneously. The further away from the core libraries you get, the lower the odds of a problem, but it’s still a risk. The Unix approach is basically “Let’s just go ahead and do it, it’ll probably be OK.”
Windows takes the safe approach of only updating libraries that are not in use. I’m sure you’d wind up with weird glitches if your apps were using multiple versions of GDI simultaneously. The Windows approach is “It may be OK to update this now, or it may not. Just to be safe, let’s not update it until we can guarantee it’s safe.”
The Vista implementation tries to free up libraries, and if it can, will then update them in place.
But at the end of the day, I want control over my operating system. I’ll decide whether or not something is deleted, whether or not it is overwritten. UNIX treats me like an adult and says, “if you want to do that, you know the risks, you’re a big boy”.
But it fails continually. As one person pointed out, it claims a file is ‘in use’ when nothing is using it, or claims an application is in use even though the application (and everything depending on it) has been killed. If Microsoft can’t get it right, then they should take the UNIX approach until such time as they can get their ‘secure solution’ working.
And when things go pear-shaped you end up with half-finished updates. I’ve had numerous updates fail since the very first version of Windows – all due to this stupid ‘locking’ idea Microsoft adopted.
Like I said, give the end user the power. If I balls up my system, it’s because I chose to; don’t assume that you, the operating system, know what I want as an end user. If I want to overwrite, delete or modify a file, I want to do it for a good reason.
While I agree with you in a general way, it doesn’t fail every time. It doesn’t even fail every thousand times. Considering how many files are opened and closed even in an hour, it’s probably closer to a one-in-a-million thing.
But I do agree. And if you really must do it that way, at least give us fuser. A simple fuser -k could have saved me quite a few reboots; it would be a simple solution to a stupid issue.
Mark Russinovich wrote the tool you want:
http://www.microsoft.com/technet/sysinternals/utilities/handle.mspx
But at the end of the day, I want control over my operating system. I’ll decide whether or not something is deleted, whether or not it is overwritten. UNIX treats me like an adult and says, “if you want to do that, you know the risks, you’re a big boy”.
And that would work really well for most Windows users…
Why should Windows be castrated because of the ineptness of a few? Sometimes a bit of tough love and a few mistakes are the only way for some people to learn.
But they won’t learn. You HAVE to make things safe and recoverable. To do otherwise is to be naive about the realities.
Most people don’t care to learn… it’s not interesting to anyone but us OS geeks.
I think the choice made in Windows is the right one. The highest-priority bug is one which can cause loss or corruption of permanent data on the hard drive. If users can somehow have multiple versions of a DLL accessing the same shared file or shared memory region (for instance, take a look at the ESE engine, which was mentioned so vociferously in response to your recent Mac post on Ars), then replacing the DLL in one process but not in all of them could result in total loss of the data in that file.
It might seem far-fetched, but Windows is used by a lot of people in many environments, and who knows what random third-party DLLs are doing with shared files or shared memory. Choosing safety over convenience and efficiency is often a valid engineering choice when the cost is low and the cost of failure is unknown.
The ineptness of a few?
Try…many?
And it is OK 99% of the time, unless you do something you shouldn’t. That’s your fault though, not a fault of Unix.
It’s a lot simpler and less error-prone than you make out. Different versions of a library are almost always newer versions which are also backwards compatible with the older ones: they either add features whilst keeping the interface and old features intact, or they fix bugs in existing features whilst keeping the behaviour the same. If an app worked with the older library it will work just the same with the newer one. All you need to do to make sure your app is using the newer library is close it and re-open it. If you’re worried about the newer library containing bugs that will crash your apps, consider that this is just as likely to happen on Windows and has nothing to do with what we’re talking about.
I’ve used Linux for 6+ years now and have never noticed a problem with library versions. All I’ve noticed is that reboots are only necessary when I’m upgrading the kernel.
Safe approach? Or easier approach? It’s just the way they chose to do it. Admit that it’s not as good…
And if it can’t free them up? Then we have the same problem we always had, with no solution. Reboot?
Yes. Reboot
They aren’t used “simultaneously”, at least not in the way you’re implying. To put it in a simplistic manner: running processes have already read the previous version of the library and have everything that’s needed with them. New processes will read the new version. The replacement of the library on disk happens transparently, and atomically, may I add.
What you probably mean is that at some point you may have two processes running with different “snapshots” of files (config files, libraries, etc.). That happens to be one of the most useful and powerful features of *nix OSes. It does extremely little harm (most of which is purely academic) and a lot of good.
Consider for example that I can run the same application several times side by side with different libraries and configurations and compare them first-hand. Doing the same on Windows would require duplicating the entire installation of the application and tweaking each copy.
If you want practical uses, they’re all around us. I for instance tweak my fontconfig settings by launching several gcolor2 processes and changing my .fonts.conf in between. Then I can compare the font rendering variants in much greater detail.
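For the curious, the atomic replacement being described is just the standard write-then-rename pattern. A minimal sketch (the file name and function are made up; this isn’t any particular package manager’s code):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* install a new library image without ever exposing a half-written file */
int install_library(const void *image, size_t len)
{
    int fd = open("libfoo.so.1.2.4.tmp", O_WRONLY | O_CREAT | O_EXCL, 0755);
    if (fd < 0)
        return -1;

    if (write(fd, image, len) != (ssize_t)len || fsync(fd) < 0) {
        close(fd);
        unlink("libfoo.so.1.2.4.tmp");
        return -1;
    }
    close(fd);

    /* the switch itself: rename() is atomic, so other processes see either the
       old file or the new one, never a mix; processes that already mapped the
       old library keep using the old inode until they let go of it */
    return rename("libfoo.so.1.2.4.tmp", "libfoo.so.1.2.4");
}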
“Unix systems gladly replace system libraries that are in use”
Not really
usually you have the library available as:
liblibrary.so.1.2.3
and two symbolic links
liblibrary.so.1.2 -> liblibrary.so.1.2.3
liblibrary.so.1 -> liblibrary.so.1.2.3
What happens if someone “updates” to liblibrary.so.1.2.4 on Linux?
Very simple: the old version is kept for programs which need the old one, and the symbolic links are reset to point to the new version.
And the new version has to be binary compatible with the old one, otherwise it would have to be a 2.0.0 release.
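And to be clear about why neither running nor newly started programs care: applications never reference the fully versioned file name at all. The link editor records the soname (liblibrary.so.1), and the dynamic loader resolves that through whatever the symbolic link currently points at. A tiny sketch, reusing the placeholder names from above (build with -ldl):

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* same effect as linking with -llibrary: only the soname is looked up */
    void *lib = dlopen("liblibrary.so.1", RTLD_NOW);
    if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    void (*some_function)(void) = (void (*)(void))dlsym(lib, "some_function");
    if (some_function)
        some_function();   /* whichever 1.x.y the symlink pointed at when we loaded it */

    dlclose(lib);
    return 0;
}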
I have not experienced a single update problem due to replaced libraries in 8 years of Linux usage. What did happen was a mangled kernel after a kernel update (it must have been some serious byte-flipping in the hardware or on the wire, because a re-download and reinstall cured the problem).
So does that mean it will take days just to get the prerequisites needed to download the service pack? That totally blows for testers.
Screw that. I’ll wait until the final release.
If you download the standalone package (all 500+ MB of it), it will install the prerequisites before installing the SP. Just be prepared for 4 or 5 restarts to make it all stick. It can more or less run unattended after you start it; after each restart the packages continue installing automatically without waiting for a user login.
Of course, the automatic restarts are a bit of a PITA when you use GRUB as a bootloader and Vista isn’t your default OS…
Grub has a ‘savedefault’ option that should make your last OS choice the default.
So whichever OS I’m in becomes the default on a restart.
Or at least that’s how my system is set up. You can even configure which boot entries will use this feature and which it won’t remember.
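For reference, here’s roughly what that looks like in a GRUB legacy menu.lst (titles, devices and paths are just examples; adjust for your own setup):

# boot whichever entry was saved last time
default saved
timeout 10

title Linux
root (hd0,1)
kernel /boot/vmlinuz root=/dev/sda2 ro
initrd /boot/initrd.img
# remember this entry as the default for the next boot
savedefault

title Windows Vista
rootnoverify (hd0,0)
chainloader +1
savedefault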
Which may be nice for you, but for someone who predominantly works in Linux and boots up Windows only occasionally this makes no sense.
They can release Service Pack 5000000; it doesn’t matter. Companies and businesses are not going to roll it out because it does not work with anything…
What if the next service pack made it work with everything?
‘Everything’ covers a vast array of different hardware.
There is no system that has been able to work with EVERYTHING – not even the beloved Linux.
I just got SP1 installed.
I switched over to the new RDP compression algorithm and it rocks over my WLAN. Rdesktop now only freezes for one or two seconds every several minutes, versus minute-long freezes every 30 seconds! Much more efficient use of bandwidth!
Local file operations are much smoother too. I copied a 2 GB directory at around 15 MB/s. That brings things back to about normal. Deleting things from the Recycle Bin is much faster as well.
Let’s praise the prophets, a new era has arrived indeed
Do I sense slight sarcasm in this statement?
Seriously, the remote desktop protocol is quite good.
You can even use aero effects over remote desktop for normal applications. Of course if you try to play back a movie you notice the difference, but for normal work like browsing, email and office stuff it is almost completely transparent.
Well, compared to X11 on a 10 Mbit LAN, this Windows RDP seems to be a bit of a joke.
In X11 it IS completely transparent, and only CAE applications which make heavy use of the 3D features of the graphics card have noticeable latency.
Well, if you do, that’s because it’s there. I know how it works; I use it every day as a necessity, not as an option. But still, an “improvement” like “only freezes now for one or two seconds every several minutes versus minute-long freezes” really sounds like a joke, and in my book, it is one.
2 GB at 15 MB/s? That is atrocious.
That is better than before. I was copying about 60 GB of data from a USB caddy to the local hard drive on a Vista laptop, and it had an ETA of 4 days. Since I had the resources, I connected the Vista drive via USB to another laptop running XP and went the USB-to-host-to-USB route, which was evidently faster: it took about 100 minutes. (100 minutes for 60 GB works out to roughly 10 MB/s; a 4-day estimate implies well under 1 MB/s.) I know I could have booted into Knoppix or whatever, and that may have made things faster, I don’t know.
But 4 days?! People who don’t have the resources or don’t know better are just expected to put up with that?!
I tried out SP1 RC and it trashed the OS. Aero went AWOL, and the Control Panel showed no applets – it was just empty. I couldn’t even get to “Programs and Features” from the Start menu search to uninstall SP1. I ended up having to use System Restore. Thank god that worked.
Why has this topic devolved into an argument over whether Vista or *nix updates work better? I’d like to hear feedback on how this update improves or degrades Vista’s performance. Save the little geek-boy arguments and trolling for another topic, such as one called “System Updates: Vista vs Linux”
Cheers! Anybody found any reviews yet? I am tempted to put this on my production machine.
Don’t. I had a similar experience to joshv:
…except that on my machine Aero kept working and System Restore did not undo all the special updates to the Windows Update client related to SP1 RC1, so I ended up reinstalling Vista.
If you try this SP1 RC1, I recommend doing so with a clean installation on a separate partition. That’s really why they say not to use the software on a production machine anyway.
I believe one reason it is difficult to replace a DLL in Windows is that each DLL can carry around DLL-specific memory state.
In Unix, every process allocates its own memory, or has to explicitly create mmap’d files to share state. Imagine what would happen in Unix if you overwrote those shared memory files.
I’m pretty sure that 32-bit versions of Windows have always worked like Unix with regard to using copy-on-write pages for DLL data.
From the Wikipedia article on DLLs:
“Optionally, data sections can be made shared, allowing inter-process communication via this shared memory area.” — http://en.wikipedia.org/wiki/Dynamic-link_library
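For the curious, that optional sharing is the old named-data-section trick; a sketch of how a DLL author opts into it with MSVC (the variable and function names here are invented for illustration):

#include <windows.h>

#pragma data_seg(".shared")           /* place the following variables in a named section */
volatile LONG g_usage_count = 0;      /* must be initialised, or it lands in .bss instead */
#pragma data_seg()

/* tell the linker to mark that section Read/Write/Shared across processes */
#pragma comment(linker, "/SECTION:.shared,RWS")

__declspec(dllexport) LONG bump_usage(void)
{
    /* every process that loads this DLL sees (and increments) the same counter */
    return InterlockedIncrement(&g_usage_count);
}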
I am pretty sure that refers to the address space of the DLL and not to the disk image of the DLL.
Yes it does refer to address space.
What happens when you replace the DLL on disk and new programs start using the new memory area?
New programs using the new DLL will not see the shared memory area of the old DLL. Or if they do, in-memory objects may have shifted size and position in the new code.
I was just trying to point out that because of design decisions made long ago, replacing a DLL is not as easy as replacing a .so library.
Which is exactly what you want, isn’t it? Once you shut down and restart all programs still using the old DLL, everything will be back to normal again.
Replacing a .so library is not without its problems either. But like some other poster pointed out, usually the old .so and the new .so coexist and there is just a symbolic link that is changed.
I think Ars got it wrong on this one (though I like them because they’re usually pretty perceptive and correct). Hotpatching has been in the product for quite some time now… take a look at this KB: http://support.microsoft.com/kb/897341.
The way it’s achieved on x86 is by placing a mov edi, edi instruction at the top of every function and a 5-byte nop region just before it. This allows the hotpatcher to atomically update the entry of a function to jump into new code.
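To sketch what that looks like in bytes (illustrative values, not dumped from any real binary):

/* Before patching: 5 bytes of padding precede the function, and the function
   itself starts with the 2-byte no-op "mov edi, edi". */
static const unsigned char padding_before[5] = { 0x90, 0x90, 0x90, 0x90, 0x90 }; /* nop x5      */
static const unsigned char entry_before[2]   = { 0x8B, 0xFF };                   /* mov edi, edi */

/* After patching: the padding now holds a 5-byte long jump to the replacement
   code, and the entry is overwritten, in a single atomic 2-byte write, with a
   short jump back onto that long jump. Threads already inside the function keep
   running the old code; new calls land in the new code. */
static const unsigned char padding_after[5]  = { 0xE9, 0x00, 0x00, 0x00, 0x00 }; /* jmp rel32 (placeholder offset)        */
static const unsigned char entry_after[2]    = { 0xEB, 0xF9 };                   /* jmp short -7, back onto the jmp rel32 */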
Hotpatching is fairly limited, though: it’s only good for security problems that involve an implementation flaw in a single function. It might have been good against the ANI vulnerability from a while ago, but it won’t be useful for any flaw that involves a design problem or an issue spread across functions. It’s great for situations where you have a large, non-clustered machine which has to stay up while receiving updates, but the KB article recommends testing the hotpatch on a parallel test installation before applying it to a production resource. I’d personally just prefer to use a cluster and normal patching, because that’s a more conservative solution.
Mandatory locking is the single most important issue facing Windows. Whenever you have to reboot to delete a desktop icon, or you cannot move a video file because Explorer is using a codec to generate a preview, mandatory locking is to blame.
If you have an application that runs from a network share, it is even worse. As long as a single client has the application open, it is impossible to update the executables, so you have to run around the building and chase down every single open instance.
There should be a solution to this problem, though: modern file systems such as ZFS (and NTFS with its shadow-copy machinery) do not have to make destructive updates; they can create a new version of the file in a different location instead. That is how the “Previous Versions” feature in Vista works.
So whenever a file is opened just for reading (as when generating a preview or running an executable), the system could open an immutable, read-only snapshot of it. Only when more than one process wants to write to a file is synchronization via locks really necessary.
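The closest thing the Win32 API already offers is the sharing mode on CreateFile: if a reader passes FILE_SHARE_DELETE, other processes can delete or rename the file even while the handle is open. Most of the chronic “file in use” errors come from readers that leave it out. A small sketch (the function name is mine):

#include <windows.h>

/* open a file for reading without preventing anyone else from deleting,
   renaming or writing to it in the meantime */
HANDLE open_for_preview(const wchar_t *path)
{
    return CreateFileW(path,
                       GENERIC_READ,
                       FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                       NULL,
                       OPEN_EXISTING,
                       FILE_ATTRIBUTE_NORMAL,
                       NULL);
}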
But I guess fixing such a basic problem of windows was not as important as adding stupid DRM and bling..
Or you could open an elevated command prompt and type:
Openfiles /Query
to see what files are open, then use one of the options under
Openfiles /Disconnect /?
to close the connections.
Of course there are workarounds.
I did not know about openfiles; I’ve been using a tool called Unlocker. But why should I have to use workarounds for something as basic as copying and replacing files? The snapshot (“shadow copy” in Microsoft speak) approach I described would work for the vast majority of situations, since in most cases files are opened read-only.
I installed it yesterday and now my Windows games don’t run and my Control Panel won’t open, though I do see applications like Nero opening faster. Big shit, RC1!
I installed it and everything works fine, except that now whenever I plug in my external HDD (250 GB Barracuda with a Vantec enclosure) I get a blue screen and Vista restarts. It happens whether the drive is already on when I plug it in or I turn it on while it’s plugged in.
Bloody annoying …. any ideas?
I haven’t installed Vista yet!
I run Vista 64-bit and I’ve never had issues with RDP or slowness in general, aside from copying small files (Vista tries to estimate how long the copy will take, and the estimate takes longer than transferring the files; it’s weird).
According to various sources, Vista was always meant to be 64-bit. I think they should have stuck to their guns, made only the “proper” version, and let everyone else just run XP.
All new PCs can do 64-bit anyway.
It may have been the intention, but Microsoft doesn’t write most of Windows’ hardware drivers.
All the drivers would have to be updated to run on 64-bit.
64-bit Vista is very probably in an even worse position regarding unavailable drivers than 32-bit Vista is.
If you want a 64-bit OS, run OS X or Linux. They both write their own drivers.
As I’ve already blogged, I think Vista is more than a little slow and bloated. I know other people who are concerned about Vista’s reliability, but I haven’t gotten that far yet (I find Vista’s performance too lacking to have used it enough to run into reliability issues). Here are the Vista SP1 improvements I find interesting …
CrossTalk: http://dataland.wordpress.com/2007/12/10/vista-sp1-improvements/
I just finished setting up my first Dell Inspiron 1720 laptop (Intel Core 2 Duo, 256 MB Nvidia, 3 GB RAM) with Vista Home Premium …
Damn. Vista Home Premium is sloooow.
I’m glad this laptop is for my neighbor (who insisted on Vista).
I’m hoping SP1 will speed things up for him … cripes.
Suppose MyServ.exe uses SomeLib.dll and this DLL needs to be patched. With this feature, I wonder if Windows can patch the DLL without even stopping running copies of MyServ.exe?
Linux cannot do this, right? With ELF and .so, I mean.
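For what it’s worth: on Linux a running process keeps its old mapping of the .so until it explicitly lets go, but it can reload at runtime if it is written to do so. A sketch (the plugin path and the “do_work” symbol are hypothetical; build with -ldl):

#include <dlfcn.h>
#include <stdio.h>

typedef int (*work_fn)(void);

int main(void)
{
    void *handle = dlopen("./someplugin.so", RTLD_NOW);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    work_fn do_work = (work_fn)dlsym(handle, "do_work");
    if (do_work)
        do_work();                    /* runs whatever version was on disk at dlopen() time */

    /* ... later, e.g. on SIGHUP: drop the old mapping and map whatever is on disk now.
       The updated .so can have been dropped in place at any point with an atomic rename. */
    dlclose(handle);
    handle = dlopen("./someplugin.so", RTLD_NOW);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    do_work = (work_fn)dlsym(handle, "do_work");
    if (do_work)
        do_work();                    /* now the updated code */

    dlclose(handle);
    return 0;
}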