MSDN has posted three articles describing the new features in Longhorn and the next Visual Studio. In this first installment of the Inside Avalon column, Jeff Bogdan gives an overview of the goals behind developing “Avalon,” the presentation subsystem of the upcoming “Longhorn” release of the Windows operating system.
In the first installment of the Indigo Lingo column, Yasser Shohoud explores Visual Studio .NET project templates for creating Indigo applications.
The WinFS data store provides a far richer data storage model than traditional file systems. Because it supports data, behavior, and relations, it’s difficult to categorize WinFS as a file system, a relational database, or an object database. It’s a bit of all those technologies in one product. WinFS provides a common definition of ubiquitous information that is globally visible and available to all applications running on Longhorn. Applications can leverage the query, retrieval, transactional update, and filtering capabilities of WinFS; therefore, the developer spends less time developing data access and storage code and more time working on unique application functionality. Read the rest of the article at MSDN.
And this is exactly why Longhorn is doomed to fail: when developers finally catch up with the latest toys, they release yet another version. Longhorn will never succeed on the desktop as long as Microsoft has this kind of attitude.
You are kidding, right? XP is backwards compatible with code written for every other MS operating system from the last 11 years. On Linux, binaries that are 4 or 5 months old won’t run on the new platforms. How can you use this point to say MS will “not succeed on the desktop”? Maybe Linux will “not succeed on the desktop” for this exact reason.
Desktop Linux is slower and has more bugs than MS Windows. I would like to see JUST ONE article that gives an honest evaluation of Linux as a desktop. Even with a 1% market share and several lacking features, Desktop Linux is already slow and bloated.
This comment will probably be modded down for my views on Desktop Linux, but why? People are encouraged to support Linux, yet it is wrong to point out its shortcomings, and people call this “free” as in speech?
Exactly, I would like to see a brutally honest desktop Linux review. I’ve found all distros to be complete junk with the exception of Fedora and Slackware; the rest do a poor job of trying to be more user friendly. If Linux is going to make a move it needs to do so within three years, giving it till 2007 to get its act together, because once Windows 2006 comes out, the technology and advancements in that OS will blow away anything Linux has to offer. Right now Linux resembles Windows 98 (of course more stable, but the usability is around the same); when will it attain Windows XP comparability? 2006? It’ll be too late. Linux needs to stay focused on the things it does best, such as servers, mobile devices, and kiosks. It’s nowhere near ready for the average desktop user yet. 802.11g wireless support in Linux is almost nonexistent, and the commoner has no clue what the heck driver loader is.
At a time when Microsoft’s market share is slipping (note the multiple countries that have rejected MS software, Linux gaining in popularity, Apple in stride again, the US government forcing MS to open its source and protocols), they decide to go and change everything. More than anything else, I think this is to keep themselves proprietary and make everyone think they are doing something new. Linux almost has full read/write NTFS filesystem access. Quick! Change NTFS and put a new spin on it! The government is forcing us to open our data formats and protocols. Quick! Change everything!
Two of the goals of the articles seem to be backward compatibility and a new proprietary storage format. These things are holding everyone back. Windows 2003 server can still run software written for early versions of MS-DOS from twenty years ago. That is just silly. Rip out the backward compatibility and make the OS small, efficient, and scream with performance. However, my data is far more important than any OS. Consistent and open data storage would be a far more useful goal. The filesystems and data formats should be more accessible, even if they must be proprietary.
NTFS has had the ability to store multiple “streams” of data and meta-data (filename.exe:1, filename.exe:2, etc.) since it was HPFS under IBM’s OS/2 operating system, and Microsoft is only now doing something even remotely useful with it! But do you really think everyone will take advantage of a searchable meta-data-storing filesystem while we are running all our old applications? I think not.
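For anyone curious, the stream syntax mentioned above is literally just a colon-suffixed name. Here is a tiny Python sketch; the `ads_path` helper is made up for illustration, and actually reading or writing a named stream with `open()` only works on NTFS under Windows:

```python
def ads_path(path: str, stream: str) -> str:
    # NTFS addresses an alternate data stream as "filename:streamname";
    # the default unnamed data stream is "filename::$DATA".
    # This only builds the name. On Windows you could pass the result
    # straight to open() to read or write that stream.
    return f"{path}:{stream}"

# The streams the parent comment mentions:
print(ads_path("filename.exe", "1"))  # filename.exe:1
print(ads_path("filename.exe", "2"))  # filename.exe:2
```

The catch, as the parent notes, is that most tools (and most non-NTFS filesystems) silently drop these extra streams when a file is copied.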
Several people reported that Longhorn betas use the same NTFS format as XP and W2K3, and the Linux-NTFS driver and user-space tools (resize, clone, undelete, etc.) work fine with Longhorn’s NTFS. If you find otherwise, please let [email protected] know.
Oops, of course the list address is [email protected] and not [email protected] (no subscription needed, but the list is moderated). Sorry.
Like it or not, Longhorn may be the best OS when it’s out.
New codebase means more stable, more secure + all the software & hardware support + .NET development framework.
Longhorn will save MS from the threat of Linux, more or less.
Anyway, I choose Unix.
Face it: the current ‘desktop Linux’ is only for geeks, technical users, and those who are willing to ‘learn’.
“Several people reported that Longhorn betas use the same NTFS format as XP and W2K3…”
Does the beta have WinFS implemented? I haven’t tried it.
…except for the price point, and perhaps also for the fact that Microsoft has released a number of products in the past that didn’t exactly live up to their pre-release hype.
In my own estimation, based on 15 years of dealing with that company and its product releases, I think it’s wise to wait until we’re much closer to the actual product release before speculating about its nature. Who knows what might be added or dropped from the product before then, and who knows what features might actually exist in its competition in a few years?
What ever happened to common sense?
“Does the beta have WinFS implemented? I haven’t tried it.”
Search the net for how slow it is. WinFS sits on top of NTFS.
How many times have they said things like “this will be the biggest change ever”? I can remember a couple.
The article mentions that ways of importing and exporting to and from NTFS and WinFS will exist. One important concern I have is where the extra metadata goes when a file is exported to a format that doesn’t normally allow storage for that metadata. For example, say my BMP file doesn’t have a field for LOCATION, but the corresponding image item in WinFS did. If I export it, give it to a friend, accidentally delete my copy, and then get it back from my friend, is that metadata still there? This scares me; I can see a lot of confused users.

The only way I can see something like this being solved is by making a unified XML-like file format: when exporting items, the extra data stored by WinFS would simply be added into the file. So each file would be like a .xml file; maybe they would call it a .win file? Other systems would just have to learn how to strip away this extra information and get at the useful data inside (maybe the original BMP file data?). When read by a compatible system, the fields in this file might require the user to decide whether the metadata should also be integrated into their system as new categories. Anyway, those are just some of my thoughts.
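For what it’s worth, the wrapper format proposed above could be mocked up in a few lines. This is only a sketch of the idea, not anything Microsoft has announced; the `wrap`/`unwrap` names and the XML layout are invented:

```python
import base64
import xml.etree.ElementTree as ET

def wrap(payload: bytes, metadata: dict) -> str:
    """Bundle a file's raw bytes plus WinFS-style metadata in one XML container."""
    root = ET.Element("item")
    meta = ET.SubElement(root, "metadata")
    for key, value in metadata.items():
        field = ET.SubElement(meta, "field", name=key)
        field.text = value
    data = ET.SubElement(root, "data", encoding="base64")
    data.text = base64.b64encode(payload).decode("ascii")
    return ET.tostring(root, encoding="unicode")

def unwrap(doc: str):
    """Recover the original bytes and the metadata from the container."""
    root = ET.fromstring(doc)
    metadata = {f.get("name"): f.text for f in root.find("metadata")}
    payload = base64.b64decode(root.find("data").text)
    return payload, metadata

# A "BMP with a LOCATION field" survives a round trip through the wrapper:
bmp_bytes = b"BM...fake bitmap data..."
doc = wrap(bmp_bytes, {"LOCATION": "Kitchen", "Author": "kainhart"})
payload, meta = unwrap(doc)
assert payload == bmp_bytes and meta["LOCATION"] == "Kitchen"
```

A foreign system that doesn’t understand the wrapper could still pull out the `<data>` element to get the original BMP bytes, which is exactly the “strip away the extra information” step described above.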
It’s the biggest change yet, and yes, frequently it was: Windows 95, Windows XP, and now Windows Longhorn.
And this is exactly why Longhorn is doomed to fail
It’s this kind of shotgun thinking that gets the OSS crowd nowhere. Longhorn will succeed whether any of us like it or not. Why? Because it will be distributed on practically every new PC that comes out in the year it’s released. That’s why. Sadly, OEMs like Dell, Gateway, and IBM aren’t seeing customer demand for Linux.
And another round of media exhortation to dump the lousy Win XP and adopt the “Aero-Crash” experience of Longhorn. BTW, once again you’ll need to buy a new PCI-X box. The Dell/HP tax, the P4 tax, the MS tax: time to pay the piper.
WinFS as implemented in the PDC build just sits on top of NTFS. I imported a JPG file into my “default store” on a Longhorn system and the file was actually placed here:
“C:\System Volume Information\WinFS\{28963AD6-FB82-494C-840E-138D129184A9}\filestream10 00000007a000010c2c37669-cfe7-41f0-a93b-d521918adfbb”
A Linux system will be able to read this file, but it’s not going to help much if you have several thousand of them, all similarly named with no identifying information.
The original file name and all its metadata are stored in a SQL Server database, which AFAIK Linux is not able to read in binary format.
You’ll be able to use Linux tools to back up your data, but trying to find a specific file within a WinFS store will be more difficult than if it were just a hierarchical file system.
@Edward:
I can’t find the info at the moment, but that is an implementation detail currently exposed in the PDC build that will not be end-user accessible in later builds.
@kainhart
RE:Question about exporting:
From WinFS newsgroups
http://msdn.microsoft.com/longhorn/community/newsgroups/default.asp…
Removable Storage and WinFS:
When you copy a WinFS item to FAT32, it gets serialized into a blob and put on FAT32 as a stream. When it gets copied back into a WinFS store, it’s deserialized back into an item. No data is lost.

The same happens in the zip scenario. The zip vendor will get serialized blobs out, zip them, pass the zip file around, then unzip the serialized blobs and deserialize them into WinFS.

Copying items out of WinFS and into other non-WinFS media is supported by WinFS serialization and deserialization. All of the contents of an item, including its relationships and the items that are at the target endpoints of holding and embedding relationships, get serialized with it.

If you copy a document item that is the source of a holding relationship to, say, a contact of type “author”, then both the document and the contact items would be serialized into a single blob. That blob is a traditional file and can be moved around from format to format until it’s put back into a WinFS store. No properties of an item are lost in the process.
toby whitney
GPM, WinFS
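The round trip Toby describes (an item plus its related items go in, one blob comes out, nothing is lost) can be illustrated with a toy Python model. None of these names are the real WinFS API; this is just the shape of the idea:

```python
import json

def serialize_item(item: dict, related: list) -> bytes:
    """Pack an item AND the items at the target ends of its holding
    relationships into a single blob, i.e. one traditional file."""
    return json.dumps({"item": item, "related": related}).encode("utf-8")

def deserialize_item(blob: bytes):
    """Unpack the blob back into the item and its related items."""
    obj = json.loads(blob.decode("utf-8"))
    return obj["item"], obj["related"]

# A document holding a relationship to a contact of type "author":
document = {"type": "Document", "title": "Budget.doc"}
author = {"type": "Contact", "role": "author", "name": "A. Writer"}

blob = serialize_item(document, [author])   # ordinary file: FAT32, zip, email...
item, related = deserialize_item(blob)      # ...back into the "store"
assert item == document and related == [author]  # no properties lost
```

The key point is that the blob is an ordinary file, so it can pass through FAT32, a zip archive, or any other non-WinFS medium unharmed.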
There are really two kinds of copy operations out of WinFS. Toby is referring to the lossless export supported by the WinFS API. If the item is backed by a Win32 file (e.g. a JPEG file), then you can also copy just the file stream using the Win32 API. In this case, the file stream will contain the modified metadata PROVIDED that the system has a demoter which can push the metadata properties into the file (not all file formats can support this). Sanjay’s presentation for session CLI326 includes an overview of this mechanism.
Ward Beattie
WinFS GPM
CLI326 can be found here: http://microsoft.sitestream.com/PDC2003/CLI/CLI326.htm (MS Producer Presentation)
PowerPoint Slides for that session are here: http://microsoft.sitestream.com/PDC2003/CLI/CLI326_files/Botto_file…
The above comment was from me.
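Ward’s point about demoters boils down to a per-format hook that may or may not exist. A hypothetical sketch (the function and format names are invented, and the “embedding” here is fake, just enough to show the branch):

```python
def jpeg_demoter(stream: bytes, props: dict) -> bytes:
    # Pretend EXIF-style embedding: tack the properties onto the stream.
    tail = ";".join(f"{k}={v}" for k, v in sorted(props.items()))
    return stream + b"|META|" + tail.encode("utf-8")

# Formats without an entry here have no demoter, so a Win32 stream-only
# copy of such a file carries none of the WinFS metadata.
DEMOTERS = {"jpeg": jpeg_demoter}

def copy_file_stream(fmt: str, stream: bytes, props: dict) -> bytes:
    demoter = DEMOTERS.get(fmt)
    return demoter(stream, props) if demoter else stream

with_meta = copy_file_stream("jpeg", b"<jpeg bytes>", {"Location": "Paris"})
plain = copy_file_stream("bmp", b"<bmp bytes>", {"Location": "Paris"})
assert b"Location=Paris" in with_meta
assert plain == b"<bmp bytes>"  # no demoter for BMP, metadata dropped
```

This is exactly why the WinFS API export is the lossless path: it serializes the whole item, while the stream-only copy preserves metadata only for formats the system knows how to write it into.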
“New codebase means more stable, more secure + all the software & hardware support + .NET development framework.”
Stop having dreams of grandeur. Longhorn isn’t a new code base; it’s merely based on the same NT core, with some consolidation of various technologies and some new ones.
Stop trying to come up with stories to fulfill your own fantasies. Some of us here actually like to live in a world of reality sometimes.
BTW, are you Mahathir Mohammad? With delusions like the above, it would be hard to mistake one misinformed creature for another.
I second ChocolateCheeseCake with perhaps a few differences:
Even if it IS a new codebase (Microsoft can’t afford to rewrite too much of the OS: they’ve already done that with Win2K, and made major mutations with XP), a new codebase only translates to new code, and that by itself means very little. Old code is often sufficiently debugged, if only through the process of many people finding issues that are then fixed. With new code you have to start all over again, and the defect density of new code is often as high as or higher than that of old code that has gone through revisions, even when the old code has built up a lot of cruft.
It is exactly for this reason that many don’t upgrade to the latest and greatest of any OS or software that’s critical to their business: the known evil, with all its known warts, is less risky than a new set of bugs you don’t know about. This is true for server software as well as embedded software development, or anything else of a mission-critical nature. Even Linux is treated the same way by many users: wait for a release to get a few bug fixes under its belt before installing it on a main system.
Now, as to backwards compatibility, that’s both a blessing and a curse to carry around. It’s a blessing, because most old applications still function, and a curse, because it requires a lot of overhead in the system design and implementation, as well as requiring previous bugs to become features so previous software that depended on the way the system worked with the bug still works. In that respect, you MIGHT be able to argue that a new codebase will have fewer bugs, assuming that the new codebase doesn’t support backwards compatibility with all the old bugs 🙂 and even if the new codebase is buggy, it might still be less buggy than all the cruft it omitted.
There’s a key issue which makes rewriting much of the codebase for Longhorn beneficial however. The benefit is security.
The WinFX APIs are being written in managed code to reduce the attack surface of the OS and the applications.
Overall defect density would likely be lower than that of equivalent C++ code, since much of the functionality is new and many features that exist in current versions of Windows are being reimplemented. This new code gains the benefits of the CLR and avoids many of the issues that would lead to further security patching if it stayed unmanaged. The system can exercise greater control over whether code should run, granting it various levels of trust depending on the evidence the code provides. This can decrease the potential impact of malicious code in a way that can’t be accomplished by continuing to patch and use old, unmanaged code.
Win32 will still provide backwards compatibility, but it’ll sit beside the WinFX API, not beneath it, and it won’t gain many additions. Also, due to new security policies Microsoft is planning, some Win32 applications may not run correctly (or at all) with the default OS policy.
The kernel will still be C++, however it will also change, mainly to support new features. There will be additions to support new driver models, transactions, logging, synchronization, scheduling enhancements, etc.
to trolls by Huh (IP: —.tc-22.roc-pt.ny.localnet.com)
“On Linux, binaries that are 4 or 5 months old won’t run on the new platforms.”
Stop reading other trolls comments for your knowledge – that statement is totally incorrect.
“I would like to see JUST ONE article that gives an honest evaluation of Linux as a desktop.”
You could say the same for the Windows desktop.
to Steve (IP: —.bct.bellsouth.net)
“Right now Linux resembles Windows 98 (of course more stable, but the usability is around the same); when will it attain Windows XP comparability?”
The Linux desktop does not resemble Win 98; you can’t press Escape to avoid logging in. Where does it not compare with XP? KDE has more features and is faster. XP is a slowed-down Win 2000.
Drivers are an issue for all OSes with new hardware. I installed an XP system not long after it was released, along with an Epson printer and scanner. You could install the Epson drivers, and XP wouldn’t work with them. Every time you shut the machine down and started it up again, it would autodetect a completely different scanner and printer, not even from the correct manufacturer.
Not everyone wants the latest bit of hardware the moment it appears on the market; that’s an option for the small minority of people who like to have bragging rights.
The only thing lacking in the Linux world is more manufacturers releasing their software on more platforms. If they had been foresighted enough, they could have chosen Java or Qt (maybe others) for cross-platform development and had one code base for all platforms, including Windows, Apple, Linux, etc.