Yesterday, during the opening hours of the D6 conference, Walt Mossberg and Kara Swisher jointly interviewed Steve Ballmer and Bill Gates. While the interview dealt mostly with the past, Yahoo, and a bit of Vista, by far the most interesting part was the first-ever public appearance of Vista’s successor: Windows 7. Earlier today, the team behind D6 posted a video of the demonstration, which was conducted by Microsoft’s Julie Larson-Green. From a graphical user interface point of view, there were some interesting things in there. The video shows Larson-Green talking us through the various touch features while answering questions from Mossberg and Swisher.
Let me give you a little history on Julie Larson-Green first. She joined Microsoft back in 1993, and throughout her career at the company she focused on user interface design. Her most important responsibility was the user interface design of Office XP, Office 2003, and most recently, Office 2007. Larson-Green led the massive interface overhaul of Office 2007, a bold effort that resulted in a completely new and – yes – innovative user interface. She was brought on board the Windows 7 GUI team almost a year ago.
The demonstration obviously focused on the various multitouch features built right into Windows 7 – they will be available system-wide. Larson-Green called it an “evolution of Surface”, and they’re of course working together with the Surface team itself. The multitouch features require a digitiser built into the display, hardware that is already shipping in various displays today. So obviously, you’re going to need a new monitor.
To me, it seemed as if Larson-Green and the rest of the GUI team realise full well that multitouch is not an answer to everything, but that it is “much faster to do certain tasks”. As Larson-Green explained: “Use touch when it makes sense, use the mouse when it makes sense, use the keyboard when it makes sense.” I believe it is indeed wise not to focus all efforts on multitouch as if it were the only sensible input method, but rather to see it as an additional input method that makes sense for certain tasks. Larson-Green confirmed Microsoft is working on adding gestures for things like window management.
The applications that were part of the demonstration will not necessarily be part of Windows 7; they are applications written to demonstrate what can be done with the multitouch features in Windows 7. Interestingly, the Concierge application made use of circular menus, a user interface element frequently appearing in mockups lately. As we know, circular menus are potentially easier to use thanks to – dead horse alert – Fitts’ Law.
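For readers who want the formula behind that claim, here is a quick sketch of Fitts’ Law in its common Shannon formulation, where T is the time needed to acquire a target, D the distance to it, W its width, and a and b empirically fitted constants:

T = a + b \log_2\!\left(\frac{D}{W} + 1\right)

A circular menu that opens under the pointer keeps D small and roughly equal for every item, and each wedge grows wider as you move outward from the centre, so the effective W is large – both factors push the predicted selection time down compared to a linear menu.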
The final interesting part was the rather odd-looking taskbar – assuming it even was a taskbar. The bar was twice as high as an ordinary taskbar, and lacked text, using what looked like icons or thumbnails instead. It reminded me of the RISC OS icon bar, mostly. Apparently, Larson-Green was not at liberty to discuss it, because when Mossberg asked her about it, she replied: “It’s something we’re working on for Windows 7 and I’m not supposed to talk about right now, today…”
While all we received was a small glimpse, I’m excited about everything that’s going on behind the scenes. Larson-Green and Steven Sinofsky have delivered a truly innovative product with Office 2007, and as a GUI enthusiast, I’m excited to see them working on the Windows interface in quite a – for Microsoft – secretive manner. Some people are extremely cynical, and that’s fine – I’m more of an optimist and await more information from Redmond.
Every multitouch presentation trots out the same silly demos: throwing photos around, resizing them. Well, no one does this. The mouse is much more accurate at moving things around, and the mouse wheel is much better at zooming.
Same with the Google Earth demo (or whatever Microsoft decided to call their knockoff). There’s nothing wrong with the mouse here. It’s much easier to use the mouse + mouse wheel to zoom around on a globe than it is to use gestures.
Waving around water? On-screen piano? Please.. These are neat toys, but not anything useful.
Of course these things are nice on a public access kiosk and handhelds, but not for the traditional laptops and desktops, and those aren’t being replaced anytime soon.
I agree. Then again, I’m still waiting for 3D on the desktop to reveal advantages which are not of the “reaching” nature.
I suspect that we will be well into the “Touchiz Fusion” era before anything solid shows up. But solutions in search of a problem are like that. They eventually find one, if there is really any benefit to them. 😉
The advantage of 3d accelerated desktops is offloading the processing load to your graphics card from your main processor. That and it can make remote desktop type apps vastly more efficient.
I will agree most of the user-visible effects are cute toys and nothing more, though.
How so? Remote desktop type apps are bandwidth and/or network latency limited, even on a LAN. You’d be hard pressed to tell apart, even over NX, unaccelerated VESA, 2D acceleration, and 3D acceleration on a LAN or a WAN.
Current systems, as I understand them, capture what’s being displayed and apply varying degrees of compression to it before sending the output across the network. Theoretically, with a 3D accelerated desktop you could just send the instructions for drawing the primitives the desktop is composed of across the network and have the GPU on the other side render them, freeing up processing power on the server side and allowing for higher quality and a richer experience on the client side.
There are quite a few catches here, actually: the whole windowing subsystem would have to support network transparency all the way down to the lowest graphics functions. X does support network transparency, so that would be possible with X, but it just hasn’t been done. With Windows it is not possible without rewriting the whole thing. There are also a few other caveats: both ends would have to have exactly the same fonts or the end result wouldn’t look the same, and you’d still have to send all images, like icons and web page elements, over the network.
Not every remote desktop implementation works by capturing a bitmap and sending it across the network. VNC works like this, but RDP or ICA or even X work differently.
They do exactly what you just described: they send graphics commands across the network and the client interprets them, rendering what the server sent.
As far as I know, X already does this (sending drawing primitives across the network). On the other hand, its network performance isn’t stellar, so there may be problems with either the protocol or the implementation (hence the need for NX or similar).
RDP also has an extension for this: http://msdn.microsoft.com/en-us/library/cc239611.aspx
“For example, instead of sending the bitmap image of a filled rectangle from server to client, an order to render a rectangle at coordinate (X, Y) with a given width, height, and fill color is sent to the client. The client then executes the drawing order to produce the intended graphics result.”
The point of sending bitmaps is that you waste bandwidth but save client CPU, which can be good for thin clients. I don’t think it’s a good trade-off, but hey, I didn’t create these protocols (VNC, or RDP without this extension).
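To make the bandwidth difference concrete, here is a rough sketch in C of what the two approaches boil down to. The struct is purely illustrative – it is not the actual RDP wire format – but it shows why a drawing order can be a handful of bytes where the equivalent raw bitmap would be megabytes.

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: a made-up "fill rectangle" drawing order,
 * NOT the real RDP encoding. */
struct fill_rect_order {
    uint16_t x, y;          /* top-left corner */
    uint16_t width, height; /* size in pixels  */
    uint32_t argb_color;    /* fill colour     */
};

int main(void)
{
    struct fill_rect_order order = { 100, 100, 800, 600, 0xFF336699 };

    /* Sending the order itself: a handful of bytes. */
    size_t order_bytes = sizeof order;

    /* Sending the rendered result instead: width * height * 4 bytes
     * of uncompressed 32-bit pixels. */
    size_t bitmap_bytes = (size_t)order.width * order.height * 4;

    printf("drawing order: %zu bytes\n", order_bytes);  /* ~12 bytes       */
    printf("raw bitmap:    %zu bytes\n", bitmap_bytes); /* 1,920,000 bytes */
    return 0;
}

In practice the protocols compress the bitmaps, but the order-of-magnitude gap is why drawing orders matter on slow links.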
It all depends upon the network’s characteristics. X performance on a LAN is quite good. A bit of light compression might help there, but it’s debatable whether it would help or hurt. X is, however, a very “chatty” protocol. The serialized round trips are excessive. (Run X over a ppp connection and watch the modem lights.) Latency kills. Even just 10 or 20 ms. The neat thing about NX is that it *is* X, and not one of those framebuffer transmission kludges like VNC. Fast as lightning… and still X. One day, it will be an official extension of X and I can’t wait.
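To put a rough number on “latency kills” (the figures here are illustrative, not measurements): if a toolkit issues, say, 500 requests at startup and blocks waiting for the reply to each one, then over a link with a 20 ms round-trip time it spends

500 \times 20\,\text{ms} = 10\,\text{s}

just waiting, no matter how much bandwidth is available. Eliminating or coalescing those serialized round trips is exactly what NX attacks.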
The advantage of 3d accelerated desktops is offloading the processing load to your graphics card from your main processor.
The issue here is that, at least the way things are done now, only window drawing is done in hardware. All the GUI elements like buttons, window frames, text, all picture-related operations, color gradients and all that are done in software. GTK+ uses Cairo, and Cairo supports hardware acceleration through Glitz, but the GTK+ devs have chosen not to make use of that. So, in short, to FULLY take advantage of modern GPUs in the GUI, everything should be accelerated.
Well, it’s kind of impractical to accelerate everything; even OSX doesn’t do that. There are still some things that get done on the CPU. It would be nice to see the GTK devs support hardware-accelerated widgets though. It could allow for a much smoother experience, and people would stop complaining about how their desktop looks like it’s from the 90s (which is an exaggeration). Compiz is great, but it only handles one aspect of the effects; the toolkit should be more robust and support more features.
Well, it’s kind of impractical to accelerate everything; even OSX doesn’t do that.
Still, OSX and Windows both accelerate more drawing functions than X (or any X-based GUI toolkit) currently does. IMHO they should modernize the available graphics drawing function set and accelerate it where possible. And yes, GTK+ should allow the user to take advantage of Glitz. Even if it were not on by default, it should be possible to enable and disable it at runtime.
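To illustrate why that is mostly a toolkit decision: Cairo’s drawing calls are identical regardless of which surface backend they target, so whether a gradient is rasterised in software or on the GPU comes down to which kind of surface the toolkit creates. A minimal, illustrative sketch using Cairo’s software image backend (an accelerated surface, such as a Glitz- or GL-backed one, could in principle be substituted without touching the drawing code):

#include <cairo.h>

/* Draw a vertical gradient into whatever surface the toolkit hands us.
 * The drawing code does not know or care whether the surface is
 * software-rendered or GPU-backed. */
static void draw_gradient(cairo_surface_t *surface, int w, int h)
{
    cairo_t *cr = cairo_create(surface);
    cairo_pattern_t *grad = cairo_pattern_create_linear(0, 0, 0, h);

    cairo_pattern_add_color_stop_rgb(grad, 0.0, 0.2, 0.4, 0.8);
    cairo_pattern_add_color_stop_rgb(grad, 1.0, 0.9, 0.9, 1.0);

    cairo_set_source(cr, grad);
    cairo_rectangle(cr, 0, 0, w, h);
    cairo_fill(cr);

    cairo_pattern_destroy(grad);
    cairo_destroy(cr);
}

int main(void)
{
    /* Software (CPU) image backend; an accelerated backend would be
     * created here instead, with draw_gradient() left unchanged. */
    cairo_surface_t *surface =
        cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 256, 128);

    draw_gradient(surface, 256, 128);
    cairo_surface_write_to_png(surface, "gradient.png");
    cairo_surface_destroy(surface);
    return 0;
}

This is also why merely exposing a runtime switch in the toolkit would, in principle, be enough: the widget-drawing code itself would not need to change.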
I remember when the acceleration idea first came up, before Windows Vista. The issue that was discussed was this: given the amount of bandwidth, latency, and memory on the graphics card, and the computational power of said graphics card, can the extra complication one has to go through in designing the necessary subsystems be justified by the apparent (in theory) performance boost?
I think that you need to look at the WDDM model, not only now, but the model in the future for versions 2 and 3, and the added complication which Microsoft is putting on the hardware vendors. In fact, by the time version 3 of the WDDM rolls around, the driver itself will almost be like an operating system – with threading, resource management, scheduling on the GPU and so forth. If you think that things are bad now, just you wait till that point.
As for Xorg, there was a move a while back to actually have the whole of the Xorg server running on top of OpenGL, so basically you would have the whole server accelerated. Unlike Microsoft, Xorg doesn’t have the luxury of being able to just jump up and do things; there are more operating systems that use Xorg besides Linux – there are *BSD, OpenSolaris and a few other ones I forgot to mention. Whatever is added has to be agnostic to all platforms. With that being said, I don’t think the issues are as dire as some people here make them out to be. The issue in a lot of cases isn’t the right underlying technologies, but getting the toolkits to use them. Take GTK+ and Qt, and libxcb, for example: libxcb provides improvements over libX11 in terms of latency hiding and so forth, and yet, from what I see, there has been no move by the GTK+ developers to adopt it.
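For the curious, the latency-hiding difference is easy to see in code. With Xlib, a call like XInternAtom blocks until the server replies, so N lookups cost N serialized round trips; with libxcb you issue the requests first and collect the replies afterwards, so the round trips overlap. A minimal, illustrative sketch (error handling omitted; the atom names are chosen arbitrarily):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <xcb/xcb.h>

int main(void)
{
    xcb_connection_t *conn = xcb_connect(NULL, NULL);
    const char *names[] = { "WM_PROTOCOLS", "WM_DELETE_WINDOW", "_NET_WM_NAME" };
    xcb_intern_atom_cookie_t cookies[3];

    /* Phase 1: fire off all requests without waiting for any reply. */
    for (int i = 0; i < 3; i++)
        cookies[i] = xcb_intern_atom(conn, 0, (uint16_t)strlen(names[i]), names[i]);

    /* Phase 2: collect the replies; the round trips overlap instead of
     * being serialized, as they would be with Xlib's XInternAtom. */
    for (int i = 0; i < 3; i++) {
        xcb_intern_atom_reply_t *reply =
            xcb_intern_atom_reply(conn, cookies[i], NULL);
        if (reply) {
            printf("%s -> atom %u\n", names[i], reply->atom);
            free(reply);
        }
    }

    xcb_disconnect(conn);
    return 0;
}

Build by linking against libxcb (for example via pkg-config --libs xcb).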
Agreed, though a new kernel would be nice too 🙂
It’s a demo Beavis…something that anyone looking at it for 10 seconds can easily grasp and understand. It’ll be up to application developers to make useful software that will run on it, and I’m sure many such as myself are already cooking up some great ideas.
Ok, the videos look amazing and really funny, but, apart from being eye candy, computing is more than resizing photos and zooming maps, right?
Yes it has; Microsoft is bringing this to the mass market with Windows 7. It’s the same as these:
Lowfat http://www.youtube.com/watch?v=GkrM4ymkiDo
Compiz with touchscreen http://www.youtube.com/watch?v=Yx9FgLr9oTk&feature=related
MacSlow did a talk about this at GUADEC, and about how it would integrate and work with the Nautilus file manager. It’s a shame the marketing buzz around them is not the same.
I’m not talking about whether that makes sense or not, but it’s not the same.
In the Compiz video you only had one input source, not multiple fingers. Though I guess that is soon to change with MPX (Multi-Pointer X).
Wrong, wrong. What they showed in the Compiz demo was just plain using a finger as a mouse; that is not what the Windows 7 demo was about. It was multitouch with gestures, and the Compiz demo had none of these. I’m sure Linux will someday have the same kind of system when multitouch screens get more common in laptops and hopefully desktops, but what you showed wasn’t that.
All this is is more marketing to mask the fact that this is just a repaint of Vista. Note that there was a lot of talk about MinWin… they took that out. This reeks of the same overpromise-underdeliver strategy that we have come to expect from Microsoft.
Quite frankly, what good is sliding pictures around to most techies in an enterprise? I wish they would focus more on security and stability and explain how they are addressing them, rather than trying to be a poor man’s Steve Jobs (I am not an Apple fanboi, but SJ does deliver solid products most of the time, with solid marketing to back the technology). Microsoft is all about marketing these days and not enough about solid R&D or product stability. If they were, we would not have the Vista we have today. We would have an OS as solid as, and far more usable than, OSX and Linux. They have had years and near-infinite resources to make this work.
So, to the MS fanbois who will come here to try to bash my comments: note that MS has not been revolutionary. They have been reactionary, and in so doing have become a devolutionary product. Making something that taxes the system just to make it look nice (which is the typical MS MO these days) is just crap. My 2 cents.
Signed, a disenfranchised Windows user
http://en.wikipedia.org/wiki/Windows_Server_2008#Server_Core
MinWin, as far as I know, is part of the Windows Server 2008 lineup?
It is nice to see features of my iPhone will be coming to Windows. That means in about two years it will be standard on GNOME, after it is copied and refined somewhat.
I mean seriously, how can you run the biggest software company without any great innovations? Why is their marketing so end-user unfriendly? I don’t want to be a troll; I’m using Windows XP right now to post this comment. But I’m confused as to why MSFT is not able to create something brilliant. Something new. Regarding Apple, Mr Ballmer always highlights the enterprise capabilities of MSFT (which are not trivial at all). But when I look at Windows Vista or Windows 7, I miss a clean, professional and easy-to-use GUI that would look good in a corporate environment.
So what is the new, fancy killer app that makes everyone want Vista (Win7)? MultiTouch? WinFS (okok, kidding ^^)? Aero? I don’t know any. A great approach would be a cleaner, rewritten MAPI/TAPI that fits into .NET, a faster .NET framework and so on. (For RAD, .NET is obviously a great tool and I enjoy using it, but it is definitely _slow_) …
Microsoft has some things which are working (nearly) great; they should improve them to strengthen their position in the corporate world (Active Directory, Exchange Server and so on) …
All I have seen in the presentation above were some copycats of MacOS/Linux/*nix/whatsoever. Finally I want to use multitouch on my (i)phone, because it enables me to handle a _small_ device easily. NOT ON MY 24″ TFT, where I have to sit back ~1m (approx. 3 feet) to get a nice user experience …
Sorry for my bad English … and sorry for the lack of structure … I was just stunned by this waste of money.
Regards.
Patrick
What if you have a laptop? Then multitouch is useful.
Photosynth is stunningly unique, and there are lots of tablet users who will make use of multitouch.
What is the “fancy, new killer app”?
The same it has always been: Office.
The next Office will probably require at least Vista to get users to hop on the upgrade treadmill.
I can’t imagine myself doing that while eating french fries or a burrito. I’d rather keep using my mouse.
Isn’t it already possible to have multi-touch inputs on *nix systems with MPX XServer?
MPX code has just been merged into the Xorg development tree a few days ago.
MPX enables multiple pointing devices, such as multiple mice connected to one system; but if the input device were a multitouch-compatible display, then MPX should permit that too, correct?
I personally hate touching screens or other people touching my screen. Why would I want this, then?
And in the video you can clearly see how the display of the notebook starts wobbling around as soon as she puts her hands on it. I can’t believe that this is fun.
On a PDA you’re at least using a stylus. And even if you use your fingers, the display is small enough to just wipe it with your hand. But on a desktop/notebook display? No chance.
To me it feels like the wrong direction taken with Vista is getting even worse. Offering multitouch and this stuff to business users? Are they outta their mind?!
Seems like another useless release of an MS OS likely to require a quad-core CPU and 4 gigs of RAM. Stick to WinXP forever.
Please move along, nothing new to see here…..
http://www.ted.com/index.php/talks/view/id/65
I simply don’t understand those guys at MSFT any more. As already pointed out by a lot of users, all that was shown is not only a ripoff, but generally speaking a ripoff of the lamest applications you can find on, let’s say, an iPhone. The iPhone is a great phone not because of these eye-candy features, and not because it supports multitouch, but because it has an intuitive, rethought interface which makes working with the thing a breeze. The fact that this interface required multitouch is just a side effect. Here we see the usual MSFT approach instead: first build the technology, then build the product. Wrong wrong wrong. They should have first redesigned the Windows interface in order to create something truly useful and ground-breaking. THEN they should have thought of the technologies needed to run the product and developed them. In the end: without a complete interface revamp with new concepts, this multitouch thing is just another alternative input method, nothing to write home about.
It is pretty meaningless to first design an interface that greatly depends on multitouch, and just AFTER that start designing the technology behind it. That just is not an efficient way of doing stuff.
Besides, didn’t they do it that way with Longhorn already? They showed us great demos of eye candy and “Coming in October 2003”, afterwards realising that they couldn’t get it to work as a single operating system, and had to dump quite a few lines of (to some extent) working code.
If anyone still doubted Microsoft was in league with hardware manufacturers, their illusions should be shattered if this multitouch makes it into the next Windows. Maybe that’s why Windows is so popular to preinstall on machines: because it helps shift new hardware. In other fields this would be described as a cartel.
At least a year before Office 2007 came out, I had a chance to use Keynote on my boss’s PowerBook, and later, when I saw Office 2007, my first thought was that it’s a good ripoff of the Apple software. Why is it called “innovative” here?
I am more worried about performance, a rework of UAC, and the removal of DRM. I think most users are sick of poor performance and the less-than-stellar UAC model. What kind of recommended requirements are all these new “features” going to bring? Personally, because of DRM, I have not got around to learning Vista’s quirks. I don’t pick my Linux for Compiz Fusion; I pick it for performance and ease of use.
As others have also pointed out, this is all well and good as a pound of candy for our collective eyes, but usually one doesn’t have 10 images; we have tens of thousands of images, and organizing them by hand/touch wouldn’t seem a sane way to do it. Also, what is mentioned pretty rarely: using your hands to do these things can be really tiring – and I mean really – and nobody would be able to do this for longer than a few minutes at a time. As for 3D applications of the touch interface, I’d say it would have some limited use, like easy demonstrations and nice previews, but nothing else; it’s just easier and more precise to do the architecting steps with a high-precision device. Of course it would have its uses in places like CNN these days, but on the desktop, well, we’ll have to wait and see. I’m fairly skeptical.
Well, if we focus on the technology instead of which company innovated what first (I guess that many here are reluctant towards software patents anyway), it is pretty clear that no one has brought this to the desktop just yet, and Microsoft might very well be early this time around. Although I, and many others it seems, have yet to see them, there will probably be interesting desktop features innovated around the multitouch concept in the future.
However, isn’t one of the points of computers and computer GUIs to liberate the user from the constraints of the physical world? For instance, if I can toss around and resize a picture with barely noticeable movements of my hand and fingers using a mouse, why would I want to emulate the “real” thing? Why would I want to sort through a pseudo-3D stack of pictures to find the right one, when a wall display of thumbnails is much more efficient?
I guess the answer is that this technology is not aimed at computer geeks and the way we interact with desktop computers, but at improving the computing experience for non-techies and casual users. My great grandmother would probably learn using a well designed multi-touch interface in 10 minutes without any help at all.
I still enjoy hammering the gamepad while playing good ol’ Track & Field on the 8-bit NES, but that does not make the Wii controller any less fun for Wii Sports playing grannies.
Created in the year 2006!!!
And it works in a wonderful way!!
Look here: http://it.youtube.com/watch?v=89sz8ExZndc
Jefferson Y. Han’s personal page: http://www.cs.nyu.edu/~jhan/
On Wikipedia: http://en.wikipedia.org/wiki/Jeff_Han
… damn Microsoft, going around stealing …
Would somebody at OSNews mind not linking everything containing Windows 7 in the title for the next few years? Features of some distant future Windows release are boring and irrelevant – for the most part, they’re pure marketing…