Windows 8 is still in heavy development, and still has a long way to go before it ends up on our desktops, laptops, and tablets. One of the major concerns is how Windows 8 will deal with a traditional keyboard and mouse/trackpad combination – especially on non-tablet computers, where touch input simply isn't an option. While Microsoft assures us it's all good, Synaptics decided to see what it, as a trackpad maker, can do.
Windows 8's interface depends on a number of gestures and flicks – things like swiping in from the sides to switch between windows, to open the menu-thingamabob, and so on. When I tested all this with my mouse for a little while (Windows 8 blue-screens on my machine a few minutes after boot), it all seemed a bit arbitrary – but then again, if you stop and think about our current desktop interface, it's filled to the brim with arbitrary nonsense. A learning curve is to be expected with any new interface.
Synaptics, however, decided to approach this issue from a different angle. Instead of having users rely on mouse clicks and keyboard shortcuts mapped to the new touch gestures, Synaptics mapped the surface of the touchpad to the actual screen. This seems strange, but it means that, for instance, the flick from the right edge of the screen to bring up the menu can now be done by flicking from the right edge of the touchpad.
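To illustrate the idea, here is a minimal sketch of how a driver might classify such edge flicks, assuming the touchpad reports coordinates normalized to the unit square. All names, zone sizes, and thresholds here are my own illustrative assumptions, not Synaptics' actual implementation:

```python
# Hypothetical edge-swipe classifier for a touchpad mapped 1:1 to the screen.
# Coordinates are assumed normalized to [0, 1]; thresholds are illustrative.

EDGE_ZONE = 0.05   # outermost 5% of the pad counts as an "edge"
MIN_TRAVEL = 0.10  # a swipe must travel at least 10% of the pad width

def classify_gesture(start, end):
    """Return an edge-swipe name, or None for an ordinary pointer move.

    start, end: (x, y) tuples of the first and last touch positions.
    """
    sx, sy = start
    ex, ey = end
    if sx >= 1.0 - EDGE_ZONE and sx - ex >= MIN_TRAVEL:
        return "swipe-from-right"   # e.g. bring up the side menu
    if sx <= EDGE_ZONE and ex - sx >= MIN_TRAVEL:
        return "swipe-from-left"    # e.g. switch between apps
    if sy <= EDGE_ZONE and ey - sy >= MIN_TRAVEL:
        return "swipe-from-top"     # e.g. app commands
    return None                     # plain cursor movement

print(classify_gesture((0.98, 0.5), (0.60, 0.5)))  # swipe-from-right
print(classify_gesture((0.50, 0.5), (0.60, 0.5)))  # None
```

The key design point is that only touches *starting* in the edge zone are treated as gestures, so ordinary cursor movement across the middle of the pad is unaffected.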
I'm embedding TechCrunch's video because the video on Synaptics' site runs unbearably slowly for some reason.
This actually seems like a pretty decent compromise for laptop users. Having to raise your arms to a vertical screen is entirely pointless, but I'm not so sure about gestures mapped to mouse clicks and keyboard shortcuts either. The raised edge of the touchpad is an ideal tactile target for these edge-flicking gestures, and while you can only properly judge these things by actually using them, I might actually prefer this solution (on laptops) over a regular touchpad.
“The time for PC OEMs to design for Windows 8 with touch has arrived. Synaptics is excited to deliver on the promise of advanced touchscreen and innovative TouchPad technology, which will play an important role in how users interact with their Windows 8 PCs and tablets,” said Mark Vena, senior vice president and general manager at Synaptics. “We’re especially enthusiastic about new product concepts like Intel’s thin and light Ultrabook which will take special advantage of Synaptics’ technology.”
I wonder just how much proprietary software magic is going on there, since I don’t think it will require very special hardware – i.e., this ought to work on existing touchpads too, right? Let’s hope Linux and other operating systems can benefit from this stuff as well.
This reminds me of 10GUI: http://10gui.com/
It seems like a good idea, but only if the touchpad were made bigger and pressure sensitive so that it could display where you are touching before registering a touch as in the 10GUI video.
I remember that. I liked the hardware idea but to me having multi-finger swipe gestures seemed wasteful of all those touch points that could have been used to move multiple sliders at once. They even implied that would be useful earlier in the video.
And it seemed somewhat unintuitive. I can’t remember what the gestures did and I can’t even figure out what they must have been. And not everybody has that many fingers.
Actually, I doubt that being able to move several slider controls at once would be that useful, at least for current productivity tasks.
- If all those sliders control the same thing, like an audio mixing console (which is what I would expect), then you need to modify only one parameter at a time to be able to fully isolate its effect and decide which value is best. The only operation that can efficiently be performed with several fingers at once is muting all the console channels before turning off the console and leaving the room.
- If they control independent things, then the human brain's inability to multitask well comes into play, again making the task a bit painful.
Well, I guess stuff like games could make use of multiple touch points though. Multitouch World of Goo sounded fun.
I'd definitely hate using something like that. It's way too imprecise, as you're constantly guessing where your finger will "land" on the screen. It would be somewhat usable if it could sense your fingers hovering over the touchpad and show on the screen where they are, but as it is, I seriously doubt it'll ever catch on.
As for implementation and hardware: I don't see anything that couldn't already be done. I mean, I have a Synaptics touchpad on my laptop and it can do gestures, multitouch and all. I haven't tried with four fingers, but it can register at least three – that much I have tried. So the only new thing here is the software implementation. And even then it's not a new idea; it has been thrown around for years and years.
What I don't get is why they don't do what Apple does with Lion: there are all sorts of gestures, but with one finger you have a normal mouse, while multiple fingers trigger gestures.
I can say that it works really quite nicely (I would also say that the large MacBook touchpad helps a great deal here, as opposed to most "tiny" PC touchpads).
Oooh, I really like the idea of implementing it so it could sense your fingers hovering over the touchpad, for 'pre-landing' visualization. I'm sure there must be thermal sensors that are sensitive enough (if you have warm hands in a cool room, at least) – or you could even triangulate from a few tiny IR cameras at the touchpad corners, but that's getting a bit complicated and expensive, isn't it? (It could have concomitant 3D gesture applications, though.)
A nice subtle glow (red? Blue for me, I'm a bit red-green colour blind) appearing on screen where your fingers are hovering... between 3mm and 9mm I reckon works for me.
hmmm, *maybe* someone’ll read this and make it for me. ooh look, a flying pig!
To be more frank: it has been done already, I think. At least a few early Acer Aspire One netbooks (IIRC), some running fairly stock Android, pretty much did this (to quote Thom's words).
(So, in fact, sort-of-Linux most likely had it much sooner.)
And with quite cumbersome results, if I recall the impressions correctly (yes, on minuscule Aspire One touchpads, so there's some hope, I guess) – actually, the consensus was that it works horribly, I believe. Not really anywhere near the experience described in the mini-article above.
I doubt even sensing fingers in hover would help much – that’s a fairly “nuanced” and “slow” action, requiring even more precision, vs. basically just resting the fingers on some surface.
Touch itself is not a problem. It is the very idea of having someone, let alone me, touch my monitor in order to do something on-screen. It's the most ridiculous way to control a computer that I have ever heard of. Not to mention that a computer screen covered in smudges, smears, and fingerprints is simply NOT something that I will ever accept, have in my house, or allow someone else to cause! Tablets are one thing; my pristine monitor screen is quite another.
From a Wacom user's perspective it's completely normal, but switching from a touchpad to a tablet is the way of pain.
The keyboard/mouse interface works great and has served computer users' needs since their inception. I understand touch tablets are the hot new product, but that fact doesn't diminish the keyboard/mouse in any way. A desktop and a tablet are not the same thing. Trying to Frankenstein one into the other seems dumb, to say the least.
Maybe somebody will come up with a decent compromise but to say I’ve been less than impressed thus far would be a massive understatement.