Using Photoshop usually requires lugging a typically cumbersome, expensive computer around, and changing that experience has been the dream of many creatives for years. As we found out back in September, it’s a problem that Adobe has been actively working with Google to solve. The two companies have been working together for almost two years to bring Photoshop to the browser, and they finally have a working version called Photoshop Streaming that they’re letting educational institutions apply to test over the next six months. Yesterday, I got a look at it in action when Adobe’s director of engineering, Kirk Gould, remotely ran me through a brief demo of the program.
So it streams video of Photoshop running on a remote server to Chrome. It works basically the same way as OnLive Desktop and Steam In-Home Streaming.
I guess that's cool, but if you really wanted to run Photoshop CS6 or Creative Cloud on a Chromebook, both are supported by the latest Wine, which can run on x86 Chromebooks via a Debian or Ubuntu Crouton chroot. Granted, Photoshop would likely run like a snail on a Chromebook.
Very true. The last time I tried running Photoshop in Wine was around the CS3 era; it worked, but crawled so badly it was totally unusable. You had to watch the toolbar redrawing line by line every time you moved or selected something, and that was on a Core 2 Duo. Perhaps Wine has improved in the past 5 years to fix this though – I must try it again…
Two huge companies spent 2 years to implement VNC in a browser.
If HTML5 is the future, then why is the only HTML5 “app” anybody uses Google Docs / Gmail?
… because it's too hard and not worth it.
Also, what “cloud” does this run on? Microsoft’s Azure just had a huge outage, like 11 hours I heard.
But that’s wrong? HTML5 is being used for ownCloud’s online editor, for YouTube and other post-Flash streaming videos, by Netflix for native Linux playback, and by Microsoft for a WebRTC version of Skype.
Says more about Adobe than HTML5, IMHO. It's used for much more than Google Docs and is quite capable of running locally as well as remotely. Adobe just spent two years on this so they can gradually phase out the local version of Photoshop and start forcing recurring payments from their customers, and I might point out that HTML5 has very little to do with what's going on here. I suspect most of those two years were spent on infrastructure and on making sure that Photoshop itself would stream without trouble. There'd be a ton of optimization going on in the version of Photoshop they're streaming.
I believe Adobe runs their own datacenters.
So they probably had a lot of work to do:
– they had to create infrastructure that supports using the GPU for the calculations, built so it can scale to accommodate the number of customers they'll get.
– VNC sucks for interactive Internet streaming, so you want something smarter. Photoshop probably has a UI toolkit, kind of like GTK or Qt, so they had to find a way to change that toolkit to push as few pixel updates as possible to the browser (something like the sketch below).
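Purely as a sketch of that last point – Adobe's actual protocol isn't public, and the message format, endpoint, and element id below are invented – the idea is that the server sends only the rectangles that changed, and the browser paints them onto a canvas:

```typescript
// Hypothetical dirty-rectangle streaming client (not Adobe's real protocol).
// Each message describes one changed region; untouched pixels are never resent.
interface DirtyRect {
  x: number;    // left edge of the changed region, in canvas pixels
  y: number;    // top edge of the changed region
  png: string;  // base64 PNG containing only the changed pixels
}

const canvas = document.getElementById("remote-view") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;
const socket = new WebSocket("wss://stream.example/photoshop"); // made-up endpoint

socket.onmessage = (event: MessageEvent) => {
  const rect: DirtyRect = JSON.parse(event.data);
  const img = new Image();
  // Blit the partial frame at its offset, so only the changed area is redrawn.
  img.onload = () => ctx.drawImage(img, rect.x, rect.y);
  img.src = `data:image/png;base64,${rect.png}`;
};
```

That's basically the difference between naïve VNC-style full-frame updates and a toolkit-aware approach: the less the UI changes, the less you have to send.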
On your HTML5 app question: you have applications like https://popcorntime.io/ on the desktop, and lots of mobile ‘native’ apps that are actually HTML5 apps with a wrapper, or native applications with a webview pulling more up-to-date data.
And pretty much every new or redesigned website you see is an HTML5 “app” these days.
Seems even Microsoft is building them now. Haven’t actually used them.
Here is another example: https://c9.io/
It’s strange to witness these shifts back to centralized services. Back in the days of mainframes this was done because hardware was so expensive.
As hardware became better & cheaper, PCs became the norm owing to their superior experience. Then we got the internet and lots of social services migrated to the web due to their social functions, yet local apps still dominated for productivity.
Now we are seeing some vendors come full circle by providing the entire service remotely and making the local machine a thin client again. However, this time the push away from local software and toward central services is motivated by companies wanting to generate recurring revenue streams, which many big software companies have had trouble with. Once software is installed, many users can't be bothered to keep paying for every release. This problem is easily solved by hosting the product remotely.
Even if I wanted a thin app I'd still prefer self-hosted options where everything remains in my possession (with offsite backups / redundancy). Something just bothers me about being so dependent on external factors for every minute of work. Just the other day the internet went out for a couple of hours; that was an inconvenience, but at least I could still use local software to do work. Thin apps on a Chromebook would have made working impossible.
I think most users are willing to let this shift happen because they're tired of maintaining their computers or, more accurately, of having to find someone else to maintain them. I don't even blame them, really. I hate to use car analogies for most comparisons, but I think it works here: people buy a car and learn how to operate it, but most never learn how to fix it. It's the same way with computers, and given the crap on the internet, Windows/Mac updates breaking, antivirus software, driver fuck-ups, and the rest, it's easy to see why they want something a little more streamlined. As it stands now, computers break down far more often than cars and have to be replaced more often too, and replacement comes with a whole set of installation and reconfiguration headaches. I'm not saying I like this, but I do understand it from the other side, since I'm the one who gets to see everyone I know go through the headaches and ask me for help.
I think the best thing programmers can all do now is write a lot of local software for devices like Chromebooks. Web apps can run offline just as easily as online if coded right. We won’t stop the trend, but we can diminish its severity.
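For what it's worth, the browser pieces for this already exist or are arriving. A minimal sketch, assuming the Service Worker and Cache APIs (the file and cache names are placeholders), that lets an app's shell keep loading with no connection:

```typescript
// sw.ts – minimal offline-first service worker (sketch only).
const CACHE_NAME = "app-shell-v1";
const APP_SHELL = ["/", "/index.html", "/app.js", "/app.css"]; // placeholder assets

self.addEventListener("install", (event: any) => {
  // Pre-cache the static assets the app needs to start while offline.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(APP_SHELL))
  );
});

self.addEventListener("fetch", (event: any) => {
  // Serve from the cache first; fall back to the network when it is available.
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});
```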
darknexus,
I guess. Running thin clients for everything means a lot less can go wrong, which is a plus.
But I kind of feel the computer problems you bring up are a byproduct of careless operating system design. Apps should run in a sandbox. Cleaning the system and removing apps should be trivial and bulletproof. But as it stands, reinstalling Windows is often easier than finding and fixing the problem.
Microsoft started to address this with Windows 8 by creating an environment for sandboxed applications, the lack of which has caused so many problems for users over the years. However, as a developer I could not endorse it in Windows 8 because it came with too many strings attached. It was much too totalitarian for my tastes – the OS should enable the user to take control, it should not take control away from the user.
You can argue “should” all you like but the fact is that, even if sandboxed, sometimes an app has to go outside it to access oh, those trivial things I call my files. So, you have to grant it permission. Trouble is, users are quick to grant said permission when asked if they think the sandbox is interfering with what they’re trying to do. Bring up an important looking dialog and the average user will click through it. Yes it’s careless. Yes it’s stupid. But this is what you get when you want to put a computer in everyone’s hands including those who aren’t going to take the time to maintain them. So like it or not, now we’re stuck with it and as a result comes the trend back to cloud services to take the worry off the hands of these people.
This is because every attempt at managing app permissions has so far been done incorrectly by sandboxes.
Take Android, for example. I am a software developer capable of writing my own apps for Android, and yet I'm not qualified to determine whether what an app is asking for is reasonable or not. The core of the problem is that permissions need to be granted prematurely and in overly generic ways.
Let's say I'm installing an instant messenger and the permissions dialog says it needs access to my camera. Do I want it to use my camera? I can think of reasonable and unreasonable reasons for an IM to use my camera. Same thing when it requests an Internet connection or GPS information. The question being asked is simply unfair, and people have literally no choice but to always approve.
dpJudas,
Yes, +1!
The user shouldn’t be bothered with permissions until he invokes functionality within the context of an application that needs it. Or the application developer can ask up front if that’s more suitable. This way if a user never triggers optional functionality (such as “attach picture”), then he should not have to agree to it up front just to use or try out the rest of the program, which is stupid.
Forget permissions for files and things like your camera, use intents:
http://developer.android.com/reference/android/content/Intent.html
This works on Firefox OS (called Web Activities there):
https://hacks.mozilla.org/2013/01/introducing-web-activities/
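To make the "ask when the feature is invoked" idea concrete on the web side: with getUserMedia the browser shows its camera prompt in context, at the moment the user presses the button, not at install time. A rough sketch (the button id is made up):

```typescript
// The camera permission prompt appears only when the user actually
// invokes the feature, not as part of an up-front install dialog.
const attachButton = document.getElementById("attach-picture")!; // hypothetical button

attachButton.addEventListener("click", async () => {
  try {
    // The browser asks for camera access here, in context.
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    // ... grab a frame, attach it to the message, then release the camera.
    stream.getTracks().forEach((track) => track.stop());
  } catch (err) {
    // User declined or there is no camera: the rest of the app still works.
    console.warn("Camera not available:", err);
  }
});
```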
As an Android end user all I can say is that the OS itself greets me with an unfair dialog stating what an app wants. My friend just said this WhatsApp is cool and now it wants my camera, GPS, address book, Internet connection, background services and so on.
It doesn't matter if some of it could have been done with a different API, because the developers chose not to. The entire initial permissions dialog has to go. CyanogenMod showed how to do that, but will Google implement it?
That’s Android’s permission system which is flawed for a host of reasons as you pointed out. But even permission systems like Windows UAC failed because users just got tired of being asked. At this point it doesn’t matter anyway. It was done incorrectly for years, now we’ll be stuck with something worse if we don’t get smart quick.
darknexus,
I agree that UAC was too little too late. I don’t even know if it can be configured, other than to turn it on and off entirely (that is the only sane option for me because it keeps asking the same permissions for the same apps over and over again).
http://helpdeskgeek.com/windows-7/windows-7-disable-uac/
Is there any way to run a proper application sandbox under UAC? If there is then I’m ignorant of it. A search did reveal this interesting 3rd party sandboxing app, which I haven’t tried yet:
http://www.maketecheasier.com/run-windows-applications-in-sandboxes…
It seems like a neat approach to me, given that we can’t rewrite all the existing windows applications to work with permissions. Unfortunately there are regular security vulnerabilities in the implementation, as acknowledged by the developer.
UAC has exactly the same problem. An application requests administrator rights. How am I supposed to know if it is a reasonable request or not? And when I grant it the app developer is basically allowed to do *anything*.
Part of the problem is that historically all our current operating systems evolved from the Unix age. Back then the general idea was that an OS was run by an administrator (root), the app developers were always fully trusted (it was done by the fellow professor down the hall) and the only thing to protect against was the user (the students). The directory structures, access control lists, user groups, process capabilities, and so on all follow such principles and ideas. It is always assumed the user/administrator knows up front what rights should be granted.
Yep. Either Microsoft, Apple, Google and Linux/BSD wise up fast and realize app permissions need to work more like they do in browsers, or we will be stuck with HTML5 as it slowly gains all the abilities of native apps. Running a new app should always be as safe and simple as surfing to a new web address.
dpJudas,
Java supported this kind of thing via Java Web Start. It was pretty impressive, but by virtue of being Java-specific, and with Microsoft doing what it could to kill Java, it didn't really go anywhere.
With Windows 10, MS will finally give us WinRT-specific sandboxing of desktop apps, similar to JWS actually. Maybe I could get behind this, but boy do I hate DRM. Metro is hostile to both developer and user freedoms by crippling our ability to run software without Microsoft's control. In other words, they've made this very unappealing for both proprietary and open source developers.
http://blogs.msdn.com/b/uk_faculty_connection/archive/2012/04/03/in…
Is MS going to liberate metro in windows 10? Because personally I have no intention of ever writing software for Metro until my users are free to run it directly without either of us needing to apply for authorization. MS would be wise to heed developers, otherwise we will continue developing “legacy apps”, migrating to web apps, or even migrating to other platforms.
Great observation. I think Linux containers and BSD jails may fit the bill today; I don't know about Solaris.
There are even times I've wanted sandboxes in JavaScript. It's surprisingly common that I need to rely on a third-party script, yet it makes me cringe to give it full access to a website. Take Google Analytics (used here on OSNews): without JavaScript sandboxing, Google technically has unrestricted access to do what it pleases under our authenticated accounts. By using Google Analytics, OSNews implicitly trusts Google not to hijack our accounts. This is all the more reason to block 3rd-party scripts in the browser.
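One partial answer browsers already offer is isolating an untrusted widget in a sandboxed iframe on another origin, so it never runs with the page's (or the logged-in user's) authority; it doesn't help with scripts like Analytics that insist on running in the page itself. A rough sketch with placeholder URLs:

```typescript
// Load an untrusted third-party widget inside a sandboxed iframe so it
// cannot touch the parent page's DOM, cookies, or authenticated session.
const frame = document.createElement("iframe");
frame.src = "https://widgets.example/thing.html"; // placeholder, not a real widget
// allow-scripts lets the widget run JS, but without same-origin access to
// the embedding page and without being able to navigate the top window.
frame.setAttribute("sandbox", "allow-scripts");
document.body.appendChild(frame);

// Any communication happens only through explicit, structured messages.
window.addEventListener("message", (event: MessageEvent) => {
  if (event.origin !== "https://widgets.example") return; // check the sender
  console.log("widget said:", event.data);
});
```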
darknexus,
I still feel the sandboxing of apps makes the user significantly safer because apps don’t automatically assume all the permissions of the user. This has bugged me on windows since the 90s.
Yes, users will be dumb, and maybe sandboxing would not have been 100% effective, but that's no excuse for users not having any effective control over applications. We placed all our bets on antivirus software, yet I feel safer running Android with no antivirus software than I do running Windows with it when running 3rd-party apps.
Not only that, but I’m sure these companies are getting tired of pirates ripping off their apps left and right, so I knew it was only a matter of time until we started seeing this. I think the music/movie industry are kind of up shit creek in this regard because of the analog hole, but apps/games at least have another alternative, and that alternative is cloud streaming.
The biggest disadvantages are having to make recurring payments and never really owning your software. On the other hand, as another poster pointed out, it does have its advantages, especially that you don't have to maintain the software on your local machine anymore, and you can access it from just about anywhere.
As for Internet going down, I work from home and that has happened to me enough times that I now have a hotspot backup, just in case. An indispensable tool.
WorknMan,
Yes, copyright infringement is a problem that can be solved this way, but it's probably not quite as bad as the industry makes out, due to the false equivalency of 1 illicit copy = 1 lost sale, which is probably incorrect most of the time.
As for games, especially the intensive ones, it would take an inordinate amount of CPU & GPU power to host them remotely, not to mention the tremendous bandwidth & lag of full HD streams for each gamer – the internet probably could not support the collective bandwidth. So even if studios might want to host games remotely for thin clients, it's going to be too expensive to engineer something that is ultimately going to be a worse user experience. Instead I think we'll continue to see online DRM, like they did, to much customer resentment, with SimCity.
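A quick back-of-envelope shows why the collective bandwidth is the scary part; the bitrate and player count below are assumptions for illustration, not measurements:

```typescript
// Rough illustration only – both inputs are assumed, not measured.
const bitratePerStreamMbps = 10;       // plausible figure for a 1080p game stream
const concurrentPlayers = 10_000_000;  // assumed concurrent players for one big title
const totalTbps = (bitratePerStreamMbps * concurrentPlayers) / 1_000_000;
console.log(`~${totalTbps} Tbps of sustained egress for a single game`); // ~100 Tbps
```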
It seems like a “problem” that providers wanted to solve rather than one that consumers were demanding solutions to. I mean, how many people actually complained about installing Photoshop themselves? To the extent that it's a problem, I guess I'd rather see a solution where they manage the software remotely, but it still runs at your site or at a co-location of your choice.
Mobile broadband plans are outrageous for very little data; I certainly couldn't afford to stream anything. On a recent road trip in Pennsylvania, everyone's online GPS apps failed – mobile connectivity was a common failure mode. This was exactly why I insisted on buying a local GPS app for my wife's tablet; unfortunately she wasn't with us. You'll never guess what my backup was… I had an old-school map!
Yes, a staggering number of the Windows installs I've come across over the years have had a cracked version of Photoshop on them. How many of these people would have it if they actually had to pay for it?
So if Adobe were to only provide new Photoshop versions as a SaaS solution, it would indeed be interesting to see if their revenue would soar, or if it will continue to be mostly those who use it professionally who keep on paying for it, with the ‘casual’ users either sticking with their existing cracked versions or switching to free/crackable alternatives.
I mean, paying $699 for CS6 so that you can remove red-eye from your party pictures or crop your ‘selfie’ into a Facebook avatar seems like overkill.
Especially when there are scores of free tools that can do that sort of simple editing.
I think it's just that people have heard of Photoshop, so it's easier to pirate that than it is to find something else.
I don't think people making quick “red eye” edits are the primary buyers (or pirates) of Photoshop; it's too complicated and difficult for that. Plus, no version of Photoshop runs on those people's silly smartphones anyway.
You are unfairly discounting the large number of people who are professional and semi-professional photographers, not to mention the “pro-sumers”, hobbyists, and enthusiasts who are involved in digital imagery today – perhaps millions of people. These people take their gear (software and hardware) very seriously and spend hundreds of hours in front of computer screens editing photos.
I think Adobe is blowing their foot off with a shotgun by getting rid of crackable copies. At some point the photographic enthusiast/modding communities will move on from photoshop to something else, and then Adobe will slowly lose the professional market.
The problem is that most users aren't motivated to upgrade to the new versions. They're expensive to buy – a large investment – while the old versions remain incredibly capable. Plus, who wants to regularly retrain themselves on something so complex?
So in a way, a new subscription service might meet the needs of many users while providing reliable income for Adobe.
Me, I’ll keep using CS2. It does everything I need and much more, and it’s already paid for. And despite the disclaimers from Adobe, it works great under Windows 7.
Same here (with CS3). The only limitation I’ve come across is that it does not understand the raw files from my new camera. Everything else is more than enough for my needs. But I’m happy to take the extra step of converting them to DNGs (which is probably a good idea anyway) rather than shell out a wad of cash just to read the files directly.
You never really did own the software anyway… Some vendors have even sought to prevent you from reselling it after you're finished with it, among various other arbitrary restrictions.
As for piracy, a significant number of those who use pirated copies can't or won't pay no matter what; they will continue using an old version or they will seek an alternative. For some companies this would actually be disastrous, because it would give mindshare to the alternatives and result in much greater acceptance.
Photoshop is a good example of this: many people think they *need* Photoshop when in reality there are many other programs that would suit their requirements, but they have never bothered to investigate alternatives. If they were faced with a situation where they couldn't afford Photoshop and couldn't pirate it either, then many would start investigating alternatives and learning them properly.
You did insofar as you could use said software for as long as you wanted, without ever having to phone home to the mother ship.
“Using Photoshop usually requires lugging a typically cumbersome, expensive computer around”
Surface Pro 3, Macbook Air, Lenovo Yoga – all machines with ample specification to run Photoshop CC and run it well. So these are cumbersome computers? I think not.
Expensive is relative, but cheap crap will run like cheap crap; you get what you pay for.
Furthermore, the system requirements for Photoshop CC (Creative Cloud) are actually higher than the system requirements for the locally installed CS6 version.
http://helpx.adobe.com/photoshop/system-requirements.html
http://helpx.adobe.com/photoshop/system-requirements.html#main_Phot…
I've used Adobe's iPad version of Lightroom, which is essentially a thin client that connects to your desktop version over the internet and offloads the actual work to it. Nifty, but a bit clunky.
So I guess this is the same thing, but the work is offloaded to their data centres. All well and good, provided we're not forced down this road.
I have no problem (so far!) with the subscription model of paying for LR and PS, it represents good value for me. But forcing you into a thin client would be a bridge too far.
I don't think Adobe would consider that for even a second. Way too many of its core users work on the road (photojournalists, photographers, artists, etc.) for this to be their intent.
As to piracy, despite my initial misgivings, I think their current, more affordable subscription model will disincentivize piracy. There will always be a group of users who pretty much only pirate software; those don't really count as lost revenue to anyone.
Adobe has Photoshop running in Chrome.
Not:
Adobe’s got Photoshop running in Chrome.
Terrible writing.
To be fair, English isn’t Thom’s native language.