“I’m a Mac and iOS developer and just spent the past week using a Windows Phone 7 powered Samsung Focus as my primary phone rather than an iPhone 4 as I have for the past three years […] Anytime a new phone hits the market, I want to pick it up. I was also intrigued by the screenshots and previews I’ve been reading on Engadget for the past few months. Windows Phone 7 looked like nothing else I’ve seen on the market.”
I have my little theory: phones or devices with no access to native development (C/C++) will always have a marketplace with a much higher ratio of crappy apps. I've seen it with Palm OS 5 devices, then with most J2ME phones, then BlackBerry, initially Android (until they released the NDK), and now I'm seeing it with Windows Phone.
At least it can explain the slow performance he reports.
If coded badly, yes.
Many managed applications run slowly because people just code for them using patterns that are not ideal for those environments.
Or worse, they just write stupid algorithms.
It is incredible how many developers nowadays don’t have any sense of algorithm performance and just use whatever comes to their mind and seems to work.
If I understand managed environments properly, each instruction of managed code is much more costly than an instruction in machine code, because of the much more complex decoding/compiling process and the various tricks like GC that occur under the hood to make the programmer’s life easier.
So long managed programs are slow managed programs. You must try to use as few instructions as possible in your code, and use API functions as much as possible. Kind of like in Matlab, where for loops are infinitely more costly than matrix operations.
The problem is, in your opinion, that people are coding managed programs like they would code C programs, where you only use API calls when you really need them. Do I get it right?
I am not trying to deny that applications on Windows Phone 7 are slow (I have not yet used a phone running this OS) and I am also not trying to say that managed applications run as fast as unmanaged applications, but I think you have a lot of misconceptions about managed environments.
In .Net (a similar environment is used on Windows Phone 7) the managed code is actually compiled to native code, but this only happens at runtime. This happens the first time a method is called (so there might be a performance penalty when a method is called for the first time), but this needs to be done only once.
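Roughly, you can see that one-time cost by timing the same method over repeated calls. Here's a quick Java sketch (HotSpot interprets first and compiles hot methods later, so the details differ from .Net's compile-on-first-call, but the warm-up idea is the same; the workload and counts are made up):

// First calls are slower because the runtime still has to interpret
// and/or JIT-compile the method; once compiled it runs as native code.
public class WarmupDemo {
    // Arbitrary workload so the JIT has something to compile.
    static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += (i * 31) ^ (i >> 3);
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int round = 0; round < 5; round++) {
            long start = System.nanoTime();
            work(1_000_000);
            long elapsed = System.nanoTime() - start;
            // Expect the first rounds to be noticeably slower than later ones.
            System.out.println("round " + round + ": " + (elapsed / 1000) + " us");
        }
    }
}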
Have you ever used a modern .Net application? In most applications you won’t even notice when a garbage collect occurs.
This also doesn’t make any sense at all in managed environments. Can you define what you mean by “as few instructions as possible”? Again in .Net, what might be a single instruction in F# can be the equivalent of a whole list of instructions in C#.
In .Net most of the API is written in .Net so I can’t really see the advantage of only using API functions.
Applications in a managed environment can be slow when written badly, but if you think that only “expert” programmers use unmanaged environments, you are just being very naive. Equally bad code can be written in any environment, the problems caused by bad code will just slightly differ.
Thanks for the explanations. In my experience of managed code (mostly Java and Python; I encounter .Net apps much less often), I can tell applications using it apart from native code by the following:
-Very long startup times (given your explanations, I suppose it’s because many methods are executed for the first time and thus compiled)
-Slow refreshes. Resizing a window is much choppier than with a native app; I don’t know exactly why.
-Brief freezes (from less than a second to a few seconds) from time to time, for no obvious reason.
For the rest, they behave very much like native apps.
Do you mean that when you download the runtime, you don’t get a pre-compiled copy of the API (or at least of its most frequently used functions)? This seems strange. API functions are known in advance, well-tested, often used, and not subject to change, so there’s a priori no benefit in not compiling them in advance, and doing that is likely to reduce performance.
I see managed environments as the highest-level languages of our time: something which avoids the common mistakes for you and spares you from caring about what happens under the hood, making coding a faster and smoother process. Suitable when you don’t have much time, or when coding “just for fun”, but not suitable when you’re looking for some control over what is actually happening.
As usual, the right tool for the right job.
Many methods are “jitted” based on a statistical or heuristic algorithm to compile only where there is a performance benefit. In addition, Dalvik (.Net and Java too, I assume) can cache compiled code so that on relaunches the application is ready to go. Watch this video on how Dalvik does JIT; it's long but very informative for managed-code noobs (especially pertaining to mobiles):
http://www.youtube.com/watch?v=Ls0tM-c4Vfo
This is solely down to the toolkit + WM + drawing system, not whether an application is running managed code or not. Drawing is usually lower-level stuff.
Yeah, annoying. Those freezes are probably due to garbage collection. GC is no substitute for proper algorithms and memory management. If someone abuses memory by constantly instantiating crap, you will run out of memory, forcing the GC to run.
The problem here is that, because you don’t have to explicitly delete stuff, many app developers assume objects are free.
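To make the "objects are free" point concrete, here's a small Java sketch (the buffer size and loop counts are arbitrary): the first loop allocates a fresh buffer on every iteration and produces constant garbage, the second reuses one buffer and barely touches the allocator.

// Allocation churn vs. reuse: heavy churn forces the garbage collector
// to run more often, which is where the pauses come from.
public class ChurnDemo {
    // Wasteful: a fresh 64 KB buffer per iteration, all of it immediately garbage.
    static long churn(int iterations) {
        long total = 0;
        for (int i = 0; i < iterations; i++) {
            byte[] buffer = new byte[64 * 1024];
            buffer[0] = (byte) i;
            total += buffer[0];
        }
        return total;
    }

    // Friendlier: allocate once and reuse, so the collector mostly stays idle.
    static long reuse(int iterations) {
        byte[] buffer = new byte[64 * 1024];
        long total = 0;
        for (int i = 0; i < iterations; i++) {
            buffer[0] = (byte) i;
            total += buffer[0];
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(churn(100_000) + " " + reuse(100_000));
    }
}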
For the rest, they behave very much like native apps.
As stated above, JIT compilation is subject to the heuristic algorithm the compiler runs in order to ascertain performance benefits. Thus how the API is used can affect whether something is compiled or not (seriously, watch that video).
Managed code has the advantage of running fairly decently if you’re not an expert. But, yeah, it still won't necessarily run well if you produce sloppy code, though it is much more resistant to sloppy code than lower-level languages. I would say that generally it is fine for applications (even big applications), but for infrastructure-level stuff it is still completely wrong.
In the same way that, say, a video decoder can generally be written in C++ or C, there are performance benefits to coding some functions in assembler. The same is true on a macroscopic level.
Indeed, that video was very instructive and helped me understand this “trace JIT” and “method JIT” thing which Mozilla devs were always talking about during FF4’s earlier development. Thanks!
Actually no.
First of all, if a managed environment is JIT enabled, then there isn’t much difference in the execution time, because what ends up executing is machine code as well.
Better still, it is better optimized for the processor the application is actually running on, as opposed to statically compiled applications.
My complaint was mostly in areas described below.
No use of GC-friendly algorithms; just using new everywhere without thinking about the consequences. If the GC implementation is not parallel, it will slow down the application.
Or not using mutable data structures when the contents are changing a lot.
Copying data with for loops, instead of using the proper runtime API calls to do byte bulk copies, which usually map to DMA operations.
Using O(n log n) algorithms when an O(n) one exists, for example.
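As a rough Java illustration of two of those complaints (the sizes and class name are made up): using the runtime's bulk-copy call instead of a hand-rolled loop, and a mutable StringBuilder instead of repeated immutable string concatenation.

// Two of the complaints above, illustrated in Java.
public class BulkCopyDemo {
    public static void main(String[] args) {
        byte[] src = new byte[1 << 20];
        byte[] dst = new byte[1 << 20];

        // Element-by-element copy: every iteration pays for the loop and bounds checks.
        for (int i = 0; i < src.length; i++) {
            dst[i] = src[i];
        }

        // Bulk copy: one runtime call the VM can turn into an optimized block move.
        System.arraycopy(src, 0, dst, 0, src.length);

        // Immutable concatenation: every += allocates a brand-new String.
        String s = "";
        for (int i = 0; i < 1000; i++) {
            s += i;
        }

        // Mutable builder: one growing buffer, far less garbage.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            sb.append(i);
        }
        System.out.println(s.length() == sb.length());
    }
}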
Plus, many of the complaints I see against managed environments are from people without a proper compiler development background.
Any language with a runtime bigger than what C offers, will suffer from “managed environment complaints” if not used properly or if the runtime implementation sucks.
But in the end it comes down to developer usage and runtime implementation. There are no slow or fast languages, only implementations.
It is certainly possible that there is more work to be done, but to be honest the .NET framework has had a mobile phone implementation with JIT for ages and Microsoft has very competent people in that area. Android ran on a pretty average interpreter until 2.2 and stuff still ran decently. Even with the JIT added I find it somewhat unlikely that Dalvik can beat the .NET Compact Framework JIT, but that is speculation; some benchmarks would be interesting.
I kind of suspect that the guy is on to something and there is some teething issue with networking in the libraries for third party apps. Certainly sounds like network performance is the issue, third party apps appear to run pretty snappily otherwise.
Actually the .Net JIT does not produce code as good as the Java one, because it lacks quite a few optimizations, mainly due to the way it works.
In the Java world the code is partially interpreted and only the hot paths get JITted, guided by profiling, while in .Net each method is JITted in its entirety before it starts executing.
As for Dalvik vs. the .Net Compact Framework, some benchmarks would be nice. But the .Net Compact Framework is already fast enough to produce Xbox 360 games, so it should be OK even on mobiles.
I’m not sure I understand that. Wouldn’t separate network and UI rendering threads allow developers to have an application which stays snappy all the time, only displaying a smooth “loading” screen when they can’t do anything without the requested data from the network? This alone can’t explain the 9s application load time.
Moreover, it seems that applications don’t load anything in the background, and wait until the user asks for something by swiping to a pivot before looking for it. Is it really a good idea?
It’s not impossible that some toolkits treat network events as “normal” input events (like those from the mouse and keyboard). In addition, some toolkits (for instance SWT from Eclipse/IBM) impose that only one thread may manage UI-related events; if that condition also extends to network events and thus imposes a single thread, then the door is open for serious lags… Which may explain why Opera started feeling totally incompetent when I moved and now have a poor network connection: sometimes the whole UI hangs for seconds when a network request has been sent, to the point that I reported the problem on their forums asking for more “network resilience” and more intelligence about resending requests that haven’t received a response after some time. I’ve had Opera wait for a response for 20+ hours!
It is a bad idea: things should be preloaded; Vista shouldn’t make me wait 10 seconds for a contextual menu to appear in the explorer. The problem is that (and I think about it at least once a day) the effects of preloading things in the background may be lessened by some OS mechanisms that the applications don’t control, like swapping. In the case of mobile applications, that’s even more likely to happen.
This is exactly a good example of bad coding.
You should not use the UI thread for actions that are supposed to take some time.
As a side note, most UIs are actually single-threaded, because that simplifies the architecture a lot.
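A minimal sketch of that rule, using Java Swing as a stand-in for any single-threaded toolkit (the label text and fake network call are made up): the slow work runs on a background thread and only the final UI update is handed back to the dispatch thread.

import java.awt.BorderLayout;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

// Keep slow work off the single UI thread: run it on a worker thread and
// marshal only the result back to the dispatch thread.
public class NonBlockingUi {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Demo");
            JLabel status = new JLabel("idle");
            JButton load = new JButton("Load tweets");

            load.addActionListener(event -> {
                status.setText("loading...");
                new Thread(() -> {
                    String result = slowNetworkCall();                         // off the UI thread
                    SwingUtilities.invokeLater(() -> status.setText(result));  // back on it
                }).start();
            });

            frame.add(status, BorderLayout.NORTH);
            frame.add(load, BorderLayout.SOUTH);
            frame.pack();
            frame.setVisible(true);
        });
    }

    // Stand-in for a network request that may take seconds.
    static String slowNetworkCall() {
        try { Thread.sleep(2000); } catch (InterruptedException e) { }
        return "42 new tweets";
    }
}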
I can’t understand this single-thread limitation. UI events should be managed asynchronously with pop-up threads, for a huge number of reasons. And it doesn’t necessarily make the architecture much more complicated, when you have pop-up thread support at the kernel level to begin with.
Of course, but we’re talking about a few KB of tweets there, so preloading them shouldn’t make the application much heavier in RAM to the point where part of it is swapped out while it’s in the foreground.
And even when things are swapped out, flash memory is much faster than mobile networks already.
It is a case of “Worse is Better”.
The architectural requirements for building a thread-safe UI provide minimal gains when compared to a UI which is single-threaded and makes use of communication mechanisms between threads.
Having a multithreaded UI means you need to take care of:
– Who is holding which rendering context
– What threads are making use of which UI elements
– Which thread processes which UI event
– Race conditions to UI changes
– and so on
Having the UI event-based, with one thread responsible for drawing it, simplifies things a lot.
Well, I was more thinking about event handling than drawing when talking about those asynchronous pop-up threads, but let’s include drawing then.
When a button is clicked in an application, two totally separate processes occur:
-The button provides visual click feedback, through a “pressed” look.
-The application responds to the click by doing something internally (click event handler).
So I can spawn two threads, one for the button-drawing job (which the application doesn’t even control, it’s all managed by the UI toolkit), and one for the event handling job (fully managed by the application).
The sole case in which races and such may occur is when there’s already a running thread doing the exact job we just asked for (drawing this precise button, or executing this precise event handler that cannot be run multiple times at once, a strange characteristic for an event handler).
In these cases, which are already very unlikely, we have two options: putting the thread in a queue to wait for its previous incarnation to be done before running it, or just refusing to spawn another copy of it (which decision you take depends on what kind of application you’re running).
Are async pop-up threads that much of a problem?
Not really, seeing as all (UI-interacting) event handling must happen on the dispatch thread; thus events are managed sequentially. The rendering thread, by its very nature, is one thread that handles all drawing.
The event handlers themselves can spawn other threads, which might cause race conditions if the UI is dependent on the results of those threads. So:
“click go” -> event spawns work thread 1 -> thread 1 does a lot of work.
user clicks go again (as nothing is happening) -> event spawns work thread 2 -> thread 2 does very little work.
Thread 2 then returns first, and if data from thread 2 depends on or is affected by data from thread 1, then you have a race.
This of course is *very* contrived. Anybody and their monkey would do some sort of safety checking if this situation is liable to occur.
So, TL;DR: they are not a problem unless you’re being a dumbass.
You usually have one thread to do rendering, one (main) thread to dispatch UI events, and x number of threads to do asynchronous work. I know that the Silverlight classes' methods for network access are all asynchronous and thus should never block the UI. Unless you're munging data on the dispatch thread, network access shouldn't be blocking the UI on WP7 (it will cause the app to do nothing if the app is dependent on the data coming in from the cloud). What they could be doing is munging data in the callback methods, which do happen on the main thread.
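Sketching that last pitfall in Java rather than Silverlight/C# (the fake download and names are invented): the download itself is asynchronous, but if the heavy munging happens inside the completion callback on the dispatch thread, the UI still freezes. Push the parsing onto a worker and hand only the finished result back.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

// Asynchronous network call whose callback lands on the dispatch thread:
// keep the expensive parsing off that thread, hand back only the result.
public class CallbackMunging {
    interface Callback { void completed(String raw); }

    static final ExecutorService workers = Executors.newFixedThreadPool(2);

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Demo");
            JLabel status = new JLabel("waiting for data...");
            frame.add(status);
            frame.pack();
            frame.setVisible(true);

            // Simulated async download that, like the asynchronous network classes
            // described above, delivers its completion callback on the dispatch thread.
            fakeAsyncDownload(raw -> workers.submit(() -> {
                String parsed = expensiveParse(raw);                         // heavy work off the UI thread
                SwingUtilities.invokeLater(() -> status.setText(parsed));   // cheap update back on it
            }));
        });
    }

    static void fakeAsyncDownload(Callback cb) {
        new Thread(() -> {
            try { Thread.sleep(1000); } catch (InterruptedException e) { }
            SwingUtilities.invokeLater(() -> cb.completed("{\"tweets\": []}"));
        }).start();
    }

    // Stand-in for parsing or formatting a large response.
    static String expensiveParse(String raw) {
        try { Thread.sleep(500); } catch (InterruptedException e) { }
        return "parsed " + raw.length() + " chars";
    }
}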
I totally agree. Such systems without an NDK are simply immature. Not worth looking at until they provide a solid NDK.
He’s been using his iPhone 4 for the past 3 years? Does Teh Steve know about this???
Probably found it in a bar.
(As I often do) I read the comments before reading the linked article… and I was surprised that an obviously “Mac-oriented” (no judgment there) guy found WP7 rather good. He would have made me think of grabbing a phone with that OS, had I had the luxury of owning two smartphones.
Honestly, I was expecting a merciless bashing of Microsoft’s mobile OS.