Today, we are excited to announce Quick Boot for the Android Emulator. With Quick Boot, you can launch the Android Emulator in under 6 seconds. Quick Boot works by snapshotting an emulator session so you can reload in seconds. Quick Boot was first released with Android Studio 3.0 in the canary update channel and we are excited to release the feature as a stable update today.
There are quite a few other improvements and new features as well.
C64: Switch it on, and there it is. You can start typing in your BASIC code right away.
I even got my Amiga to start in under 10 seconds from HD, with a quite sophisticated UI and many extras.
So why does it still take 6 seconds with an SSD and 10,000 times more MIPS?
Because the operating system is thousands of times more complicated?
Sure, 1000 times more complicated, but not 1000 times more useful.
It’s 1,000,000 times more useful. A modern PC will process more data in one second than a C64 did during its whole lifetime. Let’s face it – old computers sucked, the same way current state-of-the-art PCs will seem to suck 20 years from now. It’s nothing more than a bunch of plastic and wires.
agentj,
This actually adds credence to cybergorf’s point. Given the fact that hardware has gotten so much better, one would expect modern software to perform many times better than it does. When it comes to clock time, real and significant hardware gains have largely been offset by software inefficiencies.
Sure, we’re tempted to say Android is 1000 times more complex, but in all seriousness the entire kernel should load in the blink of an eye given how fast flash storage is, and by past standards it’d be inexcusable for an app loader to take so long to load itself on such fast hardware. The truth of the matter is that the software industry has left optimization on the back burner, arguing that hardware improvements make software optimization irrelevant. This is a very common justification in the field of software development, and if that’s the industry’s consensus, then so be it. But we shouldn’t lie to ourselves and pretend that modern inefficiencies are intrinsically due to additional complexity; no, we must recognize that the art of software optimization has gotten lost along the way.
yes – that was exactly my point.
Your comparison is invalid on three counts.
A. Kernel boot time has nothing to do with storage performance and everything to do with the complexity of what you call a “PC”. While the C64 had a very limited set of devices that were initialized from ROM, a modern kernel needs to support thousands upon thousands of CPU types, chipsets, devices, etc., most of them with unbelievably complex initialization sequences. If you don’t believe me, look at the driver source code of modern GPUs or 25/40/100 GbE network devices.
B. The amount of optimization that goes into the kernel these days is a million miles ahead of the type-a-couple-of-thousand-lines-of-ASM-and-shove-them-into-a-ROM approach that was used to design the C64.
C. The same goes for file systems, system services, network services, etc.
D. That said, you are completely correct when it comes to user-facing applications (GUIs, web applications, business applications, etc.).
– Gilboa
Well, in this case it is D: the Android emulator is a user-facing application.
… He was talking about PC vs. C64.
Who is “he”? Me? Then yes. So?
Yes. You.
So, I didn’t comment on Android. I was talking purely about C64 vs. PC.
– Gilboa
gilboa,
IMHO it’s true of most code.
Being a systems developer, I can’t say that I care much for user-facing code.
– Gilboa
gilboa,
Memory-mapped devices are significantly faster than the legacy PIO ones, and on top of that, bus speeds have increased dramatically. Hardware initialization is so fast that a stopwatch would be too slow to measure it. Most of the time is the result of software deficiencies. While complexity can contribute to software deficiencies, it’s not the inherent cause of slowdowns on modern hardware that you’re making it out to be.
One problem is that network drivers, graphics drivers, audio drivers, printer drivers, USB drivers, etc. come in packages of 10-100+ MB, which is quite unnecessary and can end up causing delays and consuming system resources needlessly. At least modern SSDs are so fast that they help mask the worst IO bottlenecks caused by bloat, but a lot of it is still happening under the hood.
I appreciate that fast hardware is considered much cheaper than optimizing code. However, there’s little question that legacy programmers wrote more optimal software; that’s really the gist of what we’re saying. It was out of necessity: on old hardware, they couldn’t afford to be as wasteful as we are today.
Look, being a kernel/systems developer, I still write asm code from time to time. But I should be honest: even if I really, really try, I seldom get any meaningful performance increase compared to cleanly written C code plus GCC optimization. (And writing cross-platform asm is a real b**ch.)
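For example, take a trivial loop like the following (a minimal sketch; the function is made up for illustration, and __restrict is a GCC/Clang extension promising that the arrays don’t overlap). Compiled with g++ -O3, the compiler unrolls and vectorizes it with SIMD instructions, which is pretty much what I’d end up writing by hand anyway:

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Cleanly written C-style loop: at -O3, GCC/Clang unroll and vectorize
    // this with SSE/AVX instructions; hand-written asm rarely does better.
    static void scale_add(float* __restrict a, const float* __restrict b,
                          std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            a[i] += 2.0f * b[i];
    }

    int main() {
        std::vector<float> a(1 << 20, 1.0f), b(1 << 20, 0.5f);
        scale_add(a.data(), b.data(), a.size());
        std::printf("%f\n", a[0]);   // 1.0 + 2 * 0.5 = 2.0
        return 0;
    }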
Even if you’re talking about UI: it’s very easy to write highly optimized code when you’re dealing with simple requirements. Complexity (what you consider bloat) usually trails requirements.
E.g. it’s fairly easy to develop a simple-yet-fast vi; it’s 1000 times harder when you try to develop a fully fledged IDE with a debugger, UI designer, syntax checker, multi-language spell checker, project tools, testing tools, built-in browser, cloud support, and whatnot.
– Gilboa
gilboa,
I never said it was easy, only that it’s a skill we no longer have as much appreciation for due to the immense progress on the hardware front.
You are saying that you rarely see any performance increase compared to C. That might be true when looking at x86 code, and it might be true when optimizing a single routine. The worst bloat arises when the compiler puts those routines together, however. When programming in asm, one rarely ever touches the stack (except for the return); usually you can manage to hold things in registers. In compiled code, almost every function call comes with saving registers to the stack, setting up a stack frame, doing some short piece of work, and unwinding the whole thing again. That is a p.i.t.a.! Moreover, the performance difference between a routine (and the functions it calls) residing completely in L1 cache and something ‘slightly’ larger can be enormous.
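A minimal sketch of what I mean (the noinline attribute is GCC/Clang-specific, and the one-liner is just a stand-in for any small routine). Time the two loops: the first pays for the call/ret and frame traffic a hundred million times, the second keeps everything in registers:

    #include <chrono>
    #include <cstdint>
    #include <cstdio>

    // Forced out of line: every iteration pays the call/ret (plus any
    // prologue/epilogue), and values can't be kept in registers across
    // the call boundary as freely.
    __attribute__((noinline))
    static std::uint64_t step_call(std::uint64_t x) {
        return x * 6364136223846793005ULL + 1442695040888963407ULL;
    }

    // The same one-liner, inlined: the optimizer keeps the running value
    // in a register for the whole loop, much like hand-written asm.
    static inline std::uint64_t step_inline(std::uint64_t x) {
        return x * 6364136223846793005ULL + 1442695040888963407ULL;
    }

    int main() {
        const long n = 100000000L;
        std::uint64_t a = 1, b = 1;
        auto t0 = std::chrono::steady_clock::now();
        for (long i = 0; i < n; ++i) a = step_call(a);
        auto t1 = std::chrono::steady_clock::now();
        for (long i = 0; i < n; ++i) b = step_inline(b);
        auto t2 = std::chrono::steady_clock::now();
        long long call_ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
        long long inl_ms  = std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count();
        std::printf("call: %lld ms, inline: %lld ms (%llu %llu)\n",
                    call_ms, inl_ms,
                    (unsigned long long)a, (unsigned long long)b);
        return 0;
    }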
You are saying that the kernel has to initialize a lot of hardware. That is true. This should run in parallel whenever possible, and it probably does. Nonetheless, you should notice yourself that this works anything but perfectly.
The kernel itself is a good example of how you can fuck up things by taking the wrong path. In the Unix world ‘everything is a file’: the kernel offers an interface where you can access all the hardware things and settings as files. This is the MOST INEFFICIENT way of handling things, and it is done that way for historical reasons; ‘everything is memory’ would be the right approach, of course… I still see many applications where files are read in line by line. This only works because CPU speeds are out the wazoo; otherwise, it is dead inefficient. You’d be surprised to see how many layers you ‘trespass’ when tracing such a call; it’s worse than your average onion. Needless to say, replacing those reads with memory mapping would be the best approach, and reading in the whole thing and then parsing it afterwards the second best (with lots of distance in between).
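To make that concrete, a minimal sketch of the memory-mapping approach (POSIX-only: open/fstat/mmap; counting lines is just a stand-in for scanning a file). One mapping, then the scan runs straight over the kernel’s page cache instead of shuffling buffers line by line:

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <cstddef>
    #include <cstdio>

    // Map the whole file once and scan it in place: no per-line read()
    // round trips through the kernel, no copies into a userspace buffer.
    static long count_lines_mmap(const char* path) {
        int fd = open(path, O_RDONLY);
        if (fd < 0) return -1;
        struct stat st;
        if (fstat(fd, &st) < 0) { close(fd); return -1; }
        if (st.st_size == 0)    { close(fd); return 0; }  // mmap rejects length 0
        void* p = mmap(nullptr, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd);                    // the mapping stays valid after close
        if (p == MAP_FAILED) return -1;
        const char* s = (const char*)p;
        long lines = 0;
        for (off_t i = 0; i < st.st_size; ++i)
            if (s[i] == '\n') ++lines;
        munmap(p, (size_t)st.st_size);
        return lines;
    }

    int main(int argc, char** argv) {
        if (argc < 2) { std::fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }
        std::printf("%ld\n", count_lines_mmap(argv[1]));
        return 0;
    }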
Working with embedded devices, I find myself surprised many times when I get to see ‘how fast’ 84 MHz actually can be. I have moderately complicated routines which can do several tens of thousands of rounds per second on such a little thing (84 MHz means 84 million cycles per second, so even 30,000 rounds per second leaves a budget of 2,800 cycles per round). It is shocking to find yourself back on your PC with an application taking more than a second for something that shouldn’t take even a hundredth of that second on today’s hardware. Those seconds add up to make matters worse.
So no, we are far off from anything even close to efficient these days.
I have always hated using the emulator. I would try to find even the crappiest phone to test on just to avoid it.
I guess the non-quick-boot version of it can be called ‘slow as fk boot’?
This new ‘Quick Boot’ sounds a lot like suspend to disk or hibernation to me. Don’t get me wrong, it’s nice to have anyway – even with a new name.
I don’t even know where to begin with this.
As an adolescent, I used to crack software (protections) for fun. I started off on the Commodore Amiga with 68k assembler. In order to crack, one had to disassemble (parts of) the software and put in a fix. Let’s just say that the code the compilers created was far from optimal. Even if there was an entire routine missing, I could usually ‘create some space’ by simply optimizing the crappy compiler code.
I also cracked a bit of x86 software. Eventually I stopped, as I got bored.
Today’s code is so bloated that often you can’t even tell earth from water. Today’s software does with 30 MB what we would have done in less than 2 KB of assembler. Draw your own conclusions…
I feel the need to share an anecdote. A while ago, I was developing an application for an embedded device in C++. I needed ‘biased’ random generator functions, just like the random number distributions introduced with C++11. Unfortunately, the compiler in use didn’t support C++11 yet. As the test device had plenty of flash, I decided to pull in the random stuff via Boost… Soon, I was surprised to find out that my application had grown more than 10x in size and was now taking up more than half of the flash memory. An investigation revealed that the binary now even included internationalization routines! I’m guessing that the damn I18N had to be set up as well – this on a device with only a few LEDs.
The compiler (and even more so the linker) was set to eliminate unused code/functions.
Luckily, a compiler supporting C++11 was ready a few days later. Now the random stuff takes up only a few KB (though arguably still more than needed).
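For the curious, the C++11 replacement boils down to something like this (a minimal sketch; the weights and the LED indices are made up for illustration):

    #include <cstdio>
    #include <random>

    int main() {
        std::mt19937 rng(12345);                  // fixed seed for repeatability
        // A 'biased' generator: pick LED 0/1/2 with weights 7:2:1.
        std::discrete_distribution<int> biased{7, 2, 1};
        int counts[3] = {0, 0, 0};
        for (int i = 0; i < 10000; ++i)
            ++counts[biased(rng)];
        for (int led = 0; led < 3; ++led)
            std::printf("LED %d: %d\n", led, counts[led]);
        return 0;
    }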
Have you ever wasted your time looking at the source code of Mozilla’s Firefox? I went crazy after half an hour. There are tons of functions doing (more or less) the same thing; obviously, they lost track of things. This is not only problematic in terms of code bloat and program efficiency, but also a security problem: you find an error in one function, and you can be almost certain that there are more functions with the same error under a different name… what a mess! 😛
Bottom line: complexity is just an excuse for not polishing the building blocks. Today’s software “foundations” are often already rotten at their core.