Debian announced that they are going to introduce multiarch support for Wheezy (7.0) in 2013. Well, nice, but aren’t they a little bit late now that the problems have mostly been sorted out and systems have moved to 64-bit? This would have been great news back when Lenny (5.0) was released, but does it even matter in 2013? Are they just going to make things more complicated for no reason?
I call them Dr. Scholl’s.
But seriously, it does seem a little late for this. It might have helped the transition, but almost all processors are 64-bit now, so the market for this is small.
However, Debian is very much a “scratch an itch” community, so there must be some support for this, or it wouldn’t have made it this far.
“But seriously, it does seem a little late for this. It might have helped the transition, but almost all processors are 64-bit now, so the market for this is small.”
Well, this is only one advantage. The other is cross-platform development.
You’ll be able to install libraries for multiple archs in parallel, meaning (e.g.) that you can install ARM libraries on x86_64 as well, to compile against with a cross-compiler.
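For the curious, here’s a rough sketch of what that cross-compilation workflow could look like once multiarch lands (the library and cross-compiler names here are just examples, and the exact commands may change before Wheezy ships):

    # register a second architecture with dpkg and refresh the package lists
    dpkg --add-architecture armhf
    apt-get update

    # install the ARM build of a library right next to the native one
    apt-get install libssl-dev:armhf

    # link against it with a cross-compiler
    arm-linux-gnueabihf-gcc -o demo demo.c -lssl -lcrypto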
That was what I was looking for: cross-platform binaries. Reading the article, though, I didn’t see that mentioned, but I might have missed it. That would then be like the universal PowerPC/x86 binaries on the Mac.
I just read the article without any more research, so this may be completely wrong…
Think about a complex, CPU-intensive program you wrote for your cell phone. In most cases your little dual-core ARM has plenty of processing power, but every now and then you feel constrained by particular problems. No problem: you just plug it into your phone dock and the host 12-core amd64 computer takes over running the operating system at the exact spot in the computation! Turbo power in spades, I say.
Now of course, phones don’t run Debian…
Nobody sane runs physics simulations on their phone…
You can’t switch arch while the program is running. The executable memory, data memory and stack are incompatible across architectures. The application itself is incompatible too; it has to be installed twice, or you need universal-binary-like hacks. What Debian is doing is what Gentoo and others did 8 years ago, having /lib32 and /lib64 and two sets of environment variables. The LSB (Linux Standard Base) has also supported this for years, so there is nothing new there.
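For anyone who hasn’t seen them side by side, the two layouts look roughly like this (paths are illustrative; the second set follows Debian’s per-triplet directory scheme):

    # classic biarch: one extra directory per word size
    /lib32/libc.so.6
    /lib64/libc.so.6

    # Debian multiarch: one directory per architecture triplet
    /lib/i386-linux-gnu/libc.so.6
    /lib/x86_64-linux-gnu/libc.so.6
    /lib/arm-linux-gnueabi/libc.so.6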
Yeah, apparently you didn’t understand the gist of my post. Obviously that’s not what this is. My best ideas are, at their core, Rube Goldberg machines. Really cool to look at, but extreme overkill for a menial task.
Edit:
But actually they are still talking about mixed instruction sets on the same machine as well, with one possibly running via emulation or whatnot. But hybrids might still make sense: ARM/x86 netbooks where you reboot into the other arch for different power/performance levels. Still, if you could only get rid of the reboot, it would be even cooler.
Actually, I think the Nokia N800, N810 and N900 run a Debian-derived Linux. Probably the N950 too.
Being derived from or “forked” from another distribution makes the difference.
N770 – fork of Debian – Maemo 2006
N800 – fork of Debian – Maemo 2007
N810 – fork of Debian – Maemo 2008
N900 – fork of Debian – Maemo 2009 (v5)
N950 – merger of Maemo and Moblin – MeeGo (version?)
Nokia made Maemo by forking Debian and modifying it specifically for Nokia’s hardware. Intel created Moblin (a Red Hat fork?) for general use on mobile and embedded devices. They merged into a single distribution last year or the year before, which is what the N950 runs.
Rumour is that the N950 only has a small number of units available, which are being lent out to Qt developers instead of sold. Is this true? Booo… I’d love to get my hands on an N950 as a final upgrade from the N900 in the Nokia MID line.
Ideally, I still want a phone/tablet-sized device running stock Debian. Until then, MeeGo looks to be the closest to a proper general-purpose distro on a mobile device. iOS and Android are certainly not GP distros.
Don’t panic. This is just the “Debian way” of doing things. We should not be surprised by now.
This is to make things LESS complicated.
http://wiki.debian.org/Multiarch
http://wiki.debian.org/Multiarch/TheCaseForMultiarch
And here I was looking forward to thumbing up another Debian article.
One way or the other, they need to drop the ‘i386’ moniker. The term ‘i386’ probably doesn’t make a damn bit of sense to anyone who hasn’t been into computers for more than 15 years. Even ‘x86’ is pushing it a little. If my first computer was a Pentium, how would I know WTF an x86 is?
Because a fair amount of software available at the time still required only a 386 as a minimum?
But, really, why dumb it down because some people don’t want to spend the minute it would take to google something?
Exactly what I thought, cleaning up the hacks that are in place to deal with it now.
Hasn’t NetBSD been able to do this kind of thing for a while?
…interesting, to say the least. I sort of understand it, but not fully. I also think this is something that would have been very nice to have back in the Debian Etch/Lenny days; as the article says, it would have truly been useful in those times.
To those who are saying it’s too little, too late:
– the i386 -> x86-64 transition isn’t finished yet. A lot of machines are still 32-bit or running 32-bit Debian. An in-place upgrade from 32-bit to 64-bit would be very nice to have.
– there are other transitions out there, and there will be more in the future. If you think x86-64 is the last architecture you’ll ever see, you’re quite mistaken.
– you can use Debian’s multiarch even with software emulators: running QEMU’s user-space emulation with an appropriate Debian userland would rock for some use cases. Imagine you’re developing ARM software: crossbuilding and testing on the same machine would be a cinch (there’s a rough sketch after this list)!
– you can have more than 2 archs on the same machine. So it’s way better than just lib32 vs lib64.
– Debian won’t have to provide separate, difficult-to-maintain ia32-libs packages any more: just install what you need from the i386 repository.
– easier support for ISVs: Skype, for instance, which exists only as a 32-bit binary, will now be able to depend on what it really needs instead of “ia32-libs”.
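To make the QEMU and ia32-libs points a bit more concrete, here’s a hedged sketch assuming an amd64 host (qemu-user-static and libgtk2.0-0 are the package names I’d expect, hello-armhf is a hypothetical binary, and the exact workflow may differ once Wheezy ships):

    # QEMU user-mode emulation: run a crossbuilt ARM binary directly on the amd64 host
    apt-get install qemu-user-static
    qemu-arm-static ./hello-armhf

    # the ia32-libs case: with i386 enabled as a foreign architecture,
    # a 32-bit-only program can pull in exactly the i386 libraries it needs
    dpkg --add-architecture i386
    apt-get update
    apt-get install libgtk2.0-0:i386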
So, yes, Debian took longer to support 32-bit/64-bit, but in return you get any-arch on any-arch: a much better, more flexible, long-term solution.
And like I mentioned before, who knows what chip makers will come up with in the future. If there’s a fancy new CPU with extensions, and certain programs would benefit from being recompiled to take advantage of these new CPU features, then Debian could easily introduce a new subset/partial repository containing optimized binaries just for these aforementioned programs.
I think it’s indeed a bit late for the majority of x86 users — although it might actually be a godsend for people with some legacy 32-bit applications.
But what I’m thinking of, is that the timeframe of having it ready by 2013 might be right on track for developments in the ARM world. The upcoming Cortex-A15 core already supports a 40-bit address space, but is still a 32-bit processor. I expect their Large Physical Address Extensions to be a hack not unlike Intel’s PAE trickery on x86-32: acceptable to bridge the gap for a while, but not a viable long-term solution. So it seems not unreasonable to expect a true 64-bit ARM core in the not-too-distant future.
The Juniper SSL VPN requires 32-bit browsers and Java to log in. The lack of good 32-bit multiarch support is the main thing that kept me using Fedora over Ubuntu in the past. The ia32-libs hack is too ugly for words.
Crufty x86-only software is what’s keeping you on Fedora…
Also, the idea of a VPN initiated from the browser is ridiculous; a browser shouldn’t have this level of access to the OS.