On Monday of this week, more than 60 Ubuntu developers gathered in a hotel near Paris’ Charles de Gaulle airport to plan Ubuntu’s next release, codenamed Edgy Eft. The purpose of the meeting is to set the goals for the upcoming release and to chart the steps necessary to implement them.
I don’t know how it compares with other Linux LiveCDs, but I recently tried Ubuntu Dapper and was seriously unimpressed.
Boot time was something like 5 minutes, and starting up Firefox took another 5. Can you imagine calling this a “technology preview” and demoing it to an audience of Windows users? Yuck!
– In fairness, I don’t know the specs of the box I was using, since it was at an internet cafe, but it was XP-capable.
The machine you tried it on probably didn’t have enough RAM. LiveCDs require more memory than systems where the OS is installed on the hard drive, because they don’t have the luxury of a disk-based swap partition.
Try it on a recent machine (i.e. one with at least 512MB) and you’ll see a HUGE difference.
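One partial workaround if the machine already has Linux on its hard disk: a live session can borrow the disk’s existing swap partition. A rough sketch (the device name is only an example; check what fdisk reports first):

sudo fdisk -l | grep -i swap   # find an existing swap partition, if any
sudo swapon /dev/hda2          # example device; use the one listed above
free -m                        # the Swap row should now be non-zero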
Agreed that it requires more memory. I’ve run it at 256MB and it is slow. The problem is that Canonical is mailing out all these zillions and zillions of Ubuntu CDs to a world in which many people still have 256MB boxes, without mentioning that the Live CD requires more memory than that.
> The machine you tried it on probably didn’t have enough RAM. LiveCDs
> require more memory than systems where the OS is installed on the
> hard drive, because they don’t have the luxury of a disk-based swap partition.
A bit OT, but by what mechanism does having too little RAM make it run slower? I always thought the mechanism that caused this was exactly the swapping that cannot occur when there is no disk-based swap space. My logical conclusion was that with too little RAM and no disk, a program would simply not run at all.
> A bit OT, but by what mechanism does having too
> little RAM make it run slower?
Maybe because there’s no memory left for filesystem caching? That can make a dramatic difference under heavy I/O load.
A nice (very old…) example that comes to mind is installing Windows 3.11 with and without SmartDrive.
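For the curious, the page cache effect is easy to demonstrate on a kernel new enough to have the drop_caches knob (2.6.16 and later); this is generic Linux, nothing Ubuntu-specific:

sync && echo 3 | sudo tee /proc/sys/vm/drop_caches   # flush the page cache
time cat /usr/bin/* > /dev/null                      # cold run: every read hits the disk
time cat /usr/bin/* > /dev/null                      # warm run: served from RAM, much faster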
Boot time was something like 5 minutes, and starting up Firefox took another 5. Can you imagine calling this a “technology preview” and demoing it to an audience of Windows users? Yuck!
– In fairness, I don’t know the specs of the box I was using, since it was at an internet cafe, but it was XP-capable.
The machine was probably lacking RAM. I tested it on my Athlon 64 3000 with a gig of RAM and was so impressed that I blew away half of my Windows partition to make room for it and installed the full version.
Ubuntu is the first Linux distro that actually has me contemplating a switch from Windows for good.
If my work did not require Visual Studio, I probably would have already destroyed my Windows partition.
I suggest you get rid of the partition and get VMware (or one of the new Intel/AMD chips that can support Xen 3.0 running Windows). Dual-booting is a pain, since you often end up in the position where the stuff you want is on “the other OS”, so you end up building a lot of redundancy. The nice thing about VMware (or Xen) is that you can make a base VMware image with just Windows 2000/XP and then make copies for different types of configurations, since, as you’ve probably experienced, MS software doesn’t always play well with itself or other software. If one copy goes south, you can just restore the old one. You can also build up a network of VMware sessions if you have the RAM.
Back when I used to work in Windows development, it was a life saver. Personally, I’d even recommend that setup for people who don’t want to use Linux at all, since the convenience of being able to create multiple custom environments that can be independently restored is just too great.
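For what it’s worth, the clone-the-base-image workflow is nothing more than copying a directory, since a VMware VM is just a folder of files (the paths here are hypothetical):

cp -r ~/vmware/winxp-base ~/vmware/winxp-devtools   # one clone per configuration
cp -r ~/vmware/winxp-base ~/vmware/winxp-testing
# if a clone goes south, delete it and re-copy the pristine base;
# VMware will just ask whether the VM was moved or copied on first boot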
Though I liked Ubuntu Breezy, I also found Dapper boring and unattractive:
1. It could not set my widescreen resolution correctly, even though it had already installed the widescreen patch automatically;
2. It lacks good WPA wireless support, especially when I need to switch between different wireless networks with different configurations (some using WPA, some not);
3. The look is uglier; Breezy’s theme is much better.
2. It lacks good WPA wireless support, especially when I need to switch between different wireless networks with different configurations (some using WPA, some not);
I don’t know why they didn’t go with NetworkManager, but it’s available as an officially supported package. It’s even on the installation CD. Look for network-manager and network-manager-gnome.
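Assuming a standard Dapper setup, installing it is a one-liner using those package names:

sudo apt-get install network-manager network-manager-gnome
# log out and back in, and the nm-applet icon should appear in the panel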
3. The look is uglier; Breezy’s theme is much better.
I don’t know, I think orange looks better than brown. Anyway, the older theme is still available in Preferences/Themes.
I think what all the people replying missed is that you were running everything off the LiveCD. And yes, the LiveCD is dog slow; how many operating systems do you boot and run off a CD? It’s a nightmare, but it has nothing to do with Ubuntu; a CD is just not a fast medium.
The article seems to imply that AIGLX may not be in there, but AIGLX will be in Edgy, as it’s part of Xorg 7.1, which is going to be in Edgy. Metacity supports the new accelerated graphics as well, so it will provide the effects, but it’s not clear how much work will be done on adding Compiz and other things like that.
They should make it i486 like Slackware.
Debian has been targeting the i486 architecture since Sarge. An 80386 could possibly run Debian, but only because the kernel can emulate the missing i486 instructions. The name of the branch is still ‘i386’ because it would require too much work to change. Anyway, a genuine i386 could theoretically run the code…
Given that Ubuntu is based on Debian, I am pretty sure they already moved to i486-level.
Given that Ubuntu is based on Debian, I am pretty sure they already moved to i486-level.
Actually, the Ubuntu install still defaults to a 386-targeted kernel, which I found out trying to get the Dapper version of Xubuntu working with DRI on a Rage LT-based Compaq laptop (Armada 1750). I had to reload with the 686 kernel to get DRI working.
… and as an old-school assembly programmer… How would one target 486 and not 386, since there are NO additional opcodes on the 486… You don’t need an emulator; there’s no CODE difference between the two platforms AT ALL. The difference between a 486 and a 386 was that the 486 needed fewer clock cycles per opcode, and the 486 DX came with an onboard math coprocessor… there ARE NO CODE DIFFERENCES!!!
Well, unless of course you do a bit of timing code that requires knowing how many clock cycles an instruction takes to execute, but that is just SLOPPY programming (and the type of thing not done since the sub-16MHz days).
Opcode changes didn’t occur again until the Pentium.
How would one target 486 and not 386, since there are NO additional opcodes on the 486
Other than, say, cmpxchg, which is very useful and present in 486s and not 386s. Perhaps.
You can optimize compiled code for a particular generation of processor by ordering some instructions in a different way, or preferring one type of instruction over another. Granted, the difference between a 386 DX and a 486 DX isn’t as great as the difference between an Athlon 64 and a Pentium 4, but in theory the 486-optimized code should run better on a 486+.
Like somebody mentioned, the most important opcode is cmpxchg. According to what I’ve read, it’s widely used for mutex handling and multithreading… Perhaps you didn’t have to use it because there weren’t many multithreaded old-school applications? Anyway, there are bswap and xadd, too.
Hardly a major change, but enough for Debian to move to -march=i486… Ubuntu ships a default i386 kernel, but the userland must have been optimized for i486.
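You can actually watch the difference appear. A small demonstration, assuming a GCC new enough (4.1+) to have the __sync atomic builtins (the -m32 flag is only needed on a 64-bit host, and cas.c is just a scratch file): the compare-and-swap builtin compiles down to cmpxchg when targeting i486, but cannot when targeting i386.

cat > cas.c <<'EOF'
int cas(int *p, int oldval, int newval) {
    /* atomic compare-and-swap: this is what becomes a cmpxchg instruction */
    return __sync_val_compare_and_swap(p, oldval, newval);
}
EOF
gcc -m32 -O2 -march=i486 -S -o - cas.c | grep cmpxchg   # match: the 486 opcode is used
gcc -m32 -O2 -march=i386 -S -o - cas.c | grep cmpxchg   # no match: i386 must call a helper instead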
I think they should concentrate on getting Dapper fixed first. Way too many bugs for a new release. And the 6 week delay was for what???
The problems with Dapper are a few really visible but easy to fix issues. I imagine the ATI driver issue, for example, is already fixed (not sure, I don’t run Ubuntu on my only machine with an ATI chip). Whether or not they’re ever going to roll a new LiveCD with the problem fixed is another story…
The modern distribution projects (especially Gentoo and Ubuntu) are pretty good at keeping things working throughout development. I run ~x86 on Gentoo and run into a broken ebuild perhaps once every other month, and I’ve run the Breezy and Dapper repositories for more than 3 months before either were released without experiencing any noticeable regressions (that I can remember).
The problem Ubuntu had with Dapper was that a couple of things that had been working throughout the entire development cycle were broken by an update 2-3 days before the gold release. This is the type of mistake that projects learn from. I’d expect some sort of package freeze policy to result from Dapper, although I wouldn’t expect it to happen until… Feisty Fox? Furry Ferret? Frumpy Frog? I think Fiery Fox is out of the question…
I installed on a notebook with an ATI Express 200M and have had no problems. I’m sure there will be respins with bug fixes and updates. Can you imagine 3 years from now, installing from the original CD and then apt-get’ing 3 years worth of updates?
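It should work, too; the standard pair of commands pulls everything current no matter how old the install media was:

sudo apt-get update         # refresh the package lists
sudo apt-get dist-upgrade   # fetch and apply every pending update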
Well, they do mention a minimum of 192MB to use the “Desktop CD” (the new name of the LiveCD).
LiveCDs (apart from specialized “lite” distros) require more memory, period. I’m not sure that this CD is that much slower than, say, Knoppix or Mandriva Live on similar hardware.
Not necessarily. Although Ubuntu’s was slow in particular, I got full Slax (with KDE), PHLAK, and other distros to run fine. And that’s on crappy Compaq laptop hardware with 256MB RAM.
Try the Slax live CD, based on Slackware; it’s the fastest live CD I’ve used, and with a full KDE desktop too! Though admittedly Ubuntu’s CD has more software than Slax.
The Dapper live CD refused to boot on my system with a Samsung DVD burner but booted fine on my second system with a Lite-On. It has had problems for several people with regard to USB2 and other issues (the live CD errors out). But the alternate install CD, based on Debian’s d-i, worked fine for me to install Dapper.
The live CD is basically broken in Dapper (previous versions were better). If you want to install Ubuntu, I highly recommend downloading the “alternate” CD, which runs a text-mode installer. This seems to work without a hiccup.
However, there are other bugs in Dapper, notably with the “ati” driver and a problem printing from Gnome applications. The latter problem has been solved; just do an update. As for the first problem, I solved it by editing /etc/X11/xorg.conf and changing “ati” to “vesa”: a slower driver, but it works perfectly. Of course, if you use an NVIDIA video card, you won’t have this problem.
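For anyone wanting the same workaround, it’s a two-line job (back up first; this assumes the Driver line literally reads “ati”):

sudo cp /etc/X11/xorg.conf /etc/X11/xorg.conf.bak
sudo sed -i 's/"ati"/"vesa"/' /etc/X11/xorg.conf   # then restart X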
Anyway, my Dapper works fine now, but I must say, it really sucked that Ubuntu released it before these bugs were fixed. I don’t know how they can fix the live CD unless they release a Dapper-II.
If you have a 686 processor (likely unless your machine is really old, or it’s an AMD64), you can upgrade the kernel like this:
sudo apt-get install linux-686
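After rebooting into the new kernel, it’s easy to confirm which flavour is running (the version string below is only an example):

uname -r   # e.g. 2.6.15-26-686; the -686 suffix confirms the new flavour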
I am really interested in seeing SMART get some more mainstream exposure. I have heard a lot of good things about it, and it has performed well throughout the Suse 10.1 debacle.
I am probably out on my own, but I think a more unified package management system will help Linux adoption; SMART allows for just this. The problem is that none of the commercial distributions (or those who are competing for enterprise dollars) currently use SMART. I think only good things can come from Edgy and SMART.
Unified package management is silly.
The whole reason there are different package management solutions is that they solve different problems.
Unifying package management just means you get a generic system that doesn’t serve specific needs.
E.g. Gentoo’s source packages.
I can’t see how unified package management would help Linux adoption.
Unified package management allows skeptical IT departments the peace of mind that they can administer their systems no matter which distribution they use.
Moreover, I think package management in Linux as a whole is weak. Apt set the standard for quality package management, but it is limited to .deb based distributions. Yes, Apt4RPM does exist, but it is in flux right now. The project maintainer left to work on Smart in March 2005. However, a new project leader was named in March 2006.
Since the breakthrough of Apt, Red Hat and other RPM-based distributions have gotten their act together, but some problems remain. Red Hat/Fedora Core improved yum, but did not have an officially supported GUI. Yumex works very well, but never got the blessing from Fedora. So, they are working on Pirut and Pup to be the graphical front ends for yum. Fedora is essentially starting from scratch to develop an enterprise-ready solution for graphical package management for RHEL. The idea that Red Hat/Fedora is not building on the success of the Linux community (Smart or officially sanctioned Yumex) is a waste of their resources.
Suse had a good solution with Yast2, but decided to adopt Zen Updater/Red Carpet with 10.1. This led to well-documented problems. Again, if a change is to be made, why not use community resources? Novell will eventually get it right, but at the expense of the SLED 10 release date.
One could assume that Red Hat and Suse are going their own way because Smart is an inadequate solution, but that is not the case. Simply look at the Suse IRC channels or Suse forums and you will see the unofficial fix for the 10.1 package management debacle is Smart. Suse even included Smart on the 10.1 installation media (it was not included in 10.0).
Many people talk about the “Linux Community”, but I don’t see much of a community here. One of the rewards of Linux is the ability of many, many people/organizations working together to create a better alternative to traditional computer practices.
Smart may not be the best solution, but I am discouraged at the unwillingness of the Linux community to work together.
Suse had a good solution with Yast2, but decided to adopt Zen Updater/Red Carpet with 10.1.
Updating with Yast2 wasn’t ideal either. Too few high-availability update servers. Just look at the sheer number of mirrors Gentoo or Ubuntu have. I mean, a better package manager isn’t going to change that.
Again, if a change is to be made, why not use community resources?
Yes, why not apt, for example. Better a good copy than a bad original.
I have no problem with Apt, but it looks like support for it is decreasing. The two biggest apt users were Debian and Ubuntu, and it looks like Ubuntu may make a change if the community response to Smart is positive. Apt4RPM is in the midst of change. A major RPM-based distro needs to officially back Apt, but it looks like they are going their own way.
Updating with Yast2 wasn’t ideal either. Too few high-availability update servers.
Never had a problem with Suse’s mirrors, especially when I choose the closest ones to me.
Fedora is essentially starting from scratch to develop an enterprise-ready solution for graphical package management for RHEL.
That’s something I’ve gathered. Why are they doing this? Starting from scratch on yet another package management tool?! I despair. Surely this will severely affect the quality of RHEL? What Red Hat should be doing is using well-known, ready-made and community-tested solutions, where bugs are well known and can be ironed out, rather than looking to the poor saps who alpha and beta test Fedora and saying “There you are. There’s a new package management tool. Debug that for us so we can sell RHEL”.
Suse had a good solution with Yast2, but decided to adopt Zen Updater/Red Carpet with 10.1. This led to well-documented problems. Again, if a change is to be made, why not use community resources? Novell will eventually get it right, but at the expense of the SLED 10 release date.
I really don’t know why Novell took this decision. It took years to stabilise YaST’s package management and graphical tools to the point that it caused very, very few problems. It will take them years again to stabilise this Zenworks thing at all, and given the other problems Novell has you would have thought that debugging a package management system would be the last thing on their minds. Just keep YaST or reuse Smart.
Alas, I fear that package management is fast becoming much more of a lock-in tool for vendors. Packages and repositories built specifically for completely different systems…
Actually, you don’t need central packaging to allow independent installation of packages. Look at GDebi (which is included in Ubuntu). It allows you to ship a non-repository .deb file and have its dependencies automatically resolved:
http://packages.debian.org/unstable/admin/gdebi
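GDebi also has a command-line mode, so the whole workflow for a downloaded package is just (filename hypothetical):

sudo gdebi some-app_1.0_i386.deb   # installs the .deb and pulls its dependencies in via apt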
There’s no need to create yet another package management system.
BTW, the key problem with non-central package management is version conflicts and dependency conflicts. It’s a nontrivial problem, which is why distro-specific repositories are set up.
In response to ozonehole, the ati driver works just great on my laptop, and it has a painfully incompatible ATI Radeon Xpress 200M.
Getting the proprietary drivers installed was tricky, but I did manage that successfully as well.
Well, I guess YMMV. On my above-crappy compaq laptop (512MB minus 128 for the video card) it ran fine, and the installation was flawless.
I really like Dapper Drake, personally. However, to each his own, I’m not getting involved in a distro war… 🙂
Compile for Pentium Pro-compatible code. Who needs i486 code? C’mon…
More people than you think install Debian/Ubuntu on old 486 boxes as a router, a firewall, a lightweight home file server, or something like that.
Besides, there is no significant speed increase from optimizing for the Pentium processor anyway; i386-optimized Debian runs faster for me than some i686 distros.
Ubuntu is compiled to work on a 386/486 but built with -mtune=pentium4, so the instructions get scheduled better for newer processors.
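In GCC terms that’s roughly the following combination (illustrative, not Ubuntu’s exact build flags): compatibility pinned at the old baseline, scheduling tuned for the Pentium 4.

gcc -O2 -march=i486 -mtune=pentium4 -c example.c
# -march sets the oldest CPU the code will still run on;
# -mtune only affects instruction scheduling, not compatibility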
I agree with you guys and I believe that all distributions that target the desktop market should target the i586 architecture (or even better, the i686 architecture).
The reason is simple: you simply cannot use a distribution like Ubuntu, Suse or Fedora with a CPU under 233MHz (and I would even say 500MHz, but I know some people are going to tell me that it’s possible to use those distributions with a 233MHz CPU).
Why? Gnome, KDE, OpenOffice, Firefox (on Linux), etc. all require a significant amount of resources.
Believe it or not, I installed Slackware 10.2 on a P166MMX (remember that Slackware still uses a 2.4 kernel) to idle on IRC, using Fluxbox as my window manager and Firefox as my browser if I ever need it on that box, and it’s slow.
I see no problem btw, old hardware must be upgraded anyway! 😉
Gnome, KDE, OpenOffice, Firefox (on Linux), etc. all require a significant amount of resources.
And using newer CPU instruction sets will magically solve that how? Have you tried compiling any of those programs with different compiler flags and compared the performance (objectively)?
If you do find software that would benefit from using e.g. i586 instructions, I’m sure the Ubuntu developers would be very interested. There are some packages which certainly do benefit from specific compiler options (the kernel, glibc, various math and crypto libs); these are already optimized (check /usr/lib/i686 and /lib/tls/i686).
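That’s easy to check on an installed system; the directories mentioned above hold the CPU-specific builds that the dynamic linker prefers on capable processors:

ls /usr/lib/i686 /lib/tls/i686   # optimized glibc and friends, picked automatically at runtime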
Gnome, KDE, OpenOffice, Firefox (on Linux), etc. all require a significant amount of resources.
And using newer CPU instruction sets will magically solve that how? Have you tried compiling any of those programs with different compiler flags and compared the performance (objectively)?
If you do find software that would benefit from using e.g. i586 instructions, I’m sure the Ubuntu developers would be very interested. There are some packages which certainly do benefit from specific compiler options (the kernel, glibc, various math and crypto libs); these are already optimized (check /usr/lib/i686 and /lib/tls/i686).
Hmm, OK, you got it wrong. I never said they should compile programs with the i586/i686 instruction set to make the system usable. I meant they should target the i586/i686 architecture because Ubuntu won’t run at a decent speed on OLD CPUs like a 386 or 486 anyway…
Try using Gnome, KDE, OpenOffice, Firefox, etc… on a P200. It will be way too slow to be usable. So why would they support 386 and 486 CPUs? It just won’t work with them anyway. That’s what I meant.
I’m just hoping that in Edgy they make the default screen resolution 1280×1024 (or 1280×800 for the widescreen laptops) for the amd64 version, ’cause really, you don’t buy a 64-bit computer to run a 1024×768 screen.
Right, because CPU and screen resolution go hand in hand…
I’m just hoping that in Edgy they make the default screen resolution 1280×1024 (or 1280×800 for the widescreen laptops) for the amd64 version, ’cause really, you don’t buy a 64-bit computer to run a 1024×768 screen.
Hmm. I have two amd64 machines and two 17-inch CRT monitors. My resolution is 1024×768. You can’t assume most people will go over 1024×768 anytime soon. Some people, like me, will always prefer CRTs over LCDs, and 19-inch CRTs are too big to fit on a normal desk. And a resolution over 1024×768 on a 17-inch CRT is no good; everything is way too small.
Yes. And I hope they set X to 64-bit depth and not that lousy 24-bit. Come on, I didn’t buy a 64-bit CPU for nothing. *bangs his head against wall*
No; amd64 implies newer hardware, whereas i386 covers everything from really, really old PCs to really modern ones.
How many people/businesses that have PCs with amd64 processors have low-resolution screens? I dare say none, because that would be completely ridiculous; I think it’s rather safe to say that a 64-bit computer comes with a screen with a decent resolution. I understand why they are conservative when it comes to i386, but there simply is no PC available with low specs AND an amd64-compatible processor.
If you go to a new platform you can raise the lowest specs available and make use of the resources at hand (and by that I don’t mean abuse them like Windows Vista seems to do). But having a new hardware platform where the software defaults to specs from three generations ago is just ridiculous.
I don’t know what the original poster is talking about; the Ubuntu Breezy live CD and install defaulted to 1280×1024 for my NVIDIA 6600GT card, even on my 17″ LCD. I think it’s more a matter of which VGA card you have that determines the default resolution; if it doesn’t detect one, it chooses VESA and goes to 1024×768.
How many people/businesses that have PCs with amd64 processors have low-resolution screens?
From my experience, even when buying modern hardware and getting screens that do higher resolutions, users try to swap back to 1024×768 out of personal preference, because they don’t like the change.
I think you may be surprised.
I have an Asus V9999GE, meaning a 6800 GPU, and it doesn’t detect it; I have to manually add higher resolutions to xorg.conf, even after installing nvidia-glx.
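For anyone in the same boat, the manual fix is adding the missing modes to the Display subsection of /etc/X11/xorg.conf, something like this (the resolutions are just examples; X tries them left to right):

SubSection "Display"
    Depth 24
    Modes "1280x1024" "1024x768"
EndSubSection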
Sixty is a heck of a flock, way to represent!
I’ve been running Dapper for two weeks now and must say it’s very nice; it works for the greatest part.
The only thing I’m really sick about is that if I put my Linksys wireless PCMCIA card (WPC54G) in, the whole machine locks up. This happens ONLY on Dapper. In Linspire and Suse the card works fine.
The other problem is suspend mode; it does not work on my laptop. Again, under Suse it works flawlessly. I hope these problems get fixed in the new release. Otherwise I like it!
Please do submit bug reports describing your issues.
https://launchpad.net/distros/ubuntu/+bugs
Smart uses apt and is heavily based on it. Why stand still when new and better solutions are at hand, especially if you can cooperate with other people (the RPM-based distros)? If Smart is what it takes to get a unified project for package management, I’m glad they support it. It is often easier to create a new system based on the old ones to bring people together. The chances of Novell changing to apt were nonexistent; the chances of Novell changing to Smart exist. Besides, how old are apt and yum? I’m sure new and better algorithms for package management exist now.
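For those who haven’t tried it, Smart’s day-to-day interface will look familiar to apt users (the package name here is hypothetical):

sudo smart update              # refresh channel metadata
sudo smart install somepackage # resolves and installs, much like apt-get install
sudo smart upgrade             # system-wide upgrade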
What I’m really, really missing in Dapper is decent dual-head configuration. Why can’t we have a nice, Gnomeish, simple dual-head config dialog which sets things up? Dual-head support still really sucks in Linux, partly due to drivers, partly due to X, and partly because of this damn Metacity, which doesn’t handle dual screens properly at all. I need this so much!
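For the record, what that missing dialog would have to generate by hand today is the classic two-screen layout in /etc/X11/xorg.conf; a minimal sketch (the identifiers are made up, and it assumes a matching Device/Monitor/Screen trio exists for each head):

Section "ServerLayout"
    Identifier "DualHead"
    Screen 0 "LeftScreen"
    Screen 1 "RightScreen" RightOf "LeftScreen"
    Option "Xinerama" "on"
EndSection

With Xinerama on, the two screens merge into one desktop, which is exactly the case Metacity handles poorly.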