An .iso image of myOS has been released. “Minimalistic GNU/Linux system, stripped down of everything, but core necessary files to compile and run OpenGL/C code. It has simplified directory structure and cleaned up internal cross referencing. It can fit single floppy disc without development components. With Scithech SNAP/MGL drivers (based on Mesa) it was possible to run OpenGL without X. Stripped down and modified GNU gcc compiler, mixed with diet libc includes and selected shared files seem to be able to compile all relevant libraries and produce stable and relatively small code. Apart from Necromancer’s file manager and OpenGL developing tools, this is pretty much your average Linux thanks to BusyBox.”
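For a concrete taste of what “OpenGL without X” means, Mesa’s off-screen OSMesa interface renders into a plain memory buffer with no display server involved. The distro’s SciTech SNAP/MGL drivers are a different Mesa-derived stack whose API may differ, so treat this only as an illustrative sketch (link with -lOSMesa; exact library names vary by Mesa build):

    /* render one frame with OpenGL into plain RAM -- no X server */
    #include <GL/osmesa.h>
    #include <GL/gl.h>
    #include <stdlib.h>

    int main(void)
    {
        const int w = 320, h = 240;
        unsigned char *buf = malloc(w * h * 4);   /* RGBA output buffer */

        OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, NULL);
        if (!ctx || !OSMesaMakeCurrent(ctx, buf, GL_UNSIGNED_BYTE, w, h))
            return 1;

        glClearColor(0.1f, 0.2f, 0.4f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        glFinish();                /* buf now holds the rendered pixels */

        OSMesaDestroyContext(ctx);
        free(buf);
        return 0;
    }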
Seems like a fun thing to try... but what is it good for? Why no X? Maybe for low-end machines, because of all that stripped-down stuff, or for embedded use?
LiveCD games perhaps?
Hopefully some of the speed improvements will filter back to the less performance-aware mainstream.
Perhaps it could lead the way for the future of Linux media centres?
Definitely embedded. It would be possible to have a GTK+/Cairo/Glitz/OpenGL stack that would enable a very fast desktop taking up very little resources.
Think of the Nokia webpads, or the Motorola Z6 phone, or the ACCESS Linux platform for cellphones.
X was intended to be network transparent – i.e. the server and client could be separated between multiple machines across a network and function the same way as if they were both on the same machine (although a bit slower). But who really uses that functionality in X today? I would guess not many people. This project provides a foundation upon which GUIs can be built in *NIX workstation environments that eliminate some number of software layers, thus almost guaranteeing better desktop performance. This could be an environment that would allow building *NIX desktops that have the smoothness and immediate response that WIN and MacOS provide now. I, for one, would welcome that.
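A minimal sketch of that transparency in Xlib: the very same program draws locally or on another machine’s screen depending only on the display string (“remotehost:0” below is a placeholder, not a real host). Build with something like cc demo.c -lX11:

    #include <X11/Xlib.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        /* NULL means "use $DISPLAY"; pass e.g. "remotehost:0" to talk
         * to an X server across the network instead */
        Display *dpy = XOpenDisplay(argc > 1 ? argv[1] : NULL);
        if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                         0, 0, 300, 150, 1,
                                         BlackPixel(dpy, scr),
                                         WhitePixel(dpy, scr));
        XMapWindow(dpy, win);
        XFlush(dpy);
        sleep(5);          /* keep the window up long enough to see */
        XCloseDisplay(dpy);
        return 0;
    }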
But who really uses that functionality in X today?
I and the developers that use my systems do. Every day. I don’t know what your background is, but in large collections of UNIX/Linux servers, remote X11 is one of the key features. Windows’ Remote Desktop doesn’t hold a candle to it, and nothing is worse than having to interrupt your work to walk down to the server room to log into a machine.
This project provides a foundation upon which GUIs can be built in *NIX workstation environments that eliminate some number of software layers, thus almost guaranteeing better desktop performance.
While I agree that performance is good, it’s a trade-off. Would a 10% gain in ‘performance’ be worth the reduction in functionality? How about 2%?
Modern X servers do a certain amount of rendering directly to a local video card via DRM and DRI. This mitigates the effect of the network code. Games such as Doom3 reportedly run at roughly the same framerate under Windows or Linux on the same system. Why tear out features if a significant performance benefit isn’t there?
I have involvement with environments that still use X’s network functionality. These environments support a lot of specialized US government applications running on large servers backed by large storage boxes. But you know what? Many of these applications have been, or are currently being, transitioned from remote X to a Web browser for remote access instead.
Sure, X and its network transparency will still be needed in some cases (I will admit to thinking that this was one of the coolest features of X back when I was introduced to it). But I still maintain that for many typical desktop scenarios, X is a bit of overkill, since it carries a lot of code to support features that many never use.
Please note that I never stated that we should dump X. It has its places even today, so we still need it. And I will admit too that a number of recent advances in the X space have resulted in performance gains. But if we can cut out layers of software for situations where it is not needed (such as with many *NIX desktops in use today), I think this can provide an environment for better user experiences in the majority of situations.
>>While I agree that performance is good, it’s a trade-off. Would a 10% gain in ‘performance’ be worth the reduction in functionality? How about 2%? <<
In 1988, the NeXT box had a very strong GUI running on a 25 MHz system. Try that on Linux running X.
I don’t really know, but I think the performance hit may be more than 10%.
I think you could get pretty substantial performance increases, but by using various tricks to directly access video RAM, possibly taking advantage of a vendor’s particular chipset in a very device-dependent fashion. (In other words, this is half the solution – the other half is a hardware stack.) I wouldn’t expect someone running something like this to be remotely interested in networked desktops, but instead in the maximum possible FPS while gaming or simulating.

You would eliminate not just X, but all the other tidbits that take up time, either because they use CPU time or just because they get “a share” while running. Things like disk indexing and searching would be useless, along with HTTP servers, NFS servers, etc. Ideally you’d have a kernel, the barest set of daemons necessary, and the game, and that’s it. My Xbox, for example, has a 700-something MHz Pentium CPU and runs a pretty crisp game. A 700 MHz Windows machine barely boots. Not all of that overhead is the desktop; a lot of it has to do with other non-game-related but desktop-necessary features used by the OS.
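A hedged sketch of the “write straight to video RAM” idea on Linux, using the kernel framebuffer device. The device name and pixel layout vary by driver; 32 bits per pixel is assumed here:

    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0) return 1;

        struct fb_var_screeninfo vi;   /* resolution, bits per pixel */
        struct fb_fix_screeninfo fi;   /* stride, mapping length */
        ioctl(fd, FBIOGET_VSCREENINFO, &vi);
        ioctl(fd, FBIOGET_FSCREENINFO, &fi);

        uint8_t *fb = mmap(NULL, fi.smem_len, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) return 1;

        /* fill the visible screen with a solid colour, assuming 32bpp */
        for (uint32_t y = 0; y < vi.yres; y++)
            for (uint32_t x = 0; x < vi.xres; x++)
                *(uint32_t *)(fb + y * fi.line_length + x * 4) = 0x203040;

        munmap(fb, fi.smem_len);
        close(fd);
        return 0;
    }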
This is very thoroughly hashed out already all over the net: the network transparency option doesn’t add layers if you’re doing things locally. At worst, it adds an “if” statement somewhere in the code. X on the local machine compares well with other graphics systems.
Also, I use the network transparent features of X. I believe it’s used for the NX remote desktop program.
It doesn’t add layers, but it does unnecessarily complicate the latency picture. It’s possible to develop a performant toolkit on X (try Athena or FOX for that).
It’s just that the most popular toolkits don’t manage to do that properly.
X uses shared memory for transferring big objects and Unix sockets for other data (events, messages, small primitives), so it’s no heavier than normal IPC on any other system, and shared memory is certainly fast. A faster approach is linking directly to a GUI/rendering library, which then talks to a kernel subsystem (which is, in fact, a graphics server). This is how Microsoft did it all those years (GDI), and they finally seem to be moving away from it (to a hardware-accelerated graphics server more similar in concept to X!).
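For the curious, a hedged sketch of the shared-memory path just described (the MIT-SHM extension): the client builds its image in a SysV shared memory segment, so handing it to the server avoids copying the pixels through the socket. Error handling omitted; link with -lX11 -lXext:

    #include <X11/Xlib.h>
    #include <X11/extensions/XShm.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy || !XShmQueryExtension(dpy)) return 1;
        int scr = DefaultScreen(dpy);

        XShmSegmentInfo shminfo;
        XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, scr),
                                      DefaultDepth(dpy, scr), ZPixmap,
                                      NULL, &shminfo, 640, 480);

        /* one segment, visible to both client and server */
        shminfo.shmid = shmget(IPC_PRIVATE,
                               img->bytes_per_line * img->height,
                               IPC_CREAT | 0600);
        shminfo.shmaddr = img->data = shmat(shminfo.shmid, NULL, 0);
        shminfo.readOnly = False;
        XShmAttach(dpy, &shminfo);

        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                         0, 0, 640, 480, 0, 0,
                                         WhitePixel(dpy, scr));
        XMapWindow(dpy, win);

        /* ... draw into img->data here, then push it without a
         * pixel copy over the socket: */
        XShmPutImage(dpy, win, DefaultGC(dpy, scr), img,
                     0, 0, 0, 0, 640, 480, False);
        XSync(dpy, False);

        XShmDetach(dpy, &shminfo);
        shmdt(shminfo.shmaddr);
        shmctl(shminfo.shmid, IPC_RMID, NULL);
        XCloseDisplay(dpy);
        return 0;
    }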
Point noted by the last poster. I will agree that X on Linux (X.Org 7.2) is getting better – not quite as heavy a memory footprint, and a more modular design. And ultimately (hopefully) we will see X incorporate methods to take direct advantage of the hardware on modern graphics cards (currently something being played with in Compiz and Beryl, but not ready for prime time). I just want to see a *NIX desktop environment that provides a desktop GUI experience as smooth and integrated as that found on Windows (whatever its other faults) and MacOS.

I vaguely remember a project called Berlin from a while back that was trying to re-invent the graphics environment on Linux without X – I will have to dig around to find something on that. I am mainly a KDE user on Linux, but maybe I need to spend some more time on some of the other desktops – maybe it isn’t all X’s fault.
“””
But who really uses that functionality in X today?
“””
Oh, for me, about all 60 of the business desktop users that I happen to support. The same probably goes for the vast majority of Linux desktops in business.
Besides, the whole “X is slow because of network transparency” bit went out the window *years* ago when MIT-SHM and, later, DRI showed up.
I agree that X’less OpenGL is a good thing for some applications.
But I get so tired of hearing the same *wrong* claims made about X over and over and over and over again.
While I’m at it, I should probably proactively dispel another popular one. When you look at how much memory X is using, keep in mind that you are looking at total mapped memory. The *vast* majority of it is *video* memory which, depending upon the driver, can be mapped 2, 3, or more times for different purposes.
Very little of it is system RAM.
Xorg has its inefficiencies. But they are mostly implementation details and not fundamental design decisions.
Jeff Garzik touches on this in the context of a broader topic, in this bittersweet paper from last year’s Linux Symposium.
http://tinyurl.com/357m7o
It’s a good read. You’ll laugh! You’ll cry! You’ll find out why your Core 2 Duo system takes so freaking long to perform certain more intensive operations… like shutting down.
Holy sheep s… Batman!!! Read the paper – well written and quite an eye opener. Partially supports what I said about X, but not because of the network transparency thing. Also possibly explains why I have seen so many complaints posted about the slowness of Gnome ;-). More and more functions are being pushed out to user space, and if developers don’t manage the frequency of system calls, we will all suffer.
There used to be a paper from John Carmack explaining that (back in the utah-glx days). Do you happen to know where I can find this? I’m tired of trying to explain this very fact to users who just won’t accept what I’m saying 🙂
Adam
Actually, that’s wrong, at least on my machine. I looked at /proc/$PID/smaps and found that a huge amount of virtual memory was anonymous (heap) memory, not memory that was mapped to the video card devices. It’s a common memory leak. I have to restart X regularly.
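A hedged sketch of one way to check this yourself: sum the Rss fields in /proc/<pid>/smaps, splitting device-backed mappings (paths under /dev/) from everything else. The field names match Linux’s smaps format, but treat the classification as approximate:

    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        char path[64], line[512];
        unsigned long lo, hi;
        long rss, dev_kb = 0, other_kb = 0;
        int is_dev = 0;

        snprintf(path, sizeof path, "/proc/%s/smaps",
                 argc > 1 ? argv[1] : "self");
        FILE *f = fopen(path, "r");
        if (!f) { perror(path); return 1; }

        while (fgets(line, sizeof line, f)) {
            if (sscanf(line, "%lx-%lx", &lo, &hi) == 2)
                /* header of a new mapping: is it a device node? */
                is_dev = strstr(line, "/dev/") != NULL;
            else if (sscanf(line, "Rss: %ld", &rss) == 1)
                *(is_dev ? &dev_kb : &other_kb) += rss;
        }
        fclose(f);
        printf("device-backed: %ld kB, anonymous/other: %ld kB\n",
               dev_kb, other_kb);
        return 0;
    }

Run it against the X server’s PID to see how the two sides of this argument compare on your own machine.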
That’s probably not X’s fault: you have an application that asks X to allocate resources (pixmaps, for example) and isn’t handling those resources correctly.
Sure, this could be X’s fault, but it’s most likely that one application is the culprit.
I realize I’m a little late with this, but my attribution was wrong. The paper and talk were not by Jeff Garzik, but by Dave Jones.
This would be cool. I know there’s something like this using the framebuffer – I can’t remember the name of the project. The downside is that you need to rebuild libraries to support it. But a distribution without X, with Cairo/OpenGL effects etc., would be something I’d install out of curiosity.
I know there’s something like this using the framebuffer – I can’t remember the name of the project.
DirectFB perhaps.
http://www.directfb.org
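Going by DirectFB’s own tutorials, the model looks roughly like this: grab the primary surface on top of the kernel framebuffer and draw, with no X server in sight. A hedged sketch with error handling omitted:

    #include <directfb.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        IDirectFB *dfb;
        IDirectFBSurface *primary;
        DFBSurfaceDescription dsc;
        int w, h;

        DirectFBInit(&argc, &argv);
        DirectFBCreate(&dfb);
        dfb->SetCooperativeLevel(dfb, DFSCL_FULLSCREEN);

        dsc.flags = DSDESC_CAPS;
        dsc.caps  = DSCAPS_PRIMARY;
        dfb->CreateSurface(dfb, &dsc, &primary);
        primary->GetSize(primary, &w, &h);

        /* clear to dark blue, then draw a lighter centred rectangle */
        primary->SetColor(primary, 0x00, 0x00, 0x40, 0xff);
        primary->FillRectangle(primary, 0, 0, w, h);
        primary->SetColor(primary, 0x80, 0xc0, 0xff, 0xff);
        primary->FillRectangle(primary, w / 4, h / 4, w / 2, h / 2);

        sleep(3);
        primary->Release(primary);
        dfb->Release(dfb);
        return 0;
    }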
X still has one very important use: it’s a standalone display server, so the process can die gracefully or horrendously without bringing down the whole system the way a Windows BSOD does.
At times, some projects seem to have no real rhyme or reason for doing them.
But taking an idea and going with it just to see where it leads can be its own reward and a lot of fun. Many projects start with a (somewhat) defined goal to aim for. The challenge is to reach that goal and achieve the desired result.
Other projects are done for the sheer challenge of attempting them. Along the way, many projects take on a life of their own and spawn new ideas and even more creative thoughts. While the end result may not be what was expected, it’s rarely a bad one.
The lessons learned can be quite valuable, even if not immediately so. When it’s not immediate, they become building-blocks for others down the road.
Bravo! These types of people take risks. People like:
Marconi – Radio
Kilby – The IC
Edison – Electric Light Bulb
Bardeen/Shockley/Brattain – Transistor
DeForest – Vacuum Tube
Torvalds – Linux
Innovation is always good. You never know where it can take you.
A bit off-topic, but people keep repeating it, so just to be a little educational:
The invention of the electric light bulb:
http://www.ushistory.net/electricity.html
A lie repeated often enough almost becomes the truth.
From “http://www.ushistory.net/electricity.html”:
“But its major deficit was that it could not serve as a source of power. The appliances we take for granted today – fans, refrigerators, electric irons and computers – could not be powered by gas.”
My grandparents had a gas-powered refrigerator in the basement for years. It was heavy as hell, so when they moved they had to leave it there.
Operating theory:
http://www.cam.net.uk/home/StKilda/electrolux.html
true.
Like who invented the airplane…
Bravo! These types of people take risks. People like:
Marconi – Radio
…
Edison – Electric Light Bulb
…
Lots of people were working on the same problems Marconi and Edison were; they just got good results first. A lot of work was involved, sure, but in the cases you mention they were working on something lots of others were working on at the time, with lots of commercial applications at stake, so I wouldn’t call it “taking risks”.
OK, back to the article: I guess it could be used for truly multiplatform, completely OS-independent games, although playing would involve a reboot.
What would this do to the performance of games?
Also, what kind of hoops would one have to jump through to run a DE on this? Hmmmm, KDE without X.
Also, what kind of hoops would one have to jump through to run a DE on this?
Hack Qt, the underlying toolkit of KDE. If Qt has to make Xlib calls to draw onto a graphical context, those routines would have to instead draw to a graphics context that myOS provides.
Or run a “fake” X server to trick Qt. The fake X server would do the translation from X protocol into myOS’ convention. Crazy idea, probably not practical.
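Purely as a thought experiment on the “hack the toolkit” idea above: hide the display system behind a small function-pointer backend, so the same drawing code can sit on Xlib or on whatever surface myOS provides. Nothing below is a real Qt or myOS API – both backends are print-only stubs:

    #include <stdio.h>
    #include <stdint.h>

    typedef struct backend {
        void (*fill_rect)(int x, int y, int w, int h, uint32_t rgb);
        void (*flush)(void);
    } backend;

    /* stub standing in for an Xlib implementation (XFillRectangle...) */
    static void x_fill(int x, int y, int w, int h, uint32_t c)
    { printf("Xlib: fill %dx%d at (%d,%d), colour %06x\n",
             w, h, x, y, (unsigned)c); }
    static void x_flush(void) { printf("Xlib: XFlush\n"); }

    /* stub standing in for a hypothetical myOS graphics context */
    static void m_fill(int x, int y, int w, int h, uint32_t c)
    { printf("myOS: fill %dx%d at (%d,%d), colour %06x\n",
             w, h, x, y, (unsigned)c); }
    static void m_flush(void) { printf("myOS: present\n"); }

    static const backend x11_backend  = { x_fill, x_flush };
    static const backend myos_backend = { m_fill, m_flush };

    /* "toolkit" code: never mentions X or myOS directly */
    static void draw_button(const backend *be)
    {
        be->fill_rect(10, 10, 80, 24, 0xc0c0c0);
        be->flush();
    }

    int main(void)
    {
        draw_button(&x11_backend);   /* today's Qt/KDE path */
        draw_button(&myos_backend);  /* the ported path */
        return 0;
    }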
What I wonder about is why software file sizes seem to get larger and larger as drive space becomes cheaper. Is it really necessary, or are developers just not as concerned with saving space?
Personally, I think developers aren’t as worried about saving space or writing really clean code. As the old saying goes, “time is money”, and cleaning up code that already works is just a waste of money.
Code itself takes up very little space. What’s happening is that new software includes more help files, icons, audio/video, example files, etc. Why? Because as you said, developers aren’t concerned about saving space anymore, and because users like the extra stuff provided.
Where clean code does matter is in memory usage. There are a lot of developers out there who don’t even think about memory, telling themselves that “RAM is cheap.” Luckily most of the people writing libraries and toolkits don’t think that way or it could be much worse.
Or Gnome without X? That could be sweet….
I’d also love to see what an already lightweight desktop environment like Xfce would be like on this…I suspect it’d really fly (assuming you could get it and various other apps to work well without X…)
–bornagainpenguin
After trying it out, a richer OS environment would be better. (I’ve tried to port some networking code to BusyBox-style embedded Linux with modest success – and only after hacking out the “useful” features.) With inexpensive solid-state storage at 1 GB and cheap laptop SATA drives at 80–160 GB, I’m not sure you really have to “fit on a floppy” any more. Maybe a phone application, but phones have their own OS with OpenGL (or other 3D-ish rendering). I would also think an ARM or PowerPC cross-compiler would be a helpful feature for embedded hardware.
All in all, I would love to see something similar, with a richer Linux OS in the background and highly accelerated OpenGL, perhaps with some sort of scene graph libraries on board. This would essentially be an open gaming/simulation platform: something with decent performance on a 1.5 GHz Pentium M class processor and kick-butt performance on an Intel Core 2 Duo with a highly accelerated OpenGL driver. Too many neat things to try, too little time… sigh.
Enough criticism, though. It’s a cool little distro.
Although it looks interesting (an OS in only 13 MB?), it’s still buggy: out of five boot attempts, only three made it all the way through, and in those three, after prompting the user to hit Enter, it showed a half-second’s worth of ASCII art before black-screening.
EDIT: Another six attempts to start up; only two got through bootup.
From the article on top:
“It can fit single floppy disc without development components.”
I think “disk” is meant (floppy disk, i.e. diskette), because discs (disc, as in discus – think CDs) aren’t that floppy. Please forgive my pickiness. 🙂
Scithech -> SciTech
Who still even has a floppy drive?
For the past several years I’ve been using a FreeDOS boot CD and a USB drive for any flashing duties.
My point is: floppy drives aren’t universal anymore, so it’s all but worthless to even talk about them. Shoot, even most stripped-down embedded devices now start at 2 MB of onboard flash – more than a standard floppy holds.
“Who still even has a floppy drive?”
My computer does. And so does my (MIDI-enabled) keyboard.
Perhaps I should stock up on cheap floppies before they stop being sold altogether.
By the way, those 1.44 MB floppies aren’t as “floppy” as the older, bigger ones.
Alexander Popov was the first to demonstrate a practical application of electromagnetic (radio) waves, at least five years before Marconi.
http://en.wikipedia.org/wiki/Alexander_Stepanovich_Popov