This entire article is written as a proposal to a corporation for a new and unique computing system. Please offer criticism and suggestions to improve the system, and tell me whether you think it could work. What exactly is the "Edge Computing System"? And more importantly, why would I want to go to the trouble of developing it? The Edge Computing System is just that, an entire system, not merely a new type of computer or a new software suite. The Edge is the means by which you can have your personal computer with you at all times.
1. The general idea
How is this accomplished? To put it as briefly as possible, the Edge computing system works by having all computers in the system completely compatible with one another. Your documents and configuration are carried with you on a piece of flash memory (compact flash, memory stick, whatever). When you insert your piece of memory into a computer (any computer) your configuration is automatically read and loaded, so that you are presented with the exact same desktop and files at any computer you go to (although some will run it faster than others, obviously).
Imagine the convenience of having everything you need with you at all times. You wouldn't have to worry about burning a presentation to a CD to take it to a prospective client; you could simply stick your card in their computer and present. For that matter, let's say that on the plane ride to your prospective client you notice that you are missing something from your presentation, a picture or table you forgot to download. Luckily for you, hotels would be equipped with these computers, so you could just plug into the computer at the hotel and use their internet access. Or, if you're staying in an old bed and breakfast that hasn't been retrofitted, you can stop at the local library and update your presentation (you wouldn't want to do it at your client's, that would look unprofessional).
That’s the point of this entire system, to be able to work on your stuff everywhere, to never have to worry about anything being left behind, but mainly to increase convenience and productivity. There are a lot of questions that need to be answered though, because this does seem like a huge undertaking. It is.
The hardware part (the simplest one in my view) will be talked about in the next section. After that I'll talk about the software hurdles that must be leaped. After that come the development issues (to give you a hint, I think the entire project can be ready to go in under a year, more on that later). Then I'll talk about how it will be tested, which I deemed deserved a separate section from development since it is done in a rather novel way, and then comes selling it, the hardest part. Last is the continuing development of the idea.
2. Hardware Issues
This entire idea started as a hardware issue. I have worked at a little computer repair shop for over a year, and one of the things I noticed was that hard drives fail constantly. In fact, they fail more than any other part of the computer (with power supplies coming in a close second). Now, this is a serious problem! Many people neglect to back up their hard drives, banking on them working for as long as they are needed, and they are dismayed to find that their hard drives have other ideas, one of which is failing and taking months of work with them. I thought, what if there were some way to get rid of hard drives, or, if not get rid of them, to rely on them much less?
The answer is simple, really. Divide the task of storing data into more than one area: a read-only part for the OS and software (not on a hard drive), and a writable part for your documents (allowing the writable part to be much smaller). The easiest way to make an OS read-only is obviously to put it on a CD-ROM, but there are several problems with that, such as noise, transfer speed, and size. The next step up from a CD-ROM is a mini-DVD. A mini-DVD will hold 2.8 gigabytes of data uncompressed, while a 700 megabyte CD-ROM will only hold about 1.7 gigabytes of compressed data. This means that much more software can be bundled with the OS on the read-only media. There are several other advantages as well, such as increased data transfer speed (over a CD-ROM, not a hard drive), less vibration, and smaller size. I'll go in depth about the mini-DVD reader in Appendix A.
The other part of the computer is the writable part for documents and configuration files. This would be a small piece of flash memory, as I stated in the previous section. Why flash memory instead of zip disks (or any other media, for that matter)?
There are three parts to the answer. The first is size. A compact flash card, even in a protective carrier, is only about an inch and a quarter by an inch and three quarters by an eighth of an inch. The second is . . . size. A compact flash card will hold anywhere from four megabytes to three gigabytes (although the cost increases exponentially). Obviously I'm also banking on the fact that compact flash cards will continue to increase in size and decrease in cost (which I believe is a fairly safe assumption). The final main factor is stability.
With this entire computer I've tried to keep the number of components with moving parts to a minimum, because moving parts fail whenever they please, whereas something without moving parts generally will not fail as easily. Compact flash has no moving parts, therefore it should be more stable. With a complete computer using this system the only components with moving parts are the cooling fans and the mini-DVD reader. You may have a back-up hard drive, but only to back up your flash card; you shouldn't be relying on it.
Almost everything besides what I have already mentioned is just like a standard computer. One difference is that for this to run well I would suggest something like a gigabyte or two of RAM, allowing you to load the entire operating system into RAM. This would make the computer extremely fast, and if you don’t have that huge of an amount of ram, there are other things which you can do, which I’ll go into in the software section.
So with this model there are several different types of computers which can be created. I'll go over the different classes and styles in Appendix B. For now, we'll just assume that the computer is a normal desktop model.
3. Software Issues
The software part of this is kind of a strange dichotomy, in that most of the work is already done, so it should be easy, but getting all the software to integrate and be one hundred percent stable and efficient will be a huge task.
First of all, the operating system used will be Linux, and more specifically, Linux running the KDE or GNOME desktop environment. I believe that KDE and GNOME are developing much faster than the other window managers, are getting much more press in the Linux community (than, say, Window Maker or Enlightenment 17, which I considered the main two other choices), and the eye candy is there. Some hardcore Linux fanatics will say eye candy isn't important, give them the command prompt, etc. Obviously this is not the best idea, just ask Microsoft. Many of the people who are going to use this OS couldn't care less about security, or the open source model, or anything else, but they will care about whether it looks pretty and whether they have to think about using it at all.
Second of all, tons and tons of software (which should be mainly open source) will be included with the OS, which is possible because we have 2.8 gigabytes of space to work with (even more if need be, since we can still use compression). To explain how the software needs to work, I’ll go through the boot up sequence from power on to having a usable desktop.
The first step is obviously switching on the power. The hardware goes through the usual POST, and when it's done with that it looks for a boot image on the mini-DVD player. It finds it, and you are presented with a welcome screen (which gives you the option of passing messages to the kernel, just in case, but times out after three seconds), which is quickly replaced by the hardware detection screen. This would simply be something with a progress bar and a frame saying what the computer is currently doing (for instance "auto-detecting video card" and "installing Radeon 9000 drivers"). After auto-detecting all the hardware (including printers) the computer then has a chance to download and install security updates over an ethernet card (which is activated and does a DHCP broadcast). Obviously this means that you must have high speed internet to download the updates, and that is mainly because people with dial up simply aren't attacked by hackers very often. Next comes the X Windows startup. X Windows and GNOME are started and you are shown a default GNOME desktop (or KDE, I still haven't decided what I want the default to be, though GNOME loads faster, which is why I chose it here), one in which all the configuration files are read from the mini-DVD. At this point the default programs are already loaded into RAM (such as OpenOffice.org, the Galeon web browser, etc.). Obviously some people will not use some of the features that are loaded by default (although if you have enough RAM basically everything can be loaded), but we'll talk about that problem later.
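To make the order of operations concrete, here is a minimal Python sketch of the boot flow described above. The step names mirror the paragraph; the helper functions and the connectivity check are placeholders I invented for illustration, not real components of the proposal.

    import subprocess

    def detect_hardware():
        # Placeholder: a real boot would probe the buses and load matching drivers.
        print("auto-detecting video card ... installing drivers")

    def network_is_up():
        # Placeholder check; the real boot would rely on the DHCP lease it just obtained.
        try:
            return subprocess.call(
                ["ping", "-c", "1", "-W", "1", "1.1.1.1"],
                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0
        except OSError:
            return False

    def fetch_security_updates():
        print("downloading and installing security updates")

    def start_desktop_and_preload():
        print("starting X and GNOME, preloading OpenOffice.org and Galeon into RAM")

    def boot():
        detect_hardware()
        if network_is_up():          # only attempt updates when a broadband link exists
            fetch_security_updates()
        start_desktop_and_preload()

    if __name__ == "__main__":
        boot()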
From this desktop you can browse the internet or work on files, anything you could normally do, except that it is all erased on the next restart because it is simply being saved in RAM. Now we're going to pretend that this is your first time ever using this computer and you wish to create a compact flash card with all of your stuff on it. You see a button labeled "Create Wallet" on the desktop ("wallet" is the catch-phrase I decided on for the compact flash card, since it kind of implies having everything there for you) and you click on it. Up comes a wizard which first asks you to insert a blank compact flash card, explaining that you can insert one that isn't blank, but it will be erased in the process of creating your wallet.
Now, the first step to creating your wallet is the partitioning part. In my view the wallet will eventually be integrated with basically everything (MP3 players, digital cameras, PDAs, etc.), so the first question it asks is whether you wish to use your wallet with MP3s (we're assuming you have a sizable compact flash card, like one gigabyte; anything less than 512 megabytes and this step will be skipped). If you answer yes then a 128 megabyte partition is created with the proper file system to be readable by most MP3 players. The applicable software is activated as well (I'll explain activation later). Next it asks if you wish to use the wallet with digital cameras as well. If the answer is yes another 128 megabyte partition is created with the proper file system, and the software is once again activated. A question is not asked about PDAs at this point, because compatible PDAs will be able to read the file system natively.
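As a rough illustration (not part of the original wizard), here is how the partitioning decision might be computed in Python. The 128 megabyte and 512 megabyte figures come from the paragraph above; the file system choices and function name are my assumptions.

    def plan_wallet_layout(card_mb, want_mp3, want_camera):
        """Return a list of (label, filesystem, size_mb) tuples for the wallet."""
        partitions = []
        remaining = card_mb
        # The 128 MB side partitions are only offered on cards of 512 MB or more.
        if card_mb >= 512:
            if want_mp3:
                partitions.append(("mp3", "vfat", 128))
                remaining -= 128
            if want_camera:
                partitions.append(("camera", "vfat", 128))
                remaining -= 128
        # Whatever is left becomes the root partition holding Docs/ and System/.
        partitions.append(("root", "ext2", remaining))
        return partitions

    if __name__ == "__main__":
        for label, fs, size in plan_wallet_layout(1024, want_mp3=True, want_camera=True):
            print(f"{label:8s} {fs:6s} {size:5d} MB")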
Now, with the separate file systems you are only given 128 megabytes of MP3 room, but that doesn't mean that that's all you can have. In the included software you will have basically a library of MP3s, but only 128 megabytes can be on the "device play list" at any one time. That way you can have several play lists, and when you decide on one to listen to you just make it the active play list, which copies it over to the partition. With pictures, the system simply downloads the pictures you have taken to a folder labeled "Unsorted" in the picture library on your root partition every time you insert your card into a computer, leaving the picture partition clean.
As more devices become available and compact flash cards increase in size you will be able to integrate more things, but these two give you an idea of the potential for integration.
After asking the questions and determining the number and size of partitions, the program will format the compact flash. Next it builds the skeleton directory structure, in other words, the basis on which everything else will be built. This consists of two folders on the root of the device, one labeled "Docs" and the other labeled "System." The "Docs" folder will have several sub folders with labels like "Music Library" and "Picture Library," but other than that I think it's fairly self explanatory.
The "System" folder will contain the configuration script (similar to knoppix.sh) and all the configuration files in one place. If a person downloads window decorations or themes, these will be installed to a sub folder here.
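For illustration only, a small Python sketch of building that skeleton tree. The folder names come from the two paragraphs above; the mount point and the "Themes" sub folder name are my assumptions.

    from pathlib import Path

    SKELETON = [
        "Docs/Music Library",
        "Docs/Picture Library",
        "Docs/Picture Library/Unsorted",   # cameras dump their pictures here on insert
        "System",                          # configuration script and config files
        "System/Themes",                   # downloaded window decorations and themes
    ]

    def build_skeleton(wallet_root="/mnt/wallet"):
        for rel in SKELETON:
            Path(wallet_root, rel).mkdir(parents=True, exist_ok=True)

    if __name__ == "__main__":
        build_skeleton("./wallet-demo")    # write to a local folder for a dry run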
A form is presented next for the basic information about whoever is using this card. This will include things like name, birth date, address, and so on, but the biggest part is the security question. Security is quite in depth, so I won't go over it right now. A brief outline is in Appendix C, but any suggestions would be appreciated.
The next step is what I call "activating software." This is probably the most tedious part of the setup. You are presented with a few simple yes or no questions at first, such as: "Will you be using this computer for office related tasks (such as word processing)?" or "Will you be using this computer for keeping track of finances?" After answering these questions you will be given choices between software, where applicable. For instance, the area for email clients may look like this:
Which email client do you want to use (if you’re not sure, you can choose more than one, then choose one later)?
* Kmail
* Evolution
* Fetchmail
* Mozilla Mail
* I don’t need this capability
When choosing, there will be a frame giving details of the choices as you roll over them with your mouse. This screen can be found again in the "Configure" menu under "Application Management." The only difference is that instead of being asked one by one which application you wish to use for which purpose, you are given several screens with related applications. After you are done deciding which programs you want to use for which tasks comes the aesthetic part of the setup.
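As a sketch of what "activation" might persist to the wallet, here is one possible record mapping tasks to the chosen applications. The file name, format, and location are assumptions on my part, not something specified in the proposal.

    import json

    # One entry per task; more than one program may be kept until the user decides.
    activation = {
        "email":   ["Evolution"],
        "office":  ["OpenOffice.org"],
        "web":     ["Galeon"],
        "finance": [],                  # "I don't need this capability"
    }

    def save_activation(path="/mnt/wallet/System/activation.json"):
        with open(path, "w") as f:
            json.dump(activation, f, indent=2)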
This is basically the same as the KDE first-time wizard, with a few extras. You decide your window decorations, background, screen saver, position of the panel, icon set, widget style, window behavior, and a few other miscellaneous things.
The final step only happens if the OS detects a hard drive. If it does, it asks if this is the computer you wish to back up to. If you say yes it adds a partition to the hard drive the size of your card (as long as there is free space) and records the MAC address of the network card. This is so your card can identify and back up to this machine, since you are quite unlikely to come across another network card with the same MAC address. If you wish to change this setting later you can, using the Configure menu.
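A minimal sketch of that check, assuming a Linux machine where the MAC address can be read from sysfs; the interface name and the file the wallet stores the address in are my own placeholders.

    from pathlib import Path

    def local_mac(interface="eth0"):
        return Path(f"/sys/class/net/{interface}/address").read_text().strip()

    def should_back_up_here(wallet_system="/mnt/wallet/System"):
        record = Path(wallet_system, "backup_mac")
        # Only back up when the recorded address matches this machine's network card.
        return record.is_file() and record.read_text().strip() == local_mac()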
The next thing you see is simply the "Congratulations!" screen, saying that all you need to do is hit OK and your personal settings will be loaded, or you can hit the Quit button, which exits the wizard without loading your settings and returns you to the default desktop. In either case, your card is finalized and ready to be used.
Now whenever the card is inserted in the computer it is auto-mounted and scanned for a configuration file. If one is found, X is restarted with the configuration file from the card loaded.
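Here is a hypothetical hotplug hook illustrating that behaviour; the mount point, the configuration file name, and the command used to restart the display are all assumptions for the sketch.

    import os
    import subprocess
    from pathlib import Path

    def on_wallet_mounted(mountpoint="/mnt/wallet"):
        config_dir = Path(mountpoint, "System")
        if (config_dir / "desktop.conf").is_file():
            # Point the session at the wallet's configuration and restart X.
            os.environ["XDG_CONFIG_HOME"] = str(config_dir)
            subprocess.call(["/etc/init.d/xdm", "restart"])   # placeholder restart command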
There is one other thing that needs to be addressed, and that is the RAM requirements. Most new computers should simply come with a lot of RAM, but if an old one is retrofitted you may not be able to add much. The answer to this is that as a person uses programs, a list is kept of the most commonly used ones. Programs are assigned priority according to this list, and are consequently either loaded into RAM on boot, or have to wait until they are executed to be loaded into RAM from the mini-DVD. This means that the first few times the wallet is used things will generally be slower than later.
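One way the usage-based preload list could work, sketched in Python. Counting launches in a JSON file on the wallet and the per-program sizes are assumptions; the point is simply to show the most-used programs being preloaded until a RAM budget is exhausted.

    import json
    from pathlib import Path

    USAGE_FILE = Path("/mnt/wallet/System/usage.json")

    def record_launch(program):
        counts = json.loads(USAGE_FILE.read_text()) if USAGE_FILE.exists() else {}
        counts[program] = counts.get(program, 0) + 1
        USAGE_FILE.write_text(json.dumps(counts))

    def preload_list(sizes_mb, budget_mb):
        """Return the most-used programs that fit within the RAM budget."""
        counts = json.loads(USAGE_FILE.read_text()) if USAGE_FILE.exists() else {}
        chosen, used = [], 0
        for program in sorted(sizes_mb, key=lambda p: counts.get(p, 0), reverse=True):
            if used + sizes_mb[program] <= budget_mb:
                chosen.append(program)
                used += sizes_mb[program]
        return chosen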
That is basically how the computer works. How this will all come to pass is talked about in the next section.
4. Developing It
As I said earlier, developing this idea is going to be both difficult and easy, but I think that it is by no means an impossible task. All it will take is modifying hundreds of programs so that they work together to perfection.
That was a little bit of humor there, by the way. OK, so here is how it will work. The development would take place in three phases: data gathering, programming, and testing.
The first part of the development would be a data gathering period. I think that this part could be handled in anywhere from one to three months (although three months is definitely preferable to one). This would consist basically of a multitude of surveys covering many things, the two most important of which are:
What do you want in an operating system?
What tasks do you perform with your PC most?
Another part of the data gathering stage would be a group of Linux enthusiasts who would compile a huge list of open source software with details such as purpose, ease of use, features, size, and other data. Then all the software would be tested by a completely different panel of people, who would look at it from the point of view of someone migrating from other operating systems and choose at least three pieces of software for each task (if three can be found). The premise behind this is that you would have what I call Apple, MS, or Linux programs. That is to say, easy to use but not too customizable, fairly easy to use and fairly customizable, and difficult to use but extremely customizable (respectively). Then the software would be prioritized and shaved down until the entire software/OS package fits in 2.8 gigabytes of space.
The next step is programming, which I have split into two categories, aesthetic and functional.
The first part of the programming should take another three months (with a fairly hefty group of programmers). This would merely be to go through every program, one by one, and change the look so that it fits with the entire operating system (meaning that icons are similar, colors are customizable along with the rest of KDE or GNOME, etc.). This would involve working with the dreaded theming engines, so asking for all the software to look unified might be a bit of a tall order, but it should be done as much as possible.
The next part of the programming would be functional: making sure that programs don't conflict with each other, making sure that programs are stable, and that things like copy and paste work between programs. Another three months would be allotted for this period.
It may seem like I'm not allotting much time for each section of programming, but it's not as little as it might first appear. You first have to accept the fact that the software is already created, and that the goals being pursued in the development of this OS have already, in a broad sense, been pursued by the individual programmers themselves.
There are two special considerations which must be taken into account during development. The first is that, during the six month period of programming (and into the following three month period I will talk about in a minute), several programs will be produced from scratch where I see deficiencies. The first is a media player (to be integrated with the file manager) and the second is a good financial management program, to take over the "Apple" role in that section (we already have GnuCash as either an MS or Linux version). The second factor which must be taken into consideration is that quite a few different companies must be contacted and offered the chance to make their product a part of the OS natively. The first ones that come to mind are RealPlayer, Shockwave, and Flash.
The final three month period is a code audit, going from line one to line ten million. This will definitely be the hardest part, and will most likely extend into the testing phase. All programs will be audited, by different people than those who originally worked on them.
5. Testing It
The testing process is one of the most important, since it will (hopefully) bring out the vast majority of the flaws in the system. The testing will be separated into two distinct parts. The first is a security section, attempting (in a very novel way, I might add) to bring out even the most ingenious hacker strategies and fix them. The second is a pilot town. I will talk about the security issues first.
But before that I'm going to talk about the general attitude that must permeate everything that has to do with this operating system. That is the fact that it must be viewed not as a commercial venture, but as a genuine attempt to make the lives of those who would buy this better. The open source model must be maintained. I will say that again because it is very, very important. THE OPEN SOURCE MODEL MUST BE MAINTAINED. That means that if someone wants to view the source code of any part of the operating system, it should be readily available to them. Why? Because we need the cooperation of the open source community and we need the trust of governments and people. This should be viewed as a community effort. I will go more into how this will work in the "Continuing It" section, but you must understand that without the cooperation of basically everyone this will not succeed.
The way that security leaks will be brought into the open will be through a contest. One computer will have one file on it. This file will be the object of the contest. If someone can hack into the computer and retrieve the file (which is protected by our operating system, hidden behind a server version of the operating system with a firewall), they get a prize. To be more specific, if they can print up a copy of the file and mail it to the company, they get something like a hundred dollars, every time they do it (the file will be changed after every successful breach). This may seem like it could get expensive, but if you consider the fact that it's cheaper than paying someone to look for flaws (especially since they would most likely miss quite a few), and that it will serve as advertising, proving to the masses that the customer really is coming first, then it isn't that expensive after all.
The second test method is the pilot town. This is a simple idea really. Just pick a likely town, offer it dirt cheap computers, retro-fit kits and free tech support, and see what happens.
I am very specific when I say likely town, though. It must be relatively small (say, less than 25 000 people) but not too small (say, over 15 000). The next requirement is a high school with a population of at least 1 000, a college with a population of at least 2 000, a fairly large library, and several middle and elementary schools. One nice bonus would be a small coffee shop, which could be outfitted as a cyber-cafe.
This town is modeled quite a bit after my own home town, because it seems like it would provide an excellent base for an experiment without being so big as to overwhelm people.
One suggestion to begin the process would be to outfit the high school, college and library for free, which would hopefully get the rest of the town interested, and would only take about thirty computers.
Why would a town want to do this? You simply point out to the people that by doing this, not only will they get computer equipment amazingly cheap, but it will put their town on the map through magazine and newspaper articles about this "pilot town."
6. Selling It
In this part of my proposal I had a detailed outline of an advertising campaign, but I really am kind of embarrassed by it, so I'll just go into the rest of the selling section.
The other main selling point for the OS is the cost. The OS and bundled software should come at a price of less than just the operating system from the other guys (we'll say a little less than $100). That is for a full version. There will also be two types of upgrades, as I will explain.
Version numbers for the OS will dictate how much is paid to upgrade. Decimal number advances will imply small software and security enhancements (which is why a new version will probably be coming out once a month or so). The point behind this should be that if it's working, don't fix it. You don't need to upgrade from 3.04 to 3.05, because it probably addresses an issue which you haven't had. These upgrades will be extremely cheap: simply trade in your old disc, pay ten dollars, and walk out of the store with a newly upgraded OS, as long as the two versions are within a tenth of each other (that means that you can upgrade from 3.10 to 3.20 for ten dollars and your old disc). For anything within one whole number it would be twenty dollars (for instance, you could go from 3.01 to 3.99 for twenty dollars and your old disc). For big upgrades, with large changes in the OS itself and many programs, you would go to a new whole number. This would cost fifty dollars (for fifty dollars you could go from 3.53 to 4.0, and your old disc). For anything beyond one whole number's difference you would just have to buy a completely new OS.
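To make the pricing rules concrete, here is a tiny sketch of the upgrade fee as I read the paragraph above. Treating versions as major.minor floats is my simplification, and edge cases the proposal doesn't spell out (for example jumping from 3.5 to 4.5) are simply left out.

    def upgrade_price(old, new):
        """Return the upgrade fee in dollars, or None if a full copy must be bought."""
        if new <= old:
            return 0                                  # nothing to upgrade
        if int(new) == int(old):                      # staying within the same whole number
            return 10 if round(new - old, 2) <= 0.10 else 20
        if int(new) == int(old) + 1 and new == int(new):
            return 50                                 # stepping up to the next whole release
        return None                                   # more than one whole number away

    assert upgrade_price(3.10, 3.20) == 10
    assert upgrade_price(3.01, 3.99) == 20
    assert upgrade_price(3.53, 4.0) == 50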
One exception to this would be version 1.0, which would sell at the cost of an upgrade rather than a full version for the first six months, to make it easier to switch. Licensing to companies like Dell and Compaq would be different as well. Instead of a price-per-user basis you would simply sell one gold copy of the mini-DVD and allow them to make as many copies as they want, as long as each is bundled with a computer. That way the computer retailers would be encouraged to switch as well, and they could do your advertising for you.
Of course you could also download ISO images, but I personally think it would be easier just to pay ten bucks a month and stay up to date.
7. Continuing It
Now I will explain why open source is so important. It is mainly because if we maintain open source we can expect the cooperation of most open source developers. If someone has a program they think would go well with the rest of the OS, then they can contribute it. In time I think developers would program with our OS as the standard, attempting to make their programs compatible with it. We would not have to spend money on continuing development, but instead simply work on integrating what they create into the operating system. As this becomes even more popular, more and more programs will be made using the open source model.
Another idea, to take care of some deficiencies in programming, would be to donate money or computers to a college with the understanding that they would program one piece of software as a class project.
The main point behind this is that the cooperation of the open source community will provide a huge and free labor pool, and with little encouragement beyond recognition you can use their programs and be viewed as a flexible collaborator instead of an inflexible dictator attempting to grab as much money as possible from the end user. And that is the main point: this system is made not with making money as the major goal (although I believe that will be a nice side effect) but with making the consumer's life better.
Appendix A: Mini-DVD Reader
The mini-DVD reader is one of the most crucial parts of this entire system. It allows an entire OS to be stored where it can't be hurt. It has two definite advantages. The first is that it's virus proof, since viruses can't infect something they can't write to. Your flash card could be infected, but even if that happened it would only mess up your personal files and not bring the entire computer down. The other advantage is that the computer would be (what I refer to as) "Grandma" proof. What I mean by that (and I'm not trying to insult grandmas, because I have several excellent ones myself) is that someone who maybe doesn't know a huge amount about computers, and might normally mess up the operating system accidentally, can't. This is because if things get really messed up all you have to do is restart the computer, and it's like you have a fresh load. Being Grandma proof is very closely related to being "Computer Expert" proof and "Oh yeah, just delete it, it'll be fine" proof.
Anyway, enough about the advantages, let’s talk about how this is actually going to be done.
The first difference would be that the mini-DVD in this reader has eight small (perhaps a half millimeter in diameter) holes drilled in the plastic part, which correspond to eight small "posts" which stick out of the spindle the mini-DVD sits on. The "posts" go through the holes and into a stabilizer, which is hinged on the first mini-DVD reader housing. I'll go into the two housings later on.
What you have is a mini-DVD resting on a flat (approximately two centimeter) piece of material with eight "posts" sticking through it and into another two centimeter wide circular stabilizer, which rides on a double set of ball bearings. The motor spinning the mini-DVD spins at around 10 000 rpm (I read that that was the fastest sustainable speed for a small electric motor currently). The eye tracks like a usual CD/DVD-ROM eye (I toyed with the thought of having the eye spinning in the opposite direction, to increase speed, but that would increase vibration, decrease accuracy, and add moving parts). The entire apparatus rests within an open topped box, which is inside another slightly larger box with a hinged lid. The inside box is attached to the outer box by rubber pads, so that, again, vibration is reduced. Cables supplying power and data are routed out through the outer box.
One note is that the mini-DVD must be loaded vertically, then have the stabilizer put down on it, then have the airtight lid of the second box shut. One cosmetic thought is that it would be cool to have a plexiglass or clear plastic lid, with a gasket on the box rather than the glass, so that it looks all neat and clean.
The final note is that it would use Serial ATA, not necessarily because it could use the extra bandwidth, but mainly because of the size factor.
Appendix B: Computer Styles
There are, in this computing system, four basic computer designs (five if you include servers). I will go over them, starting with the simplest and working my way up.
The first computer type is what I call an "ultra-thin." This computer would consist of the motherboard (with everything built on) and the mini-DVD reader. This allows only internet browsing or word processing, etc., without the possibility of saving to a wallet (although you could conceivably upload things to an FTP server if the need arose). The computer I envision in this class would be an LCD screen with everything attached to the back. Its thickness would probably be around two inches, and it would have around a two gigahertz processor, two gigabytes of RAM, built in sound and speakers, and wireless ethernet, keyboard and mouse (with rechargeable batteries). This computer would not be that common, but might be seen in libraries, cyber-cafes, schools, and prisons.
One interesting thing about this computer is that you could have just the “core” (processor, motherboard and reader) as a small box, which could be screwed to walls or under desks, and then just hook up a USB compact flash reader. This way it could be integrated with desks and other furniture.
The next type of computer is called "thin." This is a computer just like the previous one, except with the addition of one or two compact flash readers. You might need two to transfer files between two wallets. In this case the first one inserted would be "dominant," which is to say its configuration would be loaded, while the second one would be "subservient." I believe that this computer would be the most common, since it would be fairly inexpensive and yet still have full functionality.
"Normal" is the simple name of the next type. It comes with everything the previous two had, except it could have a CD burner/DVD-ROM and a backup hard drive as well. This type, however, I don't think would be that common, because my goal would be to have several "thin" stations dispersed in a SOHO environment or home, and just have one server to back up all the wallets to.
The last type is a "power user" computer. This would have everything the previous model has, except it would also have a beefed up graphics card and a cartridge mini-DVD player. What I mean by that is, for the gamers the computer would basically become a console, with a console emulator and a mini-DVD encased in a cartridge (quite similar to Sony's MiniDisc). This would also help prevent piracy (although a way will always be found to get around any measure).
The server also has several special purposes. It would be a gateway, firewall, DHCP and print server, while also backing up memory cards. It would also have a few other, less common uses. One would be that it automatically downloads updates for the computers to install at the next reboot. And with very expensive proprietary software (Dentrix is the first example which comes to mind), it would house the program, which would be loaded onto the computers when wanted; the server would also handle licensing by allowing only x number of people to be signed on at any one time.
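That licensing idea amounts to a counting limit on concurrent users. Here is a minimal sketch of how the server side might enforce it; the seat count and the class name are illustrative, not part of the proposal.

    import threading

    class LicensePool:
        """Allow at most `seats` simultaneous check-outs of a housed program."""
        def __init__(self, seats):
            self._seats = threading.Semaphore(seats)

        def check_out(self):
            # Non-blocking: refuse the launch if every seat is already taken.
            return self._seats.acquire(blocking=False)

        def check_in(self):
            self._seats.release()

    dentrix_pool = LicensePool(seats=5)   # e.g. a five-user licence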
There would also be several sub-types, of course, such as a media computer for those who are addicted to MP3s (or Ogg Vorbis). But generally the computers would fit into one of the categories above.
Appendix C: Security
Security is a pretty big risk with this system. Imagine getting mugged for your computer wallet. It could potentially be more damaging than having your actual physical wallet taken. That is why there are some drastic security measures which must be taken.
The first is, of course, encryption. How difficult the algorithm is to break would depend on what you set as your security level. This could require anything from an eight character password to a twenty character password, but there is no perfect encryption.
That is why we have a second level of security. Once a card is put in a computer, you have thirty seconds to input the password or the entire card is erased. This, obviously, would really only work if the card were put in a machine running the OS for which it was designed.
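A rough sketch of that thirty second unlock window, for illustration only. The actual erase is left as a harmless placeholder, and the timeout mechanism (a Unix alarm signal) is simply one way it could be done.

    import getpass
    import signal

    UNLOCK_WINDOW_SECONDS = 30

    def erase_card():
        # Placeholder: a real implementation would securely overwrite the flash device.
        print("Unlock window expired: the wallet would now be erased.")

    def prompt_for_password():
        def on_timeout(signum, frame):
            raise TimeoutError
        signal.signal(signal.SIGALRM, on_timeout)
        signal.alarm(UNLOCK_WINDOW_SECONDS)
        try:
            return getpass.getpass("Wallet password: ")
        except TimeoutError:
            erase_card()
            return None
        finally:
            signal.alarm(0)   # cancel the alarm once a password has been entered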
There are several obstacles to surmount, but the measures I stated will deter most criminals. There are a few other ideas which I've had, but I would want to talk them over with an expert before going and embarrassing myself.
Appendix D: The Franchise System
My franchise system takes its cue from the fast food industry, which is obviously full of franchises. The way it works is that for every x number of people (say, 50 000) in an area we place a shop, with its own name but a little sign saying "Official Edge Shop" or something like that. These stores are given excellent deals on hardware and software, and they keep all the profits. In return, they must handle all the service inside their area (although after the warranty is out they can charge whatever they want, within reason). The way a person would get a shop is by simply paying a one time fee at the opening.
There is a catch, however, and that is that they are subject to periodic reviews by “roving reviewers,” who may stay in town for several days or a week and check on all the shops (although they don’t have to tell them they are reviewing them, they can just show up and ask for service, to see responses to various problems). Also, there would be a form you could fill out on our website reviewing the shop which you go to for service, which would hopefully keep the shops honest. If a shop was found lacking we would simply pull our name and deals from them.
The obvious benefit of this is that people get a personal touch, someone they can go to with a problem who will help them. This reflects well on the company as a whole, plus there could be a record in the System folder of everyone's wallet with details of service visits, so that if a person is out of town they can still stop in at a shop and have their history known.
The disadvantage is that it creates a middleman, who would inflate prices, but customers could choose to buy direct and just settle for phone support. The hope, however, is that with this system service will rarely be needed.
About the Author:
Joshua Boyles is 18 years old but already has considerable computer experience. He lives in Oregon, USA.
Reader Comments:
In Windows, there is the roaming profile, which serves a similar purpose.
A flash card isn't enough for all types of documents; at work, some guys have two hours' worth of documents totalling about 4 GB.
The concept is good, however, there needs to be a massive software modification to fit the “standard” and the return on investment is next to zero.
You are basically offering Knoppix on miniDVD with some nifty look upgrades and a big flash card (yes, Knoppix already has the ability to save configuration data and the like to flash). The whole hardware thing is entirely unnecessary–nearly every computer has a CD-ROM or DVD-ROM in it these days.
I STRONGLY doubt that the “open source” community will cooperate with you on this. You are basically a company that offers almost no core customizability of the system and piggybacks on the open source development method. Yes you could view the source, but where does it get you? Your DVD is already burned and your “upgrades” will cost money, and you won’t be able to control them. Oh, it’s easy enough to say “change all your software to look the same, O free software community, and by the way, do a code audit while you’re at it so I sell more and don’t get in legal trouble.” In practice this will never happen. Also, how do you plan to get the security updates on to the disc? Just download them every time? Or will we have to buy a new DVD for each?
Sounds like you need a time machine for the dotcom/venture capital era.
Interesting idea, but I've had the impression that Compact Flash is a) slow, b) not that reliable itself, and c) has limited rewrite capability.
I do worry about HD failure and often wonder how to make it easy for average users (and me) to easily backup and recover. Here is my pie-in-the-sky scheme:
All computers conforming to this standard have one internal drive and two hot swap drive bays with one containing a drive (other drive optional). The internal drive is for the OS and base libraries/files ONLY (as well as temp files and swap). It can always be rewritten via a CD/DVD that comes with the computer and patches/updates to these files are done via the net (or for a small fee you get a new disk every month in the mail).
The first removable drive is your primary data drive and contains everything you hold dear, including programs (I’m not 100% on that part).
The second bay is for either more storage, or your backup drive. Backups are done by copying from one to the other (or with a utility akin to Second Copy 2000 http://www.centered.com/ )
If this sounds like the old days of dual floppy drives, data disks, and backup disks, that is kind of the idea. A simple model with physical components users can understand.
Of course the problem is that hard drives can still be physically fragile. Maybe we need to wait for some magical high-capacity, high-speed, (practically) infinitely rewritable solid state storage medium.
Sigh.
PS: Of course I can't get this damn FireWire HD I bought to format under W2K, so I can't even pretend I have part of this setup today. Grrr.
Note that with my hard drive scheme, if all computers of this type are of a standard configuration (probably never happen, damn you Moore) you could conceivably take your system in for repair and get a loaner while your machine is away, but keep your data with you.
Well, I know this has been said once already and will no doubt get a billion other mentions here, but this varies too little from Knoppix to be revolutionary. Knoppix is on the right track. It is already free and usable and requires no investment from corporations. I can’t see this being that much better than Knoppix; especially considering the investment it requires. Regardless, thanks for the interesting article and keep thinking!
The concept is interesting but not that far off what is already available to anyone who wants to use it. The world can’t agree on which desktop architecture (Several major and minor types) to use or which OS (umpteen hundred?) but for this to work as you suggest everyone would have to agree on one thing.
Anything to obviate the need for Micro$oft Windows is a good thing. As others have mentioned, the idea is very similar to what Knoppix already offers.
I believe there are many take-your-computing-brick-with-you ideas in play right now as well. As of yet, many have not made it to market, but will soon.
The portable computing brick has many more interesting upsides than what is essentially portable data.
The idea is nice, but with so many varieties of OS it will be very hard to get the uniformity and consistency required. Even if Linux becomes the standard there are already many incompatibilities between distros (POSIX only goes so far). I very much doubt the seamless software needed will be possible.
And I can hardly see Apple fan-boys giving up their hardware…
I’m afraid I don’t see much reason to allow people to migrate from computer to computer like this. People like using their own, personal equipment. Buy a notebook and use your computer on the go. Why would I want to use other people’s computers when I have my own?
Jared
Sounds a lot like Knoppix. I think some concepts are open for change: what will computing look like in the future? Will we still use hard disks, or will we use an internal flash disk that loads the OS into RAM and runs without any spinning parts? Maybe we need a lighter operating system too, one that can run on less RAM than what we are used to now, on low-end systems without complaint.
I tested Knoppix; it works well, but has its hardware issues. And the compressed file system works faster than the uncompressed one, so an uncompressed file system would slow it down.
I sure hope that some changes will take place, good luck with the project.
How are you going to load your OpenOffice.org or KOffice or whatever-format documents on a Windows or Mac OS machine? Seems to be a pretty big problem, doesn't it?
Anything that requires the cooperation of so many different entities just won’t work.
You’re better off looking at something that ‘piggybacks’ onto other operating systems to give a similar effect.
An excellent example of this is xwt (www.xwt.org) that runs on your various operating systems to provide applications that look and feel the same.
Also, I would have thought that the way forward would be remote storage of information, so your information is available to you from anywhere without having to carry around an annoying disk.
Geez, I can't foresee any time in the future where I'll be paying for the Linux kernel, the GNU/Linux OS, and free open source software. At least not anytime in the near future. Now if all this were free... It's nice in theory, but let's be pragmatic: secondary storage devices such as hard disks are getting faster, better, and bigger. Take Apple's iPod, for example. It's an MP3 hard disk player that I've slammed on the floor more than once. I even jog around with it. And it has yet to fail me. So the argument about hard disk reliability, while noteworthy, is not as terrible as you make it sound.
This idea is very similar to a Linux distro called Knoppix. It's basically a CD and a floppy containing your popular Linux desktop environment, user settings, and other pertinent Linux files. In fact, I suggest every Linux user have a copy of Knoppix in addition to their primary Linux distro for those times when you want to do things on Linux as opposed to Windows: at work, at a relative's place, in a library, etc.
1. Yesterday I installed Linux on one of my computers to use it as a proxy running Squid. Well, I searched for a graphical program to configure Squid and didn't find one. So I opened the configuration file "squid.conf" and started to figure it out. I would like to see a Linux distribution with all its configuration files exposed in a control panel, each with a graphical program offering all the same options the .conf file has.
That's the problem with Linux. The existing distributions don't have any control-panel interface to those files... for now we must edit .conf files and be experts to understand them.
While dealing with .conf files I also needed to edit a file in rc.d to start a program when the OS boots.
2. Another important thing for Linux is plug-and-play support for devices, with an option in the control panel to install them easily... Yesterday I installed the Linux operating system without my modem connected. Once Linux was installed and working I connected my modem, set up a dial-up connection, and chose the modem port; it didn't work! Then I chose another port (I don't remember which now, but I think TTY01) and it worked, fine :)
I would like to know what would have happened if it had been an internal modem...
This is the strategy that Sun Microsystems is trying to put forward. Well, that is what I gathered from a video on CNET.
Don't get me wrong: this would be a nice thing if it were possible, but there are several reasons why this is a crackpot idea:
- Open source enthusiasts want multiple options for everything. While you would want to use a memory stick, I use a flash card, and others use a USB hard drive. So all those interfaces and drivers need to be supported, but as it is open source, people are more interested in hacking and developing than in designing universal interfaces, let alone usability tests.
- People in the States still use gallons, pounds, miles, inches, feet, and hogsheads, even though Congress long ago decided that SI is the official way to go. Could you imagine what has to be done for a universal computer to work? As it is, it is a miracle that you can phone from Afghanistan to the USA.
- While you might like the universal KDE window manager, I think it sucks and want to use universal GNOME. Unfortunately my desktop configuration cannot be read on your system, because the window manager designers don't give a rat's ass about interfacing with software not their own. Using the clipboard for copy and paste is still very shaky even today.
I bet there are even more reasons, but this will do.
int.
I am not sure a compelling enough problem exists for a revolutionary solution.
1. The Internet makes document availability less of an issue. If I am travelling and forget a document, I can have it e-mailed to me. If I am thinking ahead but do not wish to carry physical media, I can put it up on a ftp or web server. A variety of remote access solutions exist.
2. Web access is getting easier and easier (for instance, WiFi).
3. MS's monopoly status, near-universal HTML and PDF readers, and evolving MS-compatibility and open document format efforts (the last of which I think is the most important on a long-term basis) make viewer/editor availability less of an issue. Java-on-the-client can also be used to great advantage here.
4. Notebooks are getting lighter, cheaper and more powerful every day. (In terms of size, they have already reached the physical limits imposed by human hands and human eyes.)
5. For use within closed environments (company, school), a variety of thin client solutions are available. I have got to believe that thin client solutions for closed environments are going to be a big growth area, and might be an area where small Linux-based companies can compete with Sun and the like.
6. With 24/7 network availability, thin clients need only passwords rather than physical dongles or media.
Hmm, so you’ve basically described a thin client machine with a CF slot. I think we’ve tried this before.
While your idea has some merit, I think you've gone a little too far. You cannot control the hardware, so just forget about that. While you may not need anything more than your 2.4 Gb of storage, or a GeForce 2 card in your machine, someone else may need half a Tb and want a Radeon 9700 in theirs. Not all computer users are equal and they will resist any attempts to make them so.
The same applies to software! You will never get people to standardise on Linux. Even Linux cannot standardise on one distribution. Your examples give choices at every turn; if you're standardising, why not just choose one desktop environment, one email client, one web browser, one office suite, etc.? I suspect you already know the answer: people like choice!
I think what you should really concentrate on is open data formats. Many of the problems you are trying to address would not be a problem if we had a well supported standard office document format, or a standardised way to store basic user preferences. Not only would this make cross platform interoperability a lot easier, you wouldn't have to worry about whether the computer in question was running KOffice or OpenOffice, or Linux or Syllable or Windows. The user could plug their CF/USB memory stick/whatever into the system and it would just work. This is far more likely to be successful in the long term, and you don't have to restrict anybody's choice to do it.
I can see situations where this kind of system might be handy. If a system was included that allowed easy transparent access to files over the internet (internet storage mounted as a local directory so people don’t notice the difference) you wouldn’t have the limited storage problem and it would make the system more attractive.
Didn’t IBM already make a prototype of this? It was a tiny computer that you could carry with you and you could plug it into a base station and carry around all your documents etc with you.
I don’t really see the point to this when you can already get:
* Memory cards which can store up to a GB (maybe more)
* USB/Firewire harddisks
* Laptops
* and portable computers with a 2.5 inch HDD, but no monitor, keyboard, mouse, etc. (they plug in just like a regular tower would), which will fit in your pocket (they are little bigger than a 3.5 inch HDD).
Which email client do you want to use (if you’re not sure, you can choose more than one, then choose one later)?
* Kmail
* Evolution
* Fetchmail
* Mozilla Mail
* I don’t need this capability
Hey where’s my favourite email client… mplayer?!
Both Palm and PocketPC have a lot of the features this concept is striving for. Now that Palm has Bluetooth, projectors can be used directly through PalmOS. This feature is very good for presentations.
DCMonkey – You were having problems formatting a drive under W2K? I just went through that with an 80 GB external USB drive.
The secret is that under W2K you cannot create a FAT32 partition of more than 32 GB. NTFS does not have this limitation.
Also, be aware that FAT32 has a 4 GB file size limit.
Excuse my English, I'm Italian.
Well, why not use two 1 GB CompactFlash cards? One instead of the mini-DVD and another for documents and configuration?
I think your idea would work with a VIA EPIA-M10000 with a 1 GHz C3 processor, two 512 MB DDR modules, two 1 GB CompactFlash cards on parallel ATA (why not 3 or 4? You have another IDE channel...), a hard disk on FireWire, and a source-based distribution (Gentoo or Sourcemage...).
A revolution? I think not; a very expensive experiment (CompactFlash cards are too expensive!).
Bye.
Gianluigi Ravviso
Yes, IBM did make a prototype. Now Antelope (http://www.antelopetech.com/) sells it as the Mobile Computer Core, or MCC for short (http://www.antelopetech.com/mcc.html).
Sounds like switching from the von Neumann architecture to the Harvard architecture. The big problem being: what if you want to use some kind of new application that isn't included in the standard Edge Computing disc? Suddenly you have to switch right back to the von Neumann architecture for the extra flexibility it provides. You're dumping a lot of flexibility for a minor improvement that doesn't matter most of the time. How often do you actually move between machines, really? At home, never. At work, very very rarely. Even the sales people giving presentations to clients spend more time in the office, on one machine, than in other people's offices pitching.
Nor can you rely on the OSS movement, as your Edge computing workstation will be acting much more like a Network Computer-style dumb terminal; the only real difference is that the OS will be booting from a DVD (worse than a dumb terminal, as it'll be harder to upgrade and maintain), and your data will be carried with you on the flash card. Not a hacker's (in the classical sense of hacker) machine, so they simply won't use it, and you won't be able to build a big enough community to make it take off.
Seems to me that a minimal disk or boot device with a VPN connection and a VPN Client would provide so much more for so much less and work today.
The problem with having my personal data/info on a flash card and putting that flash card into somebody else's reader, which I have no trust in, is a big one. Would you trust that computer at the hotel not to make a backup of your email or the source code you are writing? I believe that unless the data I input is written directly to my memory storage without being seen by hardware I don't trust, the system is doomed. Even VNC doesn't offer this kind of protection, so if you're not going to resolve this, then just use VNC, as it does as much today.
Joshua, I commend you for thinking and, in fact, for thinking something all the way through. I can see this working under certain circumstances. Let's scale it down and say a corporation uses the Edge system. It has only Edge hardware and the software you mentioned. This could work very well, I think. LOL, the only part that made me wonder was the huge amount of software. That would be the opposite of the simplified hardware. A corporation would probably want to use one email client, one office suite, etc. I mean, it doesn't have to be that way, but there could be some simplification in areas like that which are universal and everyone in the corporation uses.
As someone else said, keep thinking and thanks for the great article!
Your idea isn’t bad, and even though Knoppix is already doing something quite similar, there is always room for competition because Knoppix doesn’t really have much yet.
The only problem I see with your idea is that of pricing. You will probably not make the kind of money needed by selling the Edge System for such a low price. I'm not saying that you should increase the price of the OS to above $100, because that may mean losing market share. What you should do, however, is think of a strategy that will make you money, whether from tech support or through the sale of proprietary software made specifically for the OS. Also, you should charge companies like HP and Dell on a per-system basis instead of a "purchase once, use forever" model. This could increase revenues for you by hundreds of millions.
Another idea is to get into the hardware business of developing the mini-DVD system yourself and then sell the entire Edge System as a complete package under your own brand. This approach will obviously be more costly and time consuming, but the rewards will be much greater.
The idea sounds good, but there is already a good alternative: the Sun Smartcard System. Instead of going and creating a whole new system, why don't you just extend the currently localized Sun user database and replicate it across multiple hosts in different locations, so that person A in California can come to NYC and be able to log in? And Sun hardware is very reliable... speaking from experience.
Sun Microsystems has had this model for years. They use it internally, whereby an employee does not have a permanent office or PC. They get to work and insert the smart card, which authenticates them and reloads their last saved desktop state.
See http://wwws.sun.com/hw/sunray/index.html for more info.
A few other people and I discussed the possibility of adding roaming desktops over the 'net to Linux. It's not hard; it mostly relies upon efficient caching remote filing systems, along with a decent mirror network and good security.
Then you just patch GDM (and KDM), add some code to PAM, and now you can enter something like user@host as your login address and get your desktop pulled up wherever you go.
A bit more achievable methinks
It sounds more like someone trying to get others to build his dream system for him based on a 4 page essay.
Why others would suddenly stop development on an existing OS that has potential to build this “dream OS” is beyond me.
Perhaps you should begin working on it yourself before wasting people's time with such proposals; then others might be able to better see the benefit of such an endeavor.
“This would make the computer extremely fast, and if you don’t have that huge of an amount of ram, there”
that huge of an amount….Wow, this doesn’t scream “stream of consciousness”. Must learn to edit…edit, ma boy.
Let’s start from the beginning. Your “data is the computer” model is interesting, but how does it vary from the portable hard drives available today? For one, the entire OS is on the portable for consistency’s sake. But remember, the station should have local driver space, so that hardware expansion doesn’t require a full OS upgrade for every OS out there. So now you are mingling userOS and stationOS, an interesting idea.
User data is stored on the hardware, and the hardware is physically carried with the user. As it sounds, you are describing a PDA, HDD MP3 player, flash drive, or any of a number of types of devices. Furthermore, you are describing a piece of hardware that can store less than 5 GB of data. I have more personal photographs than that, let alone music tracks, video files, old papers, scans of magazines, etc. This is not an acceptable amount: put it on either a portable or a microdrive, or nobody will want it. A backup station should be cheap enough to implement, and necessary for any device that is carried in the user’s pocket at all times. Do you think nobody loses their cellphone?
You're making an inherently single-user, single-station OS from the ground up and you're starting with Linux / X? If you want it to be easy to use with clean programs you need to dump Linux. It's a great server OS and a decent multi-user desktop OS, but it is far too overengineered to be a top-notch single-user OS. Align with a Be group, the Amiga group, or design your own OS. But no more jimmying Linux into every corner imaginable just because it can be done.
To Summarize
What you're describing is a portable storage system that contains data, programs, and OS. Many current programs can already be installed on removable HDDs (and many more could be modified to, if the programmers believed in that kind of thing). So all you are adding is a user-controlled OS. Somehow, that's just not compelling.
The problem in slinging data around OSs is compatibility. The problem with creating a radical new hardware standard is… compatibility. Good luck, Sisyphus.
I see two better routes. Route 1: push motherboard makers to allow booting from the USB port. Exactly what you want, achieved instantly with Knoppix and a cheap Maxtor drive. Route 2: BIOS and TCP/IP based remote HDD mounting, with local RAM caching. Universally accessible, and will be done about the same time the telcos finish laying out last-mile fiber.
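For what it's worth, the "local RAM caching" half of Route 2 isn't much code; here's a toy sketch with the transport left abstract (fetch_block could sit on top of NBD, iSCSI, or anything else that returns fixed-size blocks over TCP/IP; the sizes are made-up numbers):

```python
"""Toy sketch of the local RAM cache in front of a remote disk."""
from collections import OrderedDict

BLOCK_SIZE = 64 * 1024          # 64 KiB blocks (assumed)
CACHE_BLOCKS = 4096             # roughly 256 MiB of RAM cache (assumed)

class RemoteDisk:
    def __init__(self, fetch_block):
        self._fetch = fetch_block               # callable: block_no -> bytes
        self._cache = OrderedDict()             # block_no -> bytes, in LRU order

    def read(self, offset: int, length: int) -> bytes:
        """Read length bytes, pulling whole blocks through the RAM cache."""
        out = bytearray()
        while length > 0:
            block_no, start = divmod(offset, BLOCK_SIZE)
            block = self._get_block(block_no)
            chunk = block[start:start + length]
            out += chunk
            offset += len(chunk)
            length -= len(chunk)
        return bytes(out)

    def _get_block(self, block_no: int) -> bytes:
        if block_no in self._cache:
            self._cache.move_to_end(block_no)   # mark as recently used
            return self._cache[block_no]
        block = self._fetch(block_no)           # the network round trip
        self._cache[block_no] = block
        if len(self._cache) > CACHE_BLOCKS:
            self._cache.popitem(last=False)     # evict least recently used
        return block
```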
Your route: Lots of effort, little payoff.
The first problem I see with this is the issue of security. I am not going to stick a storage device containing my personal data in someone else’s computer, there’s too much risk of trojans. Technologies such as Palladium/TCPA have the potential to offer some solutions to this problem, if you trust them (and I don’t).
Computer prices are constantly falling. Even by Australian standards, hand-held computers such as the iPAQ are becoming quite affordable, and are more usable for editing files on the go (I've written a magazine article on an iPAQ). Laptops are constantly getting cheaper, lighter and more powerful. Ten years ago it was possible to purchase a new laptop and a new car and have the car be cheaper! Now there are schemes in place to offer cheap laptops in countries such as Thailand, which will have significant flow-on effects for first-world countries. I anticipate that in the near future a cheap laptop will cost the same as a good bottle of whiskey or champagne!
I expect laptop and hand-held computers to become so cheap that big corporations will regard them as disposable items rather than as assets. When they are 12 months old they will be disposed of if they have anything slightly wrong with them or if there is a better model. If the issues of doing a quick install on a laptop with full hardware support are adequately solved (NB there’s some money to be made in this area) then corporations may make it a policy to just give all employees a new laptop every year and let them keep it at the end of the year. This will flood the market with hardly-used second-hand laptops which will make prices really nice for those of us who don’t need the latest laptop to run Linux.
I expect falling prices of laptops and hand-held computers to kill the market for the current Internet cafe business model. I expect that future Internet cafes will have a wireless hub and a large number of power sockets, and no other high-tech facilities. Incidentally, I used to part-own and run Australia's longest-running (at the time) Internet cafe, so I have some confidence in my ability to predict the Internet cafe market.
Intel has been developing a similar idea.
http://www.intel.com/research/exploratory/personal_server.htm
This is not a very feasible solution for feature-rich programs. You're talking about a graphical representation of a config file which quite possibly has thousands of options and, in the case of applications like Apache, has a modular design that adds even MORE options.
The reason I don't feel comfortable within Windows in the first place has to do with not having enough power over its configuration. Sure, there's the registry, but that thing is less helpful than Linux config files at showing what does what, and finding complete documentation of it online does not appear to be possible.
Limiting the functionality or extensibility of programs simply because there "should be" a graphical representation of the config file seems bad. And what's the difference between changing values in a config file and in a graphical representation of it? In the end you still need to know what the values do. Instead of hitting Save at the end, you hit OK.
I’m sorry but this idea just seems bad. The downsides outweigh the good sides, and it really offers no significant benefits.
Fine idea.
It won’t work.
Here’s the problem.
The problem is simply that it’s a closed system. It has to be, otherwise it doesn’t work.
By closed system I mean simply that there is one entity at the top of the totem pole that controls the entire thing.
In order for the utopia of total portability to be fruitful, you have to control every aspect that affects that portability. One significant aspect is simply features in the software. If I need to update that presentation at the hotel, the hotel's software had better be completely compatible with my presentation data.
Could it be? Sure, but recall that the primary way developers distinguish their software is through features, program flow and integration. If everything is based on a lowest-common-denominator feature set, then there is little motivation to improve the toolset. And who decides when a feature can be added in later?
Simple example with the presentation software: consider that the software used to develop the presentation may have had some snappy transition that the hotel's software didn't support, or even the final destination's software.
Today, what you want pretty much exists. With something like the Java Citrix client running in a web browser, you can practically access a rich centralized computing experience from just about anywhere. It wouldn’t surprise me if you could do it from those airport internet terminals.
As long as you have an open system, you're going to have incompatibilities. And you must have an open system to attract developers and users.
Microsoft is in the perfect position to really break through on something like this: selling an annual subscription to office applications accessible through terminal services, and having a fast, downloadable, lightweight client so that when someone shows up at an Internet cafe, a friend's office, a Kinko's, or wherever, they can simply install the client and connect to an MS server that has all of their data and applications, from anywhere on the planet.
Most people don’t have this need though. They’re content managing the entire process themselves.
Mr. Cancelled, you are way out of line. An 18-year-old young person has an idea and your remark is all you can come up with? I'm afraid your remark is a reflection on you and not on Joshua.
http://www.webmin.com has what you need.
Wow ! I just got a big deja-vu reading this article!
Huh?!
What world do you come from?
I have had 2 macs in the last 10 years at home, never had a hard drive failure!
I have been working as a computer person for the last 5 years in a university, and in that time I have probably seen a hard drive fail only once, across over 1000 computers.
The cost of buying enough flash memory to store my documents would be tremendous. When you consider television shows recorded with my WinTV card, music rips, DVD rips, and songs I've recorded (all in WAV format), you are looking at a lot of data. People need to learn to back up irreplaceable data and accept the fact that if they don't back it up, there is a good chance they will lose it.
Until we get past hard drives with moving parts they will always have an obscenely high failure rate (compared to other computer components).
Thanks for the tip, but I was already trying to format NTFS. There is a known bug that sounds like the problem I had (something keeps a handle open on the new volume, thus disallowing the format), but the stated workaround didn't work and the hotfix was supposedly included in W2K SP3, which I already had installed. I ended up plugging it into my XP Pro machine and formatting it there, but now I have some other problem where the format didn't complete, stopping at 99%. I've got a few more days to tinker with it before the return period runs out.
As for my dual removable HD idea, it occurs to me (duh!) that a consumer friendly RAID mirroring setup with removable drives would likely be adequate for what I had in mind.
Lots of good ideas here all around.
Tehnodev, I read the squid.conf file when I start to configure the proxy, and I understand it, but a graphical mode is a faster way to do the same thing… I'm a programmer and I know that an application with edit boxes, checkboxes, tabs, etc. is easier to use… you don't need to be an expert. And you can have help built into it (one suggestion is to use the comments from the .conf file as the help text in the graphical application).
I think that if one of the distributions makes graphical interfaces to the Linux .conf files in a control panel, it will be successful… Beginner users want graphical interfaces; they don't want text modes to configure things. That's the past.
You will agree with me that if you have to choose between a text mode and a graphical mode to do the same thing, you will choose the graphical mode…
Thanks joe :) Webmin is a good solution for putting an interface on config files.
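The widgets are the easy part, though; the real work is mapping the file to them and back. Here is a minimal sketch of that mapping for a squid.conf-style file, reusing the file's own comments as the help text (just an illustration, not how Webmin actually does it):

```python
"""Minimal sketch: parse a squid.conf-style file into rows a GUI could show
as labelled fields, carrying the preceding comments along as help text."""
from dataclasses import dataclass, field

@dataclass
class Option:
    key: str
    value: str
    help: list[str] = field(default_factory=list)    # comment lines above it

def parse_conf(path: str) -> list[Option]:
    options, pending_help = [], []
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line:
                pending_help = []                     # blank line resets the help
            elif line.startswith("#"):
                pending_help.append(line.lstrip("# "))
            else:
                key, _, value = line.partition(" ")
                options.append(Option(key, value.strip(), pending_help))
                pending_help = []
    return options

def write_conf(path: str, options: list[Option]) -> None:
    """The 'OK button': serialize the edited values back out."""
    with open(path, "w") as fh:
        for opt in options:
            for line in opt.help:
                fh.write(f"# {line}\n")
            fh.write(f"{opt.key} {opt.value}\n\n")
```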
Just put his stick into any computer and wait 30s. Voila. Life gone.
hi,
NomadBIOS allows you to hot-migrate your OS with all running applications. Write a USB driver for L4 and NomadBIOS, and be done with it 🙂
Hi, this is the author of the article, and before I get started answering all the questions, I have a couple of things I’d like to say.
First: I did not post this article with the hope that people would send me money to develop it (I couldn't do it even if they sent me money). I posted it because it was a failed project. If it really had promise, I would be off making millions of dollars. I wrote this a few months ago, when I had some time on my hands, for fun, but decided to post it on OSNews just to see people's responses.
Second: Although Knoppix was one of my main influences (the whole bootable-CD thing is entirely from there), the "persistent home directory" had not yet been implemented, and wasn't until about a month after I finished with this project. I came up with that idea on my own, although I'm by no means claiming I invented it, because looking back it seems that many people had it. I was very excited when the Knoppix home directory came out, because I used it just like my idea, keeping a Knoppix disc at home and at work and shuttling my stuff back and forth. It was great.
Anyway, on to the criticism. First, I wonder if anonymous would post information on the roaming profile, because I've never heard of it.
Greg: the source would be viewable, and downloadable, for whatever changes you would like to make. The changing of software to follow a common GUI and the code audit would be done by the company producing the discs, not the open source community (although, as I said, improvements from the community could be incorporated into the common code by downloading them). Also, as I stated in the article, security updates would be downloaded at boot time and incorporated into the running code (kind of like Cisco's router OS, where you have the running config and the saved config). I also view it more as symbiosis, not "piggybacking."
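To make the Cisco analogy a bit more concrete, the boot sequence I had in mind looks roughly like this sketch; the paths and the update URL are made up, and the only point is that fixes land in the RAM copy while the saved copy on the disc is never touched:

```python
"""Sketch of 'saved config vs. running config' at boot (hypothetical paths
and update URL): copy the read-only system into RAM, then overlay any
security updates fetched from the net before anything starts."""
import shutil
import tarfile
import urllib.request

SAVED = "/mnt/minidvd/system"          # read-only copy on the disc
RUNNING = "/ramdisk/system"            # writable copy the session actually uses
UPDATE_URL = "https://updates.example.org/edge/latest.tar.gz"  # hypothetical

def boot() -> None:
    # 1. "saved config" -> "running config": copy the disc image into RAM
    shutil.copytree(SAVED, RUNNING)

    # 2. fetch this week's security fixes, if the network is up
    try:
        archive, _ = urllib.request.urlretrieve(UPDATE_URL)
    except OSError:
        return                          # offline: run the disc as-is

    # 3. overlay the fixes onto the running copy only; the disc stays pristine
    with tarfile.open(archive) as tar:
        tar.extractall(RUNNING)

if __name__ == "__main__":
    boot()
```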
DCMonkey: You should get a RAID controller and mirror your main drive, or maybe make two mirrors just to be sure.
Zachary, Micheal, Leslie, Mystilleef: It was original at one point though . . . (sigh), and Leslie is right about trimming down current OSs.
wiggly-wiggly: what kind of name is that? Oh yeah, and you’re right about the apples.
Jared: I think this idea would be excellent for students though, who have library computers, home computers, classroom computers, etc.
Ronald: The downloadable part would be a program for Windows or Mac that can read the filesystem and convert office file types.
Charlie: That's true about needing so many people to cooperate, but I tried to avoid depending on the internet for the transfer of important and personal files because to me there is too much risk involved.
John: Could you link to the article about Sun?
Interfacer: You can get a six-in-one card reader that fits in a floppy bay and reads pretty much everything. I agree with hoshead's point, and Knoppix uses several window managers and they run fine (well, maybe "fine" is too strong; tolerable) together.
jck2000: I don’t really trust the internet for private/sensitive stuff that well, although it’s okay for some things. Your other points are very valid.
Vanderas: Although I think open formats would probably be better than this, I doubt Microsoft really wants to cooperate with Linux. Hardware is autodetected like in Knoppix, so you don't need the same hardware; whatever hardware you have will work.
uid_zer0: Sorry about that, apparently I came to the mistaken conclusion that fetchmail had to do with e-mail. What is its purpose? I just threw it on there because it had "mail" in the title, to give me more options.
Gianluigi: Yeah, compact flash is way too expensive for that.
chrisb: Wow, I have no idea what you just said. von Neumann? I feel dumb.
Jay: Thanks a lot.
Shawn, James: Thanks for the link James, that’s actually really cool sounding.
Mike Hearn: Probably more achievable, but less secure, I think. It's a trade-off you'd have to choose for yourself.
Mr. Cancelled: I didn’t say anything about stopping any OS. In fact, I don’t even want people to build my OS because I see the obvious flaws in it. If, however, someone can take one idea from mine and turn it into something better, then it’s worth posting it.
Joe: I was drunk.
No, just kidding, I didn’t do a huge of an amount of proof reading.
mini-me: Macs are supposed to be reliable. Last month at the (Windows) PC shop I work at we had at least ten hard drive failures (and most people didn't back up at all).
Berend: That’s why you backup.
Anyway, as I said, it was just an idea, and one I already thought had failed. I do think burning Knoppix onto a mini DVD would be cool (I've got a friend with a DVD burner who I'm going to get to try this out), and I think that in the installation of OSs that bundle many programs, a "choose your defaults" type of step would be cool, with detailed descriptions of the programs, not just "a mail client." Thanks for the responses everyone, even the guys who thought I was a retard.
This is another example of a great idea that is hard to implement even though all of the code is available and its core concepts are quite simple. This goes to show that there is a long way to go in managing programs and data and in designing programs so parts can be changed more easily.
The reason I got into open source in the first place was because all of the components were there, but integration is still really tough, even with access to the source code. I would like to see more projects done in languages where programmers will more artfully design the programs (SCHEME!) rather than always using an "object-oriented approach", which in my opinion is great for some designs but bad for others. I think that the statement "the goal of this function is to transform a certain type of input into a certain desired output" is just as valid in computer science, if not more so, as the statement "an object is an entity with responsibilities", though lately the focus has been on the latter.
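To make that contrast concrete rather than sloganish, here is the same tiny task written both ways (Python standing in for Scheme, since the point is the shape of the code, not the language):

```python
# "A function transforms a certain input into a certain desired output":
def word_counts(text: str) -> dict[str, int]:
    counts: dict[str, int] = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

# "An object is an entity with responsibilities":
class WordCounter:
    def __init__(self) -> None:
        self._counts: dict[str, int] = {}

    def feed(self, text: str) -> None:
        for word in text.lower().split():
            self._counts[word] = self._counts.get(word, 0) + 1

    def report(self) -> dict[str, int]:
        return dict(self._counts)

# Same result either way; the first is easier to reason about in isolation,
# the second earns its keep when state genuinely has to live somewhere.
assert word_counts("to be or not to be") == {"to": 2, "be": 2, "or": 1, "not": 1}
```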
Just my 2 cents.
Where are you in Oregon? 18, huh. So, you just graduated? two more days for me (well, till summer. I’m a junior)
… That 18-year olds are still figuring out how to save the world! It may not exactly be world peace, but if you don’t think big when you’re young, when are you going to do it?
Don’t let these old hacks grind you down, kid. You’ll have plenty of time to be just as negative and jaded as they are.
1. Flash is too expensive and too small.
2. Flash does have moving parts – the connector on your edge computer, and the pins on the flash card. These WILL wear out.
3. Flash can and does wear out, as it has a limited number of write cycles. You can burn out a flash card by repeatedly overwriting it (rough numbers below). A good example of this is taking a Linear Flash PCMCIA card used heavily in a Cisco router/switch and attempting to use it in a Newton. You'll find your data vanishing.
4. KDE and GNOME are desktop environments, not window managers. They in turn call Window Managers – if I recall correctly (and I might not), at least GNOME 1.x calls enlightenment or sawfish, etc. You can use other “lighter” window managers.
Please do some basic research before posting bull droppings. I'll not add other comments about how you won't be able to get all the manufacturers to work together and agree on a standard. It's already been said.
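To put rough numbers on point 3, here is a back-of-the-envelope sketch; the cycle count, card size, and daily write volume are assumed round figures, not datasheet values, and the answer hinges entirely on whether the card does wear-levelling:

```python
# Back-of-the-envelope on flash wear (assumed figures, not datasheet values):
ERASE_CYCLES = 100_000          # typical rated cycles per block, circa 2003
CARD_MB = 512                   # card capacity
WRITES_PER_DAY_MB = 200         # heavy daily writes (docs, mail, browser cache)

# With ideal wear-levelling the writes spread over the whole card:
total_writable_mb = ERASE_CYCLES * CARD_MB
days = total_writable_mb / WRITES_PER_DAY_MB
print(f"~{days / 365:.0f} years of life")   # roughly 700 years

# Without wear-levelling, hammering the same 1 MB (e.g. a mailbox index):
print(f"~{ERASE_CYCLES * 1 / WRITES_PER_DAY_MB:.0f} days")  # roughly 500 days
```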
I am probably the least "tech savvy" person to post about this, but I think that helps me here.
This is a great idea taken too far. Obviously the current hard drive model is limited and flawed: lots of parts moving fast, etc. What is the fastest type of memory we have? System RAM can transfer gigabytes per second, if I'm not mistaken.
The problem is that RAM loses its contents, but I don't see why this couldn't be a whole mini-motherboard with a "BIOS", where you put any OS you want in ROM, run it on any computer that is compatible with the card and the software you put on it, and do the obvious thing that Be did years ago:
just put the drivers "in the box," and if it works it works, and if it doesn't it doesn't.
Obviously you should be able to "flash" it and update it as needed, but we can already do that today.
The only thing that needs to be invented is a more efficient way of storing "optical redundant RAM data."
Have to agree with clasqm…Keep Thinking Joshua! There is NO failure in this. Here is our two cents:
Evolution occurs in three ways:
1. The strong get stronger
2. The weak die
3. Mutation occurs
The act or process of being altered or changed in genetics involves a change to the DNA sequence within a gene or chromosome of an organism that results in the creation of a new character or trait not found in the parent. Let's look at the computing world you described as a strand of DNA in the day-to-day *life* of people; we just need to *arrange* things… ;-)
The things people do will not change in the normal course of events, but how they are done will, if the *new* way is arranged better: more convenient, cheaper, easier, anywhere, anytime, etc. The key: do the same things better! Just ask your parents about telephones that actually *dialed* and had those funny squiggly cords between the handset and the telephone; imagine un-cordless and un-hands-free!
Wow, now that’s a stretch! 😉
Raquel and Bill
http://www.pegasosppc.com
On my home machine I have around 90 GB of data and less than 20 GB of software! My data requirements are getting larger. A better solution would be to run an application that would do the following:
In the background,
Mirror my entire system to several other locations on the internet.
Mirror other users’ systems on my machine.
Supply a key/card to each registered user.
This should be done without any identifiable information being disclosed to any party. If your system breaks down, one could enter your key and your system could be rebuilt.
With broadband access (albeit an overhyped service) a system could be rebuilt in less than 24 hours.
There are issues, but few if any are insurmountable. The biggest problem may be the reluctance of ISPs to allow such a large transfer within 24 hours.
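As a quick sanity check on that 24-hour figure, using the roughly 110 GB mentioned above and some assumed link speeds:

```python
# Rough arithmetic on rebuilding ~110 GB over consumer links (assumed speeds):
DATA_GB = 110
DATA_MBIT = DATA_GB * 8 * 1000              # about 880,000 megabits

for name, mbit_per_s in [("1.5 Mbit/s ADSL", 1.5), ("10 Mbit/s", 10), ("100 Mbit/s", 100)]:
    hours = DATA_MBIT / mbit_per_s / 3600
    print(f"{name}: ~{hours:.0f} hours")
# 1.5 Mbit/s ADSL: ~163 hours  (nearly a week)
# 10 Mbit/s:       ~24 hours   (the 24-hour target needs roughly this)
# 100 Mbit/s:      ~2 hours
```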
I think one of the biggest problems here is using flash memory. Personally, my home directory has 25 GB of data in it, and it's just going to keep growing. Most people are going to have multiple gigs of data, and flash memory just won't cut it, because as the flash cards get bigger, so will people's home directories. Multimedia files are getting to be higher quality every day, and the sizes of the files grow with them.
IMHO it would be better to store the home directories on the internet with an automated check-in/check-out procedure. Of course there are security issues to be looked at, as well as massive bandwidth and server requirements. At least this would move the responsibility of keeping a backup to a data center and not the user. Plus, what about when a user loses his flash card, or has it in his pocket and it rains?
That's just my 2 cents though.
While I am happy to see such enthusiasm on your part, Josh, I think it's misplaced. Most of what I'm about to say is hypocritical, because when I was 18 I channeled my energy in just the same type of direction; but hypocrites' advice can still be correct.
I'd suggest focusing all that energy and creativity on learning. Of course, embarking on a project such as the Edge Computing System will certainly teach you things, but the lesson would probably be something like, "Wow, when I'm trying to create a system that does so many good things for so many people in so many situations… it's hard and full of accidental complexity!"
I, too, wanted to make a modular Swiss Army knife that would be useful to everyone. I called mine Negatron. Maybe these grandiose projects are a fact of life for us young people who really love the power of programming. All I know is that, as the months passed, I became more and more aware of my ignorance. I began searching to see what others had thought and written about computer science.
My Dijkstra-like conclusion, so far, is that strict mathematical thinking needs to be applied to this novelty of computing. So I’ve been focusing my energy on learning Predicate Calculus and Lambda Calculus. In the process, I’m discovering just how smart other people are! Initially, the enormity of what has already been created depressed me. Now it just makes me glad! I don’t have to work to rediscover what giants like Church, Turing, and Goedel have already pioneered. After learning this stuff I’ll be free to more powerfully accomplish my own dreams.
Not to say that my particular area of interest — logic and proof theory — will be what you pick, but just pick something! Become educated, become strong.
it’s a thin client, kthxbye.