IT solutions companies have been generating buzz about thin clients since the early 1990s, yet the devices have still not penetrated many of the environments where they would be a good fit. These relatively cheap computer appliances carry broad promises: energy efficiency, space efficiency, and centralized maintenance and data storage. Such claims can sound like the computer industry's equivalent of snake oil. Kiwi-LTSP, a combination of the KIWI imaging technology and the Linux Terminal Server Project, is one open source solution for building thin client servers.
Why do thin clients or terminal devices matter? To general consumers, home users and office computer users, thin clients should have no visible impact; the appeal is on the support side. This is nothing new, as companies like Sun, HP, Dell, Wyse and Citrix have long promoted thin clients, dumb terminals and the like from a support perspective, and their product descriptions make the same pitch. In my own experience supporting Windows desktops in an enterprise environment, we were always looking for ways to improve maintenance and the customer experience without requiring user interaction.
Allow me to elaborate. The advantages of thin clients bring to mind three support issues from my former support job. The first is our roll-out of Windows XP for all users. To meet New York State computer security requirements, we had to upgrade any computer running an OS that did not support limited user accounts. Mostly that meant refurbishing or replacing systems running Windows 98, which offered almost no security: those computers could not be joined to the domain, users were not subject to Active Directory policies (meaning they effectively ran as administrators), and Windows had to be updated manually.
Supporting Windows 98 in such an environment was like a bad rash caused by an allergic reaction to snake oil. Users treated their computers like home machines, installing such useful business applications as Kazaa. Support issues aside, the machines were quite capable of performing their required functions: Outlook email and calendar, web browsing, and our in-house, text-only database software. Nevertheless, much of this functioning hardware had to be replaced. My point is not that Windows 98-era hardware should still be in service, but that today's Windows XP hardware is in a similar situation: Windows XP is at the end of its support cycle, and most Windows XP computers cannot acceptably run Windows Vista. The life of much of this aging hardware could be extended using LTSP.
Even after completing the Windows XP rollout there were some rough areas. Local machines had to be powered on for automatic updates to occur. Since we couldn't rely on users leaving their workstations powered on at night, we set computers to update at noon. Unfortunately, we then had a large number of baffled users calling to ask why their computers slowed to a crawl around noon, or why they were being interrupted every 10 minutes by a message asking them to reboot.
Another issue we dealt with was user data backup. Most users' documents and settings existed only on their local machines, and some users were given access to shared folders for their data. Requiring users to consciously back up data is not a good solution, and it led to data loss.
A third problem area was supporting a large number of public computers that used 'Guest' logins. Because these computers could not be joined to the domain, they required an on-site technician to update each machine manually.
Kiwi-LTSP is probably best suited for this last scenario. Libraries and schools often offer basic computer access to citizens or students for research, web browsing, email and word processing. Libraries and schools, working under tight budget constraints, tend to utilize computers longer than many commercial environments and attract donations of rather old computer hardware.
Setting up the Kiwi-LTSP server on my desktop (2.2 GHz Intel Core 2 Duo with 2 GB of RAM) running openSUSE 11.1 took less than an hour following the quick start guide. I installed KIWI-LTSP version 0.70, which is based on LTSP 5 and supports some very interesting features, such as running certain applications on the client hardware. The most difficult part was setting up my network switch on KIWI-LTSP's network interface, and even then I only had to disable its DHCP server. For the client I hooked up my serviceable IBM X22, which has a 733 MHz Pentium III and 640 MB of RAM. On its own it can run lightweight Linux desktop environments like Xfce or LXDE, but it chokes and sputters with KDE 4 or GNOME.
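For reference, the DHCP scope that the thin-client network needs looks roughly like the ISC dhcpd snippet below. This is only an illustrative sketch with made-up addresses, not the configuration KIWI-LTSP actually generates; the point is simply that the LTSP server, rather than the switch, has to answer DHCP and point clients at the PXE boot loader.
    # Illustrative only: a minimal ISC dhcpd scope for an LTSP client network.
    # Addresses and file names are examples, not what KIWI-LTSP writes for you.
    cat > /tmp/dhcpd-ltsp-example.conf <<'EOF'
    subnet 192.168.100.0 netmask 255.255.255.0 {
      range 192.168.100.20 192.168.100.250;
      option routers 192.168.100.1;     # the LTSP server itself
      next-server 192.168.100.1;        # TFTP server the clients boot from
      filename "pxelinux.0";            # network boot loader served over TFTP
    }
    EOF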
The client booted in roughly the same time it usually takes, at least by my subjective measure. In KDE 4 (my default on the desktop) and GNOME it was quite responsive and usable, if not quite as fast as the desktop itself. After logging in, my KDE session loaded and all of my applications and data were available. All of my settings from my desktop were preserved: no need to set up Kontact again, change the color scheme, add widgets to the desktop or sync my documents. I could play music from my server's music library in Amarok and run all of my preferred applications instead of their lightweight alternatives. Microsoft Office 2007 ran under WINE, and even my VirtualBox virtual machines were accessible. KIWI-LTSP can also be configured to connect the client to a remote desktop server.
I did run into some trouble with keyboard settings. My server uses a standard US keyboard, but my client uses a Japanese keyboard. Under KDE 3 and KDE 4.2 many of the keys produced numbers, and not even in a way consistent with the differences in layout. GNOME did not have this problem, and we were unable to track down why it seemed to exist only in KDE. While trying to sort it out I did discover that the project maintainers, led by Jigish Gohil (known as CyberOrg), provide great support. One should also note that the Ubuntu LTSP project appears to have more detailed documentation.
The keyboard problem also prompted me to pull out some hardware I had designated for the scrap heap and do further testing. I confirmed that when the client used the same keyboard layout as the server there were no issues. Even considering my small corner-case issue, this thin client server implementation met my expectations and impressed me with its ease of installation.
The KIWI-LTSP troubleshooting page suggests a client machine with at least 256 MB of RAM. This strikes me as low end, but not extreme. To see if KIWI-LTSP could be used to re-purpose near-junk hardware, I removed my 512 MB RAM upgrade and booted my laptop with 128 MB of RAM. Amazingly, it still responded like a spry and much more youthful machine. No doubt this performance would deteriorate if I were serving more clients, but the KIWI-LTSP page does claim that a server should be able to serve about 5 clients per 1 GB of memory. Even serving 10 to 20 clients at reasonable performance would yield significant savings in hardware and support costs.
Utilizing KIWI-LTSP in a typical office environment also has advantages. Centralization of user data is an excellent way to ensure data is backed up. Only the data from the server would need to be backed up. Further, such a setup would offer a more efficient use of hardware horsepower. In most modern usage environments CPUs rarely maintain 100% utilization for extended periods of time. Sharing the processor(s) of the server or cluster would allow that usually unused portion to be distributed as needed.
In both cases administrators only need to update software on the server. Operating clients without disks also removes one more point of failure and reduces each client's energy footprint. I could see potential problems in scenarios involving high performance computing (although there is an interesting project called Icecream, which is beyond the scope of this article) or heavy reliance on proprietary Windows software. While many Windows programs can be run in WINE or virtualized, licensing restrictions could undermine the benefits.
KIWI-LTSP is a very useful tool, and even if it isn't really the "Fountain of Youth" it did breathe some more life into my aging laptop. At a time when many organizations are plotting their upgrade paths, deciding the fate of aging workstations and questioning the need to remain Microsoft shops, KIWI-LTSP should be considered a viable alternative to new hardware and software acquisition. Thin client solutions definitely have the potential to save an organization money on support, on re-purposing aging hardware, and even on new hardware acquisition.
About the author:
I am a computer support contractor who is always exploring interesting information technology solutions.
I have had great success with Open Thin Client (http://openthinclient.org/home), which just released its 0.4.1 version (http://openthinclient.org/article26).
I highly recommend it.
Thin-clients have their uses. However, if you want to do anything beyond simple web browsing and office document work, you just can’t use thin-client setups. All the processing is done on the server, and the output is pumped down through the network. Try doing anything with audio, 3D, full-motion/full-screen video, or even a lab full of students hitting youtube, and you can bring even a gigabit network to a grinding halt.
Diskless setups are where it's at. You get all the administrative benefits of a thin-client setup (all the software is installed on the server), but you also get all the benefits of a local PC (all software runs on the local CPU, using local RAM and the local video/sound cards). You still have systems with no harddrive, no optical disk, no floppy, basically just a motherboard and case. But you can do so much more!
We’ve migrated our local school district off Windows and onto LTSP … and then over to a diskless setup using standard Debian 4.0. With full nVidia 3D support, local sound, full USB support, basically everything you can do with a local install of Debian.
Thin-client has its uses … but its days are really numbered in Unix-land.
Thin clients SHOULD be substantially cheaper than diskless workstations. Don’t know if they really are…
Our current diskless stations are < $150 CDN (in bulk), with 2.8 GHz CPUs, 1 GB RAM, PSU, motherboard, onboard nVidia graphics with full 3-D support, onboard 7-channel sound, slimtop case.
We haven’t looked at hardware thin-clients in a couple years, but they were ~$250 CDN when we were looking. Considering the *huge* gulf in performance between the two, there’s really no comparison.
The Linux and Unix environment is built for thin client computing; the X Window System itself already has this concept. However, if you want an easier way to do thin clients on Linux, you can use NoMachine:
http://www.nomachine.com
If you'd like your Linux thin client to run Windows apps, then you can go with ThinServer:
http://www.aikotech.com/thinserver.htm
That's similar to what I do. I set up diskless Linux systems to start rdesktop on boot and log into a Windows 2003 server to run apps. It works very well, but the systems can take forever to boot because the Linux server they boot from isn't that great in terms of hardware. It's not that important, though, because once they are up performance is fine, and the Windows server has much better hardware.
In regard to thin clients and 3D, some of the technology Red Hat obtained when it bought Qumranet will change this. Spice is a remote desktop protocol that can use the client's hardware if it has any. There was a demo video a while back from AMD showing a KVM virtual machine being migrated from an AMD server to an Intel server. The client accessing the virtual desktop was playing full-motion HD video and didn't skip a frame during the migration.
Red Hat seems to be pushing the virtual desktop thing hard, running on the hypervisor. It's pretty cool technology from what I've seen so far.
I don't like thin clients. Yesterday I spoke with someone about his HP thin client. It takes 7 minutes to boot. In my opinion, thin clients suck big time. They have something like a 1 GHz CPU and 256 MB RAM, and you have to upgrade the hardware to get more performance. And you have to patch their OS. It is just like a diskless PC, but very weak. They suck.
Ultra thin clients, I really like. You plug one into the router/hub and in 5 seconds you get a login screen. No CPU or RAM that runs applications, so it is physically impossible to upgrade the hardware, and there is no operating system to patch. Everything is processed on the server: input is sent to the server, and the server sends back bitmaps to the client. Impossible to hack. They weigh 0.38 kg, use 4 watts, and are the size of a VHS cassette. Very cheap. MTBF is 22 years, no moving parts. You upgrade the server and you have upgraded the performance of all clients at once. One quad core PC can drive 40 clients. Log in via VMware and RDP and run Windows/Linux/Solaris. Imagine 5 clients in a drawer: when you need more computers, you just plug a client into the router and in 5 seconds you get a login screen. Create an account, and off you go.
Drawback: movies. Windows Media Player handles MP4, etc. in full screen, but not DivX and Xvid formats. Video plays lag-free at 25 fps only in windows up to about 500×500. If you only show static pictures then everything is fine. The bandwidth required for full use is about 300 kbit/sec.
These ultra thin clients are made by Sun and are called the SunRay 2. No other vendor has ultra thin clients; all the others have thin clients. You can buy old SunRays for 40 USD on eBay. The server software is a free download, and both Linux and Solaris versions exist.
I have a quad core with a ZFS raid and Solaris. In my household I use SunRays and they are dead silent, drawing 4 watts. Everyone can access the MP3 collection on the ZFS raid, run VirtualBox with Windows, and watch movies in 500×500 windows. And Solaris is dead safe, no viruses, and ZFS has snapshots.
I’m pretty sure LTSP works the same way. I haven’t tried a new version lately, but the way I remember it you use ssh and X11 forwarding to run apps on the server. So, if you upgrade the server you have theoretically just upgraded the performance of all the clients. But, I’m not sure if you can do everything that you could with the sun ray, like watching a movie. I wouldn’t have any idea how sound would work remotely this way.
Diskless clients however, start x locally and can take forever to boot.
A poorly configured diskless client can take forever to boot, with the network setup being the most important factor. Our diskless Debian stations boot in under 2 minutes to a login screen, and then take maybe another minute to get to a working/usable KDE 3.5.9 desktop.
However, this is where Wake-On-LAN comes in handy, along with a cronjob that wakes the clients before people get to the school/office/wherever. That way, the stations are already booted and ready to go before people need to use them. It’s not like people need to reboot constantly throughout the day.
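A rough sketch of what that can look like, assuming the wakeonlan utility is installed and the client MAC addresses are kept in a plain text file (the script name, file path, and schedule below are made up for illustration):
    #!/bin/sh
    # wake-lab.sh -- send a Wake-on-LAN magic packet to every client in the list.
    # /etc/lab-macs.txt is a hypothetical file with one MAC address per line.
    while read mac; do
        wakeonlan "$mac"
    done < /etc/lab-macs.txt
A crontab entry such as '45 7 * * 1-5 /usr/local/sbin/wake-lab.sh' then wakes the lab at 7:45 on weekdays.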
Well, the hardware I deployed my setup on is a little underpowered: Pentium 2s, and a few 3s. The server is a Pentium 4 with 256 MB RAM and a single IDE hard drive. If all the clients reboot at the same time it can be brought to its knees, but other than that everything works fine.
We saved a good deal of money by reusing all this old hardware that was just lying around, and were then able to buy nice screens, keyboards, and mice for each box.
Ouch! Yeah, that server is *very* underpowered. Adding RAM will do wonders for it, as it will be mostly used for disk cache, and will greatly speed up booting for the clients.
Our biggest diskless server is a dual-processor, quad-core Opteron system with 8 GB RAM, 4×500 GB SATA in RAID10 using a 3Ware 9650SE controller, with an Intel quad-port Gbps NIC configured as a single port via bonding. Handles 685 diskless clients spread across two buildings, with NX support for remote connections, Samba, Squid+DansGuardian, Webmin, and CUPS. Almost 6 GB of the RAM is used for disk cache.
Booting the entire school (both buildings) takes under 10 minutes and saturates the gigabit backbone between switches.
The standard server configuration for the elementaries, which support ~100 stations each, is dual-processor, dual-core Opterons, 4 GB of RAM, 4x 500 GB SATA in RAID10, with 2x gigabit NICs (1 for Internet, 1 for LAN, as the server also acts as router/firewall).
When we started our Linux project for the school district, we started with LTSP and recycled computers. Servers were dual-processor P3 @ 1 GHz with 4 GB of RAM, and clients were P2 266 MHz with 256 MB RAM and 15-17″ CRT monitors. Clients were $50 CDN through Computers for Schools. Worked nicely, and allowed us to install a full lab of 30 working computers into all 50 elementaries for less than it would have cost us to replace 3-4 labs with Windows XP stations.
Once everyone became used to the Linux system, though, getting the budget to implement a diskless setup was a fairly easy sell … it’s still a *lot* less expensive than installing Windows XP or Vista stations, and several times more powerful/useful.
LTSP is a good starting point … but once people start to use the system for more than just web browsing and office documents, the limitations start to become very noticeable, and the cries of “Why can’t we do X?” get very loud. And it just doesn’t work in a secondary school setting where they do CAD, animation, video editing, graphics, yearbook, and so forth.
The problem is that the server is an original Socket 423 Willamette Pentium 4. It uses Rambus RDRAM, which is very expensive. So there is no point in pouring more money into such an old system.
As far as clients go, I hope someone will eventually come out with something similar to this, http://www.chippc.com/thin-clients/jack-pc/ , based on an Atom chip instead of an AMD RISC part. Hopefully it will be cheaper too.
Eewww! Yeah, definitely not worth upgrading. More a candidate for replacement down the line.
I don’t know what this is, but it’s not a “thin-client”. A thin-client has very little local processing power, and no local OS. It’s basically just a terminal. Everything is run on the server, and just the keyboard/mouse input and display output is handled by the unit.
This “HP thin-client” sounds more like an “rdesktop appliance” that runs Windows CE or something, and then loads an rdesktop client. Which isn’t what a thin-client is at all.
This is a normal thin-client.
For Linux setups, 1 dual-core PC with 4 GB of RAM can handle 30 clients, so you should be able to handle a lot more than 40 on a quad-core. Although you're limited more by network bandwidth than by server CPU power.
This is where diskless PCs shine over thin-client setups. You get all the power of a local PC, but none of the hassles (no harddrive, no optical drive, no floppy drive, no local OS installs to manage, just a CPU, RAM, motherboard, PSU, and case). Systems boot via the network, mount all the drives via the network, but run everything locally. You can even run VMWare/VirtualBox/KVM/etc locally to have access to Windows. Full audio support, full video support, full 3-D acceleration. If hardware dies, you just swap in a replacement, update the dhcp config, and boot. PCs become appliances … and everything is managed centrally via the server.
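As a rough illustration of that boot path (the server address, paths, and file names here are invented for the example), the PXE configuration hands the client a kernel and an NFS root, after which everything runs on the client's own CPU and RAM:
    # Illustrative pxelinux.cfg/default for a diskless client booting over NFS.
    # The TFTP path, server address, and export path are placeholders.
    cat > /srv/tftp/pxelinux.cfg/default <<'EOF'
    DEFAULT debian
    LABEL debian
      KERNEL vmlinuz
      APPEND ro initrd=initrd.img root=/dev/nfs nfsroot=10.0.1.1:/srv/diskless/debian ip=dhcp
    EOF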
We looked at these, and units from IBM, NeoWare, HP, and others. They just aren’t worth it. For roughly the same price (new, < $150 CDN), you can get a slimtop case, PSU, motherboard that supports 2-4 GB of RAM, a 2 GHz AMD Sempron CPU, onboard nVidia graphics, gigabit NIC, 7-channel sound. Virtually silent (2 small fans). But few limitations on what you can do with it. Want full-screen video? No problem. Want 3-D games? No problem. Want surround sound? No problem. And it won’t saturate a 100 Mbps network to run an entire school (~100 PCs in the elems). The only time the network is hit is when loading programs … once they are loaded, they run on the local CPU using local RAM.
Overkill. We use dual-core setups for our diskless setups, and the servers can handle ~100 clients in the elems. In the secondaries, we use quad-cores, but they handle over 500 clients.
You consider that useable? That’s tiny. Especially on any monitor over 15″. Watching movies at full-screen, with surround sound, without clogging the network, is where it’s at. And that’s something that thin-clients can’t do on anything short of a “1 client + 1 server + 10 Gbps network” setup.
Except for some limited uses in office environments, thin-clients and dumb terminals are pretty much dead.
PHOENIX
“A thin-client has very little local processing power, and no local OS. It’s basically just a terminal. Everything is run on the server, and just the keyboard/mouse input and display output is handled by the unit.”
As I have understood it, all thin clients have a weak CPU and a small amount of RAM. I only know of the SunRay as an ultra thin client that just handles I/O. No other vendor does this; they all run a small operating system in the background. They are essentially just weak diskless PCs.
If you know of other ultra thin clients I would be glad if you posted some links. We are about to deploy ultra thin clients where I work (Fortune 500 company). I am the project leader and I have only found SunRay as ultra thin, all others are thin. But if you claim there are others, then please show some links.
"For Linux setups, 1 dual-core PC with 4 GB of RAM can handle 30 clients, so you should be able to handle a lot more than 40 on a quad-core. Although you're limited more by network bandwidth than by server CPU power."
Wow! That sounds good! But I wonder, have you experienced this performance by yourself or have you only read marketing about this?
You see, at my company one client runs Windows XP + MS Office + PowerPoint + etc. That uses at minimum 1 GB of RAM and a 2 GHz CPU on a single computer, right? I have trouble seeing how you could run 30 of these workloads simultaneously on one server with 4 GB RAM and one dual core CPU. Even if you skip Windows, I fail to see how one server can run 30 clients on 4 GB RAM and a dual core CPU. That is what is needed for one user, and you cannot stuff 30 clients into that.
Actually, I have looked somewhat into thin clients, and I strongly suspect you have no idea what you are talking about.
“We looked at these, and units from IBM, NeoWare, HP, and others. They just aren’t worth it. For roughly the same price (new, < $150 CDN), you can get a slimtop case, PSU, motherboard that supports 2-4 GB of RAM, a 2 GHz AMD Sempron CPU, onboard nVidia graphics, gigabit NIC, 7-channel sound. Virtually silent (2 small fans). But few limitations on what you can do with it. Want full-screen video? No problem. Want 3-D games? No problem. Want surround sound? No problem. And it won’t saturate a 100 Mbps network to run an entire school (~100 PCs in the elems). The only time the network is hit is when loading programs … once they are loaded, they run on the local CPU using local RAM.”
Maybe you didn't read my first post, where I gave arguments for why I think ultra thin clients are superior to thin clients (because thin clients are just like diskless PCs). You are describing a diskless PC.
The reason I don't like diskless PCs is that they are just like a PC, but without a hard disk. How many sysadmins are required for PCs? One sysadmin for every 30 Windows PCs? They need maintenance all the time. You need to patch them all the time. Viruses, etc. For 19,000 SunRays, you only need 38 sysadmins worldwide. This is only possible if you can decrease the labour very, very much, which is not possible with diskless PCs or thin clients. There are also huge energy savings with the 4 watt SunRay. One bank saved 94% of its energy bill.
The only thing you cannot do with the ultra thin SunRay is play full screen video (unless you install some software). For work and programming, they are perfect. A SunRay has as much performance as the server, since it doesn't run software itself. If you need more performance, upgrade the server. And SunRays never break; the MTBF is 22 years. If one breaks, you just plug in a new one. That takes 1 minute and you are back where you were.
If you really prefer diskless PCs, I would just use this superior setup instead. This is easily the best solution on the market right now, for diskless PCs. Highest performance. Cheapest. By far the safest solution.
HARDWARE:
– OpenSolaris server (which includes ZFS, iSCSI and CIFS).
– PCs without hard drive
SETUP:
Set up a ZFS RAID, which is the safest file system ever because it handles silent data corruption:
http://queue.acm.org/detail.cfm?id=1317400
ZFS needs no hardware RAID card (a good thing, since those have their own problems) and just needs a few cheap SATA drives. With 7 SATA drives you reach 440 MB/sec. And you can add one SSD as a ZFS cache, giving you crazy IOPS. One Sun 7000 storage server gives close to 250,000 IOPS in extreme cases because of the SSD.
CIFS and Vista give 90 MB/sec on a 1 Gbps NIC, so you get good performance with CIFS.
Create a ZFS filesystem, format it with NTFS and install Windows XP on it plus MS Office. Snapshot the filesystem. Now make one clone of the Windows filesystem for each user; this takes one second, and the master filesystem is untouched.
Each PC connects to the server and boots its snapshotted, cloned Windows XP over the network via iSCSI. No hard drive needed.
If a user messes his Windows partition up, just clone a new copy from the master filesystem and off you go in one second. ZFS snapshot each user's partition and you have a backup.
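A minimal sketch of that workflow on an OpenSolaris box of that era (pool and dataset names are made up, and the shareiscsi property shown here is the old built-in iSCSI sharing mechanism; a real deployment would need more care with volume sizes and target configuration):
    # Install Windows XP once into a ZFS volume, then freeze it as the master image.
    zfs create -V 20G tank/winxp-master           # zvol that Windows sees as a raw disk
    zfs snapshot tank/winxp-master@golden         # untouched master image
    # One clone per user: created in about a second, sharing blocks with the
    # master until the user writes changes.
    zfs clone tank/winxp-master@golden tank/users/alice
    zfs set shareiscsi=on tank/users/alice        # export the clone as an iSCSI target
    # Nightly backup of the user's image.
    zfs snapshot tank/users/alice@nightly
    # A one-second "reinstall" if the user's image breaks.
    zfs destroy -r tank/users/alice
    zfs clone tank/winxp-master@golden tank/users/alice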
Buy another OpenSolaris server, and you have a redundant, fail-safe system.
This is extremely cheap. All software is for free. You only need a server. And some diskless PCs. More info:
http://blogs.sun.com/constantin/entry/x4500_solaris_zfs_iscsi_perfe…
We looked into doing something similar when we first embarked on our thin-client setups. However, we gave up on the idea of network booting Windows stations, as you still need a full install of Windows for each computer (or each class of hardware anyway). And all the management tools still required Windows. And all the licensing costs involved. We stopped looking at Windows solutions very early on, as we could not afford the licensing or hardware costs to run Windows XP.
We do use FreeBSD + ZFS for our backups server, though, for all the data safety and snapshotty goodness that ZFS provides. Rsync + ZFS snapshots makes for an amazing backup system for Unix. We back up 85 Linux and FreeBSD servers every night to our FreeBSD box, take a snapshot of it, and transfer the snapshot off-site during the day to another FreeBSD box. So far, we have 6 months of daily backups for all those servers, with the individual files easily accessible via the shell. ZFS just makes life so much simpler.
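Roughly, each night's run boils down to something like the following (hostnames, dataset names, and paths are placeholders, not our actual scripts):
    #!/bin/sh
    # Pull the night's changes from each server, then freeze the result as a snapshot.
    # backup/servers is a hypothetical ZFS dataset mounted at /backup/servers;
    # /etc/backup/servers.txt lists one hostname per line.
    while read host; do
        rsync -a --delete --exclude=/proc --exclude=/sys \
            "root@${host}:/" "/backup/servers/${host}/"
    done < /etc/backup/servers.txt
    zfs snapshot "backup/servers@$(date +%Y-%m-%d)"
    # Older days remain browsable under /backup/servers/.zfs/snapshot/<date>/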
“We looked into doing something similar when we first embarked on our thin-client setups. However, we gave up on the idea of network booting Windows stations, as you still need a full install of Windows for each computer (or each class of hardware anyway). And all the management tools still required Windows. And all the licensing costs involved. We stopped looking at Windows solutions very early on, as we could not afford the licensing or hardware costs to run Windows XP.”
With the OpenSolaris CIFS and iSCSI setup it is easy to set up diskless PCs. You just install Windows (or Linux or whatever OS) onto one ZFS filesystem, then snapshot it. Now you can clone the Windows XP installation in one second using ZFS, one clone per user. Each user boots and reads from the master Windows installation using iSCSI and CIFS, and writes to his own ZFS filesystem, which you can also snapshot and back up.
With ZFS the user can also roll back and look into earlier snapshots to recover files. And if the user screws his installation up, you just clone the master Windows XP/Linux/whatever installation again in one second and he is good to go.
This must be superior to your current solution. CIFS is dead simple, and so is iSCSI. Here is more information about the setup I have been describing:
http://prefetch.net/blog/index.php/2008/12/31/a-completely-local-di…
Show me a simple method to boot Windows off the network, without requiring any Windows servers to manage it.
And stop harping about the benefits and capabilities of ZFS. I know all about them, and use them everyday. Have been since June.
He certainly does know what he is talking about, but I think you are still talking about thin clients while he is talking about diskless systems. Because you're not running all the client apps on the server in a diskless setup, the server doesn't need a ton of RAM and everything else. So what he said is true. In a thin-client setup, which is what you are talking about, this probably is not true, and you need a much beefier server as the number of clients increases, depending on what the clients do.
In my opinion, I much prefer diskless systems because I think they are much more flexible. You can do some of the things phoenix stated, such as running a hypervisor and 3D video. This just doesn't seem possible in a thin client environment. Furthermore, I don't see why you can't have some of the advantages of a thin client system in a diskless environment. The things you mention are all just implementation details which could probably quite easily be done in a diskless setup. Also, in terms of power, I think the same could be achieved using Atom-based systems in a diskless environment.
What you are calling “ultra-thin-client” is just a dumb-terminal.
I’ve been managing thin-client and diskless setups for over 7 years now, I’m not some marketing droid. Do a google search for “Linux in School District 73 Kamloops” for the details.
There’s just no comparison between thin-client and diskless setups … unless all you are doing is basic web browsing and office documents. If you want to do anything beyond that, or anything with audio, video, or even just animation, you can’t do it with a pure thin-client setup. There’s just not enough network bandwidth to do it well.
There’s the difference. We don’t use Windows for our clients. It’s all Debian Linux, with the occasional Win98/WinXP in VMWare Player.
You've obviously never done anything with Linux, then. You can easily handle 30 thin clients on a server with dual P3 1 GHz CPUs, 4 GB of SDRAM, and a single 400 GB IDE drive. We do it all the time in our elementary schools. And we have just under 50 of those, each with a lab of 30 computers that get used almost non-stop every day.
However, that’s only for basic web browsing, the odd flash/java game, some educational typing games, and openoffice.org. We quickly hit the “network is the bottleneck” issue when schools wanted to do more than just surf the web and type letters.
We’re in the middle of a migration to a diskless setup (12 elementary schools left), using $150 “computing appliances” and now they can do pretty much anything a normal computer can do … but without the hassle of local harddrives, local optical drives, local floppy drives, local OSes, local software, and so on.
We’ve also migrated 6 high schools to Linux diskless setups, for everything (labs, library, office, admin), with plans to move the other 4 over this summer.
Considering I’ve been working with them for over 7 years, and have found them to be pretty useless, I’d have to disagree.
Exactly! A diskless PC setup is far superior to a thin-client setup.
Nope. Because they don’t have harddrives, they don’t have local OSes to worry about. No software patching, no viruses to clean up, no onsite technician required. If there’s a problem, you swap out the PC with a spare, edit the DHCP config to allow it to boot, and you’re done. Less than 5 minutes to fix a problem. And the only problems to worry about now are hardware related. And with only 2 fans having moving parts, hardware issues aren’t that common.
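For what it's worth, the "edit the DHCP config" step is usually nothing more than updating the MAC address in that station's host entry and restarting dhcpd. An illustrative ISC dhcpd entry (not our actual file; names, addresses, and the Debian-style paths are placeholders):
    # Swapping a dead station: put the replacement box's MAC into its host entry.
    cat >> /etc/dhcp3/dhcpd.conf <<'EOF'
    host lab-12 {
      hardware ethernet 00:1a:2b:3c:4d:5e;   # MAC of the replacement hardware
      fixed-address 10.0.1.112;
      filename "pxelinux.0";
    }
    EOF
    /etc/init.d/dhcp3-server restart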
For 15,000 students, 1600 staff, and around 6,000 diskless PCs, we have 6 school techs, 3 Unix techs, 1 hardware tech, 1 electrician, 1 video conferencing tech, and a couple managers. We’ve actually reduced the number of school techs since moving off thin-client and onto diskless setups.
This most definitely *is* possible with diskless PCs.
That one I’ll give you. Thin-clients do use a lot less energy than full-fledged diskless PCs. However, you don’t have to use power-hungry components in diskless PCs. I don’t have the numbers for our systems, but they are very low. We use Sempron CPUs, onboard everything, and 250 W power-supplies. They’re also configured to shut-off automatically at night, and power-on just before people show up. Compared to the P2/P3/P4 systems they’re replacing, they’re very green.
Like I said, there are situations where thin-clients make sense. But once you want to go beyond dumb-terminal applications, simple web browsing, and office documents, they just don’t cut it. Diskless setups give you all the manageability of a thin-client setup (no local OS, upgrade the server and all the clients get upgraded, just swap hardware when there’s an issue, etc) but with all the power of a local PC.
Ok, we are misunderstanding each other. That is why I suspected you didn't know anything about thin clients.
What I call an Ultra-Thin-Client, you call a Dumb-Terminal. Let's call them UTCs for short. As far as I know, there is no vendor offering UTCs other than Sun with the SunRay. All other vendors' thin clients are essentially weak diskless PCs with a 1 GHz CPU and 256 MB RAM, making them unusable for heavy work. The SunRay is unique in that it is a true UTC, a graphical variant of the dumb text terminal. But you can run Windows/Linux/Solaris with VMware and RDP on a SunRay.
When I talk about one quad core driving 40 clients, I mean one quad core driving 40 UTCs. One user normally requires 1-2GB RAM and 1-2 GHz of CPU. It should be near impossible for one dual core and 4 GB RAM server to drive 30-40 UTCs. Therefore I doubted your claims (misunderstanding).
Of course a dual core and 4GB RAM server would suffice for 30 diskless PCs. That is no doubt, the server would almost act as a file server. Any OS would suffice for that task, even Windows. But I am talking about dumb terminals. There is no way a dual core and 4GB RAM server can drive 40 dumb terminals.
So I point out that a quad core can drive roughly 40 SunRays (i.e. dumb terminals). Of course you need lots of RAM for driving 40 SunRays. Each SunRay user needs 256-512 MB RAM on the server which is really good, considering how much memory the user would require if he used a dedicated PC instead.
Regarding thin client vs diskless PC: I consider them more or less the same thing. Same, same, but different. Both use a rather weak CPU and have little RAM. The diskless PC has slightly better specs, but the thin client has an OS to patch and maintain. As I mentioned, someone told me yesterday that his HP thin client takes 7 minutes to boot.
After a few years you have to upgrade the thin clients/diskless PCs, because they cannot handle the new OS and new software versions. Worst case, you have to upgrade them all (very expensive), or worse, ditch all the diskless PCs and buy new ones. With SunRay, you instead upgrade the server and all the SunRays are upgraded at once. You can always use your SunRays; they never need an upgrade. Use them for 22 years if you wish; they always handle the latest OS and the newest software excellently. It is much cheaper to upgrade one server than to upgrade all the diskless PCs, and much cheaper to administer one quad core server than to administer 40 diskless PCs. In the future, servers will have dual octo-core CPUs and 128 GB RAM, and then the SunRays will be extremely fast. SunRays are future-proof; diskless PCs are not.
Diskless PCs also draw as much energy as a normal PC. Compare that to the 4 watt SunRay. Say you have 1,000 diskless PCs each using 100 watts: that is 100,000 watts. With SunRay, it is 4,000 watts.
You have no serious performance with a diskless PC if you need to do, say, a heavy compilation. With SunRay, you have as much power as the server has. A server will always be much, much more powerful than a diskless PC or thin client. If you are alone on the server, then all its power is yours.
The only thing you cannot do with SunRay is high bandwidth graphics. You can watch movies in 500×500 windows without lag, and that is graphics-intense compared to ordinary office usage and software development. For all uses other than watching movies, SunRay does excellently. Many SunRay servers don't have any graphics card; the server generates bitmaps in RAM and sends them to each SunRay. Each SunRay requires ~30 KByte/sec for normal office usage.
They never break, average life expectancy is 22 years. SunRay is a small plastic box, very similar to a VHS cassette. It is just another keyboard or mouse, an input device. Can not be hacked. You can have dozens of them in a drawer. Should you need another work station, just insert SunRay into the router/hub and you are done. Anyone could do that.
Hot desking is also supported. Insert your security card and you have logged in. Then withdraw the card and insert it into another SunRay and you are back immediately where you left.
You can use a SunRay over the internet, one at work and one at home. You will log in to your work environment.
If we talk about low-end gear (as you do about diskless PCs), you can find refurbished SunRays at 40 USD on eBay. That is cheaper than a low-end diskless PC, and they never need upgrading either. A new one costs 200 USD. But they never break; use them forever.
Here is an article about the U.S. Army switching to SunRay for their classrooms and saving lots of work and energy and money.
http://www.sun.com/customers/software/usaic.xml
There are Linux-only solutions with SunRay. You don't have to use VMware and Windows XP. You can run everything as normal Linux users, creating normal Linux accounts, etc.
No. Thin-clients, by definition, do not use local processing, memory, OS, or anything. They boot off the network, they load the OS off the network, and configure themselves to act as an I/O hub (send keyboard/mouse data to the server, display graphics from the server). That’s it. A thin-client is a dumb-terminal. Period.
Some vendors (like HP, NeoWare, WySE) have “hybrid” thin-clients, which use a local CPU/RAM/OS (usually WinCE) to boot into a local GUI that then runs an rdesktop client. These things are worthless PoS that are over-priced, under-powered, and give thin-client computing (such as it is) a bad name. Once the OS is loaded, and the rdesktop client connects, then it is back to being a dumb terminal. But, yes, the boot times for these things is horrible, as are the graphical capabilities, which is why we stopped looking at them after testing two variations. A standard P2 266 MHz w/256 MB of RAM performed better.
I don’t know how many times I can say this: a dual-P3 system with 4 GB of RAM **DOES SUPPORT** 30 thin-clients, where Firefox, Java, OpenOffice.org, and Flash, are all running on the server, with just the display being shot back to the client. WE DO THIS EVERY FRIGGING DAY!! WE HAVE BEEN DOING THIS FOR 7 YEARS ALREADY!!! THIS WORKS!! Get it yet?
Which is absolutely horrible! But, you are running Windows, and we’re running Linux (for the clients), which is probably where the disconnect is.
No, no, no, no, no, no and NO!!! You do not understand the difference between a thin-client (dumb terminal) and a diskless PC.
In a thin-client setup, 0 CPU, 0 RAM, 0 processing is done on the client. Everything is done on the server. The client is just an I/O hub: mouse and keyboard events are sent to the server, video is sent back to the client. That’s it. The local CPU/RAM is only used to boot the client. Nothing else.
In a diskless client setup, you have a standard PC, with a normal CPU, a normal amount of RAM, a normal videocard, a normal NIC, etc. It’s a normal PC. The only difference is that there is no HD, CD, DVD, floppy, etc. The client boots off the network, loads the OS off the network, mounts network shares. The OS runs locally, using the local CPU/RAM. Applications are “downloaded” off the network and run locally. Except for the boot process, there’s no difference between using a normal PC and a diskless PC.
Do you see the difference yet?
One runs everything on the server, requiring a massive server and an even more massive network, as everything is pushed down the pipes to the display.
The other loads apps off the server, but runs them locally, allowing you to do anything (even play 3-D games) a normal PC can do. But there are no moving parts to worry about, no harddrives to worry about, no OS installs to worry about, etc.
The *ONLY* similarity between a thin-client setup and a diskless setup is that everything is managed from the server. Need to install new software — do it on the server and all clients get it instantly. Need to upgrade the client OS? Just upgrade the server, and everyone instantly gets the update. Add a user on the server, and they can login from any client station and get their personal desktop.
Only for diskless clients. You never *have* to upgrade thin-clients. By definition, nothing is run on a thin-client, it’s all run on the server. Hence, the local hardware *DOESN’T MATTER*. Period. The only time you replace a thin-client is when the hardware dies. You don’t “upgrade” thin-clients.
Yes, for a diskless setup, where you run apps on the local hardware, you may need to upgrade. However, this is where planning ahead comes in, and you make sure that your initial roll-out can handle the apps you will be using for the next 3-5 years. Or, you find a hardware configuration that is so low that it’s basically a disposable appliance (like we did — at $150 each, we don’t bother repairing them).
No, no, no, no, and NO! Administering thin-clients and diskless clients *IS THE SAME*. There is nothing to the clients. Everything is done on the server!! They are identical in pretty much every way … except where the application runs (on the server vs on the client).
For some uses, yes, thin-clients are future-proof. But not for all applications, as the network and server disk are the bottlenecks.
No they don’t, as there are no HDs or optical drives sucking power and requiring cooling. And you can build diskless clients using low-power CPUs, chipsets, and videocards.
We can do this as well, thanks to NX. It's one of our key selling points to the schools, as they always have access to their school desktop and files, even from Windows machines. Including "suspend", where you log in from one machine, suspend the connection, and reconnect from another machine.
Thin-client solutions like the SunRay have their place. But they don’t compare to diskless solutions when you leave the realm of simple web browsing and office documents.
Ok, there are still some serious misunderstandings going on. Let's try to establish the definitions, the way you would start in math.
“In a thin-client setup, 0 CPU, 0 RAM, 0 processing is done on the client. Everything is done on the server. The client is just an I/O hub: mouse and keyboard events are sent to the server, video is sent back to the client. That’s it. The local CPU/RAM is only used to boot the client. Nothing else.”
As I have understood it, a thin client processes all software on its weak CPU with little RAM. A typical thin client has a 1 GHz CPU and 256 MB RAM; it boots from its server, downloads all applications from the server, and runs all of the applications, or parts of them, on its 1 GHz CPU.
An ultra-thin client, on the other hand, doesn't process any software at all; it just handles I/O. That is the SunRay. The server processes all software. On the SunRay no software is processed at all; it just shows the bitmaps that the server transmits.
You are telling me that this is wrong? You are telling me that every thin client acts the way the SunRay does: it just shows the picture from the server, and no processing is done on its 1 GHz CPU and 256 MB RAM? Everything is processed on the server? Is this so? Can you show me a link to a thin client that behaves like this (other than the SunRay)? I have never found any ultra-thin client other than the SunRay.
Either you or my understanding of thin-clients is very wrong. Who is correct? You or me? If you can show links on thin clients working as you described, then you are correct and I am wrong. If you can not show links, then you are wrong and I am correct.
“I don’t know how many times I can say this: a dual-P3 system with 4 GB of RAM **DOES SUPPORT** 30 thin-clients, where Firefox, Java, OpenOffice.org, and Flash, are all running on the server, with just the display being shot back to the client. WE DO THIS EVERY FRIGGING DAY!! WE HAVE BEEN DOING THIS FOR 7 YEARS ALREADY!!! THIS WORKS!! Get it yet?”
And you also say that a dual P3 system with 4 GB does support 30 thin clients, where the thin clients do not process any software at all; all software processing is done on the dual P3 CPUs. So you claim.
I find this very, very hard to believe. If it really is true, then I should look into this solution instead.
I mean, for one user, at least a 1 GHz P3 CPU and 512 MB of RAM is necessary. For 30 users, you would logically need a server with 30 x 1 = 30 GHz of P3 and 30 x 512 MB = 15 GB of RAM. But now you are telling me that this is not necessary. Something is very wrong here. How on earth could it be possible to run 30 users on a dual P3 and 4 GB RAM? I don't get it. That config is needed for one single user. How can it support 30 users???
There are some grave misunderstandings going on right now. Even the nick “broken_symlink” is confused. He also believes the same thing as me. He states that a thin-client solution should need a ton of RAM.
We can settle this if you post some links to these thin clients. Then I can read about them myself and study the datasheets, and then I can tell whether I have misunderstood the thin client concept.
This is what a diskless system is, NOT a thin client, but please read my post on http://osnews.com/thread?351709
This is correct.
Phoenix already explained this. He said there are hybrid thin clients which do have a better CPU and more RAM compared to "ultra thin clients" like the SunRay. But these are not true diskless systems, which is what phoenix and I have been talking about.
I think your understanding is correct; your labels for the different solutions are just wrong, and you also don't seem to completely understand the third solution: a truly diskless system.
I never stated anything of the sort. What I said is that as the number of clients in a thin client solution increases you need better hardware for the server. I have no idea what the exact specifications are as I have never used a thin client setup, only diskless pcs.
If you don’t get it now, I think the only way you will is by actually seeing a setup in action. I’m sure there are guides online how to setup a diskless system. Then you should probably setup ltsp. Finally, compare those two to your sunray, and hopefully then you will understand the difference. Think of ltsp as the “hybrid” thin-client.
Broken_Symlink, you wrote:
"Because you're not running all the client apps on the server in a diskless setup, the server doesn't need a ton of RAM and everything else. So what he said is true. In a thin-client setup, which is what you are talking about, this probably is not true, and you need a much beefier server as the number of clients increases, depending on what the clients do."
To me, it seems that you stated a thin-client setup needs a "ton of RAM and a beefier server". But maybe you didn't mean that.
Anyway, I have a hard time understanding this. But I am not stupid. I have a Master's in Comp Sci, with a specialization in algorithm theory and discrete math, and now I am finishing my second degree in pure math. The problem I have is that it doesn't sound reasonable if you think logically.
One user needs at least 1 GHz and 512 MB RAM with today's operating systems and software. This is true. You cannot go under that hardware requirement. And now, suddenly, a dual P3 at, say, 1 GHz and 4 GB RAM suffices for 30 thin clients?
Because you have a dual P3 cpu server, we count them as one 2GHz P3. And now, 30 users share that 2GHz P3. This gives 66MHz to each user. And 30 users share 4GB RAM. Say that 1GB disappears to the Operating System. Left is 3GB to 30 users. This gives 100MB ram to each user.
So one user uses 66MHz and 100MB RAM. And this works fine, you claim? Why do we have todays 3GHz quad core and 4-8GB RAM for one single user? According to you it suffices with 66MHz Pentium and 128MB RAM for the newest Office and all other RAM and CPU sucking new software packages.
What am I? A fool? Can you not see that something is very wrong here? Is it only me that sees the problem? I've tried to point out in several posts that there is a problem. Still Phoenix reiterates "HOW MANY TIMES DO I HAVE TO TELL YOU THAT IT WORKS FINE??? ARE YOU DUMB OR WHAT???"
But if you just link to some of the data sheets then I can read them myself. (I've looked into thin clients and didn't find any ultra thin client solutions except the SunRay. I guess you are better at Google than I am, then.)
That’s a diskless client, not a thin-client.
That’s a thin-client.
Correct. The very definition of a “thin-client” is “no local processing”. All applications run on the server, and just the display is sent back to the client. The “thin-client” is just an I/O hub. It sends keyboard and mouse events to the server, and receives graphical output from the server. They have a bit of RAM/ROM to boot, and to find the server, and to establish the network connection, but that’s it. After they boot, they run applications on the server, and just show the display.
See below for the link to Wikipedia.
See below for link to Wikipedia.
It’s not a claim. Organise a trip up to Kamloops, BC, Canada, and I can show you it in action in School District 73. We use this every single day, and have done so for 7 years now. Over that time, we have replaced a few servers (dual-AthlonMP with 4 GB RAM; dual-Opteron with 4 GB of RAM), and are in the process of migrating off the thin-client setup to a proper, diskless setup. But we still have 12 schools using the original dual-P3 server setup.
But, as I said before, we are using Linux and not Windows. So the user desktop is IceWM running a small handful of opensource applications like Firefox 2, OpenOffice.org 2, TenThumbs Typing Tutor, TuxPaint, TuxTyping, TuxMath, and the KDE 2 games. We have very few problems with this setup.
We’re moving away from this setup, though, as we want to be able to put Linux computers into every classroom, into the library, into the office, into the LAT rooms, everywhere in the school (~100 per elem, up to 700 in the secondaries). And we want to centralise printing, give students more disk storage, and give them access to a full KDE 3 desktop, with more applications. We also want to support CAD, animation, video, 3D, audio, and so forth. Hence, we’re moving off thin-client, and to a diskless setup.
For a Windows desktop, probably. For a Linux desktop, a couple hundred MHz and a couple hundred MB of RAM is plenty.
A very simple, easy read through the Wikipedia article on thin-client computing (http://en.wikipedia.org/wiki/Thin_client) would suffice. Here’s an excerpt from the first paragraph:
There’s even a list of manufacturers and devices. That all act like the SunRay.
And here’s the article on diskless computing: http://en.wikipedia.org/wiki/Diskless_client
Goes into detail on the differences between the two.
Ok, I've read your article and it seems interesting. Something is strange in my understanding, though. I will check up on my other source; it messes with me.
A question: you suggest diskless PCs. Is a user session tied to a specific PC, or can you log in to another PC and continue? Or do you have to restart your work?
Another question. All PCs, they can be different models and brands? You just create a Linux account that any PC can access. Some PCs might be faster or slower, so you will have different performance depending on which PC you login to?
The administration, is it easy or complicated?
This solution sounds interesting. But as we need to run Windows, I don't think Linux would suit us. It seems that you have found a near optimal solution, though.
So your solution consists of diskless PCs, and not thin clients, and everything works fine? Have you written up more information on your solution somewhere on the web? I would like to learn more. (Too bad we need Windows.)
Yes, a logged in user is tied to that PC. They have to logout, move to another PC, and login again, in order to move around.
Think of a diskless PC that same way you would a normal PC. The only difference is that no software is installed locally, it’s all installed on the server.
The one exception is when they log in using the NX Client. You can suspend the connection, close the client, move to another PC, connect using the NX client, and pick up exactly where you left off. However, the NX Client login session is separate from a local PC login session. It's really only useful for accessing the desktop from outside the school.
Correct. Since the OS and applications run locally, performance depends on the local hardware. This is why we have standardised the hardware for all installs, so that there isn’t the whole “this station is faster than that station” issue.
However, at least with Unix, you can configure applications to run on the server and shoot the display back to the client, thus allowing you to run specific software on the server. This allows you to use 600 MHz PCs as the diskless client, but still run complex software that requires powerful CPUs. Gotta love the network transparency features of X11.
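As a purely illustrative example of that (the hostname and application are placeholders), running a heavyweight application on the server but displaying it on a weak diskless client is just X forwarding over SSH:
    # Run a CPU-hungry application on the server, display it on the local client.
    # "appserver" and the choice of application are placeholders.
    ssh -X appserver blender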
At least for Unix setups, it’s very easy. Everything is done on the server. Install software on the server, and all clients have instant access to it (NFS mounts). Add user accounts to the server, they can instantly login to any workstation (NIS). Need to swap out a dead PC? Just update dhcpd.conf to allow the new one to boot, swap in the new hardware, boot, done. Need to add a new printer or change defaults? Login to CUPS to tweak as needed. Need backups? Run an automated rsync script from a remote server every night. Once things are setup, there’s very little to it.
Yes, using Windows complicates things, and is one area where thin-client/remote desktop works better.
However, you can use diskless Linux stations that just load rdesktop or krdc or any other RDP client, and have a hybrid setup. That’s pretty much what all the HP, WySE, and similar thin-client boxes do. They load a stripped-down WinCE or Windows XP Embedded and run the Terminal Services Client as the shell.
Correct.
Correct.
I don't think we have anything that's publicly accessible, but we have a tonne of information on this setup in our internal wiki. We keep getting asked to make something available, but just haven't had the time to do so. Our IT department is under 20 people, including managers, to look after a district that spans an area 60 KM West, 60 KM East, 60 KM South-East, and 200 KM North of Kamloops, BC. We manage just fine, but don't have a lot of time for extra/side projects.
Where do you run NX, on the server or the client? I've never had the need to log in remotely to a diskless setup, so I'm interested in how this works. Also, my servers don't usually run X, and I might have set up the rootfs for the clients a little differently than you have, which is why I'm curious as to how this works.
Another reason for standardizing on hardware is that it makes configuration easier. You don't have to worry about configuring X for each diskless system, and making sure each system has the correct kernel modules and everything else. Just set one up and all of them should be fine because it's the same hardware.
Kebabbert, this is also a place where Solaris with ZFS would work very nicely.
This is exactly what my setup consists of: diskless PCs, which start rdesktop on boot and log into a Windows 2003 server. It is much more cost effective, because the systems only run one app. So, rather than waste money on copies of XP Pro, we went with this. It has been working great for almost 4 years now.
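A sketch of the kind of session script this involves (the terminal server hostname is a placeholder, and rdesktop's -f flag gives a full-screen session); when rdesktop exits, the X session ends and the station drops back to its login/connect loop:
    #!/bin/sh
    # ~/.xsession on the diskless client: open a full-screen RDP session to the
    # Windows 2003 terminal server. "ts1.example.local" is a placeholder hostname.
    exec rdesktop -f ts1.example.local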
Kebabbert glad you finally got it! Hopefully, you’ll deploy the right solution. Good luck!
NX Server is installed and running on the server, under vs1 (see below). Connections are done to vs1.school.domain.
You can get official NoMachine NX Clients for Windows, MacOS X, and Linux, and there are freenx clients for pretty much every Unix-like system out there. We install the Linux client on all our servers, so that teachers, librarians, and other staff that work at multiple locations can access their desktops at other sites. (We also create SFTP:// shortcuts on their desktops so that they can copy files around with as needed … hurray for KIO-slaves!!)
Our servers may be set up a bit differently from most, as we use Linux-VServer to separate “the server” filesystems from “the client” filesystems.
This way, we can upgrade “the clients” without touching “the server”, and can run server processes like CUPS, Apache, Samba, etc without having them affect “the clients”.
We call the VServer “vs1”. We can SSH into here and do all the normal apt-get commands for installing/upgrading/removing software, and do all the custom client configuration stuff here. This is all stored on the server under /var/lib/vservers/vs1/
/home is bind-mounted to /var/lib/vservers/vs1/home, so that we don’t duplicate data storage, and to provide uniform access to things like mail storage.
DHCP runs on the server and handles the PXE boot portions (/tftpboot/ltsp/). Once the kernel is loaded, the clients mount /var/lib/vservers/vs1/* via read-only NFS, and /home via read-write NFS.
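Roughly, the mount/export side of that looks like the lines below (the subnet is made up and the option lists are trimmed; this is a sketch, not our exact config):

    # /etc/fstab on the server: share /home into the vserver via a bind mount
    /home   /var/lib/vservers/vs1/home   none   bind   0 0

    # /etc/exports on the server: read-only root for the clients, read-write /home
    /var/lib/vservers/vs1   10.0.0.0/24(ro,no_root_squash,async)
    /home                   10.0.0.0/24(rw,async)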
We found splitting the “client” off from the “server” to be very helpful. We initially had everything on the server, with client stuff all under /tftpboot/ltsp/ltsroot/ but found it too complex to manage the client software. We also wanted to have a very simple, only-install-what’s-needed core for the server.
Diskless PCs do not have to use “weak” CPUs. I used to have a diskless setup at home for testing purposes, and each system had an Athlon 64 running at 2.2 GHz, with 1 GB of RAM. Is that weak?
And why exactly aren’t they? If performance starts to become an issue in a few years, there is absolutely nothing preventing you from turning them into thin clients with something like LTSP. So you get the best of both worlds.
I agree with you here, but if you use an Atom-based system for your diskless client there is no reason why you can’t save power. Probably not as much as a Sun Ray, but anything is better than nothing.
This is the one thing that is absolutely not true at all. Diskless PCs could wipe the floor with thin clients in this area. I used my Athlon 64 diskless setup for nothing but compiling stuff. Tell me which setup you think would perform better: 50 Sun Rays with a quad-core server, or 50 diskless systems with Pentium IIIs, all compiling at once? The thing about the diskless systems is that you can use a distributed compiler like distcc and compile on all 50 systems at once. You can also use something like openMosix to migrate processes between the diskless systems and theoretically end up with unlimited resources for the task at hand.
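For what it’s worth, the distcc side of that is only a couple of lines (the host names are invented, and each client would need distccd started from the diskless image’s init scripts):

    # on the machine driving the build: list the diskless clients as compile hosts
    export DISTCC_HOSTS="localhost ws01 ws02 ws03 ws04"
    make -j8 CC="distcc gcc"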
That is constant network traffic that diskless PCs don’t require: once you start a program from the server on a diskless PC, it stays in the PC’s RAM and there is no more communication with the server unless something needs to be written to disk. So you save bandwidth.
I see no reason why the same can’t be done with a diskless PC. One breaks, just plug in a new one. Sure, not just anyone can do it, because you need to edit a file or two. They may also take up more space than a Sun Ray, but honestly I would rather deal with those issues than trade away the flexibility a diskless PC gives you.
I’m sure something similar could be done with Linux and a diskless PC, maybe even using a fingerprint reader, which is something we wouldn’t have to wait for Sun to implement in the diskless world.
As far as I know you can’t do this with a diskless system, but there are other options available, like SSH.
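For terminal work, at least, SSH plus GNU screen already gives you suspend/resume from any diskless box (the host name is just an example):

    ssh user@server.example.org
    screen -dR work     # reattach the 'work' session, detaching it elsewhere if needed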
I got all my diskless PCs for free. People were throwing them away. You still need to buy a keyboard, monitor, and mouse for either solution, though. If I were going to buy new diskless clients today, I would probably get something like this: http://www.newegg.com/Product/Product.aspx?Item=N82E16813121359
That is $79 for a 1.6 GHz dual-core CPU that uses 8 W of power. I found a case with power supply and 2 GB of RAM on Newegg for another $70. That brings the total price for a brand-new mini-ITX diskless system to about $150. It should also be noted that that Atom motherboard comes with gigabit Ethernet, while according to this http://tinyurl.com/3vv9vv the Sun Ray 2 is $349 brand new from Sun with only a 10/100 NIC, which is weak.
You can do the same with diskless PCs, but they are not Linux-only. There is nothing keeping you from using NetBSD, FreeBSD, or anything else that can netboot. Heck, it might even be possible to netboot OS X, if you had a little know-how. You can also run Windows directly on the system using Xen or something similar, if the diskless system has a CPU that supports virtualization extensions.
I still don’t see what’s so great about thin clients when diskless systems offer so much more, especially when you consider that thin clients are proprietary and closed source. Tell me how a Sun Ray can last 20 years if Sun decides to stop supporting it in 5. What do you do with all those thin clients then? How do you replace one if it breaks and Sun no longer sells them? They can’t be turned into full-blown PCs either, if the need arises.
The other nice thing about diskless setups is that, at least with Unix OSes, you always have the option of running specific apps on the server, with just the display shot back to the client. In effect, for the purposes of that app, turning it into a thin-client.
We do this in the older elementary school labs, where all the clients are still 600 MHz Cyrix CPUs. We run as much as possible locally, but if something bogs down the machine, we configure it to run that app off the server. A hybrid, best-of-both-worlds setup.
It really throws people for a loop when we can “upgrade” their machine by just editing a config file and asking them to restart the app.
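The kind of config change being described boils down to something like this (the app and host names are illustrative, not our actual setup): swap the local launcher for a wrapper that runs the program on the server over X.

    #!/bin/sh
    # /usr/local/bin/firefox in the client image:
    # run the real browser on the server, display it locally
    exec ssh -X apps.example.org firefox "$@"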
I don’t have the numbers handy, but one of our techs was playing around with OpenMPI (some cluster software thingy) in one of the secondary schools. He was able to crack DES-encrypted shadow password files (680+ accounts) in stupidly quick times. It was quite neat to watch. Add a couple of scripts to the startup for the diskless clients so they join the cluster on boot, and you have a 680-node supercomputer.
We’re trying to figure out how to harness all that computing power in a usable fashion. We’re thinking some of the science teachers might like it for modelling, or the math teachers maybe.
This is what we do. We have 4 spare PCs in each school. When one of the PCs in the schools has issues, a teacher unplugs it, plugs in the spare, reads off a couple numbers on the screen to the helpdesk, and reboots. They’re back up and running in under 5 minutes.
It’s called NX, from NoMachine. Works quite nicely, and you can access your Unix account from any Unix, Windows, or Mac system. Works best on a LAN, but is definitely usable across cable or ADSL connections, and works a heck of a lot better than VNC and RDP. It even gives you access to your local printer. I believe it supports file transfers from the local computer to the server as well, although I’ve never personally tried that.
You can’t log in to one diskless client, suspend the login, and then log in to your still-running desktop from another diskless client. But you can access the same KDE desktop from any diskless client or from an NX client. And you can suspend an NX session, log in from another NX client, and pick up right where you left off, even across OSes, which is something a Sun Ray definitely can’t do.
Ouch. We could get two of our diskless clients and lunch for that. Actually, maybe three diskless clients after the US-CDN exchange.
This is probably the best argument against thin-clients. Sometimes, simple, off-the-shelf, standard hardware is just easier, simpler, safer.
The netbook market is going to do amazing things for diskless systems, especially with things like this on the way: http://www.mini-itx.com/2009/02/04/nvidias-ion-reference-platform-r… ARM systems will also be good candidates in the future, and they are only going to get more power efficient. I can’t really see VIA competing, though, but who knows what they have cooking; they might surprise us.
Heh, the add-on market for bolt-down straps/holders for these things will boom. If they get small enough, maybe they’ll become part of the desk.
There were a few Knoppix CDs that had errors with the terminal server; most versions after 3.8 were good.
Knoppix 5.1 and above offer a great, easy, and simple terminal server setup. Use the Knoppix terminal server app. Basically, click and boot to PXE.
Anyone needing PXE might also look at gPXE: bootable PXE images for non-PXE NICs.
I think they are trying to reinvent the wheel with this technology. From what I have seen so far, it is not a productive solution for ‘end users’, but if you wanted a locked-down solution for a ‘health club’ or other single-use situation, it would be OK.
The thinking needs to be: eliminate a PC that exists to do one task when you can have an appliance do it securely, and using a Linux distro would be the best choice.
I hope SuSE does well, I think they have an awesome distro and choices.
Yes, it’s interesting the cycles that computing technology goes in. First the mainframe and dumb terminals, then the mini-computer and dumb terminals, then the server and PC clients, then PCs as thin clients, then dedicated thin clients which are basically almost dumb terminals. Looking at some of the blade-server setups, they almost even look like mainframes.
Point-of-sale, kiosks (like in bookstores and airports), bank terminals, even some Internet cafe setups really benefit from thin-client setups, whether using dedicated hardware or recycled PCs. Basically, anywhere that you only run one or two applications (period, not at a time) can be moved to thin-client setups.
It still amazes me to sit across from a bank teller who has a full-blown Windows XP computer (full tower and everything) … just to run a terminal emulator. Put a dedicated terminal there instead!!