Golem.de talked to Fabian Franz and Kurt Pfeifle. NX, developed by NoMachine, aims for nothing less than revolutionizing network computing. The software allows users to connect to and work on remote desktops, even across low-bandwidth links such as ISDN or modem lines.
FreeNX, presented as a preview during LinuxTag 2004 by Fabian Franz (Knoppix developer) and Kurt Pfeifle (KDE Project), is a Free Software NX server based on the GPL'd libraries from NoMachine. After the aKademy week (where, over nine days, more than 400 KDE contributors and visitors plan to work on the next generation of KDE), Fabian and Kurt plan to release the first snapshot of the FreeNX server under the GPL. During aKademy, a few KDE hackers are working on the integration of an NX client into the KDE framework. Knoppix 3.6 will also contain a first preview of FreeNX. Golem.de, a German IT news site, talked to Fabian and Kurt just before aKademy.
Daniel Molkentin provided us with a translation.
(Photo courtesy Pro-Linux.de)
At LinuxTag the two of you showed off your free NX implementation, which will also be included in the next Knoppix version…
Fabian Franz: It wasn’t only
Kurt and myself – at least four people were involved 😉
KDE's Joseph Wenninger developed the first kNX/FreeNX client, assisted by Torsten Rahn. Our part was to get the FreeNX server running.
Kurt Pfeifle: The actual code of the FreeNX server was written by Fabian alone…
We basically showed two components at LinuxTag:
- the FreeNX server and
- the FreeNX/kNX client.
Only the interaction of both pieces allows remote GUI connections even over "slow" physical links. Even with low-bandwidth links like an analogue modem or ISDN, both NX and FreeNX achieve good performance, even across operating system borders. A very detailed description of the NX internals can be found at Pro-Linux (German). Another one will appear on OSNews in the next few days.
Fabian Franz: In fact, our FreeNX implementation is only the last piece of the mosaic. 99.9% comes from NoMachine's GPL NX components, which we simply use unchanged in FreeNX.
Gian Filippo Pinzari developed the core in several years of meticulous work. These are the NX components which his company NoMachine.com offers as Free and Open Source software under the terms of the GPL. The exact same components are used in the commercial products of NoMachine. NoMachine currently offers several commercial NX server variants (Personal, Small, Business and Enterprise) which are based on the free NX libraries.
If NX was already put
under the GPL in March 2003, why did it take more than a year
until a Free version of NX was published?
Kurt Pfeifle: In the last 15 months there have been severe misunderstandings concerning the whole NX software, which several Open Source developers considered "non-Free" just because NoMachine also based its commercial products on top of it.
Without taking a deeper look, they rejected NX as "practically unusable, if only the libraries are released under the GPL while the NoMachine NX server remains proprietary". These biases simply overlooked that NoMachine had shipped a command-line tool, including its source code, almost from the beginning, which allowed everyone interested to build a completely working NX tunnel.
Fabian Franz: I first saw the commercial NX version with my own eyes at CeBIT 2004, when Kurt showed it to me. I was immediately impressed. But even when I saw Kurt's very simple "nxtunnel" shell script (which didn't include any "server" functionality, but merely provided a peer-to-peer NX proxy tunnel), I didn't instantly start to write the FreeNX code. First my laptop display had to break and die before NX became a personal necessity to me. With a broken display but no money for a quick repair, the only way out was to access the machine via NX. That was the final kick to start coding on FreeNX…
How does NX compare to the methods used up to now?
Fabian Franz: What do you mean by "methods used up to now"? VNC? Citrix MetaFrame? Remote Desktop? PC Anywhere? Tarantella?
In comparison to all of those, NX performs better while being cheaper. Additionally, it is future-proof, since the source code is freely available. And it's much more flexible, since it covers several platforms.
Kurt Pfeifle: If you meant to compare FreeNX and NoMachine's NX product: the performance is no different. On the other hand, you won't get a support contract for FreeNX from us anytime soon. For that, you should contact NoMachine.
Or, if you need a new client platform (e.g. Symbian), you won't find it here. And if you need extended administration tools with a graphical user interface or a web interface: ask NoMachine. Or find a developer who develops that kind of thing for you…
NX was published by NoMachine more than a year ago, but only with your presentation at LinuxTag did the technology become visible to the broad public. How is NoMachine involved in the KDE/Debian integration?
Fabian Franz: NoMachine totally supports our activities. Gian Filippo Pinzari keeps us informed about planned NX protocol changes and makes us aware of bugs in our implementation.
NoMachine values the compatibility between our free and their commercial version just as much as we do. Users should be able to use the FreeNX/KDE client to connect to a commercial NX server, in the same way that NoMachine NX clients on all operating systems should be able to access the KDE/Knoppix FreeNX server.
Kurt Pfeifle: NoMachine committed themselves to using the exact same libraries in their commercial products as they release under the GPL, not different or "improved" variants. In my opinion, this fundamentally distinguishes their business model from those of CodeWeavers/WINE or artofcode/Ghostscript.
What possibilities arise for developers from the implementation of NX?
Fabian Franz: Our implementation was intentionally kept simple. It's a simple Bash script…
You are surprised? Yes, really: the FreeNX server is a Bash script which glues the GPL library and executable components of NX together into a working whole. All that stuff had existed, untouched, for 15 months.
The fact that it is Bash means that every Linux developer can fix errors in our FreeNX server. 😉
Kurt Pfeifle: I was merely a mentor for the FreeNX development, and I do the documentation. But I can confirm: Fabian isn't lying… 😉
FreeNX consists of less than 500 lines of Bash code (in addition to the NoMachine NX source code parts, which are under the GPL).
Fabian did the implementation of the FreeNX server all by himself. First of all, Fabian is a true Bash wizard.
Secondly, this implementation should prove how "complete" the GPL components of NX have already been for 15 months.
Thirdly, it should prove wrong all the sceptics among the FOSS developers who ignored NX and claimed a free NX implementation would be "too difficult" without looking at the NX source at all.
And last but not least, we practically don't lose any speed by using shell scripts, since the primary work is done by the precompiled GPL NX components.
Fabian Franz: Should I reveal a little secret? The first working version of FreeNX was the result of merely one night of programming, and the source code was only 180 lines… 😉
The GPL components of NX are really complete and not just a "crippled decoy", as many predicted prematurely after looking at the NoMachine website, but not at the NX source code.
The Open Source community should really be thankful to NoMachine for this great gift!
Kurt Pfeifle: Developers can use NX and FreeNX on many occasions, just like every end user. We hope that FreeNX will also attract developers to NX core development. There's a lot of amazing potential.
How is the data stream of an NX connection protected, especially when operating over open networks?
Kurt Pfeifle: Using SSH.
The connection initiation always uses SSH. No NX server can work without an SSH daemon. After authentication, you are free to decide whether you want encryption or not. Disabling session encryption can help older CPUs on NX clients, but should only be done within self-contained, firewall-protected LANs.
Fabian Franz: An NX server installation is always as secure as the respective SSH installation. NX does not run a separate daemon on a port of its own but uses the SSH daemon for connections.
How can NX be used in large installations with a high number of clients? Where are the limits?
Kurt Pfeifle: Every KDE fullscreen session takes about 40 kbit/s of bandwidth to work fluently. This of course also allows using non-KDE applications inside the session, like OpenOffice, Mozilla or Acrobat Reader.
Per running KDE user session, an NX server takes about 40 MB of RAM and 100 MHz of CPU. A current standard PC as sold today, with 1 GB of RAM and a 3 GHz CPU, should allow 25 parallel sessions without any problems. It would probably get flaky at 35 parallel sessions.
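As a rough sanity check of these figures, here is a back-of-the-envelope sizing calculation in shell. The per-session costs (40 kbit/s, 40 MB RAM, ~100 MHz CPU) are Kurt's estimates quoted above; the server specs and the uplink speed are illustrative assumptions, not measurements:

```shell
# Back-of-the-envelope NX server sizing, using the per-session figures
# quoted above: ~40 kbit/s bandwidth, ~40 MB RAM, ~100 MHz CPU each.
ram_mb=1024        # server RAM (1 GB, as in the interview)
cpu_mhz=3000       # server CPU (3 GHz, as in the interview)
link_kbit=2048     # uplink, e.g. a 2 Mbit/s leased line (assumed)

by_ram=$(( ram_mb / 40 ))      # sessions limited by RAM
by_cpu=$(( cpu_mhz / 100 ))    # sessions limited by CPU
by_net=$(( link_kbit / 40 ))   # sessions limited by bandwidth

min=$by_ram
[ "$by_cpu" -lt "$min" ] && min=$by_cpu
[ "$by_net" -lt "$min" ] && min=$by_net
echo "RAM: $by_ram  CPU: $by_cpu  NET: $by_net  -> approx. $min sessions"
```

With these numbers RAM is the bottleneck, and the result (about 25 parallel sessions) matches the estimate given in the interview.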
Fabian Franz: Since NX sessions can be distributed among application servers on multiple nodes, one could imagine a blade center, as offered by HP or IBM, allowing several hundred parallel sessions.
In any case, an NX application server pushes past the limits where a Citrix MetaFrame server collapses.
Is a free Windows client planned, based on the NoMachine code, or is that left for others to do?
Fabian Franz: A free Windows client is not yet planned. That is left as a task for others. As soon as KDE works flawlessly on Windows, the current kNX client will automatically work on Windows too, without any additional effort.
The NoMachine client for Windows is already available free of charge today, and it is even freely redistributable… So there is not very much interest or incentive. By the way, free NX clients are also available from NoMachine for Linux, Solaris, Mac OS X, the Zaurus, the iPAQ, ThinStation and other operating systems. Those are "free as in beer".
Kurt Pfeifle: The interesting aspect of the KDE client for NX is the goal of integrating it into KDE as tightly as possible, thus using the full power of KDE's superior internal architecture. That includes the implementation of an "nx://" KIO slave and an NX KPart, as well as storing NX passwords in KWallet.
With the opportunity to test-drive Linux without installing it to the hard disk, Knoppix Live CDs have certainly contributed very much to Linux's popularity and propagation. Broad availability of NX should make it even easier to dive into Linux as a desktop and workstation operating system. Will we be able to get a taste of brand-new KDE features via NX?
Kurt Pfeifle: Yes.
And that's possible already today. For several months I have been "abusing" a root server from 1&1 for nightly builds of the most recent KDE CVS HEAD branch, which is rapidly approaching KDE 3.3, using an automatic build script.
The KDE project will use the NX technology more broadly after aKademy, to foster and speed up its own development.
Fabian Franz: You have to understand that KDE consists of more than 1000 more or less active contributors. Only about half of them are "technically inclined" C++ programmers. A huge and also crucial part of the work is done by artists, translators, documentation writers and beta testers, who can't necessarily program.
These non-technical contributors don't tend to use bleeding-edge versions of KDE for their daily work. Neither do they rebuild their own KDE from the HEAD branch of the KDE CVS repository on a weekly basis. They tend to wait for official releases and don't install them until their favourite distribution ships precompiled update packages.
With every KDE release cycle in the past, this resulted in a big gap: documentation authors, translators and artists lagged months behind the C++ programmers' source code with their work.
Kurt Pfeifle: NX can change that. An automatic script builds a fresh KDE HEAD every night, supervised by a group of experienced developers. The result can be tested, used, translated, documented, polished, bug-fixed (or even rejected and reverted) remotely by hundreds of KDE activists "in real time", very early in each development cycle.
NX even allows installing several KDE versions in parallel in a conflict-free manner. So in the future we will have a reference installation available to verify bugs as well as report and fix them. This can speed up the process of bug-fixing tremendously, making it a lot more efficient.
Please keep in mind that it can be very time-consuming to start a system with KDE 3.2.3 just to verify or investigate a small bug, since many developers actually run KDE HEAD for their daily work.
We will set up one or more FreeNX servers on strong hardware, permanently accessible to all KDE developers and proven contributors. This includes KDE HEAD as well as the last stable release. Especially the contributors of the new KDE Quality Team will belong to the privileged circle of users.
Although we are already in contact with potential hardware and bandwidth sponsors, the talks are not finished. Whoever wants to support us can still jump aboard… 😉
On first sight, a broad integration of NX could surely lead to new opportunities for server-based desktops in enterprise, government, public administration and schools. What is the potential of NX here for the next few years?
Kurt Pfeifle: NX will change the way we think of "network computing" entirely. A lot of different use cases in a dozen different areas come to mind…
Fabian Franz: First of all, NX will give a new boost to server-based computing, not only for classical server applications, but especially for the personal desktop (though without making the "personal computer" superfluous and without degrading everything to small diskless "thin clients").
NX-driven application servers are the keyword here. They'll not only be attractive to companies, government agencies and schools: if the Smith family owns a root server, they can easily turn it into an NX application server, with maintenance, backups, software updates, etc. handled by their service provider.
In the future, "root servers" will be leased not only with Apache and PureFTPd, but also with (Free)NX.
Kurt Pfeifle: Secondly, NX allows completely different hardware components to interact: a Siemens mobile phone and a Sharp PDA with a Fujitsu mainframe or an AMD Athlon cluster; an Apple iBook with an IBM zSeries; Sony PlayStations with Solaris Enterprise servers…
Fabian Franz: Thirdly, NX melts down the frontiers between operating systems. It will become unimportant which system I am currently using: Linux, Windows, Mac OS X, Solaris, a Zaurus, or a Knoppix booted from a USB stick. No matter which application or desktop on which operating system I want to access (a complete KDE, OpenOffice, KMail, or Windows with Photoshop): the only worry is to have a working network. NX takes care of the access across platforms.
Kurt Pfeifle: Fourthly, NX raises the importance of software compared to hardware by virtualizing servers and applications.
Fabian Franz: Fifthly, NX will make the increasing number of wireless networks (Bluetooth, GPRS, GSM, UMTS…) much more useful in the next few years. The buzzword "ubiquitous and pervasive computing" will only come true with NX. And IBM will probably be very surprised about a new meaning of "grid computing".
Kurt Pfeifle: Sixthly, NX will help to create a new kind of World Wide Web. Where up to now you had to use static, "thin" and poorly performing HTML pages, one can now use dynamic and "rich" applications in both server-centric and peer-to-peer environments.
Fabian Franz: Seventhly, NX allows better and more efficient use of hardware. "Big irons" can serve as application servers with many parallel user sessions, while "weak CPUs" can serve as thin clients to which NX gives access to almost infinite CPU power from anywhere.
Kurt Pfeifle: Eighthly, NX will create a new migration wave to Linux, because with NX a migration plan can be separated into manageable steps. Administered in small doses, the Linux medicine, often considered bitter, will be much easier to take: NX can bring single applications like KMail or Kontact onto a Windows screen. In reality, the applications run on remote Linux computers, completely transparently to the users. This reduces today's permanent administration effort, like backups or virus protection, by orders of magnitude; and it is the first step of a complete migration.
Let me give just one example: the project "bring KMail/Kontact onto the Windows screen using NX" holds even more hidden possibilities. Two years ago, "Project Aegypten" was started as contract work for the BSI ("Bundesamt für Sicherheit in der Informationstechnik", the German federal agency for security in information technology). It is now completed and provides secure email communication using well-established encryption standards. According to my information, fewer than 1000 federal government employees currently use KMail, simply because no more desktops have been converted to Linux/KDE yet. With the help of NX, one could offer KMail access (refined with the Project Aegypten algorithms) to every employee in the public sector. Almost instantly, all problems emanating from Windows viruses and worms would be eliminated if this quite easy transition came true.
FreeNX and NX can not only help with the first step of a migration to Free Software. They can also support projects that are postponed now only because 90% of the steps could be done safely, but the last 10% couldn't be done so far. NX builds a "bridge" back to legacy, yet important, special-purpose applications. This is especially important for those public sector departments and offices that haven't started a migration yet because they were afraid of the last 10% of non-ported applications. NX can bring applications that run on Windows Terminal Servers (WTS) or Windows XP Professional workstations to the Linux/KDE desktop.
Fabian Franz: Ninth… I guess we won't get the dozen full today 😉
Kurt Pfeifle: Yeah, OK, let's not shoot off all our ammunition and keep some powder dry 😉
When will we see the
first stable release 1.0 of FreeNX? What features will be
included?
Fabian Franz: Such a thing as version 1.0 will "never" exist… 😉
This is because Knoppix 3.6 already includes a snapshot of the 1.4 release: the versioning scheme follows that of NX, to keep maintenance in sync with NoMachine. That makes things easier for users and administrators who might switch back and forth between the GPL NX and the commercial version from NoMachine, and who want to use both on different hosts while keeping mutual interoperation intact.
Kurt Pfeifle: With NX 1.4.0, NoMachine introduces two important features:
- printing to the local printer from remote applications, and
- session suspending, session reactivation and session migration.
The new features are currently under development API-wise.
They are especially important for professional use cases…
Although things are still "in flux" there, we will release a "Snapshot 4" of the NX 1.4-based FreeNX server during aKademy, ready for a broad beta test. At least Debian and SUSE users should have fun with the "aKademy" edition of FreeNX.
The commercial NX 1.4.0 will probably be released in its final shape by NoMachine during September. FreeNX will update and release as soon as possible after that, too. The FreeNX snapshot will be available for download after aKademy.
Compared to X11 over TCP/IP, NX is a big leap forward, and a good one to see in Open Source software, but not revolutionary.
Just two days ago I was in a house in an isolated area of North Wales, and using my graphite iBook and a dial-up internet connection was able to administer my WinXP desktop (300 miles away in England) using RDP – Remote Desktop Protocol from Microsoft. A 256-colour 640×480 screen was surprisingly responsive over a 41kb/s connection.
I can do the same using Apple X11 to administer my AIX server at work, albeit somewhat more slowly.
RDP is limited in some ways (only one login at a time to XP), but you can buy Terminal Server which can handle a great many, or a third-party implementation called WinConnect XP Server for the same purpose.
So.. NX is good, but no revolution.
It’s very revolutionary.
NX brings the price down so that it is reasonable for someone with a small or medium business to wrap their financials around – and more importantly, allows them to use the apps that Linux/BSD/etc. offers from any interface at all, using very little bandwidth.
Citrix and RDP are great, but horrendously expensive to buy and support. We are a Citrix shop and I’m here to tell you – as a consultant we keep asking ourselves when we are going to make money since the support burden can be brutal.
“Welcome to ThinSoft! From business to education, our mission is to assist you in your challenge of reducing Total Cost of Ownership (TCO) for connectivity, mobility, and access to data. Through advanced modification of RDP, Microsoft® Windows technology, and Terminal Services, our Linux, Solaris, and Windows Thin Client software products enable transparent application access without the need of making a significant hardware investment.”
http://www.thinsoftinc.com/
Is it possible to detach and reattach a running session with NX, as with screen?
Yes.
ThinSoft looks nice (thanks for the link); however, it doesn't exactly beat "free", nor does it compare with the NX Enterprise server pricing (as far as I can tell).
And of course, it can't get you around the licensing of Windows seats. Microsoft made sure of that (or there's a lawsuit pending, you choose).
I could run NX client on windows and access NX server running on the same machine via http://www.colinux.org … mmmmmmm !
Sorry if I’m being dense, but there’s not a lot of discussion about what NX does. I don’t use Citrix, Remote Desktop, or even VNC, but I do occasionally remote some X11 windows.
What’s the simple explanation for how NX changes things?
It still uses the X11 protocol. It tunnels over SSH for encryption and adds aggressive compression, so it can be used over slow links; you cannot use pure X11 over a modem. NX also provides sessions, so you can close a session and resume it later. You cannot do this with regular X11.
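For readers who only know plain remote X11, the baseline that NX improves on can be sketched with standard OpenSSH commands. The host and user names are placeholders, so the snippet only prints the commands rather than running them:

```shell
# Baseline that NX improves upon: X11 forwarding through OpenSSH.
# "user@remotehost" is a placeholder; substitute a real account.
plain='ssh -X user@remotehost kmail'       # forward a single X11 app
packed='ssh -X -C user@remotehost kmail'   # same, with SSH zlib compression
echo "plain:      $plain"
echo "compressed: $packed"
# Even with -C, every X11 round trip still crosses the link, which is why
# plain forwarding crawls over a modem. NX instead runs a caching,
# differentially-compressing proxy pair at both ends, and can suspend a
# session and resume it later, which "ssh -X" cannot do.
```
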
Well, let’s just say I understand a little better. I’m typing this from within Mozilla 1.6, using a GNOME desktop on the NoMachine German testdrive server.
The experience so far is that it is not too shabby for a remote application. My machine is a 5-year-old Athlon 500, so I don’t expect things to fly.
Now, I wonder if I can use the free client to set up a remote desktop for my friends and family to use? That would be rather neat.
Has anyone compared it to DXPC?
From what I can tell, NX and DXPC have the exact same goals: not only to reduce data size, but to reduce the number of messages sent. They're both heavyweight proxies, from what I understand.
Is this more than LBX over SSH?
Does this compete with Tarantella (http://www.tarantella.com)?
I started experimenting with a VNC server and client.
I went to school (Windows) and logged into my home computer (PCLinuxOS) over the internet; alas, I didn't bring my headphones to test sound.
TightVNC says it can be used over slow modem connections, hence the "tight". I have cable, and school is high speed.
What's the difference between this and VNC? I'm pretty sure I can use VNC on Windows, but not NX.
I'm pretty sure I can use a VNC server to control a Windows machine.
NX seems to lack this feature.
Excuse me if I’m sceptical of something that will claim to change the world.
mikesum32:
“I’m pretty sure I can use VNC server and control a windows machine.
NX seems to lack this feature.”
It's not lacking a feature at all; this is simply the *design*.
We’re talking about a “Citrix-Killer” that is also touted to be a “Windows-killer.”
It’s not about simply remotely accessing a product, ala VNC, but creating a true “Thin Client” environment.
Imagine either a school or business with an Athlon 2400+ and 1Gb RAM, as well as a pile of useless 486s laying about that they picked up for 2 cents each. Whack a network card in them and voila – you have a thin client – able to run all of the latest productivity apps. Why? Because it’s all on the server! And yes, it scales. Think big!
Unlike RDP and VNC, the performance is great! It’s actually usable and responsive over dialup – something that I cannot say about RDP or VNC! Ick!
No, it won’t be a games machine – but it’s not trying to be. Nor will it play your Windows apps – because it’s not trying to!
Open your mind to the possibilities and read the article. You’ll see that we’re heading for a big revolution in businesses and schools.
Even if the FUD mongers and trolls are successful and this only helps 20% of businesses, that’s a pretty massive task, no?
Yes, all of this is possible with other technologies; I’ve been running a mixed bag of Citrix (for the one Windows application that cannot be done without – the accounting package, whose use is non-negotiable unfortunately) and LTSP for the rest of the desktop.
In short – it rocks! My admin time has dropped to non-existent (no more quarterly Windows re-installs! No more trojan/virus worries when people are surfing sites that they shouldn't…)
By comparison, though, there is latency in the LTSP environment (It’s using X), where as NX is far more responsive.
NoMachine’s NX Client supports NX, Microsoft RDP, Citrix ICA and VNC remote sessions according to http://www.nomachine.com/products.php
The FreeNX server afaik only supports NX, but anything else wouldn’t make much sense anyway.
NX is really good, but in terms of technology it is not revolutionary; it's just an optimized implementation of something that already exists… nothing new.
We just finished an implementation of this in a fairly good-sized school. We modified the system to allow authentication against their Windows Active Directory. One Mandrake Linux server, with the NoMachine client on 120+ Windows machines, including one computer lab.
The response has already been outstanding, and the next step is to replace their Windows file server with a Linux file server, allowing the users to have the same home directory in Linux and Windows.
It is the NoMachine technology that is allowing us to introduce these people to Linux, and the same technology will allow us to migrate them to Linux in a year.
But rather the price and implementation, it seems.
Mostly, yes. The technology is not 100% new and super, but the fact that it is very optimized and cheap finally makes “Thin Clients” a real, feasible idea. That is the revolution.
I'm just reminded of the Star Trek movies…
NX Transwarp enabled desktop… which might suck, I dunno.
quotes not required, but yeah.. I’ll probably see them anyhow…
I wouldn't think so.
It is good to see that VNC/RDP/Citrix etc. get something OSS to compete with, and since it already seems to be better and cheaper, we could indeed expect a serious change in how many people look at remote desktops.
But when, in the article, they start to count use cases, they go a little over the edge. It simply won't happen for that many.
Now, I can say that I thank NoMachine for releasing this product under the GPL, and the idea of integrating tightly with KDE in the kNX client is also very welcome.
Good work to all of those who have been part of this!
Yeah, thin clients, but you need a killer server to run one of these; plus, if your server goes down, nobody does any work! A single point of failure isn't always a good idea. Can these NX servers be clustered and load-balanced? Otherwise downtime could be a problem.
Thin clients don't make sense anymore! They did back when you were talking about 300 MHz PCs with 64 MB RAM, but not now, when you can get a 2.8 GHz machine with 256 MB for about €550 including VAT and a monitor… What's the point?
Most banks I've worked for only do this because they have old PCs and don't want to upgrade them. If a company has already invested in PC hardware in the last three years, most will only use this type of server to manage "application installations", which is MS Windows' only real Achilles heel: trying to install an app onto more than 100 Windows desktops becomes a serious pain in the ass. That's where NX could have a role. But the corporate world has moved to Java, and some use .NET, because of distributed applications or the web as the interface, though IE6 security holes are holding them back. It's really only legacy applications that make sense on this type of platform. Citrix and NX are middleware tools that will become obsolete once all the old apps are replaced by the likes of the Java or .NET frameworks. Citrix was big back in 1998-99! It's now 2004! Internet/intranet development and deployment has moved on since then.
Imagine either a school or business with an Athlon 2400+ and 1Gb RAM, as well as a pile of useless 486s laying about that they picked up for 2 cents each. Whack a network card in them and voila – you have a thin client – able to run all of the latest productivity apps. Why? Because it’s all on the server! And yes, it scales. Think big!
Nothing they couldn’t do already with Citrix or Terminal Server
Unlike RDP and VNC, the performance is great! It’s actually usable and responsive over dialup – something that I cannot say about RDP or VNC! Ick!
RDP (Windows TS) is quite usable over a 56k modem. VNC isn’t, but it’s not exactly designed for efficiency so that’s hardly surprising (nor is VNC really anything like NX, Citrix or RDP, so I’m not sure why it’s even in the discussion).
Open your mind to the possibilities and read the article. You’ll see that we’re heading for a big revolution in businesses and schools.
It’s hardly a revolution when schools and business have been able to do it for years already.
I've been using remote control for a very, very long time now, from PC Anywhere to rdesktop. But I think this kind of remote control is in no way as useful as the excellent X11 client-server interface. With X11 I run KMail on my laptop, xMule on a server and Konqueror on the webserver, and all those applications show up on my desktop computer, so I can use the same keyboard to control all these applications across all these different computers.
I just ssh to the computer, and whenever I need to show something graphical I run the GUI and X11 does the rest.
I wouldn't be able to use KMail on another computer if I had to remotely control a whole session.
Can you do it with NX? Can you nxssh someplace and run X11 programs from the console? Or ssh there and then "nx kmail" it?
Hope someone can tell me, so I can decide if NX is so revolutionary.
Thanks
It’s very revolutionary.
A marginally different implementation of something that has been available on other platforms, and in only a slightly less featureful form on Unix, for years, is not in any way "revolutionary". It barely qualifies as "evolutionary".
NX brings the price down so that it is reasonable for someone with a small or medium business to wrap their financials around […]
Windows TS is not particularly expensive – about AU$1000 for Windows 2003 Server plus about $150AU per client. That’s chicken feed.
Citrix is certainly more expensive, but I’m also willing to bet it does more than this product as well.
[…] – and more importantly, allows them to use the apps that Linux/BSD/etc. offers from any interface at all, using very little bandwidth.
Why (not to mention how) is this “revolutionary” ?
The only big differences between this and regular X are a) it’s low-bandwidth and b) it runs entire desktops (and keeps them running after disconnect). Neither of those would really be showstoppers for someone who _needed_ unix apps.
People have had the ability to access “the apps that Linux/BSD/etc. offers” for years via thin clients already.
Citrix and RDP are great, but horrendously expensive to buy and support. We are a Citrix shop and I’m here to tell you – as a consultant we keep asking ourselves when we are going to make money since the support burden can be brutal.
Where do most of your support expenses go ? Why do you think they would be significantly reduced with this product ?
We just got done with an implementation of this in a fairly good-sized school. We have modified the system to allow authentication against their Windows Active Directory: one Mandrake Linux server, with the NoMachine client on 120+ Windows machines, including one computer lab.
Out of curiosity, what’s the server specs, how many concurrent clients are you typically hosting (with what sort of workload) and what is the load on the server ?
Pax Vitae wrote:
Yeah thin clients, but you need a killer server to run one of these,
False. A 2.4GHz server with 1GB of RAM will happily serve up to 25 simultaneous users in a business environment. More if you tweak…
plus if your server goes down, nobody does any work!
As opposed to what? Peer-to-peer? No thanks. What about client-server? Same problem, mate.
The key to a good network design is redundancy.
Single point of failure isn’t always a good idea. Can these NX servers be Clustered & Load Balanced? Otherwise downtime could be a problem.
Yes, they can be load balanced and clustered. Did you RTFA or spend any time at nomachine.com in the last 12 months?
As to downtime, that can happen in any network topology. Say it with me “redundancy.”
Thin clients don't make sense anymore! They did back when you were talking about 300MHz PCs with 64MB RAM, but not now when you can get a 2.8GHz box with 256MB for about €550 including VAT and a monitor… What's the point!
You’re not serious, are you?
For starters, I’d think twice about a cheap’n’nasty for production use.
Then there is performance; if you've only got 40kbps going around the network per client, you're going to have a much less congested network, no? One of the most popular accounting apps down this way sends tens of megabytes around the network all of the time (very bad design!) and performance with "traditional" peer-to-peer or client/server environments is up the proverbial creek without means of propulsion.
The same app under a thin client works like a dream!
Then there is security; (a more generalised statement in this case, but): Things can be locked down a lot better than on a Windows box. My lusers cannot install WinMX or whatever piracy software they want to. It just doesn’t happen!
Redundancy: Yes, with load-balanced servers, your “eggs-in-one-basket-duplicated” approach (commonly referred to as “centralised administration,” and even strongly touted by Microsoft) is the ultimate way to go:
"Workstation/client goes down? No worries. Here's a backup for just this situation. I'll just unplug this box, plug this one in and away you go. Everything is just the way you left it!"
And your "What's the point!" (gotta love the use of an exclamation point to pose a question) is a rather shortsighted perspective.
A high-end server with high-quality components, including dual Xeons, hot-swap SCSI RAID, hot-swap redundant PSUs and Win2003 Server (which you can return for a nice bonus, thank you very much) will cost >$3500AUD. Hell, because you're worried about redundancy, let's throw in two of them! (Total: $7000AUD)
Now, for workstations I'm going to use all of the old doorstops around here and add some new monitors, keyboards, etc. Let's say 15″ LCDs at ~$400AUD each. (Total client cost: $450AUD each.)
Now, ignoring network switches, cabling etc. (which you have to do in both environments – hell, at 40kbps per client, wireless is even starting to look usable!), you're looking at $17,000AUD for this as a 25-client network. And with dual Xeons, you're not looking at a very high load, so there is plenty of room for expansion (and you probably won't need the backup server, either. And let's not forget the Win2003 Server cashback that I can take home…)
And that sort of grunt would easily allow you to throw another 25 clients (another $10K) in to get 50 clients for $27K.
(And I’m not going to mention that many people get by with a $1,200AUD P4 2.8Ghz to achieve the first 25 clients… Or the massive savings if you went with CRT monitors)
Now let’s look at your fat clients… I’ll assume that you want to use as “cheap-as-possible” (ie, somehow your fears of downtime gets thrown out of the window.) Now, a new machine with budget specs will cost $999AUD including CRT monitor (hey, it saves money. Who cares about desk space?)
Now, 25 * $999… Well, when I went to school, that worked out to $24,975. Now, whilst I might have found integral calculus quite irritating (or maybe it was the teacher I couldn't stand – no matter), I was under the impression that $17,000 was less than $24,975.
I could, of course, be wrong…
Hmmmm…. I wonder what 50 * $999 would be? I don’t have enough fingers and toes to work that one out, sorry.
And did I forget one of the perks? What if the clients were turned into laptops, just for the hell of it? Sure, that’d cost more, but with it being all-business when connected using the thin-client at work, the employees could have personal-only use at home.
And the IT staff could even have an rsync of vital backups for that last bit of paranoia about the off-site-backup systems.
Most banks I've worked for only do this because they have old PCs and don't want to upgrade them.
Really? Wow, I was under the impression that they did it so that they could work with their mainframes and have redundancy, as well as central administration of applications. At least, that’s how the banks work here.
Could you be making unfounded statements again? Nah, didn’t think so…
Citrix / NX are middleware tools that will become obsolete once all the old apps are replaced by the likes of Java or .NET frameworks. Citrix was big back in 1998-99! It's now 2004! Internet / intranet development & deployment has moved on since then.
I choose to disbelieve. Yes, Java is a great idea. *coff*.
But to say that the thin-clients are obsolete? As I sit here at home, working from my dialup (because Telstra still hasn’t added any more than the initial 400 DSL ports to the exchange and I’m on the waiting list. Bah!) I’m happily working in the accounting system and accessing my work-apps. All of them remotely, all of them quick and responsive.
The only thing that Thin-client sucks at is 3D games. Since I’m not a gamer, I couldn’t care less.
Oh, and FWIW, the X-Box makes a great, compact thin client.
Thin clients don't make sense anymore! They did back when you were talking about 300MHz PCs with 64MB RAM, but not now when you can get a 2.8GHz box with 256MB for about €550 including VAT and a monitor… What's the point!
There are no less than TWO points:
1 – Reliability and maintenance costs: thin clients cost around $250-300, but they have NO moving parts, and they generally have an expected life of around 10 years. In those 10 years, how much money will you spend on spare parts and replacement machines with those 2.8GHz PCs? Not to mention the reduced admin costs: I've done tech support, and I can tell you that I have wished many times I had thin clients instead of clunky Windows PCs.
2 – Price: maybe using recycled Pentiums and 486s doesn't make much sense in rich corporate America, but keep in mind that there are a billion or more people who would consider those $500-600 per machine a HUGE expense. Thin clients on old machines require only a couple of off-the-shelf powerful PCs to get a very fast server with 1-2GB of RAM and redundancy (or at least a failover machine) for dozens of clients.
Oh, almost forgot: you mention the single-point-of-failure problem… put TWO servers in a redundant setup and you still save MUCH money(*). Apart from the point made in (1), think of the cost of effectively backing up 100 clients. With a terminal server you back up the server and you are done.
(*) Search the net for stories of the City of Largo (Florida) for the details of an existing and successful installation of this kind.
Yeah thin clients, but you need a killer server to run one of these, […]
You shouldn't. Modest hardware by today's standards (2.4GHz P4, 1GB RAM, IDE RAID) can easily handle 20+ clients running a typical application mix (Outlook, Word, IE) using Windows 2003. I'd certainly hope this product is _at least_ that efficient.
Thin clients don't make sense anymore! They did back when you were talking about 300MHz PCs with 64MB RAM, but not now when you can get a 2.8GHz box with 256MB for about €550 including VAT and a monitor… What's the point!
User management, SOE, lowering support costs, etc.
Try to install an app onto more than 100 Windows desktops and it becomes a serious pain in the ass.
Assuming the app is any good and you have a decent setup (Active Directory, Windows 2000/XP desktops, etc), it’s trivially easy.
Citrix / NX are middleware tools that will become obsolete once all the old apps are replaced by the likes of Java or .NET frameworks. Citrix was big back in 1998-99! It's now 2004! Internet / intranet development & deployment has moved on since then.
No, they won’t. The advantage of thin-client computing is in reducing user management and support costs. It’s a hell of a lot easier (ie: cheaper) to manage a few hundred users and a couple of servers in one location than it is to manage a few hundred users and their PCs spread out in twenty locations around the country.
I’ve been using remote control for a very,very long time now. From PC Anywhere to rdesktop. But I think this kind of remote control is in no way as useful as the excellent X11 Xclient-Xserver interface. With X11 I run Kmail in my laptop, xmule in a server, konqueror at the webserver and all those aplications show in my desktop computer, so I can use the same keyboard to control all these aplications over all these different computers.
The major disadvantage of X is that it doesn't keep state. So if your network connection goes down, your applications all quit and anything you were working on is lost. You also can't move to another client without quitting everything and reopening it, or leave a session running at work and reconnect to it at home.
I wouldn't be able to use KMail on another computer if I had to remotely control a whole session.
The real question is whether or not doing that is solving the real problem you have (whatever that may be).
I hope someone can tell me, so I can decide whether NX really is revolutionary.
It’s not revolutionary in the slightest. It’s basically just Citrix/Windows Terminal Services for unix.
{re Schools with thin client, etc}
Nothing they couldn’t do already with Citrix or Terminal Server
I don’t recall saying that they couldn’t. In fact, I’ve been using Citrix for years, though I detested Winframe.
The difference being that FreeNX is as responsive, if not better, than ICA (and far better than RDP) for a much nicer license (free-as-in-speech & free-as-in-beer).
RDP (Windows TS) is quite usable over a 56k modem. VNC isn’t, but it’s not exactly designed for efficiency so that’s hardly surprising (nor is VNC really anything like NX, Citrix or RDP, so I’m not sure why it’s even in the discussion).
I have had poor results out of RDP over dial-up – even directly to the server. ICA is far superior in this regard.
I didn’t raise VNC as a solution, I was merely responding to someone’s comments about using VNC on Windows. Yes, they are designed for two different purposes, and I would only keep VNC as a “last-hope” when it comes to constant remote application support.
It’s hardly a revolution when schools and business have been able to do it for years already.
Again, I was referring to the infrastructure costs. So far most schools, at least in this region, have been bullied into being Microsoft shops and thus part of the vicious upgrade cycle. IT budgets are tight in schools, so anything that can give them more-for-less is a good thing.
Yes, K12LTSP has been around for years and has done a fantastic job. It’s still sluggish compared to NX though. (Try one side-by-side).
The revolution will be in the sense of schools and businesses “revolting against their MS oppressors.”
NX itself is more evolutionary than revolutionary.
It’s not revolutionary in the slightest. It’s basically just Citrix/Windows Terminal Services for unix.
Persackily!
But, it’s GPL!
I don’t recall saying that they couldn’t. In fact, I’ve been using Citrix for years, though I detested Winframe.
Maybe it’s just me, but I wouldn’t call something that people have been able to do for years “revolutionary” .
I have had poor results out of RDP over dial-up – even directly to the server. ICA is far superior in this regard.
I've not used ICA over dialup; however, I do regularly use RDP over a dialup connection, and with suitable tweaking (drop the colours, turn off graphical effects, etc.) I find it quite usable. We've also got a handful of people here who use it over 56k dialup, and they consider it quite usable as well (for Outlook, Word, etc).
Again, I was referring to the infrastructure costs. So far most schools, at least in this region, have been bullied into being Microsoft shops and thus part of the vicious upgrade cycle.
Bullied ? Sounds like you need to sic the ACCC on them.
IT budgets are tight in schools, so anything that can give them more-for-less is a good thing.
True; however, the costs for Windows TS aren't huge. It's much cheaper than Citrix, but then Citrix is damn expensive. Windows TS runs about $1000 for Windows 2003 plus maybe $150 for each client machine – and that's retail, so I imagine a school could get it much cheaper. The Enterprise version of NX is 494 euro = AU$851 (probably +GST) at the "introductory" price and 794 euro = AU$1370 normally – although it does of course have the advantage of no CAL costs.
It’s certainly competitive. I think you’d need to do some careful number crunching for any specific environment before ruling one or the other a winner, however.
The revolution will be in the sense of schools and businesses “revolting against their MS oppressors.”
Not everyone using Microsoft software is “oppressed”.
Add a MOSIX server cluster and a carefully structured network. An interesting blend that MS and Citrix won't quite match.
I have been using NX for years (on a single-user license, which is rather restrictive, so I am happy about the FreeNX server).
First of all, if you want to use NX in a corporate environment, get an official enterprise license. You will get support, which the FreeNX developers won't give (they don't want to interfere with NoMachine's business; they have to be thankful to be allowed to use their code at all).
Secondly, NX is excellent, much better than Windows Terminal Server (which is based on an older Citrix version) and probably better than the current Citrix stuff. I don't have any clue how it compares to Tarantella.
I want to give a short comparison between RDP and NX (I cannot speak for Citrix or Tarantella). RDP is basically something like a super-VNC: a system that streams the frame buffer with very high compression to a client and does some message-queue compression to keep the signalling small and increase latency robustness. NX is a different approach, which can only work with X and therefore will only run on X-based servers.
If anybody knows the old LBX project: NX is done by the same developers but years ahead. NX basically acts as a smart protocol proxy between X server and client and does heavy compression on the X protocol. The advantage is:
X itself, although network-centric, clogs a network very swiftly thanks to the high volume of issued commands, given the current state of affairs (with Cairo as a high-level protocol things will get better; there is no discussion about that).
X itself is very problematic with higher latencies like you have in dial-up and WAN networks.
LBX basically sits in between and acts as a protocol proxy/translator which compresses the protocol by translating X command patterns into something smaller.
(For instance, if a font-drawing pattern occurs, the whole draw-line, draw-pixel sequence is converted into a single font command.)
It also adds streaming where feasible, so that in the end a combination of X commands, a high-level protocol and streaming is transferred between client and server.
In addition, there is the option to keep the font handling local (even under Windows).
It definitely is not the best approach; in fact, the best approach would be to clean up the mess which is called the X protocol at the protocol level itself (which hopefully will be possible with Cairo in the long term), but it is the best approach that does not break the X protocol.
If any of the developers read this: I am just a user, but that's basically what I could gather from using it and reading some of the specs. Correct me if I am wrong.
I don't think that producing a search engine in 2004 can be considered revolutionary. Still, Google is revolutionary in its approach, in the way it binds Web resources together to create new applications. NX is revolutionary in the same way. It is not just compression of the X protocol (we needed that to make the rest possible) but a new way of thinking about network computing. NX is a sort of VPN that links the client and the server. It lets the client use the resources of the server (the CPU, the storage) and the server use the resources of the client. It's all about protocols and network daemons. The X server is just one of these network daemons.
Peer-to-peer computing is in TCP/IP by design. NX is an application exploiting it and making it pervasive.
/Gian Filippo.
NX is really good, but in terms of technology it is not revolutionary; it is just an optimized implementation of something that already exists… nothing new.
Not the technology per se is revolutionary, but the important facts are that
1. it's a free and open technology now
2. Linux/OSS will leverage this
How does this compare to RDP, the protocol used by Windows for its terminal services? I use it often over a modem, and it's not blazingly fast, but certainly usable. Why not implement an RDP server for Linux?
– This is old: In the Unix world this has been pretty much available all the time. I don't see that many people in the Unix world having problems that NX would solve. Some edge cases, yes, but big scale – no.
– This is non-revolutionary: To make it revolutionary, it has to affect the Windows world too. Linux on desktops? That's marginal usage. It has to work like Windows Terminal Services to be anything useful outside ICT companies. And even then, if the user needs aren't analyzed properly, you'll generate an incredibly inflexible centralized hell. Admins are happy, but no one else is.
– This is boring: Blazing fast computers are cheap anyway. Who has those "486 computers" bouncing around? Why bother with a centralized system anywhere except the biggest organizations? Shared folders maybe, but a shared terminal server? Boring.
The savings? You can fire some IT support people. That's probably the biggest impact.
NX sounds cool, but it solves problems that almost no one really has.
If it was revolutionary, we would have seen the revolution with PCAnywhere, Citrix, X-Windows and VNC already.
I’d be interested in just ONE believable scenario, where this kind of solution is revolutionary compared to ordinary desktop with client-server apps. And not from the admin point of view, but the user’s point of view. Like closing sales or treating a patient.
I just re-read the comments when the thought hit me:
“why am I even thinking of this product?”
Y-Windows (if ever completed with the specs as they stand now) would beat any remote desktop big time as long as it's not sending images, and when it is, it would be about as good.
Of course Y-Windows would require slightly more than thin clients, but on the other hand the painting of windows etc. goes to the clients (no HDDs required, but a decent graphics card and some memory/CPU could well be needed).
Aww… now I wish the NX people could've helped Y-Windows out with the network streaming part.
Out of curiosity, what’s the server specs, how many concurrent clients are you typically hosting (with what sort of workload) and what is the load on the server?
Oh yes, the server specs required are IMHO greatly exaggerated here also. We are running the school system on a dual Pentium III 800MHz with 1GB of RAM. It carries a steady load of 20+ users and has no issues keeping up.
Yes, it can. From the NX Client you can freely choose the desktop environment you want to start (think X/G/KDM), the desktop’s screen size as well as single apps if you prefer that over having a complete desktop.
At the aKademy, Kurt Pfeifle repeatedly demonstrated the diverse KDE installations he has on a 1&1 dedicated root server (currently an Intel Celeron 2.4GHz, 512MB DDR RAM, 80GB IDE, 80GB backup, 100GB traffic for 69 euro per month; actually not the best offer available in Germany at the moment). It compiles CVS versions of KDE daily while having other, stable versions of KDE installed in parallel. Kurt can access his desktop on that server from any computer with internet access, as long as he has an NX client of some sort (NX server and client are now included in Knoppix, though). As he pointed out in the interview and emphasized at the aKademy meeting, he sees great uses for NX for project contributors like translators and documentation writers, who usually can't afford compiling/running the bleeding edge of the project. The response at the aKademy, after the initial reluctance (just like we see here again), was outstanding, and we can expect tight integration of NX into KDE (where it makes sense) for the above reasons.
Is it possible to use the NX libs to record application demos and tutorials? A video codec based on NX would be really cool.
Currently I’ve got a couple of gigabytes of learning videos mostly for 3D applications. The files are HUGE and the quality is awful.
Don’t know about NX, but you can do it with vnc NOW. Just look for vncrec and similar.
After capture you transcode to DivX and you are done.
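A rough sketch of that recipe; `vncrec` invocation details and the transcoding module and codec names vary between versions and installations, so treat the exact flags below as assumptions:

```shell
# Record a VNC session to vncrec's native format
# (display :1 on localhost is a placeholder).
vncrec -record session.vnc localhost:1

# Play the recording back later:
vncrec -play session.vnc

# Transcode to a DivX/XviD AVI; this assumes a transcode build with a
# vnc import module -- check the modules your installation supports.
transcode -i session.vnc -x vnc -y xvid -o session.avi
```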
> Don't know about NX, but you can do it with vnc NOW. Just look for vncrec and similar.
NX offers far better compression and fewer latency problems than TightVNC and newer VNC versions; it reverts to VNC-like streaming wherever that makes sense and compresses the X protocol for other protocol patterns.
> – This is old: In the Unix world this has been pretty much available all the time. I don't see that many people in the Unix world have problems that NX would solve. Some edge cases yes, but big scale – no.
Yes and no. If you go for plain X, then you'd better be prepared to have a very fat LAN; modern X desktops make remote X not a viable option. It becomes even worse once you go for WAN or DSL; then you run into heavy latency problems, which plain X has trouble handling.
If you really want to do serious terminal services on X with many clients, you have to go for other solutions. Companies like Citrix and Tarantella make a good living off that problem.
NX is another solution for that problem, providing a smart proxy to reduce the latency problems and network clogging.
It is comparable to Citrix and RDP in many regards, and sometimes even better.
(I cannot say anything for Tarantella, though.)
How is this different from LTSP? I have LTSP in use and find the performance great.
Would I get any benefit using this instead? I don’t use it over dial-up so I really don’t need something that is bandwidth friendly.
I don’t use it over dial-up so I really don’t need something that is bandwidth friendly.
I know of a school which runs LTSP on a 10 MBit, non-switched network. Their “collision domain” makes running more than 15 concurrent sessions a big pain.
They pondered upgrading to switched 100MBit ethernet, which would have solved their performance problem but would have involved unbearable costs (given their tight budget) for upgrading network hardware and re-wiring the building.
Now that they have discovered NX, they are sure to be able to run 100 concurrent or more NX sessions on the same, unchanged network….
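The bandwidth arithmetic behind that expectation, taking the roughly 40 kbit/s per NX session mentioned elsewhere in this thread as the working assumption:

```shell
# Shared (non-switched) 10 Mbit/s segment vs. assumed per-session NX bandwidth.
link_kbit=10000          # 10 Mbit/s school network
per_session_kbit=40      # assumed bandwidth of one NX session
echo "Sessions the link could carry: $(( link_kbit / per_session_kbit ))"
# 100 concurrent sessions would use 100 * 40 = 4000 kbit/s,
# well under the 10 Mbit/s link capacity.
```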
Cheers,
Kurt
Has anyone compared it to DXPC?
DXPC is dead. NX is DXPC's legitimate third-generation successor. NX builds on some ideas of the original Differential X Protocol Compressor, but NX is a completely new implementation of the basic idea of putting an X-traffic-compressing proxy system between the X application and the X server. NX also implements a very efficient, new and original compression algorithm for X traffic, developed by Gian Filippo Pinzari. (Compared to generic ZLIB compression, it takes 10% of the CPU to generate a 10-fold increase in compression efficiency.)
BTW, the second-generation successor to DXPC was mlview, also made and released by Gian Filippo Pinzari some years ago, but now unmaintained.
NX is *much* better, *much* faster and much more stable than DXPC and mlview.
Is this more than LBX over SSH?
LBX is dead. Not declared dead by *me*, but declared dead by its parents, Keith Packard and Jim Gettys. See their document LBX Postmortem ( http://keithp.com/~keithp/talks/lbxpost/paper.html ) for details.
Keith and Jim have repeatedly and publicly discouraged any further use of LBX. The reason is simple: LBX implements neither satisfactory security nor satisfactory performance. They instead advocated using "ssh -C -X" (the -C enables ZLIB compression on the ssh link, and the -X turns on X forwarding). But that was before NX became publicly known…
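In practice that advice amounts to the following; "remotehost" and "user" are placeholders, and the config stanza is the persistent equivalent of the command-line flags:

```shell
# One-off: ZLIB-compressed, X11-forwarded session to a remote host.
ssh -C -X user@remotehost

# Persistent equivalent, as an entry in ~/.ssh/config:
#   Host remotehost
#       Compression yes
#       ForwardX11 yes
```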
Cheers,
Kurt
I was referring to recording sessions for educational purposes. I highly doubt you'll record those over a low-speed link, so VNC versus NX really doesn't matter.
I'm pretty sure I can use VNC on Windows, but not NX.
…and I am pretty sure you are pretty wrong.
In fact, NX can tunnel (compress and cache) X11 as well as VNC and RDP traffic.
In fact, you can use VNC with NX on Windows as well as on Solaris, Mac OS X,….(insert all VNC-enabled platforms here).
The remote NX server will proxy the link to any VNC server as requested. The VNC traffic between the remote NX server/proxy and the remote VNC server will be translated into NX traffic (between the remote NX server/proxy and the local NX client/proxy), thus making it automatically benefit from NX compression and NX caching. Overall, for low-bandwidth, high-latency links this will improve the connection efficiency 2- to 10-fold compared to plain TightVNC.
Of course, if you are naive enough to “test” and “compare” this with a local-only setup, like some readers here did (in the worst case on the same (local)host), you will not see much of a difference….
NX is meant for connections over ethernet and other remote links, not for connections over local Unix domain sockets/named pipes….
What’s the simple explanation for how NX changes things?
Traditional X11 remote desktop connections are unbearably slow even over cable modem / ADSL links. You wouldn't expect your secretary to work on such a setup day in, day out from home as a home IT worker.
With NX you can. NX works great and fast even over ISDN or modem links, and provides a very snappy user experience.
NX works cross-platform. NX enables me to “have my personal desktop always with me”, without having to carry a physical device like a laptop around. And I use it that way….
Don’t know about NX, but you can do it with vnc NOW. Just look for vncrec and similar.
After capture you transcode to DivX and you are done.
That's nice, but not exactly what I meant. After transcoding you still have one huge AVI. What I wanted is to store the NX stream in a movie-file container (AVI, Ogg, etc.) and use an NX video codec to play it.
Does it use something like XDamage? If not, this together with XDamage could greatly reduce the bandwidth usage. If it does, that is most likely a factor in the low bandwidth usage.
There's always something like Xnest. There's also a screen-like program for X, but I forgot the name. If you hire someone to write some glue/scripts, you have a cheap, solid solution too.
PS: The thin-client discussion is ignorant. If you don't see the advantage of a thin client (+ AD/LDAP), well, sorry, then you probably haven't worked with one or haven't worked in a big IT environment. Heck, there are already major pros when you use it in a small environment (e.g. 3-4+ desktops).
Traditional X11 remote desktop connections are unbearably slow even over cable modem / ADSL links.
Nope. Depends on the situation.
ADSL lines (for home users) here have some upload bandwidth available: typically 320 kbit or 640 kbit, minus 15% ATM overhead. This is enough for a few users (with VNC I'd say 5-10). However, I take it home users are NOT the group you aim to profit from. SDSL lines ("business DSL", for corporate environments) are symmetric, and you get yourself 2 Mbit down _and_ upstream with that, minus 15% ATM overhead. That's the environment with a little bit to spend on a solution like yours. But how is 2 Mbit minus 15% ATM overhead "too slow"? Why wouldn't they use a VPN + compression + X or VNC?
ADSL lines (for home users) here have some upload bandwidth available.
That's not the point. "Slowness" is not about the sheer, raw bandwidth of ADSL, but about what a user feels if he runs a remote X11 session over it.
See, X11 produces a lot of so-called "roundtrips". A roundtrip consists of an "X request" by the X application that triggers an "X reply" by the X server. If that roundtrip goes via ethernet (not a Unix domain socket, as when you run the application locally), it easily has a latency of 50 msec, and with modems up to 500 msec or more. Mozilla needs around 3,000 to 4,000 roundtrips (with the current version; former versions needed even 6,000 roundtrips) just to draw its initial window at startup.
Roundtrips are one major factor making remote X11 applications slow. Beyond a certain level you can make your data pipe as thick as you want – it won't affect the speed of your remote application any more. It just leaves the data pipe nearly empty while you wait for tiny packets to complete their roundtrips.
NX eliminates these roundtrips on the remote part of the link. That is the major factor giving NX a large edge over previous technologies such as DXPC and LBX (which only tried to implement the compression part and left out roundtrip elimination). On top of that, NX implements a new, superior compression algorithm as well as an ingenious "cache & re-use" method…
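To put those roundtrip counts in wall-clock terms, a quick estimate using only the figures quoted above (these are the thread's numbers, not measurements):

```shell
roundtrips=3500        # quoted startup roundtrip count for Mozilla
lan_ms=50              # quoted ethernet roundtrip latency
modem_ms=500           # quoted modem roundtrip latency
echo "Ethernet: $(( roundtrips * lan_ms / 1000 )) s spent purely on roundtrips"
echo "Modem:    $(( roundtrips * modem_ms / 1000 )) s spent purely on roundtrips"
# No amount of extra bandwidth shrinks these waits; only eliminating
# the roundtrips themselves (as NX does on the remote side) can.
```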
320 kbit or 640 kbit … is enough for a few users (with VNC i’d say 5-10).
Weeeeeell – now tunnel your VNC sessions through NX, and you may get two to ten times as many sessions onto the same link. Plus, the sessions will benefit from NX's caching capabilities and feel faster (you can see it when you click any menu the second time).
—–
But my feeling is that our current debate is not on an equal footing, because you don't yet know from experience what I am talking about (NX), while I do know from my own usage what you talk about (VNC)… 😉
So please take NX for a test drive, and come back after that to continue the discussion (if you still feel the need for it).
Cheers,
Kurt.
Kurt,
Do you or the NoMachine team plan to interface the NX caching with the X Damage extension? How about the newer XCL/XCB replacements for Xlib? Supposedly they help eliminate latencies, and I would suspect they help reduce roundtrips.
Among other things. This only recently became available in RDP with version 5, and Server 2003 was needed for it.
I second this question. I use http://www.rdesktop.org to control various remote Windows 2000 TS and standalone Windows XP Pro machines without issue. The performance is quite good. Is it a legal issue being able to add an RDP server to *nix? For that matter, any other platform such as Win9x, Win2k (non-TS), etc.?
The reason everyone is using the comparison to VNC is that it's everywhere, and both the server and the client run on everything. You can't say the same for NX; it's only a *nix / X Window thing. Maybe in the future it can be made more generic, much like RDP could be.
–David
I think costs are over-emphasized here. I don't live in America, but here in South Africa you can get a decent usable computer for R3000; in the USA you can probably get a good computer for about $300. That means computing costs are much lower than the cost of employing someone. Such network technologies are invariably meant for people working remotely, and with broadband at about $50 a month, I think it is wiser for a company to spend money on giving the employee a high-speed connection than to save money using NX. It is also probably much better to give the employee a PC than to ask him to use a thin client because you think you can save by skimping on a decent PC.
Employees are probably the costliest item on a company's income statement, and therefore leveraging their productivity is preferable to anything else.
> I think costs are over-emphasized here.
Multiply costs by large numbers of users & tight budgets – schools, for instance. There is no point wasting money for -no reason-.
> such network technologies are invariably meant for people to work remotely and all,
No – they can save a lot of administration while being employed on a LAN, for instance.
> and with broadband being about $50 a month, I think it is wiser for a company to spend money on giving the employee a high speed connection than saving money using NX.
Using NX does not prevent you from having a high speed connection… why intentionally waste resources?
> It is also probably much better to give the employee a PC than to ask him to use a thin client because you think you can save by skimping on a decent PC.
It’s better to have a virus-ridden, maintenance-heavy standalone PC than a thin client in -all- circumstances?
That said, again, NX does -not- require that you use thin clients. It can be used perfectly well on ‘fat’ clients as well – for instance, running apps from a central server.
> employees are probably the costliest item on a companies income statement, and therefore leveraging their productivity is preferable to anything else.
This is another tool that helps people be more productive. Setting it up against broadband and fat clients is at best a strawman… it adds flexibility by -allowing- thin clients and dialup to be reasonable alternatives, more cheaply than before, but it certainly doesn’t dictate them.
I for one am quite happy about NX.. but then again, I’m a CUPS user.
In short, this can be cheaply implemented in a variety of flexible ways – what’s not to like? Heh.
>Among other things. This is only recently avaiable on RDP
>with version 5, and Server 2003 was needed for this.
Yes, sound does work with NX.
cu
Fabian
>I second this question. I use http://www.rdesktop.org to control
>various remote Windows2000 TS and stand-alone WindowsXP
>Pro machines without issue.
NX encapsulates rdesktop (as nxdesktop), so you can control Windows machines with NX too.
> The performance is quite good.
Yes, but the performance of NX is outstanding. Even just running rdesktop over NX makes it faster than running rdesktop on its own …
> Is it a legal issue of being able to add an RDP server to *NIX?
It’s a technical issue. You can’t: RDP is a proprietary protocol.
>For that matter, any other platform such
>as Win9x, Win2k (non-TS), etc?
See above. Please just test-drive NX. You won’t want to go back to RDP afterwards …
> The reason why everyone is using the comparison to VNC is that it’s everywhere
> and both the server and the client run on everything.
> You can’t say the same for NX. It’s only a *Nix / Xwindows thing.
> Maybe in the future, it can be made more generic much like RDP could be.
Yes, it can even connect to VNC and also make VNC faster. It’s just that the X protocol is so efficient that it was used as the underlying layer …
Please test it before you judge it. You are making (wrong) assumptions all over the place.
I just can’t write about _fast cars_, because I don’t understand anything about them and anything I could say would be incorrect; but if I had ridden in a fast car, I could report from my experience.
cu
Fabian
Oh yes, the server specs required are IMHO greatly exaggerated here also. We are running the school system on a dual Pentium III 800 MHz with 1 GB of RAM. It carries a steady load of 20+ users and has no issues keeping up.
Sounds about right. That sort of workload benefits greatly from dual CPUs, so you’ll probably find that machine can support more clients usably than a much faster single CPU machine.
For those interested to see what a session to a NXServer looks like, check out: http://www.mail.ac/users/kevin/nomachine/
Connecting to a Solaris Server (various versions) running NXServer via SSH/NXClient
I just caught the last statement about session control.
I’m not entirely sure what session migrating entails.
“Yeah thin clients, but you need a killer server to run one of these”
No, you don’t. Of course, if you are planning to support multiple clients, then you need a good server. But how would that be any different from a regular setup where you have good servers for file services and the like, and good workstations to run apps? Besides, if you use your obsolete workstations as thin clients, would a good server cost one cent more than replacing those workstations with up-to-date workstations? I bet the server would cost LESS than those workstations!
“plus if your server goes down, nobody does any work! Single point of failure isn’t always a good idea.”
Make the server redundant. Besides, many servers have uptimes that workstations can only dream of. Thin clients also mean a single point of maintenance: they are more or less maintenance-free (no hard drives, no fans, etc.), so you don’t have to run around fixing users’ computers; you just work on the server. And if a thin client does break down, you just replace it with another thin client. That takes about five minutes.
“Can these NX servers be Clustered & Load Balanced? Otherwise downtime could be a problem.”
Don’t know about NX (I would assume the answer is yes), but that’s not a problem with regular remote X (as in LTSP).
“Thin clients don’t make sense anymore!”
Maintenance-free machines that can either be bought very cheaply, or obsolete computers that go for a dime a dozen. Yeah, they just don’t make any sense!
“They did back when you were talking about a 300 MHz PC with 64 MB RAM, but not now, when you can get a 2.8 GHz machine with 256 MB and a monitor for about €550 including VAT… What’s the point!”
the points are these:
– Thin-clients can be bought VERY cheap or they can be free!
– They consume considerably less electricity
– They are practically maintenance-free, which considerably lowers their TCO
– Even if one does break down, replacing it with a new one is trivial and doesn’t even require IT-staff to do it.
– Since they have no moving parts, they don’t generate any noise, which means more comfortable workplaces.
– Since they have fewer components, they consume less desk space.
– They can give superior performance. Think about this: One user starts a KDE-session (for example) on his client. The server loads the necessary files from the hard-drive and the session starts. Another user starts a KDE-session from his client. The server doesn’t have to access the hard-drive, since the required files are already loaded in the memory!
Your example would still be about €550 more expensive than a thin client, and that’s NOT including the cost of maintaining that low-end machine. It would probably also have worse performance, generate more noise, consume more electricity, and be less reliable while requiring more IT staff to look after it.
“Most banks I’ve worked for only do this because they have old PC and don’t want to upgrade them.”
So thin-clients DO make sense, eh? That’s what I thought. Why buy brand new machines, when you could use your existing machines just fine?
i’m not entirely sure what session migrating entails.
It means you walk away from your desktop at work with 10 apps running, login from home and connect right back to that same session where you left it without having to start the apps again, reload the documents, etc.
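Conceptually, the session lives on the server and clients merely attach to it. A toy sketch of that idea (the class and method names are hypothetical illustrations, not the NX API):

```python
# Toy model of server-side sessions: session state survives client
# disconnects, so a client connecting later resumes exactly where
# the previous one left off.

class Session:
    def __init__(self, user):
        self.user = user
        self.apps = []          # state that persists on the server
        self.attached = False

    def attach(self):
        self.attached = True
        return self.apps        # the client sees the apps as they were left

    def detach(self):
        self.attached = False   # apps keep running server-side

office = Session("kurt")
office.attach()
office.apps += ["kmail", "konqueror", "kword"]
office.detach()                 # walk away from the desk at work

resumed = office.attach()       # log in from home later
print(resumed)
```

The key design point is that no application state ever lives in the client; the client is only a viewport, which is what makes reattaching from a different machine trivial.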
“Yeah thin clients, but you need a killer server to run one of these”
Don’t forget the memory shared between sessions, which doesn’t happen with commodity PCs. A server with a support contract, or a second one for redundancy (e.g. failover, replication), is one possibility. Of course you keep some thin clients available as drop-in replacements. Then you have a pretty stable, cost-efficient network. Btw, you’d need a server anyway; if you really think this is a huge problem, look further.
> It’s a technical issue. You can’t. RDP is a proprietary protocol.
No, you can. It is a technical/time issue, since it would be a hell of a nuisance to implement. Check out some discussions on Google via “RDP server Linux” or something similar.
—
@ Kurt Pfeifle: yeah, I’ll read some more about it and try it out. What about the XDamage comments? Does NX use something similar to that?
> Is it possible to detach / reattach a running session as
> with screen using NX ?
Well, yes, almost, and only with version 1.4 (client and server).
Almost, because the applications will be stopped if they fill the display buffer. If you plan to keep your programs computing in the background while disconnected, make sure they don’t create any output. I was told that perhaps covering the active window(s) may prevent the display buffer from becoming full.
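This is classic bounded-buffer backpressure: with no client attached, nothing drains the display buffer, and once it is full the producer has to stop. A generic illustration (not NX code; the buffer size is arbitrary):

```python
import queue

# A fixed-size queue standing in for the detached session's display
# buffer. While no client is attached, nothing consumes from it; a
# program that keeps emitting output eventually hits a full buffer
# and is paused.

display_buffer = queue.Queue(maxsize=3)

for frame in range(3):
    display_buffer.put_nowait(f"update-{frame}")   # fills the buffer

try:
    display_buffer.put_nowait("update-3")          # no consumer: buffer is full
    blocked = False
except queue.Full:
    blocked = True        # at this point the app would be suspended

print("producer paused:", blocked)
```

Suppressing output (or covering the windows, as suggested above) keeps the producer from ever filling the buffer, so the computation can continue while disconnected.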
There is a discussion about this on the nxusers mailing list.
/jarek
It does sound impressive, so I decided I’d test it. I’ve been searching my ass off to get the source for the NX client and the FreeNX server. Finally, I found it here:
http://www.kalyxo.org/debian/pool/n
in the directories “nx” and “nxserver”.
I’d like to port both over to IRIX. The commercial/proprietary server only runs on Solaris and Linux. There also seems to be proxy software available, which is awesome.
(I’m currently using VNC, indeed.)
Is this a new technology?
The recent addition of the XDamage extension to the core X11 protocol is meant to help speed up “screen-scraping” utilities like VNC. By “screen-scraping” I mean that some or all parts of a screen are copied exactly as they are represented on that screen: as bitmaps. XDamage helps clients identify the parts of the screen that have been modified (“damaged”) by X protocol requests (producing a re-rendered bitmap on screen), so that an update of those parts of the screen bitmap can be sent to all registered clients.
NX at its core works differently. It processes genuine X protocol requests (not screen bitmaps like VNC). If NX is asked to draw a circle, an arc or a rectangle, it sends highly optimized 2D primitives like X_PolyArc and X_PolyFillRectangle requests (and not the rendered bitmap representation of those same primitive forms) to the X server. Hence NX is much faster and leaner than any screen-scraping technology like VNC.
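The size difference between a drawing request and the bitmap it renders to is easy to quantify. A rough comparison (the byte counts are approximations of the core X11 wire format, not exact NX figures):

```python
# Compare the wire cost of a drawing command versus the bitmap it
# renders to. A core X_PolyFillRectangle request is a small fixed
# header plus 8 bytes per rectangle; the bitmap grows with the area.

def polyfillrectangle_bytes(n_rects):
    # ~12-byte core request header + 8 bytes (x, y, w, h as 16-bit) per rect
    return 12 + 8 * n_rects

def bitmap_bytes(width, height, bits_per_pixel=16):
    return width * height * bits_per_pixel // 8

request = polyfillrectangle_bytes(1)   # one 200x100 filled rectangle
bitmap = bitmap_bytes(200, 100)        # the same rectangle as pixels

print(f"request: {request} bytes, bitmap: {bitmap} bytes "
      f"({bitmap // request}x larger)")
```

For a single 200x100 rectangle at 16 bits per pixel, the rendered bitmap is three orders of magnitude larger than the request that produced it, which is why forwarding primitives beats screen-scraping.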
Having said that, it should be noted that NX can of course also transport bitmaps, compress them and apply its caching technology to them. That means you can tunnel VNC traffic through an NX proxy system and speed up remote, low-bandwidth VNC by a factor of 2 to 10.
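The “cache & re-use” idea mentioned above (remember payloads already sent, so a repeat can be replaced by a short reference) can be sketched in a few lines. This is a deliberate simplification; the real NX protocol uses per-message-type caches and differential encoding:

```python
import hashlib

# Sketch of cache-and-reuse: the first time a payload crosses the
# link it is sent in full and remembered on both ends; later repeats
# are replaced by a short digest reference.

cache = set()

def encode(message: bytes) -> bytes:
    digest = hashlib.md5(message).digest()[:4]   # short cache reference
    if digest in cache:
        return b"REF" + digest                   # 7 bytes instead of the body
    cache.add(digest)
    return b"FULL" + message

menu = b"<pixmap: menu background pixels>" * 100  # stand-in for a big bitmap
first = encode(menu)                              # sent in full, then cached
second = encode(menu)                             # e.g. opening the menu again

print(len(first), len(second))
```

This is also why, as noted above, clicking a menu the second time feels faster: the bitmap is already cached on the client side, and only a tiny reference crosses the wire.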
Hence XDamage will ultimately benefit NX as well, insofar as NX is asked to connect to VNC servers. Since XDamage also makes “session mirroring” more efficient, it will become a nice additional feature for NX too. (Session mirroring means that you can multiply the content of one screen to multiple clients; think of a remote or classroom teaching/learning scenario.)
However, the core NX protocol (which uses X11 2D primitives) is unaffected by the XDamage extension. NX at its heart will stay much more efficient than any VNC implementation.