For a while, X terminals were a reasonably popular way to give people comparatively inexpensive X desktops. These X terminals relied on X’s network transparency so that only the X server had to run on the X terminal itself, with all of your terminal windows and other programs running on a server somewhere and just displaying on the X terminal. For a long time, using a big server and a lab full of X terminals was significantly cheaper than setting up a lab full of actual workstations (until inexpensive and capable PCs showed up). Given that X started with network transparency and X terminals are so obvious, you might be surprised to find out that X didn’t start with them.
↫ Chris Siebenmann
I did indeed assume X terminals were part of the ecosystem from day one, but it makes sense that it took a while, and that they didn’t enter the scene until X had established itself as the standard windowing system in the UNIX world. I’ve been trying to get my hands on the last HP X terminal specifically, but they’re hard to find and often very expensive. I’d love to get a taste of a proper networked X environment on real UNIX, in the way people actually used to use it professionally.
As a sidenote, Siebenmann is doing such an excellent job with these stories about UNIX, X11, and related matters. He’s like the Raymond Chen of the UNIX world.
For TFTP loads of NCD X terminal firmware, I have that… NCDware 3.5.120
A funny thing about those X terminals is that they would often have 12MB or 16MB of RAM. Anything less than that, and you needed to swap out to an NFS share to avoid RAM exhaustion in a non-trivial windowing session. Again, those were the memory requirements for the dumb terminals (aka the “X servers”)!
Windows NT 3.1, an entire fully memory-protected OS, had a minimum RAM requirement of 16MB (sure, that won’t get you far for demanding apps unless you’re using a remote desktop session, and you also need a hard drive, but it shows the memory requirements of X11).
X11’s reputation as a memory hog was well-deserved.
To be fair, NT on a 16MB system was basically unusable.
One thing to remember is that those X-terms were usually driving resolutions multiple times that of VGA (which was the PC standard of the time). Stuff like the frame store/buffer adds up quickly in terms of memory requirements.
E.g., 1280×1024 requires over 4 times the memory of VGA per frame at the same bit depth (see the quick sketch below).
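To put rough numbers on that, here’s a minimal back-of-the-envelope sketch; the 8-bit depth and single buffer are my own illustrative assumptions, not a claim about any particular terminal:

#include <stdio.h>

/* Illustrative only: bytes needed for a single frame buffer
   at a given resolution and bit depth. */
static long fb_bytes(long width, long height, long bits_per_pixel) {
    return width * height * bits_per_pixel / 8;
}

int main(void) {
    long vga = fb_bytes(640, 480, 8);    /* 307,200 bytes, ~300 KB */
    long xt  = fb_bytes(1280, 1024, 8);  /* 1,310,720 bytes, ~1.25 MB */
    printf("VGA: %ld KB, 1280x1024: %ld KB, ratio: %.2fx\n",
           vga / 1024, xt / 1024, (double)xt / vga);
    return 0;
}

At 8-bit color that works out to roughly 300 KB versus 1.25 MB per frame, a ~4.3x difference, before counting any backing store for individual windows.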
Not saying X was particularly optimized, because it wasn’t; it was a design by committee trying to be as abstracted as possible (at the time).
Whereas NT was designed and built by a team that was hyperfocused on, among other things, optimizing the system for general PC support (at least premium-tier PCs of the time). And it skipped a lot of X’s network-distributed/transparency goals. So it ended up mopping the floor with a lot of the Unixes of the time in terms of price/performance.
kurkosdr,
I will never submit to calling the X clients “X servers”, no matter what they say. And I don’t care that the server initiated a TCP connection back to the client; it’s still the X server 🙂
Xanady Asem,
They weren’t designed for text like telnet/ssh, and of course hi-res graphics are much less efficient. But the design made sense as a network protocol. The client held the pixel buffers, which meant windows could be repainted at the client without round trips to the server for updates; trivial actions like moving a window or bringing it into focus would otherwise constantly demand server updates. These days we have more bandwidth, but when X11 was developed dialup was still king, which created an extremely strong incentive to store window buffers at the client. This is why the client needed a lot of memory. Given these tradeoffs, the design made sense.
Also, storing large window buffers server-side would have made X much less scalable for servers. Unix servers could serve hundreds of graphical clients without breaking a sweat. Using Xlib this way was actually quite efficient. However, software developers would soon wrap Xlib behind other toolkits and layers that render everything server-side and then submit the entire buffer to the client. X11’s design isn’t optimized for today’s software.
The inefficiencies I was referring to were in the software stack and its abstractions, especially at the application layer.
X wasn’t particularly designed with dialup in mind.
My simple example was just about single-buffering requirements, which have to be met locally regardless.
Xanady Asem,
That really depends on whether Xlib primitives are used as originally intended. If you are rendering the entire UI with another toolkit and then only using Xlib to blit pixels to the client, then it adds a layer of indirection and isn’t efficient.
Obviously if you were local you’d use a LAN, but it was common for Unix systems to be accessed remotely via racks of dial-in modems. Broadband is so much better, but you have to remember it didn’t always exist. X was born before broadband. Even businesses used dialup or maybe 1.5 Mbps T1 lines! Broadband internet would come, but later.
Xlib is just the basic client library. I was referring to the entire stack of the X architecture.
Running a remote X session through a modem was far from a common use case, if it was ever done.
Xanady Asem,
Xlib is special in that it has a 1:1 relationship with X: everything you do in Xlib translates directly to the protocol. Originally, X software was written to make use of this, but then new frameworks were created that rendered into their own buffers, reducing the entire X11 protocol to an inefficient blit target.
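To make that 1:1 mapping concrete, here’s a minimal Xlib sketch (my own illustration, not anyone’s production code); each core call below corresponds directly to an X protocol request or event traveling over the client–server connection:

/* Build with: cc demo.c -lX11 */
#include <X11/Xlib.h>
#include <stdio.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);  /* connect to the X server (the display) */
    if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

    int scr = DefaultScreen(dpy);
    /* a CreateWindow request over the wire */
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, 200, 100, 1,
                                     BlackPixel(dpy, scr), WhitePixel(dpy, scr));
    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);               /* a MapWindow request */

    XEvent ev;
    for (;;) {
        XNextEvent(dpy, &ev);           /* events flow back from the server */
        if (ev.type == Expose)          /* redraw via a small drawing request */
            XDrawLine(dpy, win, DefaultGC(dpy, scr), 0, 0, 200, 100);
        else if (ev.type == KeyPress)
            break;
    }
    XCloseDisplay(dpy);
    return 0;
}

The point is that a call like XDrawLine ships a tiny drawing request rather than pixels; that’s exactly the economy that gets lost when a toolkit renders into its own buffer and only uses X as a blit target.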
There’s no doubt about it. People connected remotely with nothing more than dialup and a Windows desktop. There were two big players providing X software for people running Windows, yes, often over dialup. I think both were available for Windows 3.x on 286-era hardware.
https://en.wikipedia.org/wiki/X-Win32
https://en.wikipedia.org/wiki/Hummingbird_Ltd.
It’s largely overlooked today, but X’s network transparency was a killer feature. Using GUI tools remotely was just as natural and intuitive as if you were on campus. Want to view bitmap files on the server? Not a problem. Need to visualize data? Go ahead and use Xplot as you normally would. Same with LaTeX documents, etc. Obviously modems were extremely slow by today’s standards, but you also have to understand this in the context of a time period when millions of households and businesses only had dialup.
One of my favorite examples of online graphics in the dialup era is “RIP”. People today probably can’t appreciate it, but I think it’s neat.
“RIPscrip – the online graphics format that predates the modern Web”
https://www.youtube.com/watch?v=uyEj4Rm8mzE
Rest in peace RIP, haha.
“I’d love to get a taste of a proper networked X environment on real UNIX, in the way people actually used to use it professionally.”
Fair enough, though having used such an environment back in the dim ages, I suggest the experience might be a little underwhelming.
Aside from the initial setup and boot process (which can vary a bit between X terminal manufacturers), it’s really not very different from running basic X server software on a regular PC or setting up a traditional UNIX workstation as a network terminal, i.e., booting to a network chooser or to a fixed remote session.
The hardware can be cool, though. Some of those old X Terminals had very nice CRT displays.