Last month’s news that IBM would do a Hewlett-Packard and divide into two—an IT consultancy and a buzzword compliance unit—marks the end of “business as usual” for yet another of the great workstation companies.
There really isn’t much left when it comes to proper workstations. All the major players have left the market, been shut down, or been bought out (and shut down) – Sun, IBM, SGI, and countless others. Of course, some of them may still make workstations in the sense of powerful Xeon machines, but workstations in the sense of top-to-bottom custom architecture (like SGI’s crossbar switch technology and all the custom designs we mere mortals couldn’t afford) are no longer being made in large numbers.
And it shows. Go on eBay to try and get your hands on an old, used SGI or Sun workstation, and be prepared to pay through the nose for highly outdated and effectively useless hardware. The number of these machines still on the used market is dwindling, and with no new machines entering it, it’s going to become ever harder for us enthusiasts to get our hands on these sorts of exciting machines.
I think the author is mistaken; the VAR business model never went away.
He also has it backwards: workstations went the way of the dodo because cobbling something together out of PCs was not only cheaper but far less painful at the end of the day.
Yes. Dude does not know what he’s talking about.
Yeah. Workstations are just expensive PCs these days, usually called that because of the difference in hardware compared to gaming or office PCs.
I feel that the headline is false unless one uses a particular definition of workstation that excludes modern-day workstations. If anything, it seems like workstations are actually in high demand these days, with demand outpacing supply…
He kind of alludes to the fact that commodity computers are good enough, which I agree on, but he doesn’t provide any explanation for why commodity computers can’t be workstations. If it passes the duck test (i.e. looks like a duck, walks like a duck, etc.), then it is a duck. We still have and use workstation-class computers today, and branding doesn’t change that, yet it seems to be the central point of this article. A more accurate title would have been “The untimely demise of non-x86 workstations”. I know it’s a small point, but it bugged me throughout the article.
The author also doesn’t realize we don’t really need workstations in most instances anymore. Networks are fast enough, compute resources are ubiquitous, and software is being built to take advantage of these facts. People can set things up in a client on their laptop and send the job off to servers that will crunch through it.
All of the specialized gear lives in the datacenter now where it can be accessed remotely and shared.
Yes, the “workstation” age that is being alluded to was the age of the 1M pixel (black and white), 1MB, 1MHz device, or within a decade or so of it. These days it’s hard to even imagine anything useful being done on such a puny machine. That’s a bit early for me: I think that the first “workstation” that I had was probably more like 1M pixel of 8-bit colour mapped screen, 4MB DRAM and probably 16-20MHz of go. That’s still orders of magnitude smaller than your phone…
Also want to add that around that time (mid ’90s?) a fellow by the name of Forest used to post every year to comp.arch about his notion of the “Forest curve”: the idea that the amount of computing that the majority of people needed to do was not actually growing continuously as a function of time, and so as time passed the number of people keenly anticipating the next Moore’s-law-determined generation of performance increases would decrease. It was hard to imagine at the time, and many made their counter-arguments and considered him a crank. Sure, some people can use as much as they can get, but most don’t need all that, even enthusiast video gamers or video editors, let alone word-processor users. Eventually the market for “high end” becomes a bespoke market, which is more or less where we are now. Even most engineers of my acquaintance don’t spend much time waiting for compiles or Matlab simulations on their current desktop, and if they do, there are rentable cloud options, or specialist, GPU-loaded kit.
Pretty insightful. My i7-4790 is old, but it will still keep up with modern procs for my usage. I’d like to get an 8-core Ryzen, but that’s more for vanity reasons than any real need. 🙂
Did Forest work closely with end users by chance?
While I have 8 cores and rarely need all of them, I do find that 32GB of RAM limits me sometimes. I wish I had a machine with 256GB of RAM; also, for some reason the SSDs do not always give me full performance. That’s why I want the extra: a RAM disk always seems to give me top performance.
Earl C Pottinger,
A lot of SSDs have bottlenecks of their own, particularly with random access data (see the rough benchmark sketch at the end of this comment).
A long time ago I was interested in something like this…
ANS-9010BA 5.25 inch Dynamic SSD SATA RAM Disk (Ram Modules Not Included):
https://www.ebay.com/itm/ANS-9010BA-5-25-inch-Dynamic-SSD-SATA-RAM-Disk-Ram-Modules-Not-Included/264683020760
I thought it was an awesome idea for ultra high speed storage, with a battery backup and flash card for cold storage/power failure. As you can see, it’s too darn expensive, especially for such an obsolete item. But if it could be updated to modern specs I still think this would be awesome! It would be the fastest persistent storage bar none. Perfect for ultra-high-speed database use.
I’ve thought about building a DIY version of this, but it’s probably beyond my hardware capabilities.
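To make the random-access point a bit more concrete, here’s a rough Python sketch (purely illustrative, not a proper benchmark) that compares sequential versus random 4KiB reads on an SSD-backed file and on a tmpfs path, assuming a Linux box with /dev/shm mounted as tmpfs. The file paths and sizes are made-up assumptions, and honest numbers would require bypassing or dropping the page cache (or just using a tool like fio), but it gives a feel for why random access hurts and why a RAM disk looks so fast:

import os
import random
import time

BLOCK = 4096                 # 4 KiB per read, a typical random-I/O unit
FILE_SIZE = 256 * 1024**2    # 256 MiB test file (arbitrary size)

def make_test_file(path):
    # Fill the file with incompressible data.
    with open(path, "wb") as f:
        for _ in range(FILE_SIZE // BLOCK):
            f.write(os.urandom(BLOCK))

def read_throughput(path, sequential=True):
    # Read the whole file in BLOCK-sized chunks and return MiB/s.
    offsets = list(range(0, FILE_SIZE, BLOCK))
    if not sequential:
        random.shuffle(offsets)          # random access pattern
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    elapsed = time.perf_counter() - start
    return (FILE_SIZE / 1024**2) / elapsed

if __name__ == "__main__":
    # Hypothetical locations; adjust for your own machine.
    targets = [("ssd", "/home/user/ssd_test.bin"),      # SSD-backed filesystem
               ("ramdisk", "/dev/shm/ram_test.bin")]    # tmpfs, i.e. a RAM disk
    for label, path in targets:
        make_test_file(path)
        seq = read_throughput(path, sequential=True)
        rnd = read_throughput(path, sequential=False)
        print(f"{label}: sequential {seq:.0f} MiB/s, random {rnd:.0f} MiB/s")
        os.remove(path)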
64GB is nice because you can spin up many VMs without compromising on RAM. Also there’s never a shortage of RAM for caching. My biggest bottleneck here is my 1Gbps network. I’m looking at a few grand to upgrade my NAS/switches/NICs. As much as I want to do it, prices for the gear I want have been slow to come down.
areilly,
That’s too arbitrary a definition to be useful, but even if that’s your definition it doesn’t seem to jibe with the author’s usage of workstation.
Workstations are usually more expensive, often specialized machines that have top notch components including for graphics/studio work. What you are describing might apply more towards “mainframes”, which still use patently ancient terminal technology after all these years. But even those are still around.
At the time, workstations _were_ more expensive than the alternative. The alternatives at the time were early Macintoshes without memory management or 286-based PCs on one hand, and something like a MicroVAX with a bit-mapped display, a TI Explorer, or a Xerox desktop machine on the other. The really important thing about those early workstations, IMO, was that they were almost universally connected to the then-new Ethernet, with a big, fast file server sitting in the background somewhere. That made them an entirely different experience than PCs of the day.
areilly,
And they still are. Workstations never went away, but they do target a different demographic. Most home users, both now and back then, don’t need a specialized/high end workstation.
1 MFlop, not 1MHz 😀
Sorry, you’re right, I mis-remembered. And given the state of floating point at the time, each flop was worth half a dozen or more cycles, so even those early 68k-based “workstations” probably ran at a dozen or so MHz.
Cite: https://en.wikipedia.org/wiki/3M_computer
> 4MB DRAM and probably 16-20MHz of go.
Ah, the good ol’ days.. When 16 *mega*bytes of memory was a huge amount, a 5 MB hard drive was “big”, a 20 MB one was “huge”, and programs AND operating systems could fit on an 800 KB disk with room to spare.
And the era where Netscape was the “most advanced browser”, HTML *3.2* was “the hot new thing”, and the web *still* ran faster than it does today. (Mostly because JavaScript and ubiquitous 15-layer-deep CSS hadn’t been invented yet. Pray tell me, why does Twitter need to nest things in at least *9* “div” layers?)
Yes Alfman, there is a clear problem with the definition and classifications.
I’d assert the machines being purchased by kids as alternatives to PlayStations or Xboxes are more than suitable for the workstation category, but of course they aren’t in that category. I’m not sure the more traditional workstations used in CAD/CAM have died out either, unless of course you think a Mac Pro is the only measure. This market seems quite strong in the boutique builder sector, and Dell and HP still offer such high end systems, so I find the claims confusing.
I miss those SGI Bumbles at the animation studio I used to work for years ago. That gave way to using Sun workstations, then Linux-based commodity workstations from HP. I agree, the title is misleading, that’s why I was suckered in. LOL
One thing you cannot get in modern PC “workstations” is reliability. Unlike workstations of the past, today’s “workstations” are prone to crashing and constantly need to be restarted. Today’s users don’t seem to even remember a time when computers were reliable, since PCs and Macs never were.
Meanwhile, reliable computers are still sold, but only as servers, not workstations, and they’re still just as ridiculously expensive as they were historically. So much for computers getting cheaper due to improvements in technology. Instead, cheaper computers came only as a side effect of declining quality.
bugmenot23,
I doubt there’s compelling evidence that modern workstation grade hardware is systematically less reliable than older workstations, but if you think there is then please share it.
I suppose we can nitpick whether a specific machine is a server/workstation, but given that component requirements can be the same it can be a bit arbitrary. Obviously rackmount computers are intended to be servers, whereas tower computers can be effective as either workstations or servers.
This workstation’s got Xeons/ECC/Quadros/redundant NICs.
https://www.dell.com/en-us/work/shop/workstations-isv-certified/precision-7920-tower-workstation/spd/precision-7920-workstation/xctopt7920us_3
It’s less common, but you can even get a modern workstation with redundant power supplies…
https://www.supermicro.com/en/products/system/4U/7049/SYS-7049GP-TRT.cfm
My university used the exact same Sun Microsystems pizza-box hardware for both workstations and servers, so this concept of using the same hardware for multiple roles isn’t particularly new.
Wait until he gets a taste of vendor lock-in. It’s like those faces-of-meth triptychs: healthy, not so healthy, cracked-out skeleton.
Been there, done that, not going back.
By workstation you mean large, heavy, power hungry, low capacity, very slow, very old Unix boxes?
Or are you talking about today’s workstations which are very very very fast and could even be running Linux (my preference)?
Lay off the crack bro…
Vintage workstations were orders of magnitude faster than commodity PCs due to having far more RAM, multiprocessor configurations, and dedicated hardware while PCs were using crap like softmodems and the ISA bus. As far as Linux goes… it’s vastly more bloated than the Unix of yesteryear.
How so?
Full, batteries included distros like RHEL/CentOS, Ubuntu, and OpenSUSE are pretty fat, but Alpine Linux, Gentoo, and Arch can be pretty slim.
The smallest Linux distributions need at least 128MB of RAM, maybe 64MB, if you believe the stats: https://en.wikipedia.org/wiki/Light-weight_Linux_distribution
Most of the early 68K and first-gen SPARC workstations only had a couple of megabytes. In the ’90/’91 time frame I tried moving off the shared multicore-sparc + xterminal environment of my research group to a 486 with 4M RAM and a super-fast 16-bit ISA-bus graphics card and network card, all to run the just-released 386BSD from Dr Dobbs. Bit flakey, but worked like a bought one: TWM on a colour display, 10base2 network connection to the file server. The NFS had no trouble totally saturating that link… Yes, there was some swapping, which we wouldn’t tolerate these days, but virtual memory actually worked pretty well in those systems. Helped that shared libraries hadn’t yet been invented.
Sure, the old stuff is super efficient with resources, but why is that, aside from the fact that it had to be? What are the technical details that cause current Unix-like kernels to have a lower limit of 64MB on RAM usage? The kernel could be BSD or Linux.
The Linux distributions require 128MB these days, but contemporary versions were the same or less than commercial Unixes. I ran both SLS and Slackware on a machine which had 8MB, and I knew people who would run it on 4MB (with a reasonable amount of paging).
The big win of Linux (and the BSDs) on x86 vs the commercial workstations (besides price) was really the graphics — you could get X with more than 256 colors, which meant that you didn’t have palette switching when you changed windows.
Vintage Workstations stopped being “orders of magnitude” faster than PCs the minute Pentium Pro entered the scene. PCI was a thing as early as ’92. Other than for some very specific applications and software needs, custom workstations lost most of their value proposition by the late 90s.
Software “bloat” is one of those qualitative, extremely subjective arguments that mean nothing. Sure, compared to a vanilla MS-DOS fresh install, those Unixes of yesteryear were even more of a fat pig.
But then again, those unices of yesteryear did not come with a compiler, or common libraries, or more than a couple of shell options. Heck some of them did not even include a TCP/IP stack or support for graphics/X11.
Which is why these arguments are idiotic. You can do so much more with a vanilla installation of Linux nowadays, for free, than with one of those “slim” commercial Unixes… to which you had to pay literally thousands of dollars to add a compiler and an X Window environment on top of the non-trivial cost of the base OS itself. Oh, and good luck getting updates and basic bug fixes without an expensive support contract.
I worked with SUN, SGI and various HP workstations for more than 10 years.
When I could cut my spending by more than 200,000 euros a year I didn’t blink an eye. Years of experimenting and running Linux beside those behemoths finally paid off, first as much cheaper storage through NFS and little by little as my new processing farm. Those Unix workstations were overrated in reliability. They were faster in certain niches than what was on the common market at that time, and you paid for it. I was happy when they were gone, and the researchers were happy because they got 10 times more than they got before. Why was I glad? Because the mixture of various Unixes was difficult to maintain. Throw Windows 3.x into the mix and the nightmare became real.
Build your own.
https://www.pcgamer.com/gaming-pc-build-guide/
This is the natural progression of the times.
PCs used to be custom architecture with completely incompatible parts across vendors (think Apple II, Commodore, pre-ISA PC times). Then IBM PC with the ISA bus became standard, over time it became a commodity. And, there are still custom cases and motherboards (like Alienware).
Workstations have gone the same way. You can now get an EATX motherboard with dual Xeon sockets (something like FCLGA3647), add in two CPUs, hundreds of gigabytes of ECC RAM (sometimes a terabyte!), and enterprise-level GPUs (much more stable, and again using ECC), and you get yourself a workstation.
Or you can get a refurbished one for cheap:
https://www.dellrefurbished.com/computer-workstation
We no longer need to buy highly customized, expensive, and proprietary machines.
This. Horizontal integration won, and we’re better for it.
System76 also has some hot kit in the Thelio line, but they don’t sell refurbs. 🙁
I seriously don’t buy the following:
Hmm…. Let’s see:
https://www.schmeling-ol.de/sun/solaris10/installation_en/release-notes.pdf
And that’s on top of graphical issues with the XVR-100 that I remember hearing about on another forum, also a known issue on another machine (an Ultra 60, I think).
Mind you this is a top-to-bottom custom-designed system, with the XVR-1000 being a custom-designed GPU based on Sun’s MAJC chip, not some 3DLabs card (like some other XVR graphics accelerators were). Looks cobbled together to me, where the different vendors just happen to be part of the same corporate tree.
Which is why the great Unix workstation ultimately failed: the supposed quality you got from vertical integration wasn’t there. Most Unix workstations were just Unix SVR4 (most of the time) ported to the particular system, like a Linux distribution can be ported over to an Internet-of-Things device. The only exception I can think of is SGI, which did produce good systems but failed due to idiocy, basically abandoning MIPS, despite the fact there was nothing wrong with it, because they thought Itanium would be the cat’s whiskers (no, seriously).
Anyway, if you want a Unix workstation, buy Apple. I hear they are even doing their own RISC processor in the next model year.
kurkosdr,
Well, if you are talking about the macpro, it’s only competitive if you are comparing it to enterprise grade hardware, which is expensive. For most consumers/prosumers though the macpro’s $6k starting price only gets you a quaint system and you can do much better with commodity parts. Many people/companies may buy a macpro for the brand name, and that is fine, to each their own. But if you’re buying a macpro strictly for performance then you’re looking at a $10-15k macpro to get into high spec territory, which is really hard to justify considering how far that money goes with commodity components.
http://www.youtube.com/watch?v=jzT0-t-7-PA
As far as buying MBP/macos for a solid unix workstation, I think that used to be more true. However IMHO there have been regressions over the years. Macos is getting stale and they haven’t invested in the platform to keep up with the times. A lot of the code base hasn’t been updated in a long time. They don’t support CUDA. Not only have they abandoned the OpenCL & OpenGL standards, but they’re not supporting the new Vulkan standard either. Hardware-wise they’ve dropped ports that I still need, forcing the use of dongles for ethernet and USB peripherals. Rather than focusing on power users, the platform is becoming IOSified, with user interfaces designed to sow confusion and create obstacles for running 3rd party software. While none of this implies macs can’t be popular anyways, it’s arguably a far cry from the original OSX as an elegant unix workhorse.
If I had to choose, I’d prefer having a Mac Pro (even if it doesn’t support Vulkan or the latest OpenGL or have the latest CLI utilities) to having an old Sun workstation with a “Java Desktop” Gnome knock-off stuck in the stone age and no premium apps. Not that there ever was a time Unix workstations all supported the same APIs… For example, even in the days of OpenGL’s hegemony there were all kinds of “extensions” even for basic things like texture compression.
BTW, since when is pricing a consideration for Unix workstations? Unix workstations were not expensive because of the instruction set or the custom motherboards; they were expensive because each one came with its own niche OS that had to be maintained exclusively for that niche. Why shouldn’t the Mac Pro be the same?
I too have rose-tinted memories of some SGI O2 computers and some Sun Ultra 25 computers that my university had in the labs (and the Sun Blade 1500 ones, those had an embedded speaker for YouTube, woo!), but Unix workstations stopped making financial sense when Windows PCs and Linux servers caught up.
Apple isn’t doing what they are doing to bury the other Unixes (they are already dead and buried), they are doing what they are doing so the last Unix workstation (Mac Pro) stays relevant. That means less time spent updating the CLI, more Metal (which they can evolve independently) and more UI glitz. Unix evolves, it always did.
kurkosdr,
Well, you’re kind of preaching to the choir with that choice. Haha.
I never said mac pro workstations aren’t in the same category or price point.
I don’t disagree. Windows, linux, and mac workstations have obviously replaced those sun workstations.
I’m not generally a proponent of microsoft, but it can be argued that they are doing more evolving to cater to the unix market than apple is. They may not always admit it, but I suspect many macos users feel neglected and disappointed that apple hasn’t done more with macos in the past decade. Anyways, feel free to disagree, it’s just my opinion.
MS has/had (?) a lot longer way to go than Apple, so MS’s efforts look and feel more substantial. Versus the decade plus of MacOS being *nix based. Plus, WSL is just a VM now; it’s not native like the *nix stuff in MacOS.
There are things Apple could do to improve MacOS’s Unix cred, but it’s a good medium. I much prefer working on Fedora, but MacOS is better than the alternative.
Things I would like in order to bring MacOS to parity with Fedora:
* KVM or Virtualbox equivalent out of the box. VMs are invaluable.
* An official source of third-party binary *nix tools. Homebrew is crap, and I have to schedule time for Macports to compile.
* LUKS support.
* Read write ext4, XFS, UFS, F2FS support for external media.
* Official FUSE support.
Or buy a Dell Precision with RHEL or Ubuntu installed at the factory. System76 is also a consideration with their new offerings.
Linux is the new Unix, and everything else is an also-ran. The majority of new development is targeting Linux. I say this grudgingly as a BSD fan.
I would consider the mac pro and nvidia dgx workstations. Each is made for a specialised task (video and AI, respectively), expensive, with high-end enclosures, custom hardware …
Architecturally they are both just PCs. The way they are designed is just to extract more money from the customers.
It might change a little with things like the CXL interconnect, but that would still be an industry-wide standard, not a vendor lock-in gimmick.
So? Does that automatically make them worse?
That is a bit of a reductionist argument.
Everything is just a PC…
Well, one of the “charms” of the old workstation market was that they all had proprietary expansion busses, graphics subsystems and boot ROMs (not a BIOS in sight). No concerns about third-party device driver quality, because there generally were no third-party devices (the “workstation” era predates USB, IIRC). So no, they weren’t “PCs” in the sense that they did not have any backwards compatibility back to an environment of a VGA console and I/O-port-mapped parallel ports.
Some of them got quite fancy in these respects: bootloaders that spoke Forth and contained entire debug environments; several experimented with big, fat crossbar chips instead of busses (of course modern SoCs all do exactly that, but you couldn’t fit a modern SoC onto a pizza-box motherboard with the VLSI of the day).
What we call a PC today has absorbed most of those features of course, and more.
That’s complete rubbish. Despite a lot of expansion busses being “proprietary”, there were indeed quite a few 3rd-party peripherals and drivers available for them. The PDP-11 was well known for having a lot of 3rd-party peripherals, and was quite often expanded with 3rd-party controllers beyond what DEC had originally designed for it. It came out in the 70’s and was popular throughout the 80’s.
Workstations in the sense described in the article were a result of a gap between high-end mainframe and server hardware and low-end desktop computers (typified by an 80s or 90s PC).
What makes a “PC” has broadened significantly. It’s become more extensible in many ways, while also concentrating important parts into the CPU. These days, you can have a low end single threaded embedded x86 machine with a few hundred MB of memory and MMC on USB storage, or you can have a multi-socket, multi-dozen-thread-per-socket machine with gigabytes of RAM and access to vast storage arrays running exactly the same software – not just the same source but the same binaries – and both are forms of a “PC”. You don’t need something to fill the gap any more, because there isn’t one. If you want a workstation nowadays, it IS a PC. You can put anything you want on the PCIe bus directly or via a standard interconnect (which is either built in like USB or accessible via an off-the-shelf adaptor). You don’t need custom buses and specialist architectures any more – doing that cuts you out of a whole world of stuff that’s ready-to-use and forces you to reinvent the wheel.
PC isn’t a “car” or a “truck” – it’s the standards that road-legal vehicles are built to. If you really want an F1 car, go for it, but it’s not going to have much of a market and it will cost you a fortune.
Workstations hark back to an age where standards were thin on the ground, forcing engineers to get creative in solving problems. Workstations were the halo products that made a vendor stand out from the crowd; a unique, cutting-edge mixture of hardware and software that, when they got it right, pushed the boundaries, redefined customer expectations and sometimes created new markets. In an age where standards were few and far between, vendors funnelled huge sums of money into proprietary hardware, encouraging peripheral vendors to make add-on cards that worked with only their system, creating an ecosystem that would lock in their customers. And it worked. For a while at any rate. The likes of Sun used to be the backbone of the internet and SGI owned the movie and broadcast graphics markets. Commodity machines (Intel, AMD, Cyrix, VIA, etc.) couldn’t touch them in terms of performance, but maintaining that lead cost huge sums of money. As the commodity platform got better, the cost of maintaining that lead became ever bigger, as did the development time frames, just as the customer base was shrinking, which led to a higher price per unit and/or a drastically reduced profit margin. Sadly, most of the workstation companies didn’t have a strategy to deal with this new reality, although SGI arguably had a go by embracing the Intel platform, and ultimately failed as they couldn’t stand out from the crowd.
Workstations now are all the same. There’s nothing that really differentiates them from any of their peers. They all have the same CPU, memory, system bus and peripherals. The jumps in innovation that shook up the competition no longer happen. It’s a much slower, gradual evolution with slimmer profit margins, creating less space to take risks.
I’m not saying whether that’s good or bad, it’s just different.
Economies of scale and an open ecosystem too.
x86 chips were also selling by the truck load at lower margins, but in volume.
The x86 vendors would sell a chip to anyone who wanted one, and since they would sell to anyone, people were able to get hold of the hardware to build things. That’s kind of the thing for hardware companies: if no one can obtain or use your hardware, the product is useless.
Meanwhile, the workstation vendors were too precious with their hardware, and no one could get hold of it. The counterexample is the RPi getting ARM into the hands of enthusiasts.
Anyway, thank god we’re at a point where we have standardized interfaces and mostly modular systems. Paying several hundred dollars more for a special version of a card with a stupid interface, or buying something that only works with a particular system, is for the birds.
Hardware became a commodity. The vendor/product differentiation is now at the software/support level, i.e. the added value.
Technically, Lenovo, HP, et al. basically sell the same HW (with different physical enclosures and options), but if you’re an organization you’re not buying a specific workstation as much as you’re buying the specific vendor whose price/support combination works best for you. The HW/SW is going to be the same.
It’s a different business model. Back in the old days, they had to differentiate themselves by the HW, since it had not become a commodity yet (there were still lots of open-ended problems and no standardized solution). Also, a lot of these workstation vendors were upstarts that had not had the chance to grow into large organizations yet, so their value proposition was in performance, not in support (which is what “established” vendors like IBM or DEC offered in contrast). The initial systems from SGI and Sun, for example, were awful in terms of stability and support… but they were faster at what they did than the competition. Thus people put up with them…
The real reason workstations went the way of the dodo is because IBM PC compatibles got powerful and capable enough to displace them.
For example, the SGI workstations were immensely powerful in their day, but developments in graphics technologies from the likes of AMD/ATI, nVidia and 3DFX largely displaced the needs for a dedicated graphics processing machine. These add-in cards could easily make a generic PC with even the most lacklustre celeron processors into competent graphics workhorses.
Add to that, the continued development of x86 processors by both Intel, and mainly by AMD, and you can see why dedicated CPU powerhouses like the BeBox (SMP in 1995), PowerMac, and Sun workstations also suffered and lost out to multi-CPU and later multi-core x86 systems.
Frankly, the Workstation didn’t disappear, it just became another IBM PC-compatible. You can still buy absolute powerhouses of x86 Xeon/EPYC machines today, from the likes of HP, Lenovo, Apple and Dell. Sure, they’re just crusty old office machines on steroids, but if you need a serious amount of processing power in either CPU or graphics intensive loads, options are still there.
Moore’s Law gaveth (throughout the ’80s, the golden age for workstation vendors) and then it tooketh away (in the ’90s, particularly after Intel released the Pentium). Then came the dot-com crash which did a number on Sun’s financials.
I looked it up – Gordon Moore retired as chairman of Intel in 1997. He’s still alive.
Clearly workstations are ubiquitous. What is gone are machines with a singular workflow theme or architecture, like Lisp Machines or the Commodore 64. I miss that kind of machine. Maybe Emacs is such a thing to this day. Even game consoles have bloated beyond load and play.
I think that if someone comes up with a Power/MIPS/ARM hobbyist system that has a bunch of expansion slots, fits in an ATX case, allows you to have shi^H^H^Hshoveloads of RAM (32GB and up), has a couple of disk slots and is around that US$400 price point with 4-6 modestly performing cores, it’ll start a fresh revolution.
I know I’d buy one. Tried to a few times when the Loongson 8-way boards were announced. No chance though. US$400 is enthusiast level, with subsequent upgrades. US$1400 is not going to get enough people for traction. Some sort of esoteric hobbyist workstation platform that is reasonably priced, and that people can use for a bit more than just an esoteric hobby system and do some potentially real work on, would be a smash hit at this stage.
$120 for a quad-core RPi 4 with 8GB RAM .. what could be done with US$400 and zero/BYO RAM? Just a board. Maybe integrators like System76 or so could use it. I’d buy.
uridium,
I don’t know about any revolution, but it would certainly interest me.
I am impressed with a lot of these ARM systems. So far, though, they can’t match the options and expandability of x86 hardware. Also, perhaps even more importantly, it can be very frustrating to be stuck with the manufacturer for OS images & support. I don’t want to be dependent on the manufacturer for anything after I buy the hardware. Thankfully x86 hardware allows us to EOL hardware on our own terms, but this is harder with ARM due to greater dependency on the manufacturer’s long term support. I think you should add this to your list too.
Solid Run Honeycomb LX2k almost checks all of the boxes. https://www.solid-run.com/nxp-lx2160a-family/honeycomb-workstation/
$750 and only an 8x PCIe slot though.
Flatland_Spider,
Yeah, we’ve talked about it before…
http://www.osnews.com/story/131363/nextspace-a-nextstep-like-desktop-environment-for-linux/
I’m almost tempted to pull the trigger and buy it as a 10gbps router, but as you know I’d prefer ethernet over SFP. I mentioned this last time, but I’m still disappointed by the lack of mainline linux support. I don’t want to end up with hardware that I cannot easily support in my own linux distro. I don’t like it, but this is still common with ARM hardware.
With any luck running mainline linux on ARM hardware will get better in the future…
Solid Run has the Macchiatobin, which is router-focused and has 10G SFP ports.
http://macchiatobin.net/
Solid Run has quite a few interesting products.
One of the Honeycomb devs hangs around the Phoronix forums and comments on posts related to that. I’m not sure where their effort is at, but the dev seems very enthusiastic about getting everything mainlined.
ARM dev boards getting mainline Linux support would be nice.
My hardware baseline of OpenBSD supports the Macchiatobin. 🙂 https://www.openbsd.org/arm64.html
That’s capitalism. The PC just evolved more and more over time, thanks to its enormous market and its architectural openness. The masses demanded cheap computers, and that drove mass production, which drove prices down more and more. With time, established vendors of proprietary-architecture workstations went through more and more difficult times until they went bankrupt.
I can see a good market niche, however:
Everybody remembers old video game consoles like the Atari, NES, SNES and so on, and the owners of the respective brands (Nintendo, Atari and others) have been bringing back new “mini” versions of the good old consoles, with fairly good results. Nostalgia also sells well.
How complicated would it be for somebody to bring old workstations back to life? How complicated would it be to bring back new “mini” or “updated” versions of the good old workstations? For the nostalgia.
I’m grateful and fortunate to have a number of workstation-class machines in my computer collection. I have an SGI Indigo2 “Impact10000,” a pair of NeXT machines (original Cube + later TurboColor slab), an HP 712/100, a Macintosh Quadra 950, and a few others I’m forgetting right now. If the acceptable definition of “workstation” in this context is “professional-grade machine with significantly beefier specs than even a high-end consumer machine would have,” then wouldn’t Apple’s new Mac Pro fit the bill? In just about every measurable sense (raw horsepower, technologically-advanced architecture, configurable options, borderline-obscene cost-scale, UNIX-based OS, etc.) the newer Mac Pro *feels* like a spiritual successor to the legacy workstations in my collection, and I look forward to the day when I can pick up a 2020-era Mac Pro for $50 at a surplus sale as I’ve done with other machines in the past.
wowbobwow,
I would definitely put new macpros in that category. I would not have put the last “trashcan” macpros in that category.
That said, I suggest you take a look at the specs for the $6k entry level macpro, it’s downright disappointing. You can expect less horsepower and capacity than a much cheaper commodity PC. That’s ok when you consider that with enterprise grade parts it’s not all about performance, but quality, robustness, reliability, etc. Those things carry a premium price, but you’ll likely have to spend at least $10k-20k to get a mac pro from apple with high performance & specs.
I’ll probably never get to play with one, but I imagine the macpro could make a nice server or workstation. There are some notable cons too though; personally I’d be annoyed with apple’s vendor lock-in for NVMe storage. Also, I feel apple’s engaged in some serious price gouging for things like needing expansion kits for a large SATA or SAS array; you usually get this out of the box with typical consumer and server grade computers, but with the macpro it’ll cost you more 🙁
Let’s see what happens with apple