“I had reduced the size of my ongoing Z80 project down to something more wieldy by using CPLD chips, but it was still a bit too bulky to fit into an acceptably sized case. The next step was to look into FPGAs and see where they could take it. One thing led to another and I ended up building a self contained post modern home computer laptop… thing.” Kroc: Can I haz port of BBC BASIC plz?
If it can run CP/M, then yes, can haz BBC BASIC.
I may have some old 5-1/4 inch CP/M disks (for the Apple II with a Z80 card) lying around. However, it’s been over 25 years since they were last accessed, and I make no claim as to their readability.
You want CP/M? You can have it on an AVR! (with emulated Z80). See here: http://spritesmods.com/?art=avrcpm
I admit, it’s a lot more of a hack than the Z80 laptop, and it doesn’t have its own keyboard and screen. But it’s still a very cool hack in my opinion.
That article seems to be from 2007…
Still cool
Very nice work.
This guy is going places. Great Scott! He had to go into the past to get back to the future! Amazing commitment and such a broad range of skills. Hats off.
I’ll be impressed if that thing can browse the internet.
Just kidding. Seriously, it looks like a fun project. Any plans to market these as kits? Heck, I bet some folks would pay for just a parts list and detailed instructions.
This one’s cool
http://www.youtube.com/watch?v=ooi9rpx6ECM
I would totally love to build something like this, I wish I could.
This looks like quite a fun project! I can’t wait to see future revisions in an even smaller case, though. Given that the TI graphing calcs and the original Gameboy were Z80-based, I know it would be possible to lose some of the bulk.
I don’t mean shrink it to a calculator or game-system form factor, by the way; rather, maybe thin out the bottom case some. I love the keyboard, and the overall width and depth are great! It’s like a retro netbook.
I’m going to be following this project; it gives me some ideas of my own.
I was thinking that, with modern technology, one could fit quite a lot inside an original Gameboy cart. Has anybody made one containing WiFi, to connect the Gameboy to the Internet running Contiki? Considering I carry an original Gameboy around with me a lot of the time, a portable 8-bit Twitter client would be handy.
Seriously kicks ass.
First off, it’s cool to build your own hardware, but why not build something around a truly elegant early CPU like the Motorola 68K series? I’ve written assembler for both the Z80 and the 68K, and I can tell you that all those general-purpose registers and the flexibility of the 68K’s addressing modes are a joy to work with. The Z80, on the other hand, is a really bad hack.
It’s really a shame that the x86 instruction set ended up winning out over technically far superior offerings such as MIPS, PPC, SPARC, and PA-RISC. Perhaps someday we might all be using ARM-based chips; that would be nice.
I’m not trying to argue with you; I really don’t know what’s so great about the M68K series. Aside from the fact that it was used in possibly the greatest Sega home video game console ever, what is so special about the processor? I’d really like to know… From what I understand, it was just another CISC CPU competing with Intel.
Once again, I don’t know what is so good or bad about x86 vs. the others. The others are RISC, right? Is that it: minimal instructions, simplicity? Or was the x86 architecture simply poorly designed to begin with, and/or did it later have so many new things bolted on that it became a monster?
Don’t get me wrong, though. I would love to get my hands on an ARM- or MIPS-based machine and try it out… but it’s about impossible unless you’re talking about a cell phone (ARM) or a classic gaming console (the MIPS-based N64).
Not really; non-cell-phone ARM machines are quite easy to get your hands on now, and they are quite cheap too. If you are really interested, you can try:
http://beagleboard.org/
or
https://www.alwaysinnovating.com/store/home.php
Compared to Intel’s part it was remarkably clean and, frankly, powerful (a massively overused word). The word everyone used at the time was “orthogonal”, which made it easy to program in asm and easy to write compilers for. I wrote large amounts of 68K and built a compiler for a Forth-like language; it was fairly simple.
The 8086, on the other hand, was a mess of segment registers (even C code became non-portable on the 8086 due to having at least two pointer types, often more). Registers had special meanings (counter: cx; index: si, di; etc.), so writing compilers became a tedious mess of register allocators (or lots of wasteful stack operations). Worse than that was the way using different registers for the same operation took differing amounts of time; I seem to remember a memory read using “si” as the register took more time than one using “sp”. Optimisation was damn painful!
Those were the big ones for the programmer: the 68K’s flat 32-bit memory model versus Intel’s 16+16=20-bit segmented model, and the register set (the 68K had a much bigger set, as well as orthogonal instructions). The 68K also had a rudimentary supervisor mode.
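To make the segmented-model point concrete, here is a minimal sketch (in modern C, purely for illustration; the phys() helper is invented) of how real-mode 8086 address translation works, and why the same physical byte can be reached through many different segment:offset pairs, which is the root of that near/far pointer mess:

    #include <stdint.h>
    #include <stdio.h>

    /* Real-mode 8086: physical address = segment * 16 + offset.
       Both halves are 16 bits, giving a 20-bit (1 MB) address space. */
    static uint32_t phys(uint16_t segment, uint16_t offset)
    {
        return ((uint32_t)segment << 4) + offset;
    }

    int main(void)
    {
        /* Two different segment:offset pairs naming the same byte... */
        printf("1234:0005 -> %05X\n", (unsigned)phys(0x1234, 0x0005)); /* 12345 */
        printf("1000:2345 -> %05X\n", (unsigned)phys(0x1000, 0x2345)); /* 12345 */
        /* ...which is why period compilers needed 16-bit "near" pointers
           (offset only, confined to one 64 KB segment) and 32-bit "far"
           pointers (segment + offset), and why naive pointer comparison
           and arithmetic could silently break. */
        return 0;
    }

Contrast that with the 68K, where a pointer was simply one 32-bit number.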
There were others. I had a brief hack on the 32000 series (National Semiconductor); they were almost 68K goodness, but the market never picked up on them (lazy marketing?). The Z80000 was “OK” but far too late, and Zilog also made a bit of a mess of the chip packaging.
The most astonishing CPU of that era was probably the first-generation Transputers (the T414 32-bit CPU and the T212 16-bit I/O CPU). I saw a data logger built using them some 20 years back; it was a brilliant piece of design. I’ve still got my occam book somewhere.
Having said all that, I saw guys building high-end data loggers using 6800s well into the 90s. It’s amazing what good hardware design and an excellent algorithm can accomplish. The 6800 was lovely, and so was the 6303 super-clone.
The 6502 was nice, but that damn page-1 stack always got in the way (the stack is limited to 256 bytes). Still, making a BBC B multitask BASIC was a delightful trick… Recursive image compression is a bitch with a tiny stack!
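For anyone who never fought that particular battle, the standard dodge is to replace the hardware call stack with an explicit stack kept in ordinary RAM, so depth is bounded by memory rather than by 256 bytes of page 1. A hypothetical sketch in C rather than 6502 assembler, using an invented quadtree-style subdivision purely to show the shape of the technique:

    #include <stdio.h>

    /* Hypothetical region type for a quadtree-style image subdivision. */
    struct region { int x, y, size; };

    /* Worst-case depth for this scheme is about 3*log2(size)+1 entries,
       so a small fixed array is plenty; on a 6502 it would live in main
       RAM instead of the 256-byte page-1 hardware stack. */
    #define STACK_MAX 64

    static void subdivide(struct region start)
    {
        struct region stack[STACK_MAX];
        int top = 0;
        stack[top++] = start;

        while (top > 0) {
            struct region r = stack[--top];   /* pop replaces "return" */
            if (r.size <= 1)
                continue;                     /* base case: single pixel */
            /* ...test r for uniformity, emit compressed output, etc... */
            int h = r.size / 2;               /* push the four quadrants */
            stack[top++] = (struct region){ r.x,     r.y,     h };
            stack[top++] = (struct region){ r.x + h, r.y,     h };
            stack[top++] = (struct region){ r.x,     r.y + h, h };
            stack[top++] = (struct region){ r.x + h, r.y + h, h };
        }
    }

    int main(void)
    {
        subdivide((struct region){ 0, 0, 256 });  /* a 256x256 image */
        return 0;
    }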
The worst thing about all the early 16/32-bit CPUs was the speed of multiply.
Sorry… nostalgia 😀
Soooo… for its time the 68K had everything: a very good instruction set, excellent hardware interfacing (including being able to use older peripherals), a reasonable price, and plenty of supply. Sadly, we know where the market went, but by the time x86 had hauled its backside through to the Pentium, it wasn’t _too_ bad a CPU.
You mean, compared to the 8086. Yes, the 68000 was clearly a superior microprocessor/architecture. But by the time the 386 came around, most of the issues you mention had been addressed.
Then there was also the “tiny” issue that x86 compilers were managing higher performance than their 68K counterparts, which is ironic, since people like to wax poetic about how much easier the 68K was supposed to be to program. I usually see qualitative arguments in these sorts of matters as red flags, meaning the argumentation is not based on first-hand experience.
Indeed. By the time the 386 had finally supplanted our old 286 machines (sometime around 1989), most of us already saw the writing on the wall for the 68K. The holdouts (Amiga, Atari, Apple) were fading, except maybe Apple, but we weren’t in DTP software, so it meant little to us!
The compiler thing: there seemed to be performance parity between x86 and 68K until around the 386. This is a tricky one… Many 68K applications were written directly in assembler, whereas x86 code tended to be C (albeit non-portable C with segmented address capabilities) with core routines in asm.
After the 386 it became much easier to compile code, and by the 486 things were ticking along nicely for Intel. Nearly all the cruft (notably those pointer issues I mentioned) had gone. The Pentium was a big change for optimisation, and some of us went back to asm to rebuild our graphics routines, etc. Of course, by that point I hadn’t seen a 68K in years. Only the hard core had STs or Amigas pimped with 68040s. I certainly never saw them viewed as a platform for commercial software development beyond a few niches.
How much 8086 code did you have to write?
It was fun, but I wouldn’t want to go back there. To be frank, I’m not sure I’d want to go back to the 68K either. For the time it seemed packed with features, but the reality is that it probably didn’t need to be! Instruction decoding for a 68K is pretty heavy: how many of those addressing modes do we _really_ use?
The 68K series was doing fairly well commercially, particularly in the Unix workstation, workgroup server, and high-end embedded (remember VMEbus?) markets, I believe. It shipwrecked on the RISC tsunami due to bad management.
My team built custom high-end embedded computers, and went to Motorola to preview their next-generation processor. What we found were two teams in heated competition bordering on open warfare – think the Lisa and Macintosh teams at Apple. The 68040 team championed the legacy line, and the 88000 team had a new RISC design with a truly awful 3-chip implementation that seemed optimized for high cost. (We used the 88000 anyway for a very demanding real-time application, and it did at least have the performance we needed.)
In the end, neither team won: the 68040 was the end of a proud line, while the 88000 line (single-chip at last in the 88110) fed into the PowerPC architecture favored by Apple. Of course, Apple then deep-sixed PowerPC for Intel…
A final note. National Semiconductor’s 32000 series was originally favored for Atari’s “Amiga-killer”, until the prototypes ran dog slow. It seems that in practice, the greater the orthogonality of the architecture, the slower the performance, so they chose the 68000 family for their very forgettable (especially compared to the Amiga’s awe-inspiring) design.
Never get an old engineer reminiscing. 😀
I had to write a ton of code for 68K, x86, MIPS, SPARC, and other less-known archs.
The thing is, there is no such thing as a “pretty” ISA; there are just people who know what they are doing and people who do not. 🙂
It is really a shame that people who don’t understand the basic difference between ISA and microarchitecture keep giving vapid qualitative arguments when dealing with architectural considerations.
It is funny, because a few centuries ago… Copernicus’s model was deemed technically inferior because it was “ugly” compared with the beauty of Earth-centric models of the universe.
Copernicus didn’t know about the ellipticity of the planetary paths, assumed circular paths, found contradictions between his pure heliocentric approach and the observational data, and thus was forced to embed epicycles in his theory.
The Ptolemaic epicyclic model was not beautiful; it was extremely complicated and based on unscientific assumptions.
And there the point flieth at supersonic speeds…
Very cool
Speaking of Z80s, I went to a retrogaming event on Saturday and was a bit excited to see a Twitter client running on a 48K Spectrum. It worked, too!
http://www.r3play.info/ and my writeup here: http://blitt3r.blogspot.com/2010/11/r3play-6th-november-2010.html