“MineAssemble is a tiny bootable Minecraft clone written partly in x86 assembly. I made it first and foremost because a university assignment required me to implement a game in assembly for a computer systems course. Because I had never implemented anything more complex than a ‘Hello World’ bootloader before, I decided I wanted to learn about writing my own kernel code at the same time. Note that the goal of this project was not to write highly efficient hand-optimized assembly code, but rather to have fun and write code that balances readability and speed. This is primarily accomplished by proper commenting and consistent code structuring.” Just cool.
“and requires no more than 4 MB(!!!) of RAM.”
Wow! I wonder how much RAM it would consume if it had a full server with networking and everything.
I hope not as much as the 1 GB Minecraft’s Java process can soar to!
Yeah, large game world, innit. Minecraft is odd amongst games in that the graphics aren’t what’s using up the memory; the actual game data is.
I don’t think it’s just the huge open game world. I think most of it is Java bloat.
Minecraft graphics are so crappy I’m surprised that this even uses 4 MB. Back in the day, games with that level of graphical excellence were squeezed into 640 KB of memory or less.
Spoken like someone who doesn’t get Minecraft. The “crappy” graphics are a feature, not a bug. It’s designed to be a block game, so the tiles are meant to be blocky. If you don’t like the default textures, it’s a snap to change them, and there are a lot of great, photorealistic ones out there.
Morgan,
I think the point still stands though.
Most of today’s software isn’t optimized the way it would have had to be in the past. Every generation of hardware gains seems to get eaten up by software that keeps becoming less efficient.
We take ever-faster hardware for granted. I wonder: in a scenario where hardware had not improved so dramatically over the years, what would the software landscape look like today? I imagine there would not have been the drop in demand for efficiency-minded programming skills that I’m now seeing among clients.
Edit: Not to attribute “blame” to anyone, it’s just a simple cost analysis. Upgrading hardware is often cheaper than paying programmers to produce better optimized code. The big question is: to what end can/should this inefficiency continue?
Efficiency still matters, but it’s now in server farms. Whereas on the desktop and consoles gross inefficiency doesn’t cost too much, in server farms, a little bit of inefficiency multiplies hardware costs.
kwan_e,
“Whereas on the desktop and consoles gross inefficiency doesn’t cost too much, in server farms, a little bit of inefficiency multiplies hardware costs.”
I guess in an enormous R&D server cluster it makes more sense to pay engineers to optimize the software.
Even when I was employed to work on a large corporate website distributed across multiple Oracle RAC nodes, they still seemed to prefer the hardware route over software optimization. The fix for shoddy performance was investing in faster hardware. It wasn’t really my preferred way of doing things, but it seems to be the norm at all of my former employers and clients. I’d love to hear more specific counter-examples though, because I’d be curious to understand how they manage to buck the trend.
Now it is even worse, thanks to virtualization.
If you need a few more machines, just right-click your snapshot image and select clone or a similar operation, and in a few minutes you have a new machine serving requests.
This beats the salary of any engineer worth his/her salt in optimization.
However, I am aware that one of the reasons for the whole “going native” discussion started by Microsoft is the electricity cost of using VM implementations of programming languages.
FB started its PHP compiler effort, now a JIT, because of that. Andrei Alexandrescu mentioned at a C++ conference that FB measures requests per watt in their compute centers.
Because it’s too expensive to optimise most software. The software that needs optimisation (low-latency applications, high-fault-tolerance applications and similar critical applications) gets optimised no less than it did 30 years ago.
You are missing the point that those gains allow more software to be written that would not have been written 30 years ago. The hardware gains were not wiped out; we got either better software with more features, or software that would have been too expensive to write on a hardware-restricted platform.
There was a drop in HPC oriented developers?!?!?! When did that happen? Even a not-well-known HPC-oriented developer can get up to GBP 110k in London.
JAlexoid,
“Because it’s too expensive to optimise most software.”
Like I said.
“You are missing the point that those gains allow more software to be written that would not have been written 30 years ago.”
Yes, obviously it’s true to a certain extent. On the other hand software could be even better if our collective optimization skills were not suffering from atrophy.
“There was a drop in HPC oriented developers?!?!?”
In the SMB business space that I was talking about, I’d say there has absolutely been a drop in demand for “efficiency-minded programming skills”. I’d still like to hear about specific counter-examples though.
In the SMB space the demand for developers went from minuscule to huge and the costs of development went down. All due to less need for early optimisation.
And why would you think that these skills are suffering from atrophy? The software development market has expanded and got diluted. I could bet a lot that the number of people with these skills has only increased with time. Not as fast as the total number of developers, but still.
Go to JobServe and search for threading or low-latency, if you want to see examples of huge demand for those kinds of people.
JAlexoid,
“In the SMB space the demand for developers went from minuscule to huge and the costs of development went down. All due to less need for early optimisation.”
The demand for IT overall has grown, but the rising tide hasn’t been evenly distributed across skillsets. The overall demand curve has skewed significantly away from the “hard CS” skills that used to be absolutely essential for software development.
“And why would you think that these skills are suffering from atrophy? The software development market has expanded and got diluted. I could bet a lot that the number of people with these skills has only increased with time. Not as fast as the total number of developers, but still.”
Don’t get me wrong: there are many developers who would be capable of employing such skills, but the need simply isn’t there in typical modern jobs. Historically, companies that needed features (compression/encryption/image decoding/code optimization/etc.) in their software would pay developers to bring those specific skills in house, even to the extent of paying directly for university costs, which is just about unheard of these days. It isn’t that the need for these features has disappeared; it’s just that they’ve been commoditized and are already available in highly efficient libraries that cost far less than in-house developers, are of better quality, and are available for immediate use.
Today, developers aren’t expected to know anything about JPEG compression in order to use JPEGs in software. It used to be that every software-producing company would use its own in-house database engine; today virtually no SMB companies do this anymore. Few companies that want virtualization will hire employees with the skills to implement their own hypervisors. Most companies need to use cryptography, yet few are hiring developers with the skills to build or understand the implementations in house anymore.
It’s generally good that we get to reuse higher-quality software without having to reinvent the wheel over and over. But at the same time we have to recognize that demand for these kinds of hard-CS skills has been waning overall as commoditization has gone up. That’s the point I’m trying to get across. It’s obviously true that developers are needed to implement these features, which are still very important to business. But unlike in the past, SMBs no longer seek to fill that need with in-house feature-implementation specialists. These jobs have been replaced by systems administrators of various kinds and what I’ll call “lightweight coders”, who are good at putting together existing software components to meet the needs of the business.
Edit: I may be coming off as insulting, which isn’t the intention. I myself am a lightweight coder by my own definition at my day jobs. It’s not that I’m unskilled or incapable of hard CS; it’s merely that these clients are only interested in filling lightweight coding needs in the first place.
You are aware that computers in business settings are there to optimise and reduce costs. Optimising away high-cost coders, or any high cost for that matter, is essential to an effective business. In a few years even lightweight coders will be optimised away. It’s a new market that software development has moved into. Different market = different skillsets = different goals.
JAlexoid,
“You are aware that computers in business settings are there to optimise and reduce costs.”
I dunno if I’d go all in with that generalization. It depends on the nature of the job: sometimes we are seen as a way to open new opportunities (web devs are often seen in this light), other times we’re there to help the business replace other people with computers, etc.
“Optimising away high-cost coders, or any high cost for that matter, is essential to an effective business. In a few years even lightweight coders will be optimised away. It’s a new market that software development has moved into. Different market = different skillsets = different goals.”
I know, I’ve witnessed it personally in my short stint of a career. Dramatic changes of skillsets are needed compared to a decade or two ago. We must change and adapt to stay relevant, but I almost feel resentment over the changes, which place me further away from what interested me most about CS in the first place. Oh well, many would say I should be happy to have a job at all.
I understand the point you’re making, and you’re right: A game like Minecraft shouldn’t consume as many resources as it does for all of its apparent simplicity.
Clones like Minetest show just how efficient a block-based game written in a lower-level language like C can be; that game (while not very fun right now) can be played at high frame rates on any computer made in the past 12 years. Minecraft on my quad-core i5 with 8 GB of RAM is just playable with the built-in Intel video, and requires a midrange Nvidia card to be really enjoyable. That, to me, seems to be either inefficient coding, a bad choice of language, or both. But I’m no programmer, so I can’t say for sure.
Regardless of all of that, being a block/tile-based game with simple graphics can indeed be fun, which was my point to begin with. I’ve had more fun playing FTL, a very simple sprite-based top-down space sim, than I had playing DOOM 3 or Crysis back in the day, amazing visuals or not. Graphics alone don’t make the game.
The graphics are fairly modern. The default textures are intentionally low-res. You can get texture packs that have much higher quality, but most people don’t care for that in Minecraft.
Each block in Minecraft can take 4 bytes.
The Minecraft world is 256 blocks high by default, and thousands of blocks wide and long (and that extends as required).
So, for a small map, you could have 4096x4096x256x4 bytes used up just for map data. That’s 17179869184 bytes – or 16,384MB to you or me. A lot of this is paged out to disk, but I think you can now see why Minecraft can use 1GB without breaking a sweat. And don’t you have 4-16 GB in your system anyway?
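For reference, a quick back-of-envelope check of those numbers; purely illustrative Java (the class name and constants are mine, nothing from Minecraft itself):

    // Hypothetical raw map size: a 4096 x 4096 x 256 world at 4 bytes per block.
    public class MapSize {
        public static void main(String[] args) {
            long blocks = 4096L * 4096L * 256L;                 // ~4.29 billion blocks
            long bytes = blocks * 4L;                           // 4 bytes per block
            System.out.println(bytes + " bytes");               // 17179869184
            System.out.println(bytes / (1024 * 1024) + " MB");  // 16384
        }
    }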
Think before you comment next time.
Sykobee,
“So, for a small map, you could have 4096x4096x256x4 bytes used up just for map data. That’s 17179869184 bytes – or 16,384MB to you or me. A lot of this is paged out to disk, but I think you can now see why Minecraft can use 1GB without breaking a sweat. And don’t you have 4-16 GB in your system anyway?”
“Think before you comment next time.”
This is exactly the kind of mentality that was different. Older software developers didn’t give up when limited computing resources made problems non-trivial to solve. No, they were far more creative in finding ways to optimize memory and CPU utilization to make things work. They couldn’t take things for granted the way we do today.
Why was this downvoted? That there are programmers who don’t even consider the possibility that there may be more efficient ways to use data than in its raw form is disappointing, at least to me.
I’m not saying optimization is a high priority these days; quite the opposite, it’s often easier and cheaper to use the most trivial approach enabled by the hardware at our disposal. But that’s still no reason to take a closed-minded approach to what’s possible with very clever software algorithms running on less capable hardware than we are used to.
Off topic: I tend to ignore downvotes here, as they are almost always by someone with an agenda and nothing relevant to say. You did indeed make a valid point, and even though not everyone will agree with you, downvoting shouldn’t be a concern. Usually, enough sensible people will see the system being abused and keep your comment above the threshold.
And since I know my comment is off-topic I don’t mind if it is downvoted for being so.
Actually, in this specific case, Elder Scrolls games have shown for decades how large game maps can be used without excessive RAM usage. Simply put:
* Take a huge world map
* Slice it into smaller chunks
* Only keep the current chunk and its nearest neighbours in memory
* Profit !
I would be surprised if Minecraft didn’t do something similar on the inside; keeping the whole world map loaded all the time sounds like a needlessly inefficient way to go about it.
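Something along these lines, hypothetically; plain Java as a sketch of the general shape (a chunk cache keyed by chunk coordinates), not Minecraft’s or Bethesda’s actual code:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch of "only keep the current chunk and its neighbours".
    class ChunkCache {
        static final int RADIUS = 1;                    // current chunk + direct neighbours
        private final Map<Long, byte[]> loaded = new HashMap<>();

        private static long key(int cx, int cz) {       // pack chunk coordinates into one key
            return ((long) cx << 32) | (cz & 0xffffffffL);
        }

        // Call whenever the player crosses a chunk boundary.
        void onPlayerMoved(int playerCx, int playerCz) {
            // Evict chunks that are now too far away (a real game would save them to disk first).
            loaded.keySet().removeIf(k -> {
                int cx = (int) (k >> 32), cz = k.intValue();
                return Math.abs(cx - playerCx) > RADIUS || Math.abs(cz - playerCz) > RADIUS;
            });
            // Load (or generate) any missing chunk in the neighbourhood.
            for (int cx = playerCx - RADIUS; cx <= playerCx + RADIUS; cx++) {
                for (int cz = playerCz - RADIUS; cz <= playerCz + RADIUS; cz++) {
                    final int fx = cx, fz = cz;         // effectively-final copies for the lambda
                    loaded.computeIfAbsent(key(fx, fz), k -> loadOrGenerateChunk(fx, fz));
                }
            }
        }

        private byte[] loadOrGenerateChunk(int cx, int cz) {
            return new byte[16 * 16 * 256];             // placeholder: one byte per block
        }
    }

A real engine adds view distance, streaming, saving and so on, but memory-wise the idea is the same: only a bounded window of the world is resident at any one time.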
http://gamingbolt.com/ten-largest-worlds-in-video-games
http://www.youtube.com/watch?v=5v2_WiQH7Tc
http://www.youtube.com/watch?v=HhyyUiYQolA
http://www.youtube.com/watch?v=BiQCz2NjPR8
Neolander,
“I would be surprised if Minecraft didn’t do something similar on the inside; keeping the whole world map loaded all the time sounds like a needlessly inefficient way to go about it.”
Yes, Minecraft does take advantage of the fact that it doesn’t need everything all at once:
http://www.minecraftwiki.net/wiki/Alpha_level_format
http://www.minecraftwiki.net/wiki/Chunks
It actually has a very limited working set: according to the documents above, the default is 441 “chunks” (of 16*16*256 blocks each). Blocks which remain idle for 30s are written back to disk (in the case of a local game).
I’d say this is probably fine for single-player gameplay. On massively multiplayer servers it may be difficult to scale RAM to hold enough raw chunks for every player. Java’s own object overhead plus deferred garbage collection probably amplifies the RAM situation.
The interesting thing about Minecraft in particular is that the world is initially generated algorithmically from a random seed. It would be possible to render the entire initial world functionally without actually storing any block data whatsoever. The only storage strictly necessary would be the delta blocks, and these are very likely compressible due to their distribution and relationship to one another. I think that with some clever software optimization, such worlds may have been possible on late-1990s-era hardware.
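As a toy illustration of that last idea (hypothetical Java, nothing like Minecraft’s real format; the names and the 26/12/26-bit key packing are just assumptions for the sketch), the world becomes a pure function of the seed plus a map holding only the player’s edits:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical "seed + deltas" world: blocks come from the generator
    // unless they have been modified, and only the modifications are stored.
    class DeltaWorld {
        private final long seed;                                  // would feed the noise functions
        private final Map<Long, Byte> deltas = new HashMap<>();   // changed blocks only

        DeltaWorld(long seed) { this.seed = seed; }

        byte blockAt(int x, int y, int z) {
            Byte changed = deltas.get(pack(x, y, z));
            return changed != null ? changed : generate(x, y, z);
        }

        void setBlock(int x, int y, int z, byte id) {
            if (id == generate(x, y, z)) deltas.remove(pack(x, y, z));  // back to the original block
            else deltas.put(pack(x, y, z), id);
        }

        // Stand-in for a real terrain generator: any pure function of (seed, x, y, z) works here.
        private byte generate(int x, int y, int z) {
            return (byte) (y < 64 ? 1 : 0);   // e.g. "stone" below a fixed height, "air" above
        }

        private static long pack(int x, int y, int z) {
            // pack the coordinates into a single 26/12/26-bit map key
            return ((long) (x & 0x3ffffff) << 38) | ((long) (y & 0xfff) << 26) | (z & 0x3ffffff);
        }
    }

The point being that the theoretical minimum state is just the seed plus the (compressible) deltas.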
OTOH, “older software developers” gave us something as stupid as the Y2K bug, so you’re probably looking at the past through rose-tinted glasses a bit.
Software was also notoriously more unstable in general.
zima,
“OTOH, ‘older software developers’ gave us something as stupid as the Y2K bug, so you’re probably looking at the past through rose-tinted glasses a bit.”
You’re absolutely right, and it was a somewhat poor generalization on my part to imply a difference in aptitude.
Well I think you’re wrong, I think that… …wait, WHAT?! You’re saying I’m “absolutely right”?! You aren’t supposed to do that on the internet; you’re supposed to disagree. You’re no fun to argue with ;P
Oh well, I’ll just add that most software in existence gets abandoned relatively quickly; only the good stuff is still used (similarly, we also tend to remember only the good-ish games).
zima,
Haha, next time you’re on Long Island we can go grab a beer; I’m sure we’ll find something to argue over.
Obligatory mention of MenuetOS, the complete graphical operating system written entirely in Assembly:
https://en.wikipedia.org/wiki/MenuetOS
“I was tired of people saying nothing big can be written in Assembly, so I wrote an operating system.” It blows my mind every time I see it.
Why is it impressive? Until the 32-bit age, most OSes for home systems were fully or partially written in Assembly.
Start here:
http://www.folklore.org/StoryView.py?project=Macintosh&story=I'll_Be_Your_Best_Friend.txt&sortOrder=Sort%20by%20Date&detail=medium
Welcome to a new mindset…
Kochise
Nice stuff.
Sure, those old operating systems were impressive too.
Are you saying you don’t find MenuetOS impressive? Have you tried it?
No, because I experienced first hand all those operating systems when they were new.
Sure, Assembly programming is hard compared to using higher-level languages, but many old dinosaurs like myself used it as our daily tool back then.