“EVE Online’s complicated inter-corporate politics are often held together by fragile diplomatic treaties and economic agreements. So fragile, in fact, that a single misclick can lead to a fracas that quickly snowballs into all-out warfare. That’s what happened to two of the spacefaring sandbox MMO’s largest player alliances in the Battle of Asakai, a massive fleet vs. fleet onslaught involving 3,000 players piloting ships ranging from small interceptors to gargantuan capital ships. Straight from the wreckage-strewn outcome of the battle, we’re breaking down the basics of what happened for everyone to truly fathom one of the biggest engagements in the game’s history.” The cost of this battle in in-game currency is, so far, 700 billion. While MMOs don’t float my boat, I have to say that this is still pretty awesome. Penny Arcade looks at the technical details server-side, and what a battle like this does to the game’s backend infrastructure.
I still prefer the beauty of Homeworld’s battles (looking at the videos, this one looks like a random pile of ships).
A battle of 3,000 people is not massive and not even fun, especially when you lose your ship because of lag or a disconnect. The fun in EVE came from small battles, where you had to use your skills to succeed.
Uh, how is it not massive? I’m sure it might not be fun, but 3,000 is a pretty massive number for an online battle.
Agreed. For some context, 3k is bigger than the populations of most medium-sized WoW servers.
Hi,
I’d like to know why they need to disconnect people when they shift a solar system from one node to another.
As far as I can tell, EVE Online uses separate servers (where the client disconnects from the previous node and connects to the new node when the player “warps” from one solar system to another), and they’re lying about it being a distributed system.
It’s like pretending that having completely separate web servers, where there are HTML links from one site (e.g. osnews.com) to another site (e.g. penny-arcade.com), constitutes a “distributed system” because the user’s client (web browser) can switch between separate servers when they click on an HTML link.
– Brendan
You just described a textbook example of a distributed system, but then criticized it for calling itself one… Why on earth did anyone mod you up?
??? The World Wide Web is a distributed system – the poster child for one, actually. Really, try to find a definition of the term that doesn’t cite it as the prime example.
Hi,
If you want to stretch the definition of “distributed system” to include the “least distributed possible” cases, then you could pretend almost anything is a distributed system (all the way back to the old “telnet into a server” MUDs, MUSHes, and MOOs) and it becomes a meaningless joke. I’m fine with that if that’s what you want – let’s call it a “barely distributed system” (no fault tolerance, no load balancing, etc.; where the entire pile of crud has to be taken down for a few hours every week because they’re too stupid to figure out live migration or even handle the complexities of symbolic links).
So, are they lying about it being a “more distributed than everything else” system?
– Brendan
Traditional MUDs… no. But there was some effort to make a distributed system of MUDs:
http://en.wikipedia.org/wiki/InterMUD
Note there is no mention of fault tolerance, load balancing, distributed processing, or anything of the sort.
I agree with your gripe – I’m only pointing out that fault tolerance and load balancing are tangentially related to distributed systems architecture. Many distributed systems have none of those attributes, and many systems that have those attributes are not distributed systems.
If they run a large number of primarily independent servers that do most of their work on local independent datasets, only communicating to each other over narrow channels, then they are textbook distributed systems. The more state they share the less “distributed” they are…
Hi,
I was looking at Wikipedia’s list of “distributed computing architectures”:
http://en.wikipedia.org/wiki/Distributed_computing#Architectures
You’ll see that boring old “client-server” (potentially including one client on one computer talking to one server on a different computer) is the first architecture on their list.
In my opinion, boring old “client-server” (including multiple clients talking to one server, and multiple clients talking to multiple separate servers) is just client-server and doesn’t really qualify as a true distributed system.
Now; EVE Online (as I imagine it) is a slightly more complex case of client-server. I’d imagine that each individual client is talking to at least 3 different servers (one for chat, one for the economy/trade, and another for “objects in space”); but despite this it’s still all just client-server, and still doesn’t really qualify as a true distributed system in my opinion.
These are all practical considerations only (e.g. shared state and/or heavy communication tends to kill scalability/performance; and local independent datasets are the result of minimising shared state and communication).
What does qualify as “truly distributed” is when multiple computers work together, rather than independently: Google (many computers working in parallel for each query), Wikipedia (front-ends, caches, databases, media servers, etc. for each page request), BitTorrent, SETI@home, supercomputers.
– Brendan
I want to avoid getting lost in the weeds when it comes to terminology is all…
A distributed system is really just a collection of network connected machines running software to facilitate a common goal.
BitTorrent is a good example of a distributed system. So are SETI@home and the other examples you gave.
But one of the things you mentioned is live migration. One way to do live migration between two nodes, in simple layman’s terms:
1. On the source node, snapshot the current server’s complete state, and start doing incremental differential snapshots at set intervals.
2. Send the full snapshot to the destination node. Once it has it, start sending the incrementals until you get “caught up”.
3. Freeze the state on the source node, send the last incremental to the destination and have it wake up, and then direct all clients to reconnect to the destination node.
My point is this process has absolutely nothing to do with distributed computing – it is the same process you would go through if you only had one server and wanted to migrate to another one. It also has an Achilles’ heel, namely the larger the amount of state and the faster it changes, the longer it takes to complete. There is almost always a significant amount of “lag” involved with a failover unless the amount of state is trivial.
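In rough Python, those three steps look something like this. Everything here is made up for illustration – a toy Node class, a plain dict for state, an arbitrary catch-up threshold – and it ignores deletions entirely; it is definitely not EVE’s actual code:

import copy

CATCH_UP_THRESHOLD = 10  # arbitrary: stop incremental copies once the backlog is this small


class Node:
    def __init__(self):
        self.state = {}      # e.g. {"ship_42": {"pos": (1, 2), "hp": 100}}
        self.frozen = False

    def changes_since(self, baseline):
        # Differential snapshot: only the entries that changed since the baseline.
        return {k: v for k, v in self.state.items() if baseline.get(k) != v}


def live_migrate(source, destination, clients):
    # 1. Full snapshot of the source node's current state.
    baseline = copy.deepcopy(source.state)
    destination.state = copy.deepcopy(baseline)

    # 2. Keep shipping incremental diffs until we are nearly "caught up"
    #    (in reality the game keeps mutating state between snapshots).
    while True:
        delta = source.changes_since(baseline)
        if len(delta) <= CATCH_UP_THRESHOLD:
            break
        destination.state.update(delta)
        baseline = copy.deepcopy(source.state)

    # 3. Freeze the source, send the last increment, wake the destination,
    #    then direct every client to reconnect. This freeze window is where
    #    the user-visible "lag" comes from.
    source.frozen = True
    destination.state.update(source.changes_since(baseline))
    destination.frozen = False
    for client in clients:
        client.reconnect_to(destination)   # clients assumed to know how to reconnect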
All I was getting at is that EVE Online may be a poorly designed system in some respects (I really don’t know much about it), but the deficiencies you mentioned don’t have anything to do with it being more or less distributed…
Take SETI@home… How does it deal with a non-responsive node (i.e. a user’s computer that goes offline)? It simply moves on – gives the work to someone else. The point is the system doesn’t care about a node going away, because all a node is to it is a compute resource working on a small dataset. For SETI@home, compute cycles are the gold – they don’t care about latency because it doesn’t matter to them.
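Sketched in Python, the “just move on” strategy is roughly a work queue with a deadline. The names and the 24-hour deadline below are invented – this is not SETI@home’s real scheduler:

import time

DEADLINE = 24 * 3600  # seconds a node gets before we assume it went away


class WorkQueue:
    def __init__(self, units):
        self.pending = list(units)   # work not yet handed out
        self.in_flight = {}          # unit -> time it was handed out
        self.done = set()

    def hand_out(self):
        # Re-issue any unit whose node has gone silent; otherwise hand out new work.
        now = time.time()
        for unit, started in self.in_flight.items():
            if now - started > DEADLINE:
                self.in_flight[unit] = now   # simply give it to someone else
                return unit
        if self.pending:
            unit = self.pending.pop()
            self.in_flight[unit] = now
            return unit
        return None

    def report_result(self, unit):
        self.in_flight.pop(unit, None)
        self.done.add(unit)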
EVE is the opposite – the problem they have to solve is reducing the latency to a large number of clients in a world represented by a shared state – latency is their gold. Performing compute in separate “islands” is the way they reduce latency.
They are both distributed systems, but they are almost opposite in purpose and design. The bigger an EVE backend node is, the more clients it can handle with low latency, but the more state it has to manage. Live migration of a large amount of state, without negatively impacting the latency of the users, is simply not an easy problem to solve. Not saying it can’t be done, just that it isn’t trivial and it would almost certainly not be transparent to the users…
I wonder how many Noctis ships were reaping the rewards of the battle.
I’m sure they wrote that thinking people would be impressed, but that’s not what I got out of the article.
Also… it looked like the ships were just sitting there shooting, not even moving relative to each other.
Couldn’t high level commands like “sit here and continue firing until I say otherwise (or die)” be delegated to the server?
Then at any given point the server would be dealing with only a fraction of interactive clients and a majority of clients set to auto-fire.
The problem would become a lot smaller with those actions being processed locally on the server.
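Something like this toy Python sketch (invented names, and I have no idea how EVE’s server loop actually works): the client sends one standing order, and the server then resolves it every tick with no further client traffic.

class Pilot:
    def __init__(self, name, hitpoints=100):
        self.name = name
        self.hitpoints = hitpoints
        self.standing_order = None   # e.g. ("fire_at", target)

    def set_order(self, order):
        # Called once when the player clicks; no per-shot messages needed afterwards.
        self.standing_order = order

    def alive(self):
        return self.hitpoints > 0


def server_tick(pilots):
    # Runs every simulation tick; pilots on auto-fire generate zero client traffic.
    for pilot in pilots:
        if not pilot.alive() or pilot.standing_order is None:
            continue
        verb, target = pilot.standing_order
        if verb == "fire_at" and target.alive():
            target.hitpoints -= 1    # resolve the shot entirely on the server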
As far as I can tell from reading this and other articles, the size of the battle was impressive only because the back end of EVE Online is written in Stackless Python. Using Stackless for something like this is rather like conducting the Berlin Philharmonic through Tristan using Morse Code. From your base on the moon.
A combination of Java/Scala and Akka could have handled this without breaking a sweat.
Python expresses complexity clearly.
Java/Scala/Akka clearly express complexity.
Your way might end up faster, but I doubt it would be very nice to look upon…
I disagree. I dearly love Python since it’s the first language that I ever felt truly competent with. But experience has taught me that approaching concurrency using multiple threads and shared state – even microthreads a la Stackless Python – is a very messy business. Actor models, in my opinion, are far cleaner conceptually and scale much better. Hence my preference for Scala over Python in this instance.
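To show what I mean by the actor model, here is a toy sketch in plain Python (since that’s the language being compared – Akka gives you this style natively, plus supervision and distribution on top; the class below is invented purely for illustration): each actor owns its state and is only reachable through messages, so there is no shared mutable state to lock.

import queue
import threading


class CounterActor:
    def __init__(self):
        self.mailbox = queue.Queue()
        self.count = 0               # private state, never touched by other threads

    def start(self):
        threading.Thread(target=self._run, daemon=True).start()
        return self

    def send(self, message):
        self.mailbox.put(message)    # the only way to interact with the actor

    def _run(self):
        while True:
            message = self.mailbox.get()   # messages are handled one at a time,
            if message == "increment":     # so no locks are ever needed
                self.count += 1
            elif message == "stop":
                return


actor = CounterActor().start()
for _ in range(1000):
    actor.send("increment")
actor.send("stop")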
Well I can’t really argue with that… I was mostly just commenting on the language itself, although to be fair Scala isn’t half bad.
I would tend to think this kind of thing would be ideal for something like Erlang, but that would be begging the question since it’s even uglier than Java.
I’m looking forward to someone trying to do an MMOG in Go. I think it might be interesting to see some of its features leveraged for that type of work – it seems like a good fit to me.
What?! Erlang ugly? Oooh, I get it… “ugly” is the new “sick”/“bad”.
…and the poop throwing begins…