I'm part of a small team that is adapting an old game to a client/server architecture, so game logic and state can be controlled more centrally. The game server currently uses MongoDB, but it's still early enough to switch. I have a plan for avoiding server performance problems if we need to scale to support a lot of users. Can you tell me if it seems reasonable? (We don't actually plan to do any of this until it's needed, but we don't want to develop ourselves into a hole we can't get out of.)
The server is stateless (it responds to each message based on what's in the database), so we can have more than one server talking to the same DB. The server is hosted on Amazon Web Services, so the first scaling step will be to run copies of the server on one or more instances/VMs, all connecting to the same database, with a load balancer spreading requests across them. This assumes the server CPU will be maxed out before the database is.
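To make sure I'm describing the same thing you're picturing, here's roughly how I imagine one of those stateless handlers (the `players` collection, host name, and function names are made up for illustration; any server behind the load balancer could run this, since nothing is kept in the process between requests):

```python
from pymongo import MongoClient

# Every server process connects to the same database (placeholder host name).
client = MongoClient("mongodb://db-host:27017")
db = client["game"]

def handle_move(player_id, new_position):
    # Read and write go straight to the database, so the handler keeps no
    # per-session state; the load balancer can send the next request anywhere.
    db.players.update_one(
        {"_id": player_id},
        {"$set": {"position": new_position}},
    )
    return db.players.find_one({"_id": player_id})
```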
The next step is to change the single database to a MongoDB replica set, so the many servers can talk to many database members. My understanding is that we can read from any member, but must write to the primary. This will require slight changes to the server, but in the real world, do we need to minimize writes or somehow batch them? What would that entail?
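For that step, I'm imagining something like the sketch below (the hostnames, the `rs0` set name, and the `pending_moves` list are placeholders): reads can be served by secondaries while the driver routes writes to the primary, so the application code itself shouldn't need to change much.

```python
from pymongo import MongoClient, UpdateOne

# Connect to the replica set; with secondaryPreferred, reads may come from
# any member, while writes are always sent to the primary by the driver.
client = MongoClient(
    "mongodb://db1:27017,db2:27017,db3:27017/"
    "?replicaSet=rs0&readPreference=secondaryPreferred"
)
db = client["game"]

# If write volume ever becomes the bottleneck, one option (not the only one)
# would be to queue updates and flush them in a single round trip:
db.players.bulk_write([
    UpdateOne({"_id": pid}, {"$set": {"position": pos}})
    for pid, pos in pending_moves  # hypothetical list of queued updates
])
```

Is something like that batching what people actually do, or is it premature?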
I'd appreciate any feedback, knowledge, or experience you can share.