Part 1 – Mongo Message Migration

Here at HootSuite, our members rely on our Publisher to send and schedule messages to their social networks.

Due to tremendous growth over a very short period of time, our Publisher has been through some growing pains, and we decided it was time to move our Publisher data onto its own infrastructure.

Have a Plan, and Make Sure Everyone Knows About It

Sometimes, Mongo is beautiful and treats you with respect. Other times, Mongo turns into a monster that haunts your dreams. We knew migrating a ton of data out of a heavily populated MongoDB cluster into its own sharded replica set wasn’t going to be as easy as it seemed.

The first step, and huge savior during this process, is creating a plan. Not just a plan in your head, not just a plan that the engineering manager is tracking, but a truly open, public plan. A plan that is visible to everyone in the company, that anyone can poke holes in. In a company like HootSuite, you’d be surprised what secret mini-projects are occurring that you haven’t been informed about (because it’s a secret!) that may affect your migration plan.

My original thought was to shut down our Publisher for a short period of time while we completed our migration. I announced the plan internally on Yammer, and it was met with frowny faces from our stakeholders. Bringing down the Publisher, even for a short period of time, wasn’t going to work.

Fair enough, so a full shutdown was out of the question. We decided to pick a date two weeks out (which happened to be the cutoff within which 50% of our data was scheduled), migrate everything past that date, and let the Publisher continue to run on the old architecture until that day passed. This way, we only had to stop people from scheduling anything more than two weeks out, and only while we worked on the migration. A partial reduction in the Publisher’s availability turned out to be reasonable to everyone involved.

Had we not made our intentions clear, and not left room to adapt to other happenings around the office as they popped up, we would have had an unmanageable blocker on our hands.

Test in Production

As early as possible, you need to be testing your code changes, and your full migration, in production.

We have a saying here at HootSuite, “All learning is done in production.” If you’ve only tested in a development environment, or on your staging server, you’re still completely blind as to what will happen when your code hits the wild world of production.

At HootSuite, we have a “dark launch” system that allows us to wrap functionality in toggles in our admin panel. This means that for any features we launch, we can roll them out, limit who can see them, and even turn them off again with the click of a mouse. I highly suggest having something like this in place when you’re making infrastructure changes in your code. You don’t want to be left out in production without a quick way to return to a working state if you run into problems.
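The post doesn’t show what the dark launch system looks like internally, but the core idea can be sketched in a few lines. Everything below is hypothetical (the `newPublisherDb` flag, the member IDs, the shape of the flag object); it only illustrates the toggle-plus-allow-list pattern described above, not HootSuite’s actual implementation.

```javascript
// A minimal dark-launch toggle sketch. Flag names, members, and the
// flag structure here are made up for illustration.
function createDarkLaunch(flags) {
  return {
    // A feature is on for a member if it is globally enabled, or if
    // the member is in that feature's explicit allow-list.
    isEnabled: function (feature, memberId) {
      var flag = flags[feature];
      if (!flag) return false;           // unknown feature: off
      if (flag.enabled) return true;     // fully rolled out
      return (flag.allowList || []).indexOf(memberId) !== -1;
    }
  };
}

// Example: a hypothetical "new Publisher datastore" feature,
// dark-launched to two members while everyone else stays on the old path.
var darkLaunch = createDarkLaunch({
  newPublisherDb: { enabled: false, allowList: [42, 99] }
});

darkLaunch.isEnabled("newPublisherDb", 42); // true (in the allow-list)
darkLaunch.isEnabled("newPublisherDb", 7);  // false (old code path)
```

The important property is the instant off switch: flipping `enabled` (or emptying the allow-list) in the admin panel returns everyone to the working state without a deploy.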

If you rely on Mongo’s balancer while importing data, you’re gonna have a bad time.

Mongo’s balancer can be slow, and it will degrade the speed of your data import when it needs to re-chunk/re-balance. Pre-splitting is the answer, but it’s not well documented on the Mongo website in my opinion. Keep in mind, all split/chunk commands need to be run against the ‘admin’ db.

The docs also don’t mention that you should turn off the balancer. You should turn off the balancer. We wasted half of migration day trying to figure out why our splits weren’t being distributed as we commanded, only to realize that the balancer thinks it knows better than us.
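Putting those two lessons together, a presplit.js in the spirit of the one described here might look like the sketch below. The collection name matches the example later in the post; the boundary values and shard names are made up, and the right split points depend entirely on your own data distribution. Note that it must be run against the ‘admin’ db.

```javascript
// presplit.js (illustrative sketch; boundary values and shard names
// are hypothetical). Run against the 'admin' database, e.g.:
//   mongo localhost:27017/admin presplit.js

// Stop the balancer first, so it doesn't fight your manual splits.
sh.stopBalancer();

// Split the collection at chosen shard-key boundaries.
[10000, 20000, 30000].forEach(function (id) {
    printjson(db.runCommand({
        split: "hootsuite.message",
        middle: { socialNetworkId: id }
    }));
});

// Distribute the resulting chunks across shards yourself.
printjson(db.runCommand({
    moveChunk: "hootsuite.message",
    find: { socialNetworkId: 10000 },
    to: "shard0001"
}));
```

Wrapping each command in `printjson` means you actually see the result of every split, which matters given how quietly things can fail (more on that below).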

MongoDB Will Silently Laugh At You

According to Mongo’s documentation, you should run scripts from your terminal shell (not Mongo’s shell); running them from within Mongo’s shell is presented only as an alternative.

They suggest doing this as a first solution:

$ mongo localhost:27017 presplit.js

Mongo happily took in the script, ran for 5 minutes, and returned me to my prompt without a status message. Upon logging into our mongos instance, none of the splits had been created. I was having trouble understanding if anything was actually happening, so Beier suggested I try running one single split command from the Mongo Shell itself.

Good call:

mongos> db.runCommand({split: "hootsuite.message", middle: { socialNetworkId: 10000 }});

Error: You must use the admin database to run this command.

We realized Mongo was failing silently from the bash prompt commands, and that’s why our scripts were being run, but our splits weren’t being created.

This worked fine:

$ mongo localhost:27017/admin presplit.js

This seems like a better solution:

mongos> load('presplit.js')

Mongo caches your exports in memory

We found that Mongo is happy to serve previously exported results from memory, and only reads fresh results from disk. Running a full data export ahead of your actual migration warms that cache and makes the real export run way faster.
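A warm-up export run the night before migration day could be as simple as the command below. The host, database, collection, and output path are hypothetical; the point is just to read every document once so migration day is served largely from memory.

```shell
# Hypothetical warm-up export; discard the output file if you like.
# Reading every document pulls it into the cache, so the real
# export on migration day runs much faster.
mongoexport --host localhost:27017 \
  --db hootsuite --collection message \
  --out /tmp/message-warmup.json
```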

Test, Test, Test, IN PRODUCTION…

I can’t stress enough how important it is to test your whole plan from start to finish in production. Make sure you know what you’re doing before you get to doing it. With a solid plan that was reviewed by everybody and scrutinized to the fullest, we turned a 2-hour projected migration process into a 20 minute process of beauty and grace.

No data is like production data, so do your mongoexport/mongodump with a full set of real data to find out how long it takes. Then transfer that data over your network (or however) to its new location and run your import on your new infrastructure. On Amazon, this is especially important as your EC2 instances won’t have all of their space available up front, and may take extra time to expand as you do your import.
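A full rehearsal of that kind, with every stage timed, might look something like this. The hostnames and paths are made up for illustration; the point is to measure dump, transfer, and restore separately against real production data.

```shell
# Hypothetical end-to-end rehearsal: time each stage with real data.
time mongodump --host old-mongo:27017 --db hootsuite --out /data/dump

# Ship the dump to the new infrastructure.
time rsync -az /data/dump/ new-mongo:/data/dump/

# Restore into the new cluster (run on the new infrastructure).
time mongorestore --host new-mongos:27017 /data/dump
```

Running the rehearsal on the destination hardware also surfaces problems like the EC2 volume expansion delays mentioned above, before they can cost you on migration day.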


  • You must have a plan that is available for all to see and critique. It’s important that everyone knows this is occurring, and can plan accordingly.
  • Be testing in production ASAP. Real learning happens in production.
  • Remember that Mongo’s automated balancer is not your friend during massive imports, so pre-splitting your own chunks and moving them between your shards before your import is a huge time saver.
  • Don’t forget to pre-export your data before your actual migration happens in an effort to put as much data into memory as possible.
  • Be careful running JavaScript files against MongoDB from your terminal shell. If you think something is going wrong, run a single command from your script in the Mongo shell yourself to see what is happening inside of Mongo.

Good luck with your Mongo data migrations. I hope this information helps your process run as smoothly as possible.