At Hootsuite, we have begun to move some of our new services into Docker containers. Among the many benefits of application containerization, the ones that stand out the most are security, high availability, and non-conflicting server configurations for different applications.

During my co-op term at Hootsuite, I worked under Mark Eijsermans and Mark Allen on the Build and Deploy team, helping configure the infrastructure for deploying containerized applications. Marathon was the framework chosen to schedule containers onto a provisioned Mesos cluster. A scheduler system matters because it allows developers to quickly get an application up and running in a consistent environment. It also allows for efficient resource use and the ability to quickly scale services up and down as needed, which can mean significant cost savings. Marathon provides many benefits, such as ease of use and an extensive REST API, and it is currently in use at companies such as Airbnb and Shopify (among others), so it has been production tested.

However, within the past few months, Twitter’s Aurora project graduated from incubation to become a top-level project at Apache. With the scale of its use at Twitter – tens of thousands of nodes running hundreds of thousands of tasks – and its recent support for Docker containers, it is hard to ignore its potential uses in our infrastructure. I decided to spend some time installing Aurora on our existing development Mesos cluster to see what I could learn from it. This post compares and contrasts the two Mesos frameworks for use in container orchestration.


Installation

The installation was done on Ubuntu 14.04 servers: a Mesos primary and a replica. Mesos and ZooKeeper were installed and properly configured before the frameworks were installed. The versions of Marathon and Aurora used are 0.13.0 and 0.10.0 respectively.


Marathon

Installing the latest version of Marathon was as simple as getting the latest package from the Ubuntu apt repository. Configuration is done by creating a file for each of Marathon’s configuration parameters and placing them in a specific folder, then starting the Marathon service. If everything is configured correctly, it runs cleanly and registers with Mesos. Once Marathon is running, tasks can immediately be scheduled from the web client with a variety of configuration options (quite literally with the click of a button!).


Aurora

As expected, Aurora was more difficult to install and needed to be built from source. Aside from the limited documentation on Aurora’s website, there is very little information on how to deploy it to an existing cluster. Doing so required digging into the build scripts to install the proper dependencies and set everything up correctly. There are Mesos-version-specific Python dependencies, which may need to be built from the Mesos source (depending on your Mesos version). Since there was little to no documentation on deploying Aurora to an existing cluster, I have written a short gist outlining the steps I needed to take to get Aurora running correctly with my local cluster running Mesos 0.25.0:

For our use at Hootsuite, we want to be able to support clean rolling upgrades and deployments. The installation and deployment of these frameworks has to be consistent, which necessitates an Ansible task for provisioning hosts. Marathon’s installation process makes this much easier than Aurora’s. Furthermore, automating the deployment of Aurora would mean hosting our own copy of the source containing our specific configuration. While Aurora is actually older than Marathon, it was until recently an internal Twitter tool. It still feels very much like one, and as it evolves, the installation process will likely change, requiring further time spent updating deployment scripts in the long run. For this reason, Marathon wins out.

Ease of Use

As Marathon has been publicly available for longer, it has seen more general use and is thus easier to use. Like Aurora, it has a web interface that displays the status of containers running on your cluster. However, it stands apart from Aurora in that tasks can also be scheduled directly from the web client. In addition, there is a robust REST API for scheduling more complex tasks, including Docker containers. These REST endpoints take well-defined configuration options passed in as JSON parameters. While there is plenty of documentation on scheduling and configuring Marathon for specific uses, some details are still sparse and the source code still needs to be examined. Marathon provides full support for Docker containers, and scheduling them is extremely easy when combined with our internal Docker registry.
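As a sketch, scheduling a Docker container via Marathon’s `/v2/apps` REST endpoint boils down to POSTing a JSON app definition. The service name, image, and registry host below are invented for illustration:

```python
import json

# Sketch of a Marathon app definition for a Docker container.
# The app id, image name, and registry host are hypothetical.
app = {
    "id": "/my-service",
    "cpus": 0.5,
    "mem": 256,
    "instances": 2,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "registry.example.com/my-service:latest",
            "network": "BRIDGE",
            # hostPort 0 asks Marathon/Mesos to pick a free host port
            "portMappings": [{"containerPort": 8080, "hostPort": 0}],
        },
    },
}

payload = json.dumps(app)
# To schedule it (assuming Marathon listens on localhost:8080):
#   curl -X POST -H "Content-Type: application/json" \
#        -d "$payload" http://localhost:8080/v2/apps
```

Marathon validates the definition and responds with the created app, after which the deployment can be watched from the web client or the `/v2/deployments` endpoint.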

Web interface for scheduling a new task in Marathon

On the other hand, Aurora still feels very much like an internal tool. The documentation is sparse, with limited instructions on its various configuration options. The web interface presents information about running tasks in a well-organized manner, but it is read-only; there is no way to schedule tasks from it. To do so, Aurora comes bundled with a command-line client and its own DSL, so scheduling even basic tasks means learning the basics of an entirely new language. Job configuration options are very robust and are specified in a ‘.aurora’ file. The command-line client has many options that let users schedule, kill, update, and view the status of their tasks, but much of the task- and cluster-specific configuration lives in the task’s .aurora file. I found myself frequently consulting the documentation and source to get even my simplest tasks running correctly. In addition, Aurora currently labels its Docker support as beta, due to security issues with Docker in certain situations (making it unsuitable for the kind of large-scale, organization-wide use Aurora targets).
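For a flavour of that DSL, here is a minimal sketch of a job definition in the style of Aurora’s tutorial; the cluster, role, and names are illustrative, and `Process`, `Task`, `Resources`, and `Service` are supplied by the DSL itself:

```python
# hello.aurora -- sketch of an Aurora job definition (values illustrative)
hello = Process(
  name = 'hello',
  cmdline = 'echo "hello from aurora" && sleep 60')

task = Task(
  name = 'hello_task',
  processes = [hello],
  resources = Resources(cpu = 0.5, ram = 128*MB, disk = 128*MB))

jobs = [Service(
  cluster = 'devcluster',   # must match an entry in your clusters.json
  role = 'www-data',
  environment = 'devel',
  name = 'hello',
  task = task)]
```

The job would then be scheduled with the bundled client, along the lines of `aurora job create devcluster/www-data/devel/hello hello.aurora`.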

Scheduling Control

One way to examine scheduling capability is the amount of control a developer has over the deployment. Both frameworks provide hooks for when a task changes state through Mesos (NOT STARTED, RUNNING, ABORTED, STOPPED). Marathon provides a rich API for accessing task status, which may also be viewed from the web interface, but its control over the deployment is limited to rescheduling the task if it fails. While Marathon is limited in this sense, simply restarting the container is sufficient if applications and services are containerized properly.
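That restart-on-failure behaviour is driven by health checks declared in the app definition. A sketch of the `healthChecks` fragment, with illustrative path and timing values:

```python
import json

# Sketch of the healthChecks section of a Marathon app definition.
# When a task fails maxConsecutiveFailures checks in a row, Marathon
# kills and reschedules it. Path and timing values are illustrative.
app_fragment = {
    "healthChecks": [{
        "protocol": "HTTP",
        "path": "/health",          # endpoint the service must expose
        "portIndex": 0,             # check the first mapped port
        "gracePeriodSeconds": 30,   # time allowed for startup
        "intervalSeconds": 10,
        "timeoutSeconds": 5,
        "maxConsecutiveFailures": 3,
    }]
}

print(json.dumps(app_fragment, indent=2))
```

This fragment is merged into the full app definition POSTed to Marathon; per-task health results then show up alongside task status in the web interface and REST API.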

Internal State model for tasks running in Aurora

Each task scheduled with Aurora has its own well-defined state model. Client applications can be written to hook into task state and define actions to take, letting developers finely tune how a job responds to a change in state. Furthermore, as Aurora has its own DSL for configuring tasks, a much finer granularity is exposed to developers for designing deployment strategies. For these reasons, Aurora provides far more scheduling control than Marathon.


Scalability

Another major difference between Aurora and Marathon is the scale at which they can be utilized. With its support for preemption, Aurora can be used as the scheduler for production, staging, and any other intended cluster types at the same time. Tasks are classified into priority groups when they are scheduled, and anything marked with a “production” classification is treated with top priority. When production tasks need to scale up and the cluster no longer has adequate resources to meet the request, Aurora will start killing lower-priority tasks to make room for the production tasks. Developers can borrow resources from the global cluster at any time for a test application, while Aurora guarantees that no production applications will be affected. This concept of prioritized scheduling is what allows Aurora to handle scheduling requests for an entire organization. Each priority group is implemented as a queue: when a resource request is sent to Aurora, it adds the request to the scheduling queue corresponding to its scheduling group, allowing all tasks to eventually be scheduled in a fair manner.
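To make the preemption idea concrete, here is a toy model (not Aurora’s actual implementation) of a fixed-capacity cluster that evicts lower-priority tasks when a higher-priority request cannot otherwise fit:

```python
import heapq

# Toy model of priority-aware scheduling with preemption. The cluster
# holds at most `capacity` tasks; a new task may evict strictly
# lower-priority tasks to make room, but never equal or higher ones.
class Cluster:
    def __init__(self, capacity):
        self.capacity = capacity
        self.running = []  # min-heap of (priority, name); lowest priority on top

    def schedule(self, name, priority):
        # Evict lower-priority tasks until the new task fits.
        while len(self.running) >= self.capacity:
            lowest_prio, lowest_name = self.running[0]
            if lowest_prio >= priority:
                return False  # nothing cheaper to evict; request must wait
            heapq.heappop(self.running)
            print(f"preempted {lowest_name}")
        heapq.heappush(self.running, (priority, name))
        return True

cluster = Cluster(capacity=2)
cluster.schedule("test-job", priority=0)
cluster.schedule("staging-job", priority=1)
assert cluster.schedule("prod-job", priority=10)        # evicts test-job
assert not cluster.schedule("another-test", priority=0) # no preemption
```

Marathon, by contrast, behaves roughly like this model with the eviction loop removed: when the cluster is full, everything waits, regardless of priority.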

High level model of how Aurora launches tasks through task groups

With Marathon this is not the case: scheduling across multiple tasks is guaranteed to be fair, but there is no preemption support. For this reason, multiple clusters are needed to support different environments so that non-production tasks do not disrupt key production tasks, which means additional overhead for infrastructure and operations developers to maintain separate environments. Aurora is therefore better designed to scale to a large organization, though Marathon can still support very large clusters.


Monitoring

If the system is to be used in production, a significant amount of information needs to be collected, not only about the individual hosts in the cluster, but about the state of the cluster as a whole. While Mesos itself provides a systems engineer with a significant amount of cluster-health information, sifting through this data may be too slow to allow rapid diagnosis of a task or service failure. Aside from rudimentary task health checks, Marathon provides little information that is helpful for debugging.
Aurora, on the other hand, provides a significant amount of additional information that can be extremely useful for debugging errors in the cluster. Some examples of these metrics include:

  • http_500_responses_events: The total number of HTTP 500 status responses sent by the scheduler. Includes API and asset serving. An increase warrants investigation.
  • jvm_uptime_secs: one of several JVM metrics that help debug why the scheduler may be crashing

Additionally, accessing this information is trivial in Aurora. It can be found at the /vars endpoint, and can be exported in text/plain or JSON format for easy integration into most existing stats-aggregation tools. This information is also available in graphical form at the /graphview endpoint, which supports simple aggregation and composition of values to help engineers rapidly debug problems.
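Because the text/plain output is just one “name value” pair per line, forwarding it to a stats aggregator takes only a few lines. A sketch, using an invented sample payload:

```python
# Sketch: parse the text/plain output of Aurora's /vars endpoint into a
# dict for forwarding to a stats aggregator. The sample payload below is
# illustrative; real output is one "name value" pair per line.
SAMPLE = """\
http_500_responses_events 3
jvm_uptime_secs 86400
task_store_LOST 0
"""

def parse_vars(text):
    metrics = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        name, _, value = line.partition(" ")
        try:
            # Most vars are counters or gauges; keep numbers as numbers.
            metrics[name] = float(value) if "." in value else int(value)
        except ValueError:
            metrics[name] = value  # a few vars are plain strings
    return metrics

metrics = parse_vars(SAMPLE)
```

In practice the input would come from an HTTP GET against the scheduler’s /vars endpoint rather than a hard-coded string.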


Conclusion

When examining which scheduler to choose at Hootsuite, we looked at the above criteria. Because the primary goal of this project was to create a system that lets developers deploy applications quickly and easily, ease of use and system/installation complexity were two factors we weighted heavily. Marathon is extremely easy to get started with, install, and deploy for immediate use. Ease of installation is important for us at Hootsuite because installation needs to be scripted for our environments to be consistent; deploying a new scheduling framework must be easy and should not break the cluster during an update. On this criterion, Marathon wins out, as its installation process is drastically simpler. It is no secret that Aurora provides extremely powerful scheduling primitives, but Marathon does not require learning a framework-specific DSL as Aurora does. With the goal of having deployments customizable by application developers, a developer should be able to easily specify resource requirements and have an application up and running rapidly without learning Aurora-specific configuration options. This means we trade off some scheduling control in favour of ease of use. Marathon wins out.

However, we feel that the loss in scheduling control from using Marathon is acceptable for our use cases. This infrastructure will initially be used at Hootsuite to run a handful of microservices, not our massive PHP monolith. Marathon automatically reschedules containers when their health checks fail, which, for properly containerized services, is usually enough to mitigate any issues that occur. Scaling this cluster is not a primary concern at this time; it is an issue we will face down the road as more services start using this infrastructure. Since we are piloting this program for new services, the massive organization-wide use cases that Aurora supports are not a priority at this point. Choosing Marathon does bring the additional overhead of maintaining environment-specific clusters, but as we have used Ansible to script the provisioning of our Mesosphere cluster, this overhead is greatly reduced.

Lastly, for our monitoring needs, we needed something that didn’t require much configuration and development time to integrate into our stack. Mesos provides a significant amount of cluster-health information. In addition, the information that Marathon provides on task state, task health, and framework use can be easily imported into our existing logging infrastructure. The additional cluster-wide JVM information that Aurora provides might be useful, but at the small scale these clusters will run at, we have had no trouble debugging with the information provided to us by Mesos and Marathon.

For the above reasons we have decided to use Marathon as our framework scheduler. The benefits that Aurora provides are largely realized when a scheduling system is implemented at organizational scale. As the first services roll out onto the clusters and more and more developers become familiar with the technology, the idea of using Aurora as a framework scheduler should be revisited. I’m confident that as Aurora matures, the benefits it provides will significantly outweigh the complexity and ease-of-use barriers.

Thanks to Noel, Ben, Mark A, and Mark E for their feedback and Kimli for the editing!

About the Author

Mohit Gupta is a co-op student from the University of Waterloo who worked on the build-and-deploy sub team within the platform team at Hootsuite. Aside from researching and configuring new production environments for services, he helped build out and maintain the CI infrastructure.