Setting up a Mesos/Marathon cluster on RHEL 7.1 little endian


Mesos and Marathon

Mesos is a distributed cluster manager that improves resource usage by dynamically sharing resources among multiple tasks. Mesos provides a unified view of the resources on all cluster nodes and seamless access to those resources, in a manner similar to what an operating system kernel does for a single computer. Hence, Mesos is also called the kernel for the data center. Mesos gives you a core for building data center applications, and its main component is a scalable two-level scheduler.

The following are the key components of a Mesos cluster manager:

  • Master: A cluster manager that coordinates all cluster operations. Multiple masters can be run to provide high availability.
  • Slaves (or nodes): Cluster members where the tasks run.
  • Frameworks: The actual tasks that run in the cluster. Many existing frameworks allow a varied set of applications and services to be deployed on a Mesos cluster.

Refer to the Mesos Architecture for more information.

The following section describes how to use the Marathon framework to deploy applications and services on Mesos.


The following table lists the location of the relevant packages for IBM PowerPC® Little Endian (ppc64le) platforms:

Linux distribution                     Package location
Red Hat Enterprise Linux (RHEL) 7.X    Unicamp

Refer to the Unicamp repository at:

Note: For other distributions on IBM Power®, you must build packages from source.


Marathon is a framework that is used to run long-running applications or services on Mesos. These applications have high availability requirements, which means that Marathon can monitor an application, automatically restart an application instance on failure, and elastically scale the application. Marathon can run other frameworks, such as Hadoop, and even itself. A typical Marathon workflow is to run N instances of an application somewhere within the cluster, where each application instance requires, for example, one processor and 1 GB of memory. You submit this request to Marathon, which creates N Mesos tasks to run on the slaves.

Marathon provides a Representational State Transfer (REST) API for starting, stopping, and scaling services. There is a browser-based GUI and also a command-line client. It can run in a highly available mode by running multiple Marathon instances.
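As a concrete sketch of that REST API, the snippet below builds the request used to scale a running application: Marathon scales an app via PUT /v2/apps/<app_id> with a JSON body carrying the new instance count. The host name used here is hypothetical.

```python
import json

# Hedged sketch: build the pieces of a Marathon scale request.
# Marathon scales an application via PUT /v2/apps/<app_id> with a
# JSON body such as {"instances": 3}. The host name is hypothetical.
def build_scale_request(marathon_url, app_id, instances):
    url = "%s/v2/apps/%s" % (marathon_url.rstrip("/"), app_id)
    body = json.dumps({"instances": instances})
    return "PUT", url, body

method, url, body = build_scale_request("http://mesos-master:8080", "mysql", 3)
print(method, url, body)
```

The same endpoint pattern also serves starting (POST /v2/apps) and stopping (DELETE /v2/apps/<app_id>) services.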

In this article, you can see how to deploy a service through Marathon and use the service in a sample application. The instructions mentioned here apply to both Intel® and IBM Power architecture (OpenPOWER) based servers. The service is a MySQL database service.

At a high level, a Mesos/Marathon cluster is illustrated in the following figure:

Figure 1. Mesos/Marathon cluster

Introduction to services

A service is a self-contained, independently deployed and managed unit of functions. Service-oriented architectures (SOAs) and, more recently, microservice architectures encourage applications to consist of loosely coupled services. More modern applications consist of multiple microservices, because they offer a number of advantages, such as code reuse, ease of scaling, independent failures, support for multiple platforms, flexibility in deployment, and greater agility.

Mesos handles batch, real-time, and other processing frameworks, where operations typically take less time to complete. Enterprise infrastructure runs a lot of applications and services that take a longer time to complete and have different requirements than the data processing frameworks. These long-running services are critical for business and also consume a large portion of infrastructure resources. Hence, the ability to run services on Mesos is important.

To run services at scale, the infrastructure needs to be able to support the following requirements:

  • Deployment of a service can be complex if the service depends on other services and there are constraints about where the service can be deployed.
  • Configuration management and packaging is about ensuring that all the dependencies for a service are met and the environment is configured properly for the service before the service starts.
  • Service discovery and load balancing become important when multiple instances of a service are running. Service discovery answers the question of where the instances of a particular service are running, and load balancing is about deciding which instance a particular request should go to.
  • After the service is deployed, it is important to do health monitoring of the service. Health monitoring information can be used to take further actions, such as scaling a service up or down, or relaunching the service on a failure.
  • Availability demands that the service remain available even under high load and during failures.
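The service discovery and load-balancing requirement above can be sketched as a simple round-robin selector over the known instances of a service. The host/port pairs below are illustrative, not from this article's setup; in a real cluster the instance list would come from service discovery.

```python
from itertools import cycle

# Minimal round-robin load balancer over the known instances of a service.
# Each instance is a (host, port) pair discovered elsewhere.
class RoundRobinBalancer:
    def __init__(self, instances):
        self._instances = cycle(instances)

    def next_instance(self):
        """Return the (host, port) pair the next request should go to."""
        return next(self._instances)

lb = RoundRobinBalancer([("10.0.0.1", 31172), ("10.0.0.2", 31510)])
print(lb.next_instance())  # → ('10.0.0.1', 31172)
print(lb.next_instance())  # → ('10.0.0.2', 31510)
print(lb.next_instance())  # → ('10.0.0.1', 31172)
```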

Setting up a Mesos and Marathon cluster on OpenPOWER servers that are running RHEL

The following instructions describe how to set up a Mesos/Marathon cluster on OpenPOWER systems, such as Tyan servers, running RHEL little endian (LE).

Installation and setup of Mesos master and Marathon

Perform the following steps to install and set up Mesos master and Marathon.

  1. Add the Unicamp package repository. Ensure that the following repository is added to all the systems that are going to be part of the Mesos cluster (mesos-master and mesos-slaves):
    	# cat > /etc/yum.repos.d/unicamp-misc.repo <<EOF
    	[unicamp-misc]
    	name=Unicamp Repo for Misc Packages
    	baseurl=...
    	enabled=1
    	gpgcheck=0
    	EOF
  2. Install the required packages by running the following command:
    	# yum install mesos python-mesos zookeeper marathon
  3. Configure the Mesos master. Edit the /etc/sysconfig/mesos-master file and add the following information:

    If the IP address of mesos-master is, the complete configuration file looks as shown in the following code:

        # This file contains environment variables that are passed to mesos-master.
    	# To get a description of all options run mesos-master --help; any option
    	# supported as a command-line option is also supported as an environment
    	# variable.
    	# Some options you're likely to want to set:
    	# For isolated sandbox testing
  4. Restart ZooKeeper and mesos-master services by running the following command:
    	# service zookeeper start
    	# service mesos-master start
  5. Open the network ports. By default, mesos-master communicates on port 5050. Ensure that it is not blocked by a local firewall. If you are using firewalls, run the following commands to open a TCP port for the public zone:
    	# firewall-cmd --zone=public --add-port=5050/tcp --permanent
    	# firewall-cmd --reload
  6. Configure Marathon on the system that runs the Mesos master.
    	# cat >/etc/sysconfig/marathon<<EOF
  7. Start the marathon service by running the following command:
    	# service marathon start

Installation and setup of the Mesos slave

Ensure that all Mesos slaves have the Docker setup configured. For more information about installing and configuring Docker on RHEL LE, refer to Docker for Linux on Power Systems.

  1. Install the required packages by running the following command:
    	# yum install mesos python-mesos
  2. Configure Mesos slave. Edit the HOSTNAME variable in /etc/sysconfig/mesos-slave to point to the Mesos master IP, followed by setting the MESOS_EXECUTOR_REGISTRATION_TIMEOUT and MESOS_IP variables.
    For example, if the IP address of the mesos-master is and that of the mesos-slave is, the configuration file looks as shown here:
    # This file contains environment variables that are passed to mesos-slave.
    # To get a description of all options run mesos-slave --help; any option
    # supported as a command-line option is also supported as an environment
    # variable.
    # The mesos master URL to contact. Should be host:port for
    # non-ZooKeeper based masters, otherwise a zk:// or file:// URL.
    # For isolated sandbox testing
    # For a complete listing of options execute 'mesos-slave --help'
    # systemd cgroup integration
  3. Restart the mesos-slave service by running the following command:
    # service mesos-slave restart
  4. Open network ports. By default, mesos-slave communicates on port 5051. Ensure that it is not blocked by a local firewall. If you are using firewalls, run the following commands to open a TCP port for the public zone:
    	# firewall-cmd --zone=public --add-port=5051/tcp --permanent
    	# firewall-cmd --reload

The Marathon UI is accessible at http://mesos_master_ip:8080.

For example, if the IP address of the mesos-master is known, the Marathon UI is accessible at that address on port 8080.

Deploying an application through Marathon

The source code is available on the website.

The source code contains Docker files and related setup scripts to build the MySQL Docker image on both Intel and Power (ppc64le) systems.

In the following examples, is the IP address of the system that is running the Mesos server and Marathon.

You can use the Marathon UI or the REST API directly to deploy an application. For example, the following commands deploy the application by using the REST API of Marathon:

curl -X POST -d @mysqlcontainer.json -H "Content-type: application/json"
# cat mysqlcontainer.json
{
  "id": "mysql",
  "cpus": 0.5,
  "mem": 64.0,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "ppc64le/mysql",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 3306, "hostPort": 0, "servicePort": 0, "protocol": "tcp" }
      ]
    }
  },
  "env": {
     "MYSQL_ROOT_PASSWORD" : "password",
     "MYSQL_USER" : "test",
     "MYSQL_PASSWORD" : "test",
     "MYSQL_DB" : "BucketList"
  }
}

A hostPort value of 0 results in a random port being assigned. It is also possible to explicitly specify the hostPort value. Ensure that the ports specified in hostPort are included in some resource offers. For example, if ports in the range 7000 to 8000 need to be used in addition to the default port range of 31000 to 32000, use the following option:
--resources="ports(*):[7000-8000, 31000-32000]"
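To illustrate how such a port-resource string relates to hostPort choices, the small parser below (a sketch, not part of Mesos itself) checks whether an explicitly requested hostPort falls inside the offered ranges:

```python
# Sketch: parse a Mesos port-resource string such as
# "ports(*):[7000-8000, 31000-32000]" into (low, high) range tuples.
def parse_port_ranges(resource):
    inside = resource[resource.index("[") + 1 : resource.index("]")]
    ranges = []
    for part in inside.split(","):
        lo, hi = part.strip().split("-")
        ranges.append((int(lo), int(hi)))
    return ranges

# Check whether a requested hostPort lies inside one of the offered ranges.
def port_offered(resource, port):
    return any(lo <= port <= hi for lo, hi in parse_port_ranges(resource))

offer = "ports(*):[7000-8000, 31000-32000]"
print(port_offered(offer, 7500))   # → True
print(port_offered(offer, 9000))   # → False
```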

Connecting to a service deployed using the Marathon framework

This section describes how to connect to and use a service that is deployed through Marathon, using the MySQL service in a sample web application.

The source code is available from the website. The sample code has Docker files for both Intel and PowerPC LE (ppc64le) architectures.

Connecting to a service by using Docker links

After a service is deployed, you must discover and connect to (that is, link to) the service from an application. A service can in turn depend on another service. Hence, linking containers becomes important.

When you are linking services by using Marathon, note the following points:

  • Mesos/Marathon does not have a method to use a Docker link alias. Hence, if your application configuration depends on a link alias name, it would not work. For example, if the web application depends on a DB container and uses the environment variables with the DB link prefix (DB_PORT, DB_TCP_ADDR, and so on), ensure that the application configuration does not depend on a link alias prefix.
  • The linked application and the service need to be deployed on the same host for them to communicate.

Use the constraints parameter to deploy linked containers on the same node, as shown in the following example:

	$ curl -X POST -H "Content-type: application/json" localhost:8080/v2/apps -d '{
	   "id": "sleep-cluster",
	   "cmd": "sleep 60",
	   "instances": 3,
	   "constraints": [["hostname", "CLUSTER", ""]]
	}'

To use the above code, start the mesos-slave with the hostname parameter, as shown in the following example:

	# mesos-slave --master=zk:// --containerizers=docker,mesos --executor_registration_timeout=10mins --ip= --hostname=Ubuntu

Starting a linked container by using the Marathon API

The setup is on an OpenPOWER (PowerPC architecture) based environment. However, you can use the same instructions for an Intel-based environment.

Specify the target container name as a value to the link key. Additionally, use the constraints parameter to ensure that the new container gets deployed on the same host where the target container is running.

curl -X POST -d @flaskcontainer.json -H "Content-type: application/json"
# cat flaskcontainer.json
{
  "id": "flaskappcontainer",
  "cpus": 0.5,
  "mem": 64.0,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "ppc64le/flaskapp",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 0, "servicePort": 0, "protocol": "tcp" }
      ],
      "parameters": [
        { "key": "link", "value": "mesos-b81f9a21-3133-49de-acf6-988226eb6874-S18.5d3dcaa7-05c6-4a5b-af68-dba32b7d1835" }
      ]
    }
  },
  "constraints": [
    [ "hostname", "CLUSTER", "ubuntu" ]
  ]
}
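Such an app definition can also be composed programmatically. The helper below is a hypothetical sketch that wires a Docker link parameter and a hostname constraint into a Marathon app definition of the same shape; the identifiers and values are illustrative.

```python
import json

# Hypothetical helper: build a Marathon app definition that links to a
# target container and pins the app to the same host via a constraint.
def linked_app(app_id, image, link_target, hostname):
    return {
        "id": app_id,
        "cpus": 0.5,
        "mem": 64.0,
        "instances": 1,
        "container": {
            "type": "DOCKER",
            "docker": {
                "image": image,
                "network": "BRIDGE",
                # Docker --link parameter naming the target container.
                "parameters": [{"key": "link", "value": link_target}],
            },
        },
        # Keep the app on the same node as the linked container.
        "constraints": [["hostname", "CLUSTER", hostname]],
    }

app = linked_app("flaskappcontainer", "ppc64le/flaskapp",
                 "target-container", "ubuntu")
print(json.dumps(app, indent=2))
```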

Using mesos-DNS for service discovery and connection

Mesos-DNS creates an application-name to IP address and port number mapping for each application running in a Mesos cluster.

Mesos-DNS is available from the website. It requires the Go compiler, and building it on any platform is straightforward if the Go compiler is available.

A sample configuration file is included with the source itself and can be found in the website.

The following is a sample configuration that was used for the setup:

{
  "zk": "zk://",
  "masters": [""],
  "refreshSeconds": 60,
  "ttl": 60,
  "domain": "mesos",
  "port": 53,
  "resolvers": [""],
  "timeout": 5,
  "listener": "",
  "SOAMname": "ns1.mesos",
  "SOARname": "root.ns1.mesos",
  "SOARefresh": 60,
  "SOARetry":   600,
  "SOAExpire":  86400,
  "SOAMinttl": 60,
  "dnson": true,
  "httpon": true,
  "httpport": 8125,
  "externalon": true,
  "IPSources": ["netinfo", "mesos", "host"],
  "EnforceRFC952": false
}


  • zk is the location where ZooKeeper is running
  • masters is the list of locations where the Mesos masters are running
  • domain is the domain name for the Mesos cluster
  • port is the port on which Mesos-DNS serves DNS requests
  • listener is the IP address that mesos-dns binds to
  • resolvers is the list of external DNS servers
  • httpport is the port on which the mesos-dns HTTP API runs

For more information about the mesos-DNS configuration parameters, refer to the Mesos-DNS Configuration Parameters website.

You can either run mesos-dns directly on any host or run it through the Marathon framework. For example:

curl -X POST -d @mesos-dns.json -H "Content-type: application/json"

This starts mesos-dns through Marathon.


# cat mesos-dns.json
{
    "cmd": "<path>/mesos-dns -config=<path>/config.json",
    "cpus": 1.0,
    "mem": 1024,
    "id": "mesos-dns",
    "instances": 1
}

Use the following command to run mesos-dns directly on the host:

# mesos-dns -v=1 -config=<path_to_config_json>

Mesos-DNS names each service using the following format:

task.framework.domain

Hence, the MySQL service DNS name would be mysql.marathon.mesos
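The naming scheme can be sketched as two small helpers; the defaults follow the examples in this article (marathon framework, mesos domain).

```python
# Sketch of the Mesos-DNS naming scheme: A records use
# task.framework.domain, and SRV records use
# _task._protocol.framework.domain.
def a_record_name(task, framework="marathon", domain="mesos"):
    return "%s.%s.%s" % (task, framework, domain)

def srv_record_name(task, protocol="tcp", framework="marathon", domain="mesos"):
    return "_%s._%s.%s.%s" % (task, protocol, framework, domain)

print(a_record_name("mysql"))    # → mysql.marathon.mesos
print(srv_record_name("mysql"))  # → _mysql._tcp.marathon.mesos
```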

Mesos-DNS also creates DNS SRV records for the services. An SRV record associates a service name with a host name and an IP port. Mesos-DNS generates a DNS SRV record for a service with the name _task._protocol.framework.domain, where:


  • task is the application or service that was started (MySQL in this case)
  • protocol is UDP or TCP
  • framework is Marathon or any other framework
  • domain is the cluster domain (mesos in this case)

This SRV record can be used by other Marathon applications to discover services.

As an example, any application that requires the MySQL service can look up the SRV record for _mysql._tcp.marathon.mesos:

# docker ps
CONTAINER ID IMAGE          COMMAND    CREATED       STATUS                PORTS                        NAMES
e227390bfb3d ppc64le/mysql "/" 3 seconds ago Up Less than a second>3306/tcp   mesos-fabb6e52-064a-425a-a501-330bc772cd55-S16.85fb3e7c-b2ca-412f-ac75-1ec314bee575
# dig _mysql._tcp.marathon.mesos -t SRV
; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7 <<>> _mysql._tcp.marathon.mesos -t SRV
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2126
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;_mysql._tcp.marathon.mesos. IN SRV
_mysql._tcp.marathon.mesos. 60 IN SRV 0 0 31172 mysql-4huw5-s16.marathon.slave.mesos.
mysql-4huw5-s16.marathon.slave.mesos. 60 IN A
;; Query time: 1 msec
;; WHEN: Mon Feb 08 14:27:38 IST 2016
;; MSG SIZE rcvd: 147

For more information on mesos-dns naming, see Service Naming.

The following is example Python code that uses the dnspython module to query the SRV record and retrieve the host and port required to access the service:

import dns.resolver

# Query the SRV record for the MySQL service; each answer carries the
# target host name and the port at which the service is reachable.
answers = dns.resolver.query("_mysql._tcp.marathon.mesos", dns.rdatatype.SRV)
for rdata in answers:
    print(rdata.target, rdata.port)

The following is the output in the example setup:


Thus, it can be inferred that the MySQL service is running on the slave with the host name mysql-4huw5-s16.marathon.slave.mesos, at port 31172.

A similar logic can be directly incorporated into the application configuration, or the data can be used to set relevant environment variables required by the application configuration. For example, you can set MYSQL_TCP_ADDR and MYSQL_TCP_PORT to the values returned by target and port respectively.
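For example, a small helper (hypothetical, assuming those variable names) can turn an SRV answer into the environment variables an application configuration expects:

```python
# Sketch: map an SRV lookup result (target host and port) to the
# environment variables used by the application configuration.
# The variable names MYSQL_TCP_ADDR/MYSQL_TCP_PORT follow the text above.
def srv_to_env(target, port, prefix="MYSQL"):
    return {
        "%s_TCP_ADDR" % prefix: target.rstrip("."),  # drop trailing DNS dot
        "%s_TCP_PORT" % prefix: str(port),
    }

env = srv_to_env("mysql-4huw5-s16.marathon.slave.mesos.", 31172)
print(env)
```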


The IBM Linux Technology Center (LTC) is a team of IBM open source software developers who work in cooperation with the Linux open source development community. The LTC serves as a center of technical competency for Linux. Connect with us.

