
Real-life production experience with Secure Gateway


When I need a secure way to connect Bluemix applications to remote locations on-premises or in the cloud, I use the Secure Gateway service. Setting up the Secure Gateway turns out to be really straightforward, especially when using the Docker-based gateway client. Yet, when it comes to deploying and running the Secure Gateway in a real production environment, you will face plenty of questions from your infrastructure and security teams. Does the gateway support load balancing? How do you guarantee an end-to-end secure connection? What about certificates? Which firewall rules do I need to set up? This post helps you get prepared by addressing most of them one by one!

Q1: What is behind the Secure Gateway in Bluemix Public?

Secure Gateway is a multi-tenant, highly available, load-balanced cluster of several servers that share the hosting of all the gateways. Each gateway gets a unique host/port combination from one of the available servers, and mutual authentication prevents any other application from utilizing that connection.

[Figure: Secure Gateway overview]

The gateway (or tunnel) in the middle, which is the connection from the cloud to the on-premises gateway client component, is essentially a secure WebSocket tunnel, similar to SSH.

Once the tunnel is connected, the next step is to create a destination and define the certificates that will be used for the TLS mutual authentication from the application to the on-premises client. As part of this definition, you also indicate the destination IP and port to access. Once you have added this destination, you are provided with the cloud host and port.

Q2: What is the protocol used by Secure Gateway tunnel?

The gateway tunnel itself is essentially a secure WebSocket tunnel, similar to SSH. IPsec technology is not supported for the time being.

Q3: Is Secure Gateway bi-directional?

The Secure Gateway is uni-directional in the sense that all requests have to be initiated by the Bluemix app. That said, a request can be a read or a write to the database running on-premises.

Q4: What do you have to deploy on the client side?

The implementation requires deploying a gateway client, which can be a Docker container, a virtual DataPower appliance, or a native installer for Linux or Mac. Running the Docker container requires installing a Docker engine.

Q5: What is the installation process to start using Secure Gateway?

  • Provisioning a Secure Gateway instance within Bluemix gives you a unique space to create gateways and endpoints in, and creating a gateway gives you the unique gateway ID to use when connecting the Docker client.
  • A Docker container is then installed on-premises. When this container is started with the gateway ID (see the sketch after this list), there will be 2 outgoing calls to open the tunnel from on-premises to the Secure Gateway service running in Bluemix Public:
    • one outgoing call on port 443 (HTTPS)
    • one outgoing call on port 9000
  • Then, destinations (hostname/IP + port) are created in the Bluemix console.
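
As a minimal sketch of that startup, assuming the ibmcom/secure-gateway-client Docker image and a placeholder gateway ID:

    # pull the gateway client image and start it with your gateway ID (placeholder)
    docker pull ibmcom/secure-gateway-client
    docker run -it ibmcom/secure-gateway-client <your-gateway-ID>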

Each destination (=endpoint) created in the Secure Gateway service will have a unique cloud host and port. This unique cloud host is mapped to a unique IP address that is independent of the application itself. This IP address is static and can be filtered at the firewall level. Destinations can use HTTP/HTTPS, unencrypted TCP, server-side TLS, or TLS mutual authentication, with TLS mutual authentication being the obvious secure option for something like a database. These options only apply to the connection between the application and the front-side host/port of the Secure Gateway servers.
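
For illustration only, reaching an HTTPS destination protected with TLS mutual authentication from the application side could look like the following sketch, where the cloud host, cloud port, and certificate file names are placeholders:

    # connect to the cloud host/port assigned to the destination, presenting the
    # client certificate and key configured for TLS mutual authentication
    curl --cert app-cert.pem --key app-key.pem --cacert sg-ca.pem \
         https://<cloud-host>:<cloud-port>/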

Q6: How many Secure Gateway servers are available in Bluemix Public?

Secure Gateway servers are static servers, not based on containers. The number of servers might increase horizontally in the future as demand grows. Secure Gateway uses clustering technology to share the load and supports both horizontal and vertical scaling.

Q7: Can the addressing in the destination be done using hostname?

A destination can be addressed using either an IP address or a hostname. However, there is no DNS resolution in the Secure Gateway itself when using a hostname: the host that the gateway client is running on needs to be able to resolve the hostname of the endpoint. The server side does not need to know anything about the DNS.
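
If the client runs in Docker and the endpoint hostname is not resolvable from inside the container, one option (a sketch with placeholder hostname, IP, and gateway ID) is to inject a static mapping at startup with Docker's --add-host flag:

    # map the endpoint hostname to its IP inside the client container
    docker run -it --add-host db.internal.example.com:10.0.0.42 \
        ibmcom/secure-gateway-client <your-gateway-ID>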

Q8: How do I debug the gateway client running in Docker?

After launching the gateway client through the docker run command, you can switch it into debug mode. To do so, press Enter in the client's command line to get its prompt and set the log level to DEBUG or TRACE.

To reconnect to a running Docker container, you can simply type docker ps to get a list of running container IDs, and then type docker attach $ID to get back into the container. Pressing the Enter key will get you back to the CLI prompt at any time once you are inside the container, and you can use the 'C' and 's' commands to check the current status of the gateway and its connections.
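
A quick sketch of that reattach flow (the container ID is a placeholder; the prompt commands are the ones mentioned above):

    # find the running gateway client container and reattach to it
    docker ps
    docker attach <container-ID>
    # inside the container, press Enter to get the client prompt,
    # then use its commands (such as 'C' and 's') to check status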

Q9: Does Secure Gateway support high availability and load balancing?

Multiple gateway clients can connect to a single gateway by starting them with the same gateway ID. The connection to these clients is load balanced using round robin, which prevents a single point of failure in the connections. If one of the clients goes down, you will still be able to connect to the on-premises resource through another client that is still connected.

Assuming you have two separate Docker gateway clients, each on its own VM to get the round robin working, all you need to do is start the client on one VM with the docker command, then start another client on the other VM with the same docker command, as sketched below.
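
A sketch of that setup, using the same image name and placeholder gateway ID as earlier:

    # on VM 1
    docker run -it ibmcom/secure-gateway-client <your-gateway-ID>

    # on VM 2 -- the identical command with the same gateway ID
    docker run -it ibmcom/secure-gateway-client <your-gateway-ID>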

The Secure Gateway server does not need to know the IP address of the VMs hosting the clients, so all you have to do is start the clients on VMs that can access the Secure Gateway server. Putting them on separate VMs avoids a single point of failure and gives you more resources, but it is not required.

Q10: What happens if the Secure Gateway UI shows the gateway as disconnected even though the client is actually still connected?

The client automatically reconnects if it loses the connection to the server for any reason, so you shouldn't need to take any action there unless there was some kind of failure on the client side. If the connection is still up but the UI shows the wrong status, you can disable and re-enable the gateway using the buttons in the UI to force the client to reconnect and update the UI correctly.

Q11: As of version 1.3, the Secure Gateway provides a new native client. How does it compare to the Docker client?

The new RPM package gives better performance, since Docker virtualizes the container and that virtualization has a performance cost. However, the Docker client gives you the flexibility to run several clients and to update easily.

Q12: I have different flows going through the Secure Gateway; can I break these down into different gateways, and thus several clients?

Yes, client separation can be a great way to control some of the load and to keep things well organized. The client requirements are directly related to the number of concurrent connections they will be handling on average. Memory usage for buffering can go pretty high on a single client if it is handling a large amount of throughput and a lot of connections, and CPU cycles can also matter when using TLS encryption. Overall, 2 CPUs and 4 GB of memory would be okay for one or two client instances with moderate traffic, but 4 CPUs and 8 GB are recommended for hosting three clients with heavy traffic. This is a pretty rough estimate, so you may need to allocate more resources if the traffic really skyrockets.


Part 2 continues with all the security challenges you may face when setting up the Secure Gateway in production!
