In March 2016, the DataPower team introduced support for running a DataPower gateway in a Docker container. Docker is quickly becoming a very popular tool for streamlining the development cycle, especially with Continuous Integration and Continuous Deployment.
The DataPower GitHub repository contains several projects, including datapower-labs, which contains a docker directory. datapower-labs/docker is the place to go for sample Dockerfiles, scripts, and Makefiles that demonstrate using DataPower on Docker within the context of a Software Development Life Cycle (SDLC). The files here are sample code to get you started; you'll want to customize them for your own use cases and development patterns.
For this example, I've forked the datapower-labs project into my own repository. You can download the sample code from there.
A common non-functional requirement when deploying DataPower is to forward log events to a centralized log server, so all the logs from a set of DataPower gateways can be consolidated into a single spot, greatly easing development and troubleshooting.
The use case we're going to demonstrate here is using syslog to capture log events from a Docker-deployed DataPower gateway. The code samples can be downloaded from the syslog branch of the main DataPower docker project.
We assume you have a Docker server you can work with, and have cloned the code from my GitHub repository. Execute the customer-optimized sample to create the customer-commit Docker image. This will be the starting point for adding syslog support and testing to the DataPower image we use during development.
When we're finished, we'll have a set of Docker images and running containers that include DataPower, test backends, and a syslog server. The DataPower image will have a syslog LogTarget defined to capture debug-level messages from our test domain, and send them to the syslog-ng container. The syslog-ng container will accept the incoming log messages and write them to its /var/log/datapower file.
You've already executed the prerequisites and should have a <user>/customer-commit image. This is our starting point.
This shell script follows the pattern in the other DataPower Docker samples by generating DataPower configurations based on the Docker setup. Just as datapower/start/loadbalancer-group.sh builds a LoadBalancerGroup based on the environment variables Docker creates in the running DataPower container, logTargets.sh will create a DataPower LogTarget object in the foo domain with the correct networking information to connect to the syslog server in the syslog-ng image. This file will execute in the DataPower image before the main DataPower process starts, and will create the datapower/config/foo/syslogTarget.cfg configuration file.
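To make the pattern concrete, here is a hedged sketch of what a generator in the spirit of logTargets.sh might look like. The environment-variable names (following Docker's legacy link convention) and the DataPower CLI commands in the generated file are illustrative assumptions, not the repository's exact code; consult the actual script for the configuration it emits.

```shell
#!/bin/sh
# Hedged sketch of a logTargets.sh-style generator.
# SYSLOG_PORT_514_UDP_ADDR / SYSLOG_PORT_514_UDP_PORT follow Docker's
# legacy link environment-variable convention and are assumptions, as
# are the DataPower CLI commands written into the file below.
SYSLOG_HOST="${SYSLOG_PORT_514_UDP_ADDR:-127.0.0.1}"   # syslog-ng container address
SYSLOG_PORT="${SYSLOG_PORT_514_UDP_PORT:-514}"         # syslog UDP port
OUTFILE="${OUTFILE:-syslogTarget.cfg}"                 # generated config file

# Emit a LogTarget definition pointing at the syslog-ng container.
cat > "$OUTFILE" <<EOF
logging target syslogTarget
  type syslog
  remote-address $SYSLOG_HOST $SYSLOG_PORT
  event all debug
exit
EOF

echo "wrote $OUTFILE targeting $SYSLOG_HOST:$SYSLOG_PORT"
```

Because the script reads Docker's environment variables at container startup, the generated file always points at whatever address Docker assigned to the syslog-ng container in that run.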
We've added a set of commands to foo.cfg, the main configuration file for the foo domain, to create the new LogTarget. We simply include and execute the syslogTarget.cfg file generated for us by the above shell script.
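As an illustration, the include-and-execute step in foo.cfg might look something like the fragment below; the `exec` command and the file path are a hedged sketch, not the repository's exact contents.

```
# Hypothetical fragment of foo.cfg (illustrative only):
# pull in the LogTarget configuration generated at container startup
exec config:///syslogTarget.cfg
```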
The base Makefile in the customer-build directory of the main branch has been updated to create and run a syslog-ng image alongside our DataPower and test images. The run, stop, and rm targets have been extended to start, stop, and remove the syslog-ng container as we move through our development cycle.
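The kinds of lines added to those targets might look like the following Makefile sketch. The image name, container name, and mount paths here are assumptions for illustration; check the repository's Makefile for the real recipes.

```
# Hypothetical Makefile additions (illustrative only)
run:
	docker run -d --name syslog-ng \
	    -v $(PWD)/syslog-ng.conf:/etc/syslog-ng/syslog-ng.conf \
	    balabit/syslog-ng
	# ... existing lines that start the DataPower and test containers ...

stop:
	docker stop syslog-ng || true
	# ... existing stop lines ...

rm:
	docker rm syslog-ng || true
	# ... existing rm lines ...
```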
The default /etc/syslog-ng/syslog-ng.conf in the base image downloaded from hub.docker.com does not support receiving syslog messages from remote hosts. We have made the requisite changes to this file, and the Makefile has been updated to mount our modified copy from the local filesystem into the correct place in the running syslog-ng container.
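The change amounts to adding a network source and wiring it to a file destination. The following syslog-ng.conf fragment is a hedged sketch of that idea; the source and destination names are illustrative, and the real file in the repository may differ in detail.

```
# Illustrative syslog-ng.conf fragment: accept remote syslog over UDP 514
source s_net {
    udp(ip(0.0.0.0) port(514));
};

destination d_datapower {
    file("/var/log/datapower");
};

log { source(s_net); destination(d_datapower); };
```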
That's it. Two new files and changes to a few lines of existing files, and we're ready to go.
Running the sample
From the customer-build directory in our cloned repository, simply run `make rundev`. This starts the default three sample backend Node.js servers, starts the syslog-ng image, and then starts our customer-commit:latest image.
Note: the first time you run this, the syslog-ng image will not be locally available and will need to be downloaded from Docker Hub. This will take a minute or two and requires network connectivity. The download happens only on the first run; after that the image is cached locally, and subsequent executions start in just a few seconds.
Execute `make test` to ensure the sample code is running successfully. If you see errors, correct them before proceeding.
Now, let's check to make sure our log target is correctly forwarding logs to our syslog server. In another command window, run `docker exec -it syslog-ng /bin/bash`, which puts you in a bash shell in the running syslog-ng container. Run `tail -f /var/log/datapower` to watch the log file. In the first command window, re-run `make test` to drive some requests through our MPGW. After a few seconds, you'll see the log entries from DataPower show up in /var/log/datapower, showing that we've successfully configured and deployed our DataPower log target, and correctly configured DataPower to send syslog messages to our syslog-ng container.
This small sample shows the flexibility and power of Docker in your DataPower development cycle. Adding functionality, including non-functional requirements, is a snap. We've successfully created a co-deployed syslog server, along with the DataPower configuration to forward log entries to it. The DataPower configuration is flexible, using Docker's built-in ability to mount local files into containers and reading Docker-generated environment variables to create DataPower configurations. Using the other samples in the DataPower GitHub repository, you can generate test-ready images that will always have a syslog server available for all downstream environments. For those environments that already have a deployed syslog server, the configuration samples shown here can be used to generate the environment-specific configurations that seamlessly connect the Docker-deployed DataPower containers to the pre-existing servers.