Deployment

This section explains how to deploy a release of a Platform application to a new environment. The scope of this page is limited to a docker-compose deployment on a single machine.

This documentation covers the key points of the architecture and the deployment itself; it does not cover networking, rights management or system administration.

Overview

The following diagram describes the development and deployment workflow.

Development and deployment workflow

We assume that the development phase is already done, that the Platform version has been finalized, and that all the artifacts are built and available.

The following architecture diagram is a standard one and may be adapted to your needs.

Deployment architecture

In this architecture diagram we can see that the environment needs access to some external resources, such as the Platform docker registry and your enterprise docker registry. Your enterprise registry can be public or private; this has no impact on the deployment, but the target deployment machine needs network access to it. Depending on your needs and your target architecture, you may want to put in place, for example:

  • Some external storage system to secure your application data.

  • A directory system to easily give your users access to the application.

  • A data integration system that gives the application access to your business data.

We can also see that a deployed Platform application is mainly a set of configuration and data files, plus microservices deployed as running docker containers. The application also generates log files to facilitate its operation. At the end of the process, users access the application through a service gateway (often a reverse proxy).

You will have several environments, depending on the topology of your deployments and your organisation. We recommend having at least the so-called integration, acceptance testing and production environments.

Prerequisites

In order to perform a deployment, you need:

  • Docker images of your application available on a (private) docker registry.

  • A Linux machine to deploy on that meets the requirements, with docker and docker-compose installed.

  • Network access from that machine to the (private) docker registry, the Platform registry and all the external resources (such as LDAP or ERP); a quick way to verify these prerequisites is sketched after this list.
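
A minimal sketch of how you might verify these prerequisites on the target machine is shown below. The registry host names are the example values used later in this guide; replace them with your own.

~ ~> docker --version
~ ~> docker-compose --version
~ ~> # Check that the registries are reachable; docker login also validates
~ ~> # your credentials if the registries are private.
~ ~> docker login docker-registry.internal.some-company.com
~ ~> docker login dbgene-registry.decisionbrain.cloud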

Walk-through

The main steps to deploy a new environment are summarized below; each step is then described in detail.

  1. Copy the necessary files on the target machine.

  2. Configure the environment variables.

  3. Start docker containers.

  4. Run basic checks on application services.

Copy the necessary files on the target machine

Compress the deployment files

The deployment-related files are gathered in a single place: the deployment/docker folder of your Platform application source folder.

It is basically structured in three main folders: app, dbos and infra, as shown in the extract below.

platform_src ~> tree -L 1 deployment/docker
deployment/docker
├── app     # A dedicated docker compose for all the Platform micro services.
├── dbos    # A dedicated docker compose for Optimization server infrastructure.
└── infra   # A dedicated docker compose for general infrastructure services.

The first thing to do is to copy these files on the target machine.

We can zip them with the following command, for example:

platform_src ~> (cd deployment/docker && zip -r deployment_docker.zip .)
platform_src ~> # We now have a deployment_docker.zip file 
platform_src ~> # in deployment/docker folder

Move them to the target machine

In order to decompress the deployment files on the target machine, you first have to copy the zip file created above onto that machine.

You may use the tool of your choice to move the file, scp for example.

Let us assume that the target machine answers to the host name environment_host, and that the Unix user on the target machine that will host your Platform application is platform.

platform_src ~> scp deployment/docker/deployment_docker.zip  platform@environment_host:~/deployment/deployment_docker.zip
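
If the deployment folder does not exist yet on the target machine, the scp command above will fail; you can create the folder beforehand, for example:

platform_src ~> ssh platform@environment_host 'mkdir -p ~/deployment'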

Decompress deployment files

You can now decompress the deployment files:

         ~ ~> cd deployment
deployment ~> unzip deployment_docker.zip -d .

Configure the environment variables

You now have a deployment folder in the home folder of your target machine, as shown in the extract below.

deployment ~> tree -L 1 .
.
├── app
├── dbos
└── infra

For this section, we assume that you have released version 1.0.0 of your project with the Platform, that all your docker images are available on your (private) docker repository, and that this repository is available at the url docker-registry.internal.some-company.com.

Configuring a Platform application environment basically comes down to two things: defining which docker registry hosts the application docker images, and defining the version of your Platform application. These pieces of information are held in a .env file in the app folder (app/.env).

Edit this file and ensure the first lines are as follows:

APP_DOCKER_REGISTRY=docker-registry.internal.some-company.com
DOCKER_PULL_REGISTRY=dbgene-registry.decisionbrain.cloud
PROJECT_VERSION_DOCKER_TAG=1.0.0
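
To check that these values are taken into account, you can ask docker-compose to render the configuration with the .env values substituted and verify the resulting image references (a sketch, assuming the compose file references these variables in its image fields):

deployment ~> (cd app && docker-compose config | grep 'image:')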

Configure the allowed origin for the WebSocket notifications

The Gene platform relies on WebSocket for notifications and, in order to work, the WebSocket endpoints need to specify which web origins are allowed. See cross-origin resource sharing (CORS) for additional information.

For this example, we assume that your Gene application is available at the url https://my-application.internal.some-company.com/home

The allowed origin is also configured in the .env file in the app folder (app/.env).

Edit this file and ensure the last lines are as follows:

# Allowing origin '*' is, generally speaking, a bad practice.
# You should not use it for a deployed environment;
# instead, set it to the public url of your Gene application.
# Ex: - WEBSOCKET_ALLOWEDORIGIN=https://my-gene-app
#
WEBSOCKET_ALLOWEDORIGIN=https://my-application.internal.some-company.com

Notice the difference between an origin and a url: an origin is composed only of the url scheme, host and port, and does not contain any path information. In the example above, the application url contains the /home path, but the allowed origin is only https://my-application.internal.some-company.com.

Start docker containers

We recommend not having any docker containers already running on the target machine, as they may interfere with your application.

You can display the running containers using the following command:

deployment ~> docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

First, start the infrastructure docker containers using docker-compose with the following command:

deployment ~> (cd infra && docker-compose up -d)
deployment ~> # The following command lets you check that all infrastructure
deployment ~> # services are up and running.
deployment ~> (cd infra && docker-compose ps)
~/deployment/infra
           Name                         Command               State                                                          Ports                                                        
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gene-sample-keycloak         /opt/jboss/tools/docker-en ...   Up      0.0.0.0:9090->8080/tcp, 8443/tcp, 9990/tcp                                                                          
gene-sample-mongo            docker-entrypoint.sh mongod      Up      0.0.0.0:27017->27017/tcp                                                                                            
gene-sample-postgres         container-entrypoint run-p ...   Up      0.0.0.0:5432->5432/tcp                                                                                              
gene-sample-rabbitmq         docker-entrypoint.sh /opt/ ...   Up      15671/tcp, 0.0.0.0:15672->15672/tcp, 25672/tcp, 4369/tcp, 5671/tcp, 0.0.0.0:5672->5672/tcp, 0.0.0.0:61613->61613/tcp

This creates and starts all the infrastructure services, as well as the internal docker network of the Platform application.
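
If you want to double-check that the internal network has been created, you can list the docker networks (the exact network name depends on your docker-compose project name, typically derived from the folder name):

deployment ~> docker network ls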

In a second step, start the Optimization server by typing the following command:

deployment ~> (cd dbos && docker-compose up -d)
deployment ~> # The following command lets you check that all Optimization server
deployment ~> # services are up and running.
deployment ~> (cd dbos && docker-compose ps)
~/deployment/dbos
             Name                           Command               State               Ports             
--------------------------------------------------------------------------------------------------------
gene-sample-dbos-documentation   nginx -c /home/optimserver ...   Up      80/tcp, 0.0.0.0:1313->8080/tcp
gene-sample-dbos-master          sh -c java $JAVA_OPTS -jar ...   Up      0.0.0.0:8088->8080/tcp        
gene-sample-dbos-web-console     sh -c envsubst < /home/das ...   Up      80/tcp, 0.0.0.0:8089->8080/tcp

And finally, start your Platform application using docker-compose by typing the following commands:

deployment ~> (cd app && docker-compose up -d)
deployment ~> # The following command lets you check that all application
deployment ~> # services are up and running.
deployment ~> (cd app && docker-compose ps)
~/deployment/app
              Name                            Command               State           Ports         
--------------------------------------------------------------------------------------------------
gene-sample-backend-service        java -jar /app.jar               Up      8080/tcp              
gene-sample-data-service           java -jar /app.jar               Up      8080/tcp              
gene-sample-execution-service      java -jar /app.jar               Up      8080/tcp              
gene-sample-gateway-service        java -jar /app.jar               Up      0.0.0.0:8080->8080/tcp
gene-sample-scenario-service       java -jar /app.jar               Up      8080/tcp              
gene-sample-web                    sh -c envsubst < /home/web ...   Up      80/tcp, 8080/tcp      

This final step makes the whole system available.
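
If a service does not reach the Up state, or simply to watch the services start, you can follow the application logs with docker-compose, for example:

deployment ~> (cd app && docker-compose logs -f --tail=100)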

Run basic checks on application services

This section will be available soon.
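
In the meantime, a minimal sanity check is sketched below, assuming the default port mapping shown above (gateway exposed on port 8080); the exact endpoints and expected status codes depend on your application.

deployment ~> # All application containers should be in the Up state.
deployment ~> (cd app && docker-compose ps)
deployment ~> # The gateway should answer HTTP requests on the exposed port.
deployment ~> curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/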