Backing up and recovering non-SSH images by using BYOI

IBM Hyper Protect Virtual Servers can back up and recover non-SSH images by using the Bring Your Own Image (BYOI) function. The primary and recovery virtual servers reside on different Secure Service Container LPARs on the same IBM Z or LinuxONE (s390x architecture) management server.

Setting up the backup and recovery environment is explained here with an example application and a PostgreSQL database that are deployed by using BYOI. PostgreSQL has its own replication tooling, in this example Bucardo, so a separate file synchronization tool such as rsync is not required. Bucardo monitors the primary virtual server and synchronizes changes to the recovery virtual server as needed. A separate virtual server that is built by using the 'hpvs-op-ssh' image is deployed in the recovery Secure Service Container LPAR to host Bucardo.

A client connects to the application on the primary virtual server, and the application connects to its database through a URL that points to the load balancer. The load balancer forwards the connection to the primary PostgreSQL virtual server. If the primary PostgreSQL virtual server goes down, the application disconnects and reconnects through the same URL to the recovery PostgreSQL virtual server. This setup is supported with both passthrough and non-passthrough quotagroups.
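This URL indirection can be sketched as follows: the application always connects by host name, never by IP address, so a failover only changes what the name resolves to. The URL and host name below are hypothetical placeholders, not values from this environment.

```shell
# Hypothetical database URL; the host name (not an IP address) is what the
# load balancer or DNS remaps during failover.
DB_URL="postgresql://appuser@db.example.internal:5432/appdb"
# Extract the host name portion that the application resolves:
host="${DB_URL#postgresql://*@}"   # drop the scheme and user
host="${host%%:*}"                 # drop the port and database path
echo "application resolves: $host"
```

Because only the name is baked into the application's configuration, redirecting it to the recovery server is purely a DNS or load-balancer change.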

This procedure is intended for users with the cloud administrator and infrastructure administrator roles.

Before you begin

  • The cloud administrator acquires the public and private IP addresses of the primary and backup virtual servers from the infrastructure administrator. These addresses are required when you create the virtual servers.
  • Deploy the application server.
  • Verify that the client can connect to the application server.
  • Configure the application server to connect to the PostgreSQL database through the URL that is provided by the load balancer.

Backup procedure

Complete the following steps.

  1. Create a Dockerfile that includes the Bucardo-related configuration. The following is an example Dockerfile.

    FROM test4hpvsop/hpvsop-base
    COPY --chown=root:root scripts/ /usr/bin/
    COPY --chown=root:root config/iptables.conf /etc/iptables/
    COPY --chown=root:root scripts/initdb.sql /etc/
    RUN apt-get update && \
        apt-get install -y postgresql-10 postgresql-contrib libpq-dev postgresql-server-dev-10 postgresql-plperl-10 \
        libdbix-safe-perl libtest-simple-perl libboolean-perl libextutils-makemaker-cpanfile-perl \
        libextutils-modulemaker-perl libcgi*-perl libdbd-pg-perl libencode-locale-perl libpod-parser-perl \
        libsys-syslog-perl vim sudo iputils-ping net-tools netcat bucardo && \
        echo "listen_addresses = '*'" >> /etc/postgresql/10/main/postgresql.conf && \
        cat /etc/postgresql/10/main/postgresql.conf && \
        echo "host all bucardo trust" >> /etc/postgresql/10/main/pg_hba.conf && \
        cat /etc/postgresql/10/main/pg_hba.conf && \
        mkdir -p /var/run/bucardo && mkdir -p /var/log/bucardo && \
        echo '*:5432:*:bucardo:bucardobucardo' > /root/.pgpass && chmod 600 /root/.pgpass && \
        chmod a+x /usr/bin/
    CMD ["/usr/bin/"]

    The following is an example of the startup script that is copied to /usr/bin/ and run by the CMD instruction.

    #!/bin/bash
    /etc/init.d/postgresql restart
    echo "ALTER USER postgres WITH PASSWORD '$PASSWD';" | sudo -u postgres psql
    sudo passwd -d postgres
    echo -e "$PASSWD\n$PASSWD" | sudo -u postgres passwd
    echo "Init Account..."
    sudo -i -u postgres psql -c "create user bucardo with superuser password '$BUCARDO_PASSWD';"
    echo "Init Database..."
    sudo -i -u postgres psql -c "create database bucardodb with owner = bucardo;"
    echo "Init Table"
    sudo -i -u postgres psql -d bucardodb -f /etc/initdb.sql
    echo "Verify Database..."
    sudo -i -u postgres psql -d bucardodb -c "select * from tmp_t0;"
    exec /sbin/init

    The following initdb.sql file is an example initialization of the PostgreSQL database.

    create table tmp_t0(c0 bigint,c1 varchar(100));
    alter table tmp_t0 add primary key(c0);
    insert into tmp_t0
    select id, md5(id::varchar) from generate_series(1,10) as id;

    The following is an example for configuring iptables.

    # originally generated by iptables-save
    # modifications for basic networking protection while maintaining typical access avenues
    *filter
    :INPUT DROP [4:180]
    :FORWARD DROP [0:0]
    :OUTPUT DROP [0:0]
    -A INPUT -i lo -j ACCEPT
    -A INPUT -s -j DROP
    -A INPUT -p tcp -m state --state ESTABLISHED -j ACCEPT
    -A INPUT -p udp -m state --state ESTABLISHED -j ACCEPT
    -A INPUT -p icmp -m state --state ESTABLISHED -j ACCEPT
    -A OUTPUT -o lo -j ACCEPT
    -A OUTPUT -p tcp -m state --state NEW,ESTABLISHED -j ACCEPT
    -A OUTPUT -p udp -m state --state NEW,ESTABLISHED -j ACCEPT
    -A OUTPUT -p icmp -m state --state NEW,ESTABLISHED -j ACCEPT
    # Open the PostgreSQL port (5432)
    -A INPUT -p tcp -m tcp --dport 5432 -j ACCEPT
    COMMIT
  2. Build an image for the PostgreSQL database and deploy two IBM Hyper Protect Virtual Servers instances by using the image that you built. For more information, see Deploying your applications securely.

  3. Create an IBM Hyper Protect Virtual Servers instance by using the 'hpvs-op-ssh' image. For more information, see Creating a Hyper Protect Virtual Server instance.

  4. Access the IBM Hyper Protect Virtual Servers instance that you created in the previous step through an SSH terminal, and deploy Bucardo on that instance. For more information, see Bucardo.

  5. Always access the application that must be recoverable through a URL that points to the virtual server IP address; never access the IP address directly. You can then adjust the URL to point to the recovery virtual server. The access point is provided by a load balancer (for example, CIS, Cloudflare, or F5) or by the Domain Name System (DNS).
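The Bucardo setup in step 4 can be sketched as follows. This is a hedged outline, not the exact procedure: the database name (bucardodb), user (bucardo), and table (tmp_t0) come from the earlier examples, while the two IP addresses are placeholders. DRY_RUN defaults to 1 so the script only prints each command; set DRY_RUN=0 on the Bucardo virtual server to run them (the bucardo user's password is read from /root/.pgpass, as set up in the example Dockerfile).

```shell
#!/bin/sh
# Sketch of a Bucardo source-to-target sync between the primary and
# recovery PostgreSQL virtual servers. IP addresses are placeholders.
DRY_RUN="${DRY_RUN:-1}"
PRIMARY_IP="10.20.4.61"    # assumed primary PostgreSQL virtual server IP
RECOVERY_IP="10.20.4.62"   # assumed recovery PostgreSQL virtual server IP

run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

# Register both databases with Bucardo (password comes from .pgpass)
run bucardo add db primarydb dbname=bucardodb host="$PRIMARY_IP" user=bucardo
run bucardo add db backupdb dbname=bucardodb host="$RECOVERY_IP" user=bucardo

# Put the example table into a relation group and define a one-way sync
run bucardo add table tmp_t0 db=primarydb relgroup=appgroup
run bucardo add sync appsync relgroup=appgroup dbs=primarydb:source,backupdb:target

# Start the Bucardo daemon so changes on the primary replicate to the recovery server
run bucardo start
```

A one-way source-to-target sync matches this scenario: only the primary takes writes, and the recovery server is promoted by redirecting the URL, not by replicating back.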

Procedure for recovery

To recover from a disaster by using the backup environment that is described in the previous section, complete the following steps.

  1. Connect to the recovery virtual server instance and verify whether the database application is up and the backup data is available.
  2. Reconfigure the DNS to map the application URL to the recovery PostgreSQL database instance through the load balancer.
  3. Test whether the application is accessible externally, as expected.
  4. Test the recovery procedure periodically to ensure its effectiveness.
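The verification in steps 1 and 3 can be sketched as follows. The IP address and health URL are placeholder assumptions, and the table tmp_t0 comes from the earlier initdb.sql example. As in the Bucardo sketch, DRY_RUN defaults to 1 so the script prints the checks instead of executing them; set DRY_RUN=0 on a host with the PostgreSQL client tools to run them for real.

```shell
#!/bin/sh
# Sketch of post-failover verification; all endpoints are assumed values.
DRY_RUN="${DRY_RUN:-1}"
RECOVERY_IP="10.20.4.62"             # assumed recovery PostgreSQL virtual server IP
APP_URL="https://app.example.com/"   # assumed application URL behind the load balancer

run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

# Step 1: the database on the recovery server is up and the replicated data is present
run pg_isready -h "$RECOVERY_IP" -p 5432
run psql -h "$RECOVERY_IP" -U bucardo -d bucardodb -c "select count(*) from tmp_t0;"

# Step 3: after the DNS/load-balancer change, the application answers on its URL
run curl -fsS "$APP_URL"
```

Running these checks on a schedule (step 4) turns the periodic recovery test into a repeatable script rather than a manual checklist.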