During the development activities for the SmartCloud Orchestrator content pack for Web Applications we have been exploring the differences between public and private IBM clouds, carving out a set of practices for deploying the same application seamlessly on both SmartCloud Orchestrator and IBM Bluemix.
We collected what we learned from this experience in a just-published article, where our Java EE case study application is implemented via a Virtual Application Pattern in SmartCloud Orchestrator and via the Liberty profile + SQLDB services in Bluemix.
Code, deployment automation and design choices are explained in detail, keeping the focus on how to avoid tying your application to the target deployment infrastructure. Check it out at https://ibm.biz/dwjavabmsco.

Cloud/Virtualization Management

New Article on DeveloperWorks: Build a portable Java EE app across Bluemix and private cloud patterns
Hybrid usage of Virtual Servers and Bare Metal servers on SoftLayer
In this topic, I will talk about how to set up a mixed environment of Virtual Servers and Bare Metal servers on SoftLayer, which can provide either compute-bound or I/O-bound services to customers. I will describe how to use the hybrid environment in the development and production phases. The Virtual Servers and Bare Metal servers are both hosted by SoftLayer. The mixed environment provides flexible compute capacity, is easy to scale up, and saves infrastructure cost. SoftLayer supports easily migrating an environment from a Virtual Server to a Bare Metal server, or vice versa, via a flex image, so it is easy to promote changes from the development environment to the production environment.
1. Introduction to Bare Metal servers on SoftLayer
| | CPU # | CPU Speed | Max Mem Size | Storage | HDD drive # | Storage I/O speed | Provision time |
| Virtual Server | 1 ~ 16 vCPU | 2.00 GHz | 16 GB | Virtual Server local storage, SAN | Max 5 | Local storage: ~100 MB/s | Short |
| Bare Metal server | 4 ~ 40 cores | 2.00 ~ 2.93 GHz* | 512 GB** | SATA, SCSI, SSD with RAID enabled | Max 36*** | Much faster than Virtual Server local storage**** | Longer than Virtual Server |
*The CPU speed depends on the CPU model.
**The maximum memory size varies with the server configuration.
***The maximum number of HDD drives varies with the server configuration.
****The storage I/O speed varies depending on the storage media.
The cost of a Bare Metal server varies according to its hardware configuration.
b. How to order a Bare Metal server on SoftLayer
Once a SoftLayer account is available, you can log in to SoftLayer and go to https://manage.softlayer.com/Sales/serverList to choose your preferred hardware configuration. In this case, as I will run a compute-bound task on the server, I selected the following Bare Metal server configuration:
32 cores at 2.7 GHz + 128 GB memory + 2 TB SATA x 2 with RAID 0 enabled.
Select the operating system, network bandwidth, mandatory system add-ons, and VPN settings. Set the hostname and domain name. Finally, submit the order. A SoftLayer ticket will then be opened by the SoftLayer support team to track the Bare Metal server provisioning process. The support team will update the provisioning progress on the ticket and close it once the server is provisioned. When the server is up, the IP address and login credentials are available on the server info page.
c. The billing of Bare Metal servers
The billing types for Virtual Servers and Bare Metal servers are different. Virtual Servers support hourly and monthly billing. Bare Metal servers mainly support monthly billing (calendar month); in addition, four specific configurations support hourly billing. See http://www.softlayer.com/hourly-bare-metal-servers.
When de-provisioning a Bare Metal server, submit the request before the 29th of the calendar month to save cost; SoftLayer cannot process Bare Metal server de-provisioning requests on the last day of the month.
2. How to use flex images on SoftLayer
a. What a flex image is and how it differs from a Virtual Server image template
SoftLayer provides two types of image templates: the standard image template and the flex image. The standard image template is only for Virtual Servers: a Virtual Server can be saved as an image template, and a Virtual Server can also be provisioned from an image template. The flex image provides a bridge between Virtual Servers and Bare Metal servers: a Virtual Server can be saved as a flex image, and the saved flex image can be provisioned onto a Bare Metal server. A Bare Metal server can also be saved as a flex image, as a snapshot.
A question may arise: if the flex image is so flexible, why do we still need the standard image template?
The reason is that the flex image has limitations, so it is not viable to sunset standard images at this time. Existing standard images will still be available, and new standard images may still be created from Virtual Servers, even if they are eligible for flex images.
A flex image has the same cost as an image template.
b. Provision a flex image to a Bare Metal server
Provisioning a flex image to a Bare Metal server is very similar to provisioning an image template to a Virtual Server. Click the Order button next to the flex image; a popup menu appears where you can choose Bare Metal server, Monthly virtual server, or Hourly virtual server, as shown below:
Select Bare Metal server and you will be redirected to the Bare Metal server order page described in step 1.b.
c. Save a flex image from a Bare Metal server
On the Bare Metal server's detailed info page, click the "create flex image" link to generate a flex image from the Bare Metal server. During the image capture process, the server will be rebooted.
d. Provision a Virtual Server via the flex image
Provisioning a Virtual Server from a flex image is similar to step 2.b. The difference is that you select Monthly virtual server or Hourly virtual server instead of Bare Metal server; the page then directs you to the Virtual Server order page, where you select the virtual server configuration and submit the order. One thing to pay attention to: because the first disk of a Virtual Server can be no larger than 100 GB, check the disk size recorded in the flex image info before provisioning. Only if it is 100 GB or less can the Virtual Server be provisioned successfully.
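If you prefer the command line, the SoftLayer CLI can also order a virtual server from an image. A minimal sketch, assuming the slcli tool is installed and configured, with placeholder values for the hostname, domain, data center, and image identifier:
slcli image list
slcli vs create --hostname devnode1 --domain example.com --cpu 2 --memory 4096 --datacenter dal05 --image <image_global_identifier>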
3. A real scenario using a hybrid of Bare Metal servers and Virtual Servers
a. The challenges we met
- Development on the cloud needs a secured and separated workspace.
- Development on the cloud needs to control cost.
- Development needs to migrate delivered changes to the production environment quickly.
- The customer case requires powerful parallel calculation capability.
- The customer case has high disk I/O requirements.
- A Bare Metal server has a higher cost, along with higher CPU performance and disk I/O.
Given the challenges above, we decided to set up a mixed Virtual Server and Bare Metal server environment. We use Virtual Servers as the development environment, which provides a separated workspace at a relatively low cost, and Bare Metal servers to support the customer case, providing powerful parallel calculation and high disk I/O. We use the flex image as the bridge to migrate changes from the Virtual Servers to the physical servers.
We used the following options to transfer data between the Bare Metal servers and Virtual Servers. SoftLayer provides a 1000 Mbps LAN linking the Bare Metal servers and Virtual Servers, and it also provides NAS (Network Attached Storage) that multiple servers can access, so the servers can talk to each other over the network and share content via the NAS. Below is a topology chart that describes how the Bare Metal server is used in the hybrid scenario.
b. How we use Bare Metal servers and Virtual Servers in the development and production phases
For the development phase, we need to set up several branches for different component development. The development environments for the different branches are provisioned from the same image, and each development branch is kept separate from the others. Developers deliver validated output to the main tree. After validation on the main tree, an image is captured that can be provisioned as the next baseline or as the production environment.
Below is a flow showing how the Bare Metal servers and Virtual Servers are used in the development and production phases.
Conclusion
The powerful Bare Metal servers provided by SoftLayer are a great option for customers who need heavy calculation or high disk I/O. The hybrid usage of Virtual Servers and Bare Metal servers is flexible and powerful, taking advantage of both. The flex image is a great feature that bridges Virtual Servers and Bare Metal servers.
How to choose the best storage device for you on SoftLayer
Cui Li quan(cuiliq@cn.ibm.com), Technical lead of IBM Smarter Cities Cloud operations team, expert on operation framework and automation for IBM Smarter Cities SaaS products.
Zi Xuan Zhang (zhangzx@cn.ibm.com), team lead of IBM Smarter Cities SaaS Development and Customer Engagement, has rich experience in SaaS development, operation, and customer support.
Rong Zhao (zhaorong@cn.ibm.com): Release & Project Manager of Smarter Cities Cloud Team, managing SaaS releases and projects, with rich experience for Continuous Delivery, DevOps, Project & Release Management.
1. What storage types you can choose on SoftLayer
SoftLayer provides several kinds of storage; you have a choice of local disk, SAN disk, iSCSI SAN disk, NAS, and object storage to satisfy all kinds of requirements.
The table below compares the different storage options in terms of RAID level, I/O rate, price, and the maximum size SoftLayer provides, to give you a brief impression of these storage types.
Note: the I/O rates are not official data; they were measured by me and are provided for reference only.
Storage Type | RAID Level | I/O Rate | Price (2 TB except local disk) | Maximum Size
Local Disk | RAID 10 | About 150 MB/s | $12.24/month | 100 GB + 300 GB
SAN Disk | RAID 5 | About 100 MB/s | $85.68/month | 100 GB + 8 TB
iSCSI SAN Disk | RAID 5 | About 100 MB/s | $500/month | 2 TB
NAS | - | About 55 MB/s | $810/month | 3 TB
Object Storage | - | About 20 MB/s | $260/month | No limit
Table 1
2. How to choose different storage
2.1 Local Disk
You can choose local disk when you provision a CCI instance if you need higher I/O speed but do not need much disk space.
How much disk space can a CCI instance with local disk provide?
As shown in Table 1, the first disk is 25 GB or 100 GB and the second disk is 25 GB to 300 GB, so the maximum disk space that can be provided is 400 GB.
2.2 SAN Disk
You can choose SAN disk if you need more disk space. A virtual server cannot be ordered with both local disks and SAN disks; your disk selection must contain only local disks or only SAN disks.
For a CCI instance with SAN disks, the virtual server can have five virtual disks: the first disk is 25 GB or 100 GB, and for the second to fifth disks you can choose 10 GB to 2 TB each, so the maximum disk space is 8 TB.
2.3 Migrate from local disk to a SAN-based virtual server
We have seen that a virtual server with local disk has at most 400 GB of disk. As your business grows and you run out of disk space, what should you do? Provision a fresh CCI instance, then install and deploy your applications all over again from the beginning? The answer is no.
SoftLayer can migrate a virtual server with local disk to a SAN-based virtual server.
Find the CCI instance in the SoftLayer UI at https://manage.softlayer.com/, click "View", then choose "Miscellaneous" -> "Migrate to SAN based Virtual Server".
If you use https://control.softlayer.com/devices, you can choose "Device" -> "Device List" -> find the CCI instance -> "Action" -> "Migrate to SAN".
2.4 iSCSI SAN disk
An iSCSI SAN disk is a good supplement if you need more than 8 TB of SAN disk: you can attach a new iSCSI SAN disk to the CCI instance. The maximum size of an iSCSI SAN disk is 3 TB.
iSCSI also provides replication and snapshot functions: you can replicate the data saved on an iSCSI SAN disk to another data center. The replication feature runs on a schedule and is not an immediate process at this time.
2.5 NAS Storage
SoftLayer offers Network Attached Storage (NAS) as one of its basic storage solutions for users looking for a quick, cost-efficient backup for their SoftLayer devices. NAS is compatible with each of the offered operating systems and may also be used via File Transfer Protocol (FTP) with both Parallels Plesk Panel and cPanel & WHM. NAS and FTP storage is billed monthly and is available in a variety of storage sizes. Users interact with their NAS and FTP storage primarily from the command line or terminal within the OS, or through point-and-click interaction in the control panels. Within the Customer Portal, NAS details and usage can be viewed, but the service itself may not be manipulated outside of the command line, kernel, or control panel.
3. Storage Usage Scenario
3.1 Create a shared disk using an iSCSI SAN disk
This section will introduce how to connect multiple servers to a single iSCSI LUN.
The filesystem you use on the iSCSI volume must be cluster-aware, for example OCFS or GPFS, or another supported clustered filesystem.
The applications that use this filesystem must also be cluster-aware.
We will introduce IBM's cluster software GPFS here; for OCFS, you can refer to http://knowledgelayer.softlayer.com/procedure/connect-multiple-servers-single-iscsi-lun
- Connect two Linux VM nodes to an iSCSI volume
Find the device file using the command below after you have configured the iSCSI initiator to connect to an iSCSI LUN.
find /sys/devices/platform/host* -name block\* -exec ls -la '{}' \; | sed s#^.*../block/#/dev/#g
You may find an iSCSI disk such as /dev/sda.
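If the initiator has not been configured yet, the discovery and login steps typically look like the following (the target portal IP and IQN are placeholders taken from your iSCSI LUN details page; CHAP credentials, if required, are set in /etc/iscsi/iscsid.conf):
iscsiadm -m discovery -t sendtargets -p 10.1.2.3:3260
iscsiadm -m node -T iqn.2005-05.com.example:target0 -p 10.1.2.3:3260 --login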
- Install the GPFS RPM filesets
Extract the GPFS rpm packages:
export DISPLAY=<your workstation IP>:0.0
./gpfs_install-4.1.0-0_x86_64
cd /usr/lpp/mmfs/4.1
rpm -ivh gpfs*.x86_64.rpm
Check that the GPFS RPMs are installed on each node.
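For example, a quick check on each node (the exact package list varies with the GPFS version):
rpm -qa | grep -i gpfs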
- Configure SSH keys for root on both nodes
Generate an SSH keypair using ssh-keygen -t rsa -b 2048 -C "GPFS KEYPAIR" -f /root/.ssh/id_rsa -N '', then add /root/.ssh/id_rsa.pub to /root/.ssh/authorized_keys on both nodes.
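Before continuing, it is worth confirming that passwordless SSH works in both directions, for example by running from each node:
ssh ClsterDB2Node1 date
ssh ClsterDB2Node2 date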
- Cluster node configuration
Node properties: manager or client, quorum or nonquorum.
[root@ClsterDB2Node1 GPFS]# cat /etc/GPFS/nodes
ClsterDB2Node1:manager-quorum
ClsterDB2Node2:manager-quorum
[root@ClsterDB2Node2 GPFS]# cat /etc/GPFS/nodes
ClsterDB2Node1:manager-quorum
ClsterDB2Node2:manager-quorum
Accept the license:
/usr/lpp/mmfs/bin/mmchlicense server --accept -N ClsterDB2Node1,ClsterDB2Node2
Note: if some nodes act as clients, the command is as follows:
/usr/lpp/mmfs/bin/mmchlicense client --accept -N ClsterDB2Node3,ClsterDB2Node4
- Create the GPFS cluster
/usr/lpp/mmfs/bin/mmcrcluster -N /etc/GPFS/nodes -p ClsterDB2Node1 -s ClsterDB2Node2 -r /usr/bin/ssh -R /usr/bin/scp -C cls_iocdb2 -U smartercitiescloud.com
-p and -s specify the primary and secondary cluster configuration servers.
Passing -r /usr/bin/rsh -R /usr/bin/rcp instead would manage the cluster using rsh/rcp.
- Check the cluster status
/usr/lpp/mmfs/bin/mmlscluster
GPFS cluster information
========================
GPFS cluster name: cls_iocdb2.ClsterDB2Node1
GPFS cluster id: 732630102487133478
GPFS UID domain: cls_iocdb2.ClsterDB2Node1
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp
GPFS cluster configuration servers:
-----------------------------------
Primary server: ClsterDB2Node1
Secondary server: ClsterDB2Node2
Node Daemon node name IP address Admin node name Designation
-----------------------------------------------------------------------------------------------
1 ClsterDB2Node1 xx.xx.xx.x50 ClsterDB2Node1 quorum-manager
2 ClsterDB2Node2 xx.xx.xxx.51 ClsterDB2Node2 quorum-manager
- Create the NSD disk
cat /etc/GPFS/nsd
/dev/sda:ClsterDB2Node1,ClsterDB2Node1::dataAndMetadata:1:::
/dev/sda is an iSCSI SAN disk.
[root@ClsterDB2Node1 GPFS]# /usr/lpp/mmfs/bin/mmcrnsd -F /etc/GPFS/nsd -v no
mmcrnsd: Processing disk sda
mmcrnsd: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
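To confirm the NSD has been created, you can list the NSDs (the output varies with your configuration):
/usr/lpp/mmfs/bin/mmlsnsd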
- Start the GPFS cluster
/usr/lpp/mmfs/bin/mmstartup -a
Note: to stop the GPFS cluster:
/usr/lpp/mmfs/bin/mmshutdown -a
- Create the GPFS file system
/usr/lpp/mmfs/bin/mmcrfs /dev/gpfs-db2data -F /etc/GPFS/nsd -A yes -B 256K -M 2 -n 8 -R 2 -T /db2data
The following disks of gpfs-db2data will be formatted on node ClsterDB2Node1:
gpfs1nsd: size 1048581120 KB
Formatting file system ...
Disks up to size 8.6 TB can be added to storage pool 'system'.
Creating Inode File
79 % complete on Sun Nov 17 07:13:21 2013
100 % complete on Sun Nov 17 07:13:22 2013
Creating Allocation Maps
Creating Log Files
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
Completed creation of file system /dev/gpfs-db2data.
mmcrfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
- Mount the file system on all nodes
/usr/lpp/mmfs/bin/mmmount all -a
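To verify the file system is mounted on every node, you can check it as below (a sketch; gpfs-db2data and /db2data are the device and mount point created above):
/usr/lpp/mmfs/bin/mmlsmount gpfs-db2data -L
df -h /db2data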
3.2 Using file storage (NAS) to back up DR data
SoftLayer provides a kind of file storage: NAS. NAS storage can be used across different data centers; for example, you can mount NAS storage located in the Amsterdam DC onto a VM in Dallas 5, so you can back up or archive disaster recovery data from one DC to another.
NAS storage is charged monthly and is relatively expensive; you can order from 20 GB to 2 TB of storage.
How to connect to NAS storage
Before you use NAS storage, you must install cifs-utils, samba-client, and samba.
Installation instructions:
CentOS:
yum install samba3x samba3x-client
RHEL:
yum install cifs-utils samba-client samba
There are three methods to access NAS storage in Linux.
Mount in Linux
mount -t cifs //nas151.service.softlayer.com/IBMXXXXX-1 /drcopydata -o username=NAS_USERID,password=NAS_PASSWORD
Smbmount in Linux
If you have not already done so, create the directory you want the NAS account mapped to.
Example:
mkdir /drcopydata
smbmount //nas01.service.networklayer.com/XXXXXXX-X /drdata/ -o lfs,username=XXXXXXX-X,password=password
ftp Access
Example:
ftp nas01.service.networklayer.com
FTP Username: XXXXXXX-X
FTP Password: password
Usually we use the first method, mounting in Linux.
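If you want the NAS share to be mounted automatically after a reboot, an /etc/fstab entry along these lines can be used (the share name and credentials are the same placeholders as in the mount example above; keeping the credentials in a root-only file referenced with the credentials= option is safer than putting them in fstab):
//nas151.service.softlayer.com/IBMXXXXX-1 /drcopydata cifs username=NAS_USERID,password=NAS_PASSWORD,_netdev 0 0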
Then we use rsync to sync our backup data to the NAS storage:
rsync -avzP /drdata/* /drcopydata/ --delete --exclude=logs/
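To keep the DR copy current, the same rsync command can be scheduled from root's crontab, for example nightly at 02:00 (the schedule and log path below are arbitrary choices):
0 2 * * * rsync -avzP /drdata/* /drcopydata/ --delete --exclude=logs/ >> /var/log/dr-rsync.log 2>&1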
How to back up data on SoftLayer and set up Disaster Recovery across data centers
Cui Li quan, Technical lead of IBM Smarter Cities Cloud operations team, expert on operation framework and automation for IBM Smarter Cities SaaS products.
Jun Xia Zhou, from the China Development Lab, works in the Smarter Cities Cloud Delivery team and has rich operations experience with Smarter Cities products such as Intelligent Operations Center on Cloud, Intelligent Transportation on Cloud, and Intelligent Water on Cloud.
Zi Xuan Zhang, team lead of IBM Smarter Cities SaaS Development and Customer Engagement, has rich experience in SaaS development, operation, and customer support.
1. The importance of backing up data
Information is a valuable asset to a company, and valuable information is usually derived from historical data, so data is more and more important to a company; an abundance of data is generated every day a company operates.
We cannot tolerate any data loss; losing important data means losing money and customer trust.
So you need to protect your data with backup software. IBM TSM (Tivoli Storage Manager) is a popular enterprise-level backup product.
2. EVault service on SoftLayer
This section introduces the EVault service on SoftLayer and how to use EVault to back up customer data.
What is EVault Backup?
SoftLayer has partnered with EVault, which provides reliable, easy-to-use, enterprise-class backup and recovery solutions.
The backup solution utilizes EVault's InfoStage line of products. InfoStage is a fully automated server-to-agent, disk-to-disk backup technology. Some of the many features include compression, customizable encryption schemes, and EVault's delta technology. The backup agent can be managed from a downloadable desktop agent or through a web server hosted in a SoftLayer data center. Backups can be completely customized as to what to back up, how long to keep the data, when the backups run, and the encryption schemes. All backups are done over SoftLayer's true out-of-band private network.
How does EVault Backup work?
EVault Backup is an automated, agent-based backup system that is managed through the EVault WebCC browser-based management utility; it performs backups of full systems, targeted directories, and individual files.
WebCC is short for Web CentralControl, a web-based tool that allows EVault users to interact with their EVault backup service on all levels.
How to order EVault
Log in to https://control.softlayer.com/devices/, then find the CCI instance to which you want to attach the EVault agent.
Choose "Storage" -> "Other Storage" -> "EVault", then click the Add button and select the data center in which you want to back up your data; you can choose a local or remote data center.
3. TSM (Tivoli Storage Manager) backup for DB2
SoftLayer does not provide an EVault DB2 agent to back up DB2 databases, so we cannot take online DB2 backups with EVault; instead we install a TSM agent to back up the DB2 database.
Install the TSM agent on each of the IOC nodes
Upload the TSM agent RPMs to each IOC node:
gskcrypt32-8.0.13.3.linux.x86.rpm
gskssl32-8.0.13.3.linux.x86.rpm
gskcrypt64-8.0.13.3.linux.x86_64.rpm
gskssl64-8.0.13.3.linux.x86_64.rpm
TIVsm-API64.i386.rpm
TIVsm-API.i386.rpm
TIVsm-BA.i386.rpm
rpm -iv gsk*.rpm
rpm -iv TIVsm*.rpm
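To verify the installation, you can list the installed packages on each node:
rpm -qa | grep -Ei 'gsk|tivsm'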
Configure TSM agent
Create the dsm.opt and dsm.sys configuration files under /opt/tivoli/tsm/client/ba/bin and /opt/tivoli/tsm/client/api/bin64/.
Below is a sample of how to create dsm.opt and dsm.sys; the server name, node name, and TSM server address values need to be adapted to your environment.
mkdir -p /opt/tivoli/tsm/client/logs
chmod 777 -R -f /opt/tivoli/tsm/client/logs
cd /opt/tivoli/tsm/client/api/bin64/
cp dsm.opt.smp dsm.opt
cp dsm.sys.smp dsm.sys
vi dsm.opt
SErvername tsmsvrhostname
vi dsm.sys
SERVERNAME tsmsvrhostname
NODENAME clientnodename
TCPSERVERADDRESS xxx.xxx.xxx.xxx(tsm server ip)
COMMMETHOD TCPIP
TCPPORT 1500
PASSWORDAccess Generate
schedlogname /opt/tivoli/tsm/client/logs/dsmsched.log
errorlogname /opt/tivoli/tsm/client/logs/dsmerror.log
dirmc backupfile_mgmtclass
Copy dsm.opt and dsm.sys to the /opt/tivoli/tsm/client/ba/bin/ folder.
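For example:
cp /opt/tivoli/tsm/client/api/bin64/dsm.opt /opt/tivoli/tsm/client/ba/bin/
cp /opt/tivoli/tsm/client/api/bin64/dsm.sys /opt/tivoli/tsm/client/ba/bin/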
Enable DB2 database archive mode for online backup
Follow the sample commands below to enable DB2 archive mode for online backup:
#su - db2instx
#db2 update db cfg for dbname using TRACKMOD YES
#db2 update db cfg for dbname using TSM_MGMTCLASS backupdb2_mgmtclass
#db2 update db cfg for dbname using LOGARCHMETH1 TSM:backupdb2_mgmtclass
You must then run a one-time offline backup; otherwise the DB2 database will stay in backup pending mode and will not allow any application connections.
db2 backup db dbname
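You can confirm the backup completed by checking the recovery history (dbname is your database alias):
db2 list history backup all for dbname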
Implement DB2 deep compression in the SoftLayer cloud environment
Storage and network bandwidth are usually limited and chargeable in a cloud environment: no traditional tape library is provided for backup, there is no dedicated backup network, and customer traffic and backups usually share the same bandwidth, so storage and network bandwidth are all the more precious.
DB2's deep compression beats out the competition and routinely saves on the order of 40-70% of total space usage. That is a very good saving for the cloud, given that storage costs money. If you back up data to the cloud or to a storage service, you pay by space usage. DB2 compressed backups are simple to use and will save money in storage costs.
Run a compressed DB2 backup using the command below:
db2 backup db compdb online incremental use tsm compress
From DB2 v10, DB2 also supports compressed archive logs, and it is simple to enable compression for the archive logs:
#db2 update db cfg for dbname using LOGARCHCOMPR1 ON
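You can confirm the archive settings afterwards with:
db2 get db cfg for dbname | grep -i logarch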
4. DR solution on SoftLayer
In this section we introduce how to design a disaster recovery solution for TSM; we combine NAS storage and rsync to implement DR for our customer environments.
Assume two customer environments are set up in different data centers on SoftLayer, for example IOC environment 1 in Dallas 5 and IOC environment 2 in the London data center. We set up one TSM server in each data center, as shown in diagram 1 below.
The two TSM servers act as backups for each other; we create two TSM instances in each data center.
TSM Instance | Data Center | Port | Comment
tsminst1 | Dallas 5 (DC 1) | 1500 | Backup of IOC Env 1 data
tsminst2 | London 5 (DC 2) | 1501 | Backup of IOC Env 2 data
tsminst1 | London 1 (DC 2) | 1500 | DR of IOC Env 1 data
tsminst2 | Dallas 5 (DC 1) | 1501 | DR of IOC Env 2 data
Diagram 1
We use SAN disk storage for our customers.
Disk layout for the two TSM servers (one CCI instance can support up to 8 TB of disk; adjust to your actual requirements):
FileSystem | Size | TSM Instance on DC | Comment
/home/tsmimst1, /n1_tsmdata, /n1_drdata, /n1_backupdata | 100G, 200G, 1000G | tsminst1 on DC1 | Sync /home/tsmimst1, /n1_tsmdata and /n1_drdata using drbd between DC1 and DC2; sync the data under /n1_backupdata using rsync between DC1 and DC2
/home/tsmimst2, /n2_tsmdata, /n2_drdata, /n2_backupdata | 100G, 200G, 800G | tsminst2 on DC1 | Sync /home/tsmimst2, /n2_tsmdata and /n2_drdata using drbd between DC1 and DC2; sync the data under /n2_backupdata using rsync between DC1 and DC2
/home/tsmimst1, /n1_tsmdata, /n1_drdata, /n1_backupdata | 100G, 200G, 1000G | tsminst1 on DC2 | Sync /home/tsmimst1, /n1_tsmdata and /n1_drdata using drbd between DC1 and DC2; sync the data under /n1_backupdata using rsync between DC1 and DC2
/home/tsmimst2, /n2_tsmdata, /n2_drdata, /n2_backupdata | 100G, 200G, 800G | tsminst2 on DC2 | Sync /home/tsmimst2, /n2_tsmdata and /n2_drdata using drbd between DC1 and DC2; sync the data under /n2_backupdata using rsync between DC1 and DC2
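The drbd synchronization referenced in the table above needs a resource definition on both TSM servers. A minimal sketch, assuming /dev/xvdc is the SAN disk backing the TSM filesystems and using placeholder host names and private IPs (protocol A is drbd's asynchronous mode, usually preferred for cross-data-center links):
# /etc/drbd.d/tsminst1.res (same file on both nodes)
resource tsminst1 {
  protocol A;
  on tsmsrv-dc1 {
    device    /dev/drbd0;
    disk      /dev/xvdc;
    address   10.0.1.10:7788;
    meta-disk internal;
  }
  on tsmsrv-dc2 {
    device    /dev/drbd0;
    disk      /dev/xvdc;
    address   10.0.2.10:7788;
    meta-disk internal;
  }
}
After creating the metadata with drbdadm create-md tsminst1 and bringing the resource up on both nodes, the filesystem is created on /dev/drbd0 on the primary node only.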
You can also order NAS storage in a third data center (Amsterdam 1) to store the two TSM servers' backup images; we use rsync to copy all the backup images to the NAS storage, so you have three copies of your data, making it safer.