One of my first experiments on BlueMix was to create a database and see whether I could move my local database to the cloud and free up my local resources. I knew I could create a new database and work with it in the BLU Acceleration offering, but for existing database users, chances are high that a database already exists on a local system. So the first thing one has to do is move that existing database into the cloud and free up the local system/infrastructure. I decided to try it out.
I didn't have a working database on my system, so I created one for this experiment. I have DB2 10.5 installed, so I created the SAMPLE database using the db2sampl tool that ships with the server installation. This tool creates a sample database with tables, indexes, views, stored procedures and many other objects, and loads data into them. So one can say it is similar to a production database, only of minimal size.
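As a minimal sketch of this step, the snippet below assembles the db2sampl invocation rather than hard-coding it in a terminal session; the database name is just an illustrative default, and actually running the command of course requires a local DB2 installation with db2sampl on the PATH.

```python
import subprocess

# Sketch: build the db2sampl command that creates the sample database.
# db2sampl ships with the DB2 server installation; the database name
# below is an assumption for illustration.
def build_db2sampl_cmd(dbname="SAMPLE"):
    """Return the db2sampl command line as a list of arguments.
    -name picks the name of the sample database to create."""
    return ["db2sampl", "-name", dbname]

cmd = build_db2sampl_cmd()
print(" ".join(cmd))
# On a machine with DB2 installed you would run it with:
#   subprocess.run(cmd, check=True)
```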
After logging into the BlueMix portal and adding the BLU Acceleration offering, I got a console to manage the service. Below is a snapshot of the BLU Acceleration web console.
So I have the option of designing my database or using existing data models. Under “Design your database”, there is an option to use a product like InfoSphere Data Architect to model the database locally and deploy it back to the cloud. I also have the option to load data from various sources into existing tables (which I can do only once my tables are created).
Being a public offering, whenever you add the BLU Acceleration service to a cloud application, a database is created (or shared with you) along with a user ID. You are given access to a schema in which you can create objects. This means you do not have full control over the database, which rules out restoring the whole database, as that might disrupt other users. BlueMix gives you the following options to actually move your database to the cloud.
Create your DDL using a tool like db2look and run it over the cloud database connection. This also means you may need to load the tables individually. This process can get quite complex for large databases.
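The db2look step can be sketched like this; the snippet only builds the command line (with -e to extract DDL, -z to restrict to one schema, and -o for the output file), and the database and schema names are placeholders, not values from my system.

```python
# Sketch: build the db2look invocation used to extract DDL from the
# local database. Database, schema and file names are assumptions.
def build_db2look_cmd(database, schema, out_file):
    """Return the db2look command line as a list of arguments.
    -e extracts the DDL statements, -z limits extraction to one
    schema, and -o writes the DDL to a file instead of stdout."""
    return ["db2look", "-d", database, "-e", "-z", schema, "-o", out_file]

cmd = build_db2look_cmd("SAMPLE", "DB2INST1", "sample.ddl")
print(" ".join(cmd))
# On a machine with DB2 installed you would run it with:
#   subprocess.run(cmd, check=True)
```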
Once you have the DDL extracted from your local system, go to the “Work with Tables” option as shown in the snapshot below.
Once you select the option, you see the following screen.
So the next step was to run the DDL. Clicking on the + sign gives a box in which to run the DDL statements.
While using this option, you need to make sure that tables and objects are created in dependency order, so that an object is created only after the objects it depends on. So you may need to divide your DDL accordingly.
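That dependency ordering is just a topological sort over the objects. Here is a minimal sketch using Python's standard-library graphlib; the table names and foreign-key relationships below are hypothetical examples, not the actual SAMPLE database layout.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Sketch: order CREATE TABLE statements so that a table is created only
# after the tables it references. Each entry maps a table to the set of
# tables it depends on (e.g. via foreign keys). All names are examples.
deps = {
    "DEPARTMENT": set(),                     # no dependencies
    "EMPLOYEE": {"DEPARTMENT"},              # FK to DEPARTMENT
    "PROJECT": {"DEPARTMENT", "EMPLOYEE"},   # FKs to both
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # dependencies always come before their dependants
```

Splitting the db2look output into per-table files and running them in this order avoids "object does not exist" errors on the cloud side.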
It also gives you an option to reverse engineer your database into a design using InfoSphere Data Architect and then deploy that design to the database in the cloud. This is the route one may need to follow when the database has many objects. Loading the data will be a separate task in this case too.
I will share the details of this option in my next entry.