Autoscale your database with ScaleBase for zero downtime of cloud applications

High performance and elasticity on IBM SoftLayer

Overview

Several strong trends are reshaping IT. Slim, sleek devices help us with daily chores. Purpose-specific and generic sensors, actuators, robots, machines, and instruments are proliferating. The user population is growing rapidly and the volume of data is exploding. As a result, zero downtime is becoming a mandatory requirement for applications, and horizontal scalability of the database layer is a key way to achieve it.

Custom and packaged applications are moving to cloud environments, where highly synchronized IT platforms automate and accelerate a flurry of activities that were previously handled manually by IT administrators. Connectivity and service enablement are now of prime importance. This is a stimulating foundation for knowledge-filled services, but a direct consequence of the advancement is a huge increase in the generation, capture, transmission, and storage of data. The many and varied data sources create critical data management challenges.

ScaleBase can be a major part of your strategy for high-performing, elastic, data- and process-intensive applications in the public cloud. ScaleBase builds many innovations on the proven and highly mature MySQL database and provides benefits for big and fast data. The much-needed horizontal scalability (scale-out) capability is built into ScaleBase through its sharding technique.

Historically, sharding a database required that you manually code data distribution policies directly into your applications. Developers wrote code that stipulated exactly where specific data should be placed and found. In essence, they created work-around code to solve a database scalability problem so applications could handle more users, more transactions, and more data. With ScaleBase, applications can cost-effectively use multiple MySQL instances working together.

When you try to create a distributed, or sharded, database, sharding conflicts are an especially challenging problem. If not handled appropriately, the ramifications of a sharding conflict can be severe: you could receive incorrect query results without even realizing it. Thankfully, ScaleBase takes care of sharding conflicts for you. ScaleBase:

  • Centralizes the data distribution policy management and database rebalancing
  • Provides an easy-to-manage, horizontally scalable database cluster built on MySQL to dynamically optimize workloads across a distributed database cluster
  • Centrally manages initial migration from a single MySQL instance to a distributed database, as well as extra scaling to stay ahead of increased database workloads—with no downtime for the application

In this article, learn how to bring the ScaleBase solution to IBM SoftLayer. You will make the necessary configuration changes and use a small sample application to see how ScaleBase functions in an online, off-premises, on-demand cloud environment.

About ScaleBase

ScaleBase, a dynamic database cluster that is built on MySQL, is optimized for the cloud. It provides the relational data integrity of MySQL and lets you scale out to support growing numbers of users, transactions, and data. ScaleBase's powerful, 24/7 features include:

  • Database scalability and elastic scale out, which continuously increase the database size and throughput and stay ahead of application workload requirements.
  • Database availability, reliability, and resiliency. Users are protected from downtime and delays; they (and their applications) remain connected to their documents, data files, and business systems.
  • Geo-distribution, so subsets of the database are closer to where that data is needed.
  • Hybrid private/public cloud, so subsets of the database can be split across multiple private or public sites.

ScaleBase provides a horizontally scalable database cluster that is built on MySQL, unlike other database systems that forgo ACID, SQL, and joins; rely on in-memory persistence and durability; and bank on risky asynchronous replication to achieve scalability and availability. It dynamically optimizes workloads across multiple nodes to reduce costs, increase database elasticity, and drive development agility.

Reliability and richness of MySQL

ScaleBase is the only distributed database management system that uses real, reliable MySQL engines, including MySQL with InnoDB, MariaDB, Percona Server, and Amazon RDS for MySQL. ScaleBase enhances these engines by adding scale-out, availability, and performance analytics capabilities. All original features remain operational, but are enhanced to scale out in a distributed environment. This includes:

  • Two-phase commit and roll-back
  • ACID compliance
  • SQL query model, including cross-node joins and aggregations

With ScaleBase, data distribution, intelligent load balancing, replication aware read-write splitting, transaction management, concurrency control, and two-phase commit are all 100% transparent. Applications continue to interact with your distributed database as if it were a single MySQL instance. Figure 1 shows the ScaleBase architecture.

Figure 1. ScaleBase architecture
mysql, controllers, client, transaction layer

ScaleBase is built for the cloud with a very simple deployment for even the most complex production applications. No code changes are required because ScaleBase uses a Layer 7 pass-thru model and policy-based data distribution.
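For example, an application or the standard mysql command-line client connects to the ScaleBase controller exactly as it would to a single MySQL server; only the host (and possibly the port) in the connection settings changes. In the sketch below, the controller host name, port, user, and schema are placeholders for your own environment, not values defined by ScaleBase.

# Connect through the ScaleBase controller exactly as you would to a single MySQL server.
# Replace the host, port, user, and schema with the values for your own deployment.
mysql -h scalebase-controller.example.com -P 3306 -u appuser -p appdb

Once connected, the application issues ordinary SQL, and ScaleBase routes the statements across the underlying MySQL instances.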

Preparing to install ScaleBase in IBM SoftLayer

This section covers the prerequisite software and hardware for ScaleBase and the steps to configure the MySQL DB before installing ScaleBase in SoftLayer.

Software

Currently, the supported platforms for ScaleBase are:

  • Ubuntu X86-64 (tested with 12.04)
  • CentOS X86-64 (tested with 5.8)

ScaleBase has been tested with the VMware ESXi 4 and 5 hypervisors. ScaleBase can also run on the Xen hypervisor, although we did not test that configuration. The solution in this article is expected to run on other prominent hypervisors and platforms in the near future.

The Firefox, Internet Explorer (IE), and Google Chrome web browsers are supported.

Hardware

Table 1 shows the hardware requirements for physical hosts.

Table 1. Hardware requirements for physical hosts
  • Purpose: Functional tests and development environments (Note: Not recommended for production!)
    Minimum specification: CPU: 4 cores, 1.5 GHz; RAM: 8 GB; HDD: 2 GB recommended
  • Purpose: Production and load tests
    Minimum specification: CPU: 16 cores, 2.5 GHz; RAM: 32 GB; HDD: 2 GB recommended

Configuring the MySQL DB

Before installing ScaleBase on a virtual instance in SoftLayer, you need to check whether MySQL is already installed by issuing the mysql command. It is assumed that you have MySQL installed, as shown in the following example.

(Screen output: MySQL monitor banner showing that commands end with ; or \g and the MySQL connection ID)

The next task is to configure MySQL with the prerequisites by entering the following commands from the MySQL monitor:

  1. CREATE DATABASE `scalebaseconfig`;
     Query OK, 1 row affected (0.00 sec)
  2. CREATE USER 'scalebase' IDENTIFIED BY 'password';
     Query OK, 1 row affected (0.00 sec)
  3. GRANT ALL PRIVILEGES ON `scalebaseconfig`.* TO 'scalebase'@'%' IDENTIFIED BY 'password';
     Query OK, 1 row affected (0.00 sec)
  4. GRANT ALL PRIVILEGES ON `scalebaseconfig`.* TO 'scalebase'@'localhost' IDENTIFIED BY 'password';
     Query OK, 1 row affected (0.00 sec)
  5. GRANT ALL PRIVILEGES ON `scalebaseconfig`.* TO 'scalebase'@'198.11.235.67' IDENTIFIED BY 'password';
     Query OK, 1 row affected (0.00 sec)
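To confirm that the prerequisites were created correctly, you can run the following standard MySQL statements from the same monitor. The exact privilege listing in the output depends on your MySQL version; these statements only read back what the commands above created.

SHOW DATABASES LIKE 'scalebaseconfig';
SELECT user, host FROM mysql.user WHERE user = 'scalebase';
SHOW GRANTS FOR 'scalebase'@'%';
SHOW GRANTS FOR 'scalebase'@'localhost';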

Installing ScaleBase

To install ScaleBase:

  1. Move the product distribution file, ScaleBase_3.2.2.tar.gz, into the SoftLayer VM using WinSCP. The target directory is /var/tmp on the target host.

    Give root permission to that file.

  2. Unzip the distribution file. From the VM command line, enter the commands in Listing 1.
    Listing 1. Unzip the distribution file
    cd /var/tmp
    tar zxfv ScaleBase_3.2.2.tar.gz
  3. Run the installation script, which performs operations that require root permissions. From the command line, enter the commands in Listing 2.
    Listing 2. Run installation script
    cd /scalebase-install_3.2.2.0
    ./install.pl --help
    (Output: list of files to install, followed by the installation options outlined in Listing 3)
  4. Carefully read the installation procedures on the screen, which are summarized in Listing 3, before installing ScaleBase.
    Listing 3. Installation options
    Simplified Mode:
    --simplified  Simplified installation with a built-in configuration database.
                  Does not require local mysql client, not ready for failover.
                  Cannot be upgraded to a standard installation later.
                  This mode is not recommended for production installations.
    --help        Displays additional instructions for the simplified mode.
    
    --upgrade     Upgrades existing installation in the installation directory.
                  Parameter values are taken from existing installation. 
    
    STANDARD OPTIONS for normal installation mode:
    --user        Required. User name for connection to configuration database.
    --password    Password for connection to configuration database.
    --host        Address of the configuration database 1. Default=127.0.0.1
    --host2       Address of the configuration database 2. Default=127.0.0.1
    --port        Port of the configuration database 1. Default=3306
    --port2       Port of the configuration database 2. Default=3306
    --configdb    Name of schema with config. data in cfg. db 1. Default=scalebaseconfig
    --configdb2   Name of schema with config. data in cfg. db 2. Default=scalebaseconfig
    --installConfDB  Prepare configuration tables in config. dbs (y/n). Default=y
    --xm          Controller JVM Heap Size in GB. Minimum value is 5. Default value depends   
                  on available RAM.
    --nokey       Do not wait for ENTER.
    --customUser  Flag for installation with a different user (not root); the parameter doesn't take a value.
    
    Examples:
    ./install.pl --user=scalebase --password=password --host=127.0.0.1
    ./install.pl --simplified

    There are two types of installation:
    • Installation of a simplified instance (not for production) by using the command in Listing 4.
      Listing 4. Install simplified instance
       ./install.pl --simplified

      The script prints out installation information and waits for the user to continue. Review the installation variables, then press Enter to continue. The script will complete the installation process.

    • Installation of the first standard instance for production.

    The example in this article uses the second, standard procedure, as shown in Listing 5.

    Listing 5. Standard procedure with password
    ./install.pl --user=scalebase --password=password --host=198.11.235.67

    After entering the command in Listing 5, Stage 1 of the installation script provides messages about the installation directory, user, installation parameters, available RAM, controller heap size, and configuration settings. Stage 2 of the installation script installs ScaleBase components, extracts files, and installs the ScaleBase crontab. Stage 3 starts the controller, services, and web console processes, as shown below.

    (Screen output: Stage 1 shows the initial configuration for review; Stages 2 and 3 show the installation progress and completion messages)

    To complete the initial configuration, open the ScaleBase configuration wizard in your web browser.

  5. Follow the instructions displayed in the PuTTY session and open http://198.11.235.67:2701 (the public IP of the SoftLayer VM) in your browser. You should see the ScaleBase configuration wizard, as shown in Figure 2.
    Figure 2. ScaleBase configuration wizard
    configuration wizard with 5 steps
  6. Read the trial license agreement, select I Agree, then click Continue, as shown in Figure 3.
    Figure 3. ScaleBase trial license agreement
    text of license agreement
  7. Enter your Web Console Username and Password, then click Continue, as shown in Figure 4.
    Figure 4. Create first user account
    web console username, pw to create DTM user
  8. As shown in Figure 5, click Upload File to provide your ScaleBase license file.
    Figure 5. Upload a ScaleBase license file
    upload a scalebase license file
  9. As shown in Figure 6, enter:
    • Friendly Name: admin
    • Address: 198.11.235.67
    • Port: 3306
    • DBA Username: scalebase
    • DBA Password: password
    Figure 6. Connection information
    connection information for first database node
  10. Add connection information for any additional database nodes, then click Finish, as shown in Figure 7.
    Figure 7. Connection information for additional database nodes
    connection information for additional nodes

Configuring write node failover

Follow the instructions in this section to add the replicated slave nodes to the cluster and configure the application failover policy.

Add replicated slave nodes to the cluster

From the ScaleBase console, select the Administer tab. Click Apps, then select the application and cluster where the slave needs to be added, as shown in Figure 8.

Figure 8. Node details
node, database host, port, read or write enabled
  1. Click Add to open the Add/Modify Database Node window, as shown in Figure 9.
    Figure 9. Add or modify database node
    add or modify database node
  2. You can either select an existing database server if you previously entered the data in the system (this is an infrequent case), or add a new server.

    To add a new server, select Add New Server and complete the fields:

    • Friendly Name: Server 0.2
    • Address: localhost
    • DB port: 3307
    • DBA Username: root
    • DBA Password: password

    Click Verify and save, as shown in Figure 10.

    Figure 10. Add new server
    add or modify database server
  3. Adjust the read load-balancing and failover policies for the new server node. To enable the Failover option, select ON for Failover Enabled and modify the Failover Priority of the node (smaller value = higher priority), as shown in Figure 11. Then click Verify and save.
    Figure 11. Adjust options
    adjust load balancing and failover
  4. To review the newly added node, click the Monitor tab and select Application > Cluster > Node. Review the replication status of the slave, as shown in Figure 12.
    Figure 12. Review new node
    review replication status of slave
  5. To modify the node's properties, go to Administer > Application > Cluster > Node.

Configure the application failover policy

  1. Switch to the Administer tab, go to the Apps tab, select your application, then click the Configuration tab, as shown in Figure 13.
    Figure 13. Configuration tab
    Replication, query manipulation, advanced options
  2. Enter the value for the Replication lag threshold for failover (seconds). In our example, it is 2.
  3. Specify the Database replication method, which is MYSQL in our example. It is used to replicate data from the write node to all its slaves.

    Note that this option only tells ScaleBase what method to expect and test for; it will not modify or configure the actual replication between the databases. A minimal example of configuring MySQL replication manually follows this list.

  4. Scroll down and click Save when finished. This configuration change does not require a ScaleBase controller restart.
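Because ScaleBase only checks for the chosen replication method, the actual MySQL replication from the write node to its slaves must be set up by the operator. The following minimal sketch uses standard MySQL replication statements; the replication user, password, and binary log coordinates are placeholders that you must replace with values from your own environment (for example, from SHOW MASTER STATUS on the write node).

-- Run on the slave node: point it at the write (master) node and start replicating.
-- The replication user, password, log file, and position below are example placeholders.
CHANGE MASTER TO
  MASTER_HOST = '198.11.235.67',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'password',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 4;
START SLAVE;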

Load balancing of reads

To improve the scalability of database operations for read-intensive applications, ScaleBase can distribute reads to multiple replicated slave nodes inside each cluster. Select ON for Read Enabled, as shown in Figure 14, then click Save.

Figure 14. Read enabled
distribute reads to multiple slave nodes

ScaleBase distributes reads to all the available replicated nodes in a round-robin fashion as long as the nodes are well replicated and the replication lag stays under the defined threshold. Should the replication lag cross the threshold for a node, the node is excluded from the read load-balancing until its lag drops below the threshold again.
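On a MySQL slave, replication lag is conventionally reported by the Seconds_Behind_Master field of SHOW SLAVE STATUS. How ScaleBase measures the lag internally is not detailed here, but you can use the same standard statement to sanity-check a node that has been excluded from read load-balancing:

-- Run on the replicated slave node; Seconds_Behind_Master shows the current lag.
SHOW SLAVE STATUS\G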

Configure the read balancing policy

To load-balance the reads for a selected application, go to the application's configuration screen to modify the threshold settings and node affinity.

  1. Go to the Administer tab, click the Apps tab, select your application, then click the Configuration tab, as shown in Figure 15.
    Figure 15. Set up replication
    set replication threshold for reading, failover
  2. Enter the new value for the Replication lag threshold for reading (seconds). In our example, it is 0.
  3. Specify the Database replication method, which is MYSQL in our example. It is used to replicate data from the write node to the replicated slaves.

    Note that this option only tells ScaleBase what method to expect and test for; it will not modify or configure the actual replication between the databases.

  4. Scroll down and set the Load balancing affinity parameter, as shown in Figure 16. It is STATEMENT in our example.
    Figure 16. Set load balancing affinity
    select from pulldown, load balancing affinity field

    Click Save when finished. This configuration change does not require a ScaleBase controller restart.

Distributing data

Updating an existing distribution policy is a complex task because it has a direct impact on existing and incoming data in clusters (shards). ScaleBase maintains data consistency and integrity during the change through a two-step process that makes the operation error-proof, consistent, and transparent.

At any moment, there is exactly one "active configuration" that dictates the actual physical distribution for existing (current) data in clusters. After an installation, the active configuration consists of a single cluster; all the data are stored in a single cluster and there is no actual distribution in effect.

  1. Modify the sharding configuration. For example, add new clusters, mark clusters to be deleted, add a new or update an existing distribution policy, add new schemas or tables to the policy, update or split ranges in distribution rules, and so on.

    These pending changes are not active until manually committed by a ScaleBase operator (in the next step). Indicators on the screen also help you, as follows.

    • The user interface (UI) has an instant visual indication of pending changes in the policy and cluster configuration that have not been committed yet.
    • Clusters with a change (added or removed) are decorated with icons indicating their pending status.
    • Updated policy elements are marked with comments indicating the nature of the change such as NOT ACTIVE, FROM GLOBAL, and so forth.

    Note: You can undo all the pending changes in policy with the administrator action Revert Scheduled Policy Changes from the console.

  2. Commit all the pending configuration and policy changes by using the Commit Policy Changes action. When a ScaleBase operator commits not-yet-active changes, ScaleBase performs a set of tasks that ensure:
    • Schemas are uniformly created on all clusters
    • Auto-increment column offsets and increments get adjusted to prevent duplicated values on independent clusters (see the illustration after this list)
    • Global tables data is uniformly redistributed among all clusters
    • All master tables data is migrated to the master cluster
    • All distributed tables data is migrated to their target clusters, as prescribed by the updated distribution policy
    • For hash-based rules, the data of distributed tables is uniformly redistributed among all clusters
    • For updated ranges, data with a new home is moved to its new cluster

    The redistribution can take a long time, especially when a large amount of data has to be relocated to conform to the updated policy. Operators can tweak the redistribution process, called CoRD, as discussed in Committing policy changes and redistributing data.
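As an illustration of the auto-increment adjustment mentioned above, MySQL's standard auto_increment_increment and auto_increment_offset server variables can interleave generated keys so that independent clusters never produce the same value. The three-cluster values below are assumed for the example only; ScaleBase applies its own settings automatically during the commit.

-- Illustration only: interleaved auto-increment values across three clusters.
-- Cluster 1 generates 1, 4, 7, ...; run the equivalent with offset 2 and 3 on clusters 2 and 3.
SET GLOBAL auto_increment_increment = 3;
SET GLOBAL auto_increment_offset = 1;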

Administering clusters

A ScaleBase cluster is a master node with replicated slaves that stores a fragment of data (a shard) in a highly available fashion. The description of which records get stored on which cluster is defined in a data distribution policy. Thus, the first requirement for a functioning data distribution is a physical configuration of the clusters. Each cluster must contain at least its master database node, and this DB instance must be ready when you add a new cluster to an application configuration.

To add a new cluster:

  1. From the Administer tab, select the application, then click Add, as shown in Figure 17.
    Figure 17. Add a cluster
    add cluster from Administrator tab
  2. In the Add Cluster Wizard, complete the fields about the location and DBA credentials for the master node of the cluster as follows and as shown in Figure 18.
    • Enter a new database server: You can either select an existing record of this DB server in the system if it was added previously (this is not a common situation), or you can add a new server directly in the window by selecting this option.
    • Friendly Name: Cluster: 1
    • Address: scalebase.softlayer.com
    • Port: 3306
    • DBA Username: scalebase
    • DBA Password: password
    Figure 18. Add location and credentials
    add cluster wizard

    Click Finish, and the cluster will be created with a single read/write master node, as shown in Figure 19.

    Figure 19. Create cluster
    created cluster with single read/write master node
  3. To review the new cluster and its master node, go to the Monitor tab, select the application, and select the new cluster at the bottom of the list of clusters, as shown in Figure 20. Until you commit the configuration changes, the cluster's icon will display an arrow symbol indicating that the cluster requires data redistribution. Because the redistribution has not been run yet, its data distribution status will show that the cluster is not yet active.
    Figure 20. Review new cluster
    commit changes to activate cluster

To add additional clusters to the system, repeat the previous steps.

Data distribution policy

We recommend that you start this step after the database schema has been created on the master node of the first cluster and all clusters have been added to the system.

This section describes how to configure the application's distribution policy using the UI. It is outside the scope of this article to discuss the policy design, which would typically happen during your analysis phase.

You can quickly import or export policy definitions from a file. From the Administer tab, select the application. At the top right of the screen is a choice of Actions to Import Distribution Configuration and Export Distribution Configuration, as shown in Figure 21. Select an action to import or export a distribution policy from a file.

Figure 21. Import or export distribution policy
Actions: import/export distribution configuration

Getting started with distribution rules

This section explains how to set your hash rules and range rules.

Hash-based rules

Hash rules are the best fit for maintenance-free and uniform distribution of records among multiple clusters. The main drawback is that users have no control over the location of data in clusters. And, increasing or decreasing the number of clusters results in redistribution of significant amounts of data.
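To picture how a hash rule places rows, consider a hash-and-modulo placement of the distribution key over a fixed number of clusters. ScaleBase's internal hash function is not documented in this article, so the expression below (CRC32 plus MOD over an assumed four clusters, on the customer table used later in this example) is only an illustration of the general idea, not the actual algorithm.

-- Illustration only: a hash-modulo placement of the distribution key `id` over four clusters (0-3).
SELECT id, MOD(CRC32(id), 4) AS target_cluster
FROM customer
LIMIT 10;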

To add a table to a hash rule:

  1. From the console, go to the Administer tab, select your application, then click the Distribution Policy tab.
  2. Select Rules, then click Add to add a new hash rule with the type INTEGER or STRING, as shown in Figure 22.
    Figure 22. Add distribution rule
    add hash rule, type hash, data type integer

    Click Verify and save.

  3. Select the hash rule of the INTEGER or STRING type in the tree, then click Add. As shown in Figure 23, in the Add Distributed Table Definition window select the following:
    • Schema: scalebaseconfig
    • Table: customer
    • Data Distribution Key: id
    Figure 23. Add distributed table definitions
    add schema, table, data distribution key

    Click Verify and save.

After adding the tables, you should see the window showing NOT ACTIVE, as shown in Figure 24. Any new rules and tables are not active until you commit the policy changes.

Figure 24. Distributed tables
new rules and tables, not active

Range and list rules

Use range rules when you want control over distribution of records into individual clusters. Careful planning of range-based rules also helps eliminate the need to redistribute data when new clusters are added to the system for increased capacity. With range rules, operators are responsible for the definition of range-cluster mapping. Range rules can only reference clusters that are already entered in the system.

To add a new range and list rule:

  1. From the Administer tab, select your application, then click the Distribution Policy tab.
  2. Click the Distributed Tables tab. You should see a node called Rules in the navigation tree near the center of the screen. Click Rules, then click Add.
  3. In the Add Distribution Rule window shown in Figure 25, enter the Description, change Type to RANGE, and select the Data Type, which is INTEGER in our example.
    Figure 25. Select type
    change rule type to Range

    Click Verify and save. You will see a warning message that you have to specify the value ranges and their clusters.

  4. Specify the value ranges and select your range rule in the tree to see a form for its definition.
  5. Click Add, as shown in Figure 26, and next to the Data Distribution Keys table you should see a window to define a new range.
    Figure 26. Define new range
    add range next to data distribution keys field

    In the Add Data Distribution Key Range window, define your range, as shown in Figure 27.

    Figure 27. Set the range upper and lower bound
    specify upper bound, target cluster for new range

    Click Verify and save.

After adding the range, you should see the screen shown in Figure 28. The range value represents an upper bound of an interval of values that will go to this cluster. The lower bound is defined by the previous (smaller) range value definition. The list of range values does not need to be sorted in the UI, as it gets sorted automatically by ScaleBase.

Figure 28. Range values
Modified rule with range values
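As a purely illustrative reading of such a rule, suppose two range values are defined: 1000 mapped to Cluster 1 and 2000 mapped to Cluster 2 (assumed numbers, not taken from Figure 28). The CASE expression below only spells out the resulting mapping; it is not ScaleBase syntax, and the exact inclusiveness of the bounds is determined by ScaleBase.

-- Hypothetical mapping for range values 1000 -> Cluster 1 and 2000 -> Cluster 2.
SELECT id,
       CASE
         WHEN id <= 1000 THEN 'Cluster 1'   -- up to the first range value
         WHEN id <= 2000 THEN 'Cluster 2'   -- lower bound set by the previous range value
         ELSE 'no matching range'
       END AS target_cluster
FROM customer
LIMIT 10;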

Adding a table to a range or list rule

As shown in Figure 29, select your rule, then click Add. In the Add Distributed Table Definition window, select the Schema, Table, and Data Distribution Key for this table.

Click Verify and save.

Figure 29. Add distributed table definition
add schema, table, data distribution key

Repeat this procedure for all the tables to be distributed with the desired rule before proceeding to the next step.

Modifying distribution rules that relocate data between clusters

For hash-based rules, redistribution of data between clusters occurs when clusters are added to or removed from the system. There is no way for operators to control what data goes where; the system uses hash and modulo algorithms for the best possible distribution. For range-based rules, the possible operations on ranges are adding, splitting, and merging.

For range-based rules, ScaleBase provides the following capabilities:

  • With the initial sharding of a single database with existing data, when the policy changes are committed, all existing data is migrated to its destination clusters according to the newly defined ranges of values.
  • When a policy with ranges is active, you can relocate each range to a different cluster or split an existing range into two ranges and specify a new location for both sub-ranges. When relocating two or more neighboring ranges to the same cluster, they will effectively be merged during the policy change commit.

Figure 30 shows how to modify distribution rules for the cluster.

Figure 30. Modify distribution rules
relocate range to new cluster

Committing policy changes and redistributing data

During the commit process, the modified policy is checked against the current schemas on the master cluster and against existing data. New clusters must not contain schemas that are present in the policy; new clusters should always be blank. Then, all schemas in use are created on all new clusters, all policy changes get activated, and the new policy becomes active. The last step of the commit process is an automated execution of CoRD (continuous redistribution of data), which only moves data that has to be moved to keep it in sync with the updated policy.

Typical changes that require data to be redistributed during a commit include:

  • Conversion of tables (for example, distributed to global)
  • Listing a new schema (globalization of its unlisted tables)
  • Adding or removing a cluster when hash rules are used
  • Relocating or splitting of ranges

From the Administer tab, select your application. In the upper right of the screen, select Commit Policy Changes next to Actions, then click Run.

You should see the Data Re-distribution window, as shown in Figure 31. Select Offline, then click Run data re-distribution.

Figure 31. Data redistribution warning
specify # of data chunks for rules, offline or online

Online redistribution versus offline

In many situations, it is important to keep the database operational for clients during the redistribution procedure. The online mode is designed for such situations: most of the data in the clusters remains available for reads and writes, and only the subset of data currently being relocated is locked for writes until its relocation is finished.

Offline redistribution omits the locking of rows and can be significantly faster (up to 10x, depending on actual data and its size). It requires that applications be switched offline. ScaleBase operators are responsible for ensuring that no application will attempt to connect while the offline CoRD is running.

Adding a new application user to ScaleBase

From the Administer tab, select the defaultApp application, as shown in Figure 32, then click the Users tab.

Click Add above the list of users, then enter the new user credentials.

Figure 32. Add users
select users, then add

Configure the user with the information in Listing 6.

Listing 6. Configure user
CREATE USER 'dbt2' IDENTIFIED BY 'dbt2';
GRANT ALL PRIVILEGES ON `dbt2`.* TO 'dbt2'@'%' identified by 'dbt2';
GRANT ALL PRIVILEGES ON `dbt2`.* TO 'dbt2'@'localhost' identified by 'dbt2';
GRANT ALL PRIVILEGES ON `dbt2`.* TO 'dbt2'@'198.11.235.67' identified by 'dbt2';

System administration

This section briefly covers a few system administration tasks.

Starting and stopping ScaleBase service

When the installation completes, ScaleBase is automatically started. Log in as a superuser (root) or run the following commands with sudo.

  • To stop a running ScaleBase instance, run service scalebase stop.
  • To start a stopped instance, run service scalebase start.
  • To restart ScaleBase, run service scalebase restart.

Resetting the admin password

To reset the password of the admin user to admin, go to the bin directory under the ScaleBase installation directory on the ScaleBase host and run the adminAccountReset.pl script, as shown in Listing 7.

Listing 7. Reset password
cd /usr/scalebase/bin
./adminAccountReset.pl

Creating a report with logs

ScaleBase staff might routinely ask you to provide logs when troubleshooting or examining a specific behavior. To create a report with logs:

  1. Run /usr/scalebase/bin/CreateErrorLogZip.pl.
  2. Send the report file called /var/tmp/ScalebaseErrorReport.tar.gz.

Maintenance mode for clusters

A whole cluster with all its databases can be brought offline for maintenance without triggering a costly failover. In maintenance mode, all incoming statements requiring access to the disabled cluster (and any of its databases) will be rejected with an error until the cluster is brought back online.

To put a cluster in maintenance mode, go to the Administer tab and select the cluster. Select Turn On Maintenance Mode from Actions, then click Run, as shown in Figure 33.

Figure 33. Put cluster in maintenance mode
Actions, turn on maintenance mode

Clusters in the maintenance mode are marked with a special icon, as shown in Figure 34.

Figure 34. Maintenance mode icon
wrench icon shows maintenance mode

Maintenance mode can be turned off at any time by going back to Actions and selecting Turn Off Maintenance Mode.

Starting and stopping installed and running services

The installation script prepares and starts a new service in the target OS called scalebase, which should always be used to start and stop a ScaleBase instance. The script is in the /etc/init.d/scalebase file.

To start and stop the ScaleBase service, use the same commands described in Starting and stopping ScaleBase service: log in as a superuser (root), or run service scalebase stop, service scalebase start, or service scalebase restart with sudo.

Conclusion

IT infrastructures are being optimized and organized in a meticulous manner to host and run next-generation software applications and services. Autoscaling is becoming indispensable in the cloud world and is being addressed at every layer, especially the database layer.

ScaleBase deployment is simple, even for the most complex production applications, and it is architected for the cloud. No code changes are required, as ScaleBase uses a Layer 7 pass-thru model and policy-based data distribution. You can focus on innovation while ScaleBase provides:

  • The reliability and feature richness of MySQL database engines
  • Powerful capabilities for effective distributed database management
  • Simplicity in deployment and operation

This article explained how to use the unique capabilities of ScaleBase when developing new, big-data applications from the ground up. We also explored how to modify existing applications to be highly scalable with zero downtime.

