1. Welcome
1.1. Product overview
Big Replicate for IBM Db2 Big SQL provides enhanced functionality for the Big Replicate for Hadoop platform, with replication facilities for users of IBM’s Big SQL product. Big SQL is a hybrid SQL on Hadoop engine that allows extensive SQL support against information held in a variety of stores, including Hadoop. It leverages DB2 technologies for query planning and optimization, and integrates with the broader IBM analytics platform.
Big Replicate for IBM Db2 Big SQL extends the functionality of Big Replicate for Hadoop with Live Hive to support active replication of operations performed via Big SQL, deployed in multiple clusters. This includes all DDL operations that modify Big SQL metadata.
In this guide, Big Replicate for IBM Db2 Big SQL refers to a Big Replicate for Hadoop deployment with the Big SQL Plugin installed.
1.2. Documentation guide
This guide contains the following:
- Welcome
-
This chapter introduces the user guide and explains how to use it.
- Release Notes
-
Details the latest software release, covering new features, fixes and known issues to be aware of.
- Concepts
-
Explains how the Big SQL Plugin functions, and how it interacts with the Big Replicate platform.
- Installation
-
Covers the steps required to install and set up the Big SQL Plugin into a Big Replicate for Hadoop deployment.
- Operation
-
The steps required to run, reconfigure and troubleshoot Big Replicate for IBM Db2 Big SQL.
1.2.1. Symbols in the documentation
In this guide, we highlight types of information using the following callouts:
The alert symbol highlights important information.
The STOP symbol cautions you against doing something.
Tips are principles or practices that you’ll benefit from knowing or using.
The i symbol shows where you can find more information, such as in our online Knowledge Center.
1.3. Contact support
See our online Knowledge Center which contains updates and more information.
1.4. Give feedback
If you find an error or if you think some information needs improving, please Contact us.
2. Release Notes
2.1. IBM Big Replicate Plugin for Db2 Big SQL Release 5.0.0 Build 62
20 December 2019
The IBM Big Replicate Plugin for Db2 Big SQL extends IBM Big Replicate in environments where IBM Db2 Big SQL is used along with Apache Hive. With it, IBM Big Replicate maintains consistent Big SQL catalog information among multiple deployments.
The 5.0.0 release of the IBM Big Replicate Plugin for Db2 Big SQL is the first release of this new capability, extending Big Replicate for Hadoop.
2.1.1. Available Packages
This release supports deployment into IBM Big Replicate 2.15.1 for HDP Hadoop clusters with the following:
-
HDP 2.6.0 - 2.6.5
-
Java 8
-
IBM Db2 Big SQL 5.0.3+
Installation packages are provided for Red Hat Enterprise Linux 7 and CentOS 7.
2.1.2. Installation
The IBM Big Replicate Plugin for Db2 Big SQL supports an integrated installation process that allows it to be added to an existing Big Replicate deployment. Consult the Installation Guide for details.
The product should be installed into an existing IBM Big Replicate deployment using the installer provided:
bigreplicate-live-bigsql-installer.5.0.0.sh
Please find detailed installation instructions in the installation section.
2.1.3. System Requirements
Before installing, ensure that your systems, software and hardware meet the requirements found in the Pre-requisites section.
2.1.4. Known Issues
The initial release includes a small set of known issues.
-
Non-selective replication
The current release replicates operations against all Big SQL constructs, rather than limiting replication to those constructs for which Big Replicate replicates Hive metadata.
-
No support for zone-specific translation
Operations performed against the Big SQL catalog that reference cluster-specific information, such as host names, are not translated into equivalent operations in other clusters; instead, the originating zone’s operation is applied as-is.
3. Concepts
3.1. Product concepts
3.1.1. Big Replicate for IBM Db2 Big SQL
IBM Db2 Big SQL is an enterprise-grade, hybrid ANSI-compliant SQL on Hadoop engine that delivers massively parallel processing (MPP) and advanced data query capabilities. Big SQL offers a single database connection or query for disparate sources such as HDFS, RDBMS, NoSQL databases, object stores, and WebHDFS.
3.1.2. Big Replicate
Big Replicate for Hadoop enables active-active file system data replication between Hadoop clusters, including across distributions. Big Replicate for Hive extends this capability to Hive metadata.
Big Replicate for IBM Db2 Big SQL
A standard Big SQL deployment allows SQL queries to access information from disparate data sources but does not extend to data sets that are replicated among Big Replicate HDFS clusters. The Big SQL Plugin extends the abilities of Big Replicate to support active replication of operations performed via Big SQL when deployed in multiple clusters. Use it to perform catalog changes and queries that reference data replicated by IBM Big Replicate on any of the replicated Big SQL instances.
3.1.3. Capabilities
-
Create and modify Big SQL table definitions that refer to information held in HDFS, while Big Replicate ensures that both the HDFS-resident data and the Big SQL catalog information that refers to that data can be created, used and modified in any of the replicated instances.
-
Replicate Big SQL metadata changes without any modification to Big SQL services by using a proxy interface to the Big SQL catalog.
-
Maintain compatibility with standard Db2 protocols.
-
Integrate with Big Replicate for Hive to support Hive Metastore and Big SQL Catalog replication.
3.2. Architecture
The following diagram shows the sequence of operations that make up a basic Big SQL Plugin replication cycle and illustrates the interactions between Big Replicate for Hadoop, the Filter Proxy, the Big SQL Plugin and the Big Replicate for Hive Plugin.
-
IBM Db2 Big SQL Client request sent to the Filter Proxy.
-
Filter Proxy forwards the write request to the Big SQL proxy.
-
Big SQL proxy forwards the request to the Big Replicate Server for coordination.
-
The Big Replicate Server coordinates the proposal with the remote Big Replicate Server.
-
Big Replicate forwards the replicated request to the Big SQL Plugin.
-
The Big SQL Plugin responds to the Filter Proxy.
-
In the remote context, the Big SQL Plugin forwards the request to the IBM Db2 Big SQL Head Node via the JDBC driver.
-
The Big SQL Proxy responds to the Filter Proxy.
-
The Filter Proxy sends the request to IBM Db2 Big SQL Head Node.
-
The Filter Proxy receives the response from IBM Db2 Big SQL Head Node.
-
The Filter Proxy forwards the response to the IBM Db2 Big SQL Client.
No change to Big SQL behavior: The Big SQL Plugin does not change the way client applications interact with Big SQL. Queries and operations that operated correctly prior to its introduction will continue to operate correctly after it is introduced.
3.3. Supported Functionality
3.3.1. Features
- Pre-execution coordination
-
Changes made to Big SQL catalog information are replicated among multiple deployments. The Big SQL Plugin coordinates client operations across Big SQL catalogs, with changes coordinated before, rather than after, execution.
- No scheduled replication
-
While Big SQL provides facilities to schedule updates of the Big SQL catalog based on changes made in the Hive metastore, Big Replicate for IBM Db2 Big SQL makes continuous updates to all points, supporting active modifications in any instance with no potential for conflict.
- Low overhead
-
Adding replication capabilities to a Big SQL deployment does not impose significant overhead compared to regular operation.
- Compatibility
-
Big Replicate for IBM Db2 Big SQL is compatible with current versions of Big SQL. See the release notes for more details on the versions supported.
- Cross-version replication
-
The Big SQL Plugin is not tightly coupled to the schemas in the Big SQL catalog, and will accommodate mixed versions of Big SQL. See the release notes for more details on the versions supported.
- Minimal administrative overhead
-
An administrator doesn’t need to spend additional time maintaining a system in which replication functionality has been enabled.
- Scalability
-
The Big SQL Plugin can scale to deployments of any size and complexity, and to high rates of data change.
3.3.2. Limitations
The Big SQL Plugin has the following known limitations.
-
No data replication for Db2 or HBase tables. The data replication of Hadoop (Hive tables) is handled by the Big Replicate product.
-
The Big SQL Plugin should be used with Big Replicate for Hive to interact with the Hive metastore.
-
Federated queries are not completely supported. See Federated Queries Support below.
-
High Availability is supported for the Filter Proxy/DataSunrise, but not for the Big SQL Plugin or the Big SQL database.
Federated Queries support
In this release, federated queries are not completely supported.
Fields which differ between zones, such as PASSWORD, are not mapped to correct values across deployments.
-
If the query is executed from zone1, then zone2 will execute the query without converting the PASSWORD, HOST and PORT values.
-
The replication path in both zones must be the same.
For example, consider the federated queries below.
CREATE SERVER MYDB1
TYPE DB2/UDB
VERSION 11
AUTHORIZATION "<bigsql-username>" PASSWORD "<bigsql-password>"
OPTIONS (HOST '<Big Replicate-FQDN>', PORT '32051', DBNAME 'bigsql');
DROP SERVER MYDB1;
CREATE USER MAPPING
FOR USER
SERVER MYDB1
OPTIONS (
REMOTE_AUTHID 'userMap',
REMOTE_PASSWORD 'bigsql'
)
4. Installation
Installing the Big SQL Plugin requires the following stages.
-
Check prerequisites - Ensure that your cluster is ready for installation, for example check that Big Replicate for Hive is installed and working.
-
Install Filter Proxy - The Filter Proxy is used to parse the incoming DDL/DML and only forwards DDL to the Big SQL proxy.
4.1. Assumptions
Root user access or equivalent privileges will be required for all commands unless specified otherwise.
4.2. Pre-requisites
The following software must be installed on the cluster:
-
HDP 2.6.0 - 2.6.5
-
Big Replicate for Hadoop running with a minimum of 2 inducted zones and a replication rule
You must create a rule using the steps in the Live Hive documentation. In this guide the replication rule uses the path /sqlproxy; substitute your own rule path wherever /sqlproxy appears. Note that the guidelines for file and folder permissions also apply to this path.
-
ODBC driver for DB2 (v11.1.3fp3a_linuxx64_odbc_cli.tar.gz)
Only use version v11.1.3fp3a_linuxx64_odbc_cli.tar.gz.
-
Big Replicate for Hive v2.x
-
[Not required but useful for client machines] Db2 client (e.g. dbeaver, jsqsh)
All Big SQL Client connections must be changed to reflect where the Big Replicate proxy is located.
4.2.1. Supported Operating Systems
The Big SQL Plugin supports the following operating systems.
-
CentOS 7
-
Red Hat Enterprise Linux 7
4.2.2. Server OS
Strict consistency is required between Big Replicate for Hadoop nodes running the Big SQL Plugin. At a minimum, your nodes should run the same versions of:
-
Hadoop/Manager software
-
Linux 7.x
-
Check to see if you are running a niche variant, e.g. Oracle Linux is compiled from RHEL but it is not identical to a RHEL installation.
-
-
Java
-
Version 8
Only Java 8 is supported. -
Ensure you are running the same version, on consistent paths.
-
4.2.3. Hadoop Environment
Confirm that your clusters have a status of healthy.
-
All nodes must have a fusion system user account for running Big Replicate services.
-
Check the Hadoop daemon log files for any errors that might cause problems with your installation.
Folder Permissions
When installing the Big Replicate for Hive proxy or plugin, the permissions of /etc/wandisco/fusion/plugins/xxxx/ are set to match the fusion user (FUSION_SERVER_USER) and group (FUSION_SERVER_GROUP), which are set in the Big Replicate node installation procedure. Permissions on the folder are also set such that processes can write new files to that location as long as the user associated with the process is the FUSION_SERVER_USER or is a member of the FUSION_SERVER_GROUP.
No automatic fix for permissions
Changes to the fusion user/group are not automatically propagated to their directories. You need to fix such issues manually, following the guidelines above.
4.2.4. Firewalls and Networking
-
If iptables or SELinux are running, you must confirm that any rules that are in place will not block the Big SQL Plugin communication.
-
If any nodes are multi-homed, ensure that you account for this when setting which interfaces will be used during installation.
-
Ensure that you have hostname resolution between clusters, if not add suitable entries to your hosts files.
-
Check your network performance to make sure there are no unexpected latency issues or packet loss.
4.3. Installation Steps
If you are using Kerberos, please see the additional steps required for installation in the Configure Kerberos section. If using Kerberos, the screenshots shown below may not be representative.
4.3.1. Install DataSunrise and Filter Proxy
The Filter Proxy is bundled with the Big SQL Plugin installer.
-
Obtain the Big SQL Plugin installer from Download IBM Big Replicate for Hadoop and open a terminal session on your Big Replicate node.
-
Ensure the downloaded file is executable e.g.
chmod +x live-bigsql-installer-<version>.sh
-
Run the installer using an account with appropriate permissions:
./live-bigsql-installer-<version>.sh install-ds ds-key=<license-key>
This will install DataSunrise and update the license key.
-
Connect to the DataSunrise Server and navigate to /opt/datasunrise/cmdline.
-
Execute the following command:
./executecommand.sh connect -host 127.0.0.1 -login admin
-
Navigate to /opt/datasunrise/ and set the admin password using the command:
./AppBackendService set_admin_password=<new_password>
AppBackendService is the system process name for the Filter Proxy GUI; the command script is found in /opt/datasunrise/.
-
Connect to the DataSunrise Server and navigate to /opt/datasunrise/cmdline. Execute:
./executecommand.sh connect -host 127.0.0.1 -login admin -password <password>
If the password is not set then only provide the username.
-
Add the database parameters, i.e. the database instance:
./executecommand.sh addInstancePlus -database BIGSQL -dbType db2 -dbHost <IBM Big SQL Host> -dbPort <IBM Big SQL Port> -proxyHost 0.0.0.0 -proxyPort 50000 -login bigsql -name bigsql -password bigsql
In the above command:
- BIGSQL
-
Alias Name for database (-database)
- 0.0.0.0
-
DataSunrise host (-proxyHost)
- 50000
-
DataSunrise port that listens for client requests (-proxyPort)
-
Update the SQL Proxy host:
./executecommand.sh changeParameter -name EDServerDefaultHost -value <Big SQL Proxy Server host>
-
Update the SQL proxy port:
./executecommand.sh changeParameter -name EDServerDefaultPort -value <Big SQL Proxy Server port>
-
Restart the core process:
./executecommand.sh restart
-
Add all the Rules in the Filter Proxy to filter and send DDL statements to the proxy server. Only add the rules for the DDL statements you wish to replicate.
-
Connect to the Filter Proxy server from the command line.
By default, the session remains active for 10 minutes after the last activity; after 10 minutes of inactivity you will need to reconnect.
./executecommand.sh connect -host 127.0.0.1 -login admin
-
If the Filter Proxy is installed with a password then run:
./executecommand.sh connect -host 127.0.0.1 -login admin -password <password>
-
Add query group.
./executecommand.sh addQueryGroup -name <query-group>
- <query-group>
-
Name of the query group to be created. The query group is used to filter SQL statements: only statements that match a rule in the group are replicated; all other statements are executed locally through the Filter Proxy.
-
Add a rule for a Create statement.
./executecommand.sh addQueryOfGroup -sql "(?i)CREATE .*" -name <query-group> -regExp true
-
Add a rule for a Drop statement.
./executecommand.sh addQueryOfGroup -sql "(?i)DROP .*" -name <query-group> -regExp true
-
Add a rule for an Alter statement.
./executecommand.sh addQueryOfGroup -sql "(?i)ALTER .*" -name <query-group> -regExp true
-
Since a USE statement is required to track the current schema in the proxy server, add a rule to forward USE statements to the proxy server.
./executecommand.sh addQueryOfGroup -sql "(?i)USE .*" -name <query-group> -regExp true
-
-
Add a rule for an Analyze statement. See the Analyze statements section for more information on replicating Analyze statements.
./executecommand.sh addQueryOfGroup -sql "(?i)ANALYZE .*" -name <query-group> -regExp true
-
Add a rule for a Rename statement.
./executecommand.sh addQueryOfGroup -sql "(?i)RENAME .*" -name <query-group> -regExp true
-
Add a rule for a Comment statement.
./executecommand.sh addQueryOfGroup -sql "(?i)COMMENT .*" -name <query-group> -regExp true
-
After adding all the queries to the query group, add a query-based rule.
./executecommand.sh addExtDispRule -name extDispQueryBased -instance <db-alias> -dbType DB2 -filterType group -intercSqlGr <query-group>
- <query-group>
-
Name of the query group which was created in the earlier Add query group step.
- <db-alias>
-
Alias name for the database (-database) created in the earlier Add database parameters step.
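The individual rule-adding steps above can be sketched as a single loop. This is an illustrative sketch only: the group name ddl_group is an assumption (substitute your own <query-group>), and the generated commands are printed rather than executed.

```shell
# Sketch: generate the addQueryOfGroup command for each DDL keyword
# covered in the steps above. "ddl_group" is an assumed group name.
GROUP=ddl_group
CMDS=""
for stmt in CREATE DROP ALTER USE ANALYZE RENAME COMMENT; do
  CMDS="${CMDS}./executecommand.sh addQueryOfGroup -sql \"(?i)${stmt} .*\" -name ${GROUP} -regExp true
"
done
# Print the generated command lines for review.
printf '%s' "$CMDS"
```

Reviewing the generated lines before running them from /opt/datasunrise/cmdline makes it easy to drop the rules for statements you do not wish to replicate.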
4.3.2. Ambari Installation
CLI installation
-
Using the installer downloaded above, run the installer using an account with appropriate permissions:
./live-bigsql-installer-<version>.sh
The installer will now start.
Verifying archive integrity... All good. Uncompressing WANdisco Big SQL................ :: :: :: # # ## #### ###### # ##### ##### ##### :::: :::: ::: # # # # ## ## # # # # # # # # # ::::::::::: ::: # # # # # # # # # # # # # # ::::::::::::: ::: # # # # # # # # # # # ##### # # # ::::::::::: ::: # # # # # # # # # # # # # # # :::: :::: ::: ## ## # ## # # # # # # # # # # # :: :: :: # # ## # # # ###### # ##### ##### ##### You are about to install IBM Big SQL version <version> Do you want to continue with the installation? (Y/n) Y Installing requested plugin components... fusion-ui-bigsql-plugin-<version>-dist.tar.gz ... Done All requested components installed. Running additional post install actions... Full installation of this plugin currently requires that the appropriate 'Management Pack' stack is installed through your ambari server node. This installer package includes the supporting stack package for this. If you have not already done so, you can either * extract the stack and manually run ambari-server commands to install, or * request that this installer runs the commands for you. For the first option: 1) run this installer with the 'extract-stack' sub-command 2) if necessary copy the extracted stack to the ambari-server node 3) run the following commands on the ambari-server node: ambari-server stop ambari-server install-mpack --mpack=<extracted-stack.tar.gz> -v ambari-server start For the second option: 1) copy this installer to the ambari-server node 2) run the installer with the 'install-stack' sub-command. After this you will be able to use the Ambari UI to Add the service. For further guidance and clarifications, go to https://docs.wandisco.com/
The installer will perform an integrity check and confirm the product version that will be installed. Enter Y to continue the installation.
Two options for installation will now be displayed. The instructions in this section document the
install-stack
option. Note this method stops and starts the Ambari server automatically. For the alternative method see Manually extract the stack. -
Copy the installer to the Ambari server node e.g.
scp live-bigsql-installer-<version>.sh <your-ambari-node>:/tmp
-
On your Ambari server node run:
/tmp/live-bigsql-installer-<version>.sh install-stack
-
Now go to your Ambari UI and follow the steps below.
Installation via the Ambari UI
-
Click on Actions → + Add Service
-
Check Big Replicate - Big SQL and then click Next.
-
In Assign Masters for Big Replicate - Big SQL, select the node where you want to deploy the Big SQL Proxy Server. In this example we install it on siva01-vm1. Click Next.
-
In Assign Slaves and Clients, select the node where your Big Replicate for Hadoop server is installed; in this example, siva01-vm1.
Make sure you check only those options where the Big Replicate for Hadoop server is installed and click Next. -
Where alerts appear, provide valid configuration values and click Next.
-
Complete all required entry fields for the Plugin and Server configuration.
- JDBC Connection String Url
-
The string that your DBMS JDBC driver uses to connect to a database. This consists of your Big SQL host, port and database name.
Syntax: jdbc:db2://<IBM Big SQL host>:<IBM Big SQL port>/<databasename>
Example:
jdbc:db2://siva01-vm0.bdfrem.wandisco.com:32051/bigsql
- Filter Proxy DB Username
-
The username to connect to Big SQL, for example bigsql.
- Filter Proxy DB Password
-
The password required for the Big SQL login.
- Zone Name
-
Big Replicate cluster zone name.
-
ANALYZE statement replication is disabled by default. To enable replication, set the
analyze.command.replicate
property in theproxy-server-site.properties
file toYES
. See Analyze statements for more information. -
Review the configuration and click Deploy.
-
On the Install, Start and Test window, click Next once installation is successful.
-
Perform any restarts advised by Ambari.
-
Restart the Big Replicate servers.
-
Now continue with the next section.
Manually extract the stack
This is an alternative method to the install-stack
option described above.
Once you have completed the steps here you will then need to complete the Installation via the Ambari UI section.
-
Perform steps 1-3 above.
-
Run:
./bigreplicate-live-bigsql-installer-<version>.sh extract-stack
-
Transfer the stack file to the
/tmp
folder on the Ambari server node e.g.scp fusion-bigsql-proxy_<version>-centos.stack.tar.gz root@<Ambari-host>:/tmp
-
Connect to Ambari Server node and stop Ambari Server.
service ambari-server stop
-
Install the stack for Ambari.
ambari-server install-mpack --mpack=/tmp/fusion-bigsql-proxy_<version>-centos.stack.tar.gz
-
Start the Ambari Server.
service ambari-server start
-
Now follow the steps in the Installation via the Ambari UI section to complete your installation.
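The JDBC Connection String Url entered during the Ambari configuration above is assembled from the Big SQL host, port and database name. A minimal sketch, assuming placeholder values (bigsql-head.example.com is a hypothetical host, not one from your deployment):

```shell
# Sketch: assemble the JDBC connection string from its parts.
# All three values below are placeholder assumptions.
BIGSQL_HOST=bigsql-head.example.com
BIGSQL_PORT=32051
BIGSQL_DB=bigsql
JDBC_URL="jdbc:db2://${BIGSQL_HOST}:${BIGSQL_PORT}/${BIGSQL_DB}"
echo "$JDBC_URL"
```

Substitute your own host, port and database name before entering the resulting string into the JDBC Connection String Url field.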
4.3.3. Set Hive Configuration for IBM Big SQL
In order to set the Hive configuration directory, the configuration script must be copied from the Big SQL Plugin node (for example, rpxy02.vm1.bdfrem.wandisco.com) to the IBM Db2 Big SQL Head Node (for example, rpxy02.vm2.bdfrem.wandisco.com).
-
Create the directory
/opt/wandisco
on the node where IBM Db2 Big SQL Head Node is installed.mkdir /opt/wandisco
-
Copy the file from Big SQL Plugin node to the IBM Db2 Big SQL Head Node.
scp /etc/wandisco/live-bigsql-proxy/set_hive_conf_big_sql.sh root@rpxy02-vm2:/opt/wandisco/
-
Go to directory
/opt/wandisco/
on the Big SQL head node and run the configuration script../set_hive_conf_big_sql.sh thrift://rpxy02.vm0.bdfrem.wandisco.com:9083
Here, provide the URI for the native Hive metastore connection. If you are using HA, provide a comma-separated list when running the script.
-
After successfully executing the script, restart the IBM Big SQL service for the change to take effect.
Verification
The following verification steps are to be performed on the node where IBM Big SQL is installed.
-
Check the log file
/var/log/replace_conf_log
for a successful execution of the Hive configuration script. -
Check that the
/etc/hive/conf
folder has been copied into/opt/wandisco/hive/
. -
Check the property
hive.metastore.uris
is present in/opt/wandisco/hive/conf/hive-site.xml
as this replaces the underlying Hive metastore URI. -
Check that the script has changed the exported HIVE_CONF and HIVE_CONF_DIR variables to /opt/wandisco/hive/conf in the file /usr/ibmpacks/current/bigsql/bigsql/libexec/bigsql-hdpenv.sh.
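The hive-site.xml check above can be scripted with grep. The sketch below is a runnable stand-in that uses a temporary file with an assumed metastore URI; on a real node you would point HIVE_SITE at /opt/wandisco/hive/conf/hive-site.xml instead.

```shell
# Stand-in for the verification step: create a sample hive-site.xml
# (the thrift URI is an assumption) and check the property is present.
HIVE_SITE=$(mktemp)
cat > "$HIVE_SITE" <<'EOF'
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://metastore.example.com:9083</value>
  </property>
</configuration>
EOF
# On a real node: HIVE_SITE=/opt/wandisco/hive/conf/hive-site.xml
if grep -q '<name>hive.metastore.uris</name>' "$HIVE_SITE"; then
  RESULT=present
else
  RESULT=missing
fi
echo "$RESULT"
rm -f "$HIVE_SITE"
```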
4.3.4. Configuring Kerberos
Before enabling Kerberos for Big SQL Plugin, you must first:
-
Have Kerberos enabled for Big Replicate
-
Have a Kerberos Principal username which is less than 8 bytes in length. The Kerberos Principal is in the form:
<username>/<hostname>@REALM.
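The username-length requirement above can be checked from the shell. A minimal sketch, assuming a hypothetical principal bigsql/bigsql-head.example.com@EXAMPLE.REALM:

```shell
# Sketch: extract the username component of a Kerberos principal
# (<username>/<hostname>@REALM) and check it is under 8 bytes.
# The principal value below is a placeholder assumption.
PRINCIPAL="bigsql/bigsql-head.example.com@EXAMPLE.REALM"
USERNAME=${PRINCIPAL%%/*}
if [ "${#USERNAME}" -lt 8 ]; then
  echo "username '${USERNAME}' is acceptable"
else
  echo "username '${USERNAME}' is too long" >&2
fi
```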
Connect DataSunrise to Big SQL
If using Kerberos you need to connect DataSunrise to Big SQL using the following additional steps.
Parameters used:
<DataSunrise Host>
- a host where DataSunrise is installed
<DataSunrise Admin Password>
- DataSunrise admin password
<Big SQL Headless Admin Keytab>
- Headless keytab for Big SQL admin user
<Big SQL Headless Admin Principal>
- Headless principal for Big SQL admin user
<Big SQL Head Host>
- a host where Big SQL Head is installed
<Big SQL DB Name>
- Big SQL database instance name
<Big SQL Admin User>
- Big SQL admin user name
<Big SQL Admin Password>
- Big SQL admin user password
<Realm>
- Big SQL realm name
-
On the host where DataSunrise is installed, modify the permissions of the Big SQL headless admin keytab, usually located at /etc/security/keytabs/bigsql.headless.keytab:
chmod 440 <Big SQL Headless Admin Keytab>
-
Add the DataSunrise user to a Hadoop group:
usermod -a -G hadoop datasunrise
-
Switch to the DataSunrise user, e.g.
su -s /bin/bash datasunrise
-
Authenticate the Big SQL headless admin user:
kinit -kt <Big SQL Headless Admin Keytab> <Big SQL Headless Admin Principal>
-
In the directory
/opt/datasunrise/cmdline
, connect to a DataSunrise server./executecommand.sh connect -host <DataSunrise Host> -port 11000 -login admin -password <DataSunrise Admin Password>
-
Ensure there are no other proxy instances configured, and remove any if found. Instances can be listed using
./executecommand.sh showInstances
and removed using./executecommand.sh delInstance <NAME>
. -
Add kerberized database proxy instance to DataSunrise:
./executecommand.sh addInstancePlus -name BSKRB -dbType db2 -dbHost <Big SQL Head Host> -dbPort 18051 -database <Big SQL DB Name> -login <Big SQL Admin User> -password <Big SQL Admin Password> -proxyHost 0.0.0.0 -proxyPort 50000 -connString "Driver=IBMkrb;Authentication=KERBEROS;Database=<Big SQL DB Name>;Protocol=TCPIP;Port=18051;Hostname=<Big SQL Head Host>;TargetPrinciple=<Big SQL Admin User>/<Big SQL Head Host>@<Realm>;KRBPlugin=IBMkrb5;"
-
Bind existing filtering rules to the kerberized proxy instance:
./executecommand.sh addExtDispRule -name extDispQueryBased1 -instance BSKRB -dbType DB2 -filterType group -intercSqlGr gr1
Configure the Big SQL Plugin for Kerberos
To configure Kerberos, add securityMechanism=11 to the JDBC connection. For example:
jdbc.connection.string.url=jdbc:db2://<IBM Big SQL host>:<IBM Big SQL port>/bigsql:securityMechanism=11;
The JDBC connection must contain securityMechanism=11.
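Appending the security mechanism to an existing connection string can be sketched as follows (the host, port and database values are placeholder assumptions):

```shell
# Sketch: append the Kerberos security mechanism to an existing
# JDBC URL. The base URL uses placeholder values.
JDBC_URL="jdbc:db2://bigsql-head.example.com:32051/bigsql"
KRB_URL="${JDBC_URL}:securityMechanism=11;"
echo "$KRB_URL"
```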
Troubleshooting
Following configuration, if you see the error below, a possible cause is a missing domain realm in krb5.conf.
Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (30). Last acquisition attempt exception: ]
In Big Replicate, ensure that the /etc/krb5.conf
file contains:
[domain_realm]
.<domain name of cluster> = <kerberos realm>
<domain name of cluster> = <kerberos realm>
For example:
[domain_realm]
.wandisco.com = WANDISCO.HADOOP1
wandisco.com = WANDISCO.HADOOP1
4.3.5. Set up the connection using jsqsh
-
Run the jsqsh command on the IBM Db2 Big SQL Head Node. If you are running in HA mode, this only needs to be done on one Head Node.
# jsqsh
-
Run setup, e.g.
# \connect --setup
-
Add a new connection that will connect to the filter proxy by entering
A
.JSQSH CONNECTION WIZARD - (edits $HOME/.jsqsh/connections.xml The following connections are currently defined: Name Driver Host Port --- -------------------- ---------- ------------------------------ ------ 1 bigsql db2 <bigSql-host> 32051 Enter a connection number above to edit the connection, or: (B)ack, (Q)uit, or (A)dd connection: A
-
Enter
1
to select the db2 driver, e.g.JSQSH CONNECTION WIZARD - (edits $HOME/.jsqsh/connections.xml) Choose a driver for use by your new connection Name Target Class --- ---------------- -------------------- -------------------------------------------------- 1 *db2 IBM Data Server (DB2 com.ibm.db2.jcc.DB2Driver 2 *db2zos IBM DB2 z/OS com.ibm.db2.jcc.DB2Driver 3 *hive2 Apache Hive org.apache.hive.jdbc.HiveDriver 4 *netezza IBM Netezza org.netezza.Driver 5 derby Apache Derby Server org.apache.derby.jdbc.ClientDriver 6 derbyembed Apache Derby Embedde org.apache.derby.jdbc.EmbeddedDriver 7 firebird Firebird JayBird org.firebirdsql.jdbc.FBDriver 8 hive Apache Hive org.apache.hadoop.hive.jdbc.HiveDriver 9 informix IBM Informix com.informix.jdbc.IfxDriver 10 jdbcodbc JDBC ODBC Bridge sun.jdbc.odbc.JdbcOdbcDriver 11 mssql MS SQL Server com.microsoft.jdbc.sqlserver.SQLServerDriver 12 mssql-jtds MS SQL Server jTDS net.sourceforge.jtds.jdbc.Driver 13 mssql2k5 MS SQL Server 2005+ com.microsoft.sqlserver.jdbc.SQLServerDriver 14 mysql MySQL com.mysql.jdbc.Driver 15 oracle Oracle oracle.jdbc.OracleDriver 16 oracleoci Oracle OCI oracle.jdbc.driver.OracleDriver 17 pgsql PostgreSQL org.postgresql.Driver 18 sybase Sybase ASE com.sybase.jdbc3.jdbc.SybDriver 19 sybase-asa Sybase ASA com.sybase.jdbc2.jdbc.SybDriver 20 sybase-jtds Sybase ASE jTDS net.sourceforge.jtds.jdbc.Driver * = Driver is available. If a driver is unavailable you may choose (D) below to jump to the driver wizard to provide a classpath Enter the driver number, (D)river wizard, (B)ack or (Q)uit: 1
After selecting
1
, you will see the Connection URL variables, e.g.JSQSH CONNECTION WIZARD - (edits $HOME/.jsqsh/connections.xml) The following configuration properties are supported by this driver. Connection name : _temp_ Driver : IBM Data Server (DB2, Informix, Big SQL) JDBC URL : jdbc:db2://${server}:${port}/${db} Connection URL Variables ------------------------ 1 db : <db-alias> 2 port : 50000 3 server : <proxyHost> 4 user : root 5 password : 6 Autoconnect : false JDBC Driver Properties ------------------------ None Enter a number to change a given configuration property, or (T)est, (B)ack, (Q)uit, Add (P)roperty, or (S)ave:
-
Enter
1
to change the db to the Big SQL database name (i.e., bigsql). -
Enter
2
to change the port to point to the filter proxy (i.e., 50000).
Enter
3
to edit the server entry to point to the Filter Proxy node hostname. -
Enter
4
to change user and5
for password to connect to the database. -
After changing, enter
S
to save; you will be prompted for a connection name. Below is the connection configuration after its properties have been changed.
JSQSH CONNECTION WIZARD - (edits $HOME/.jsqsh/connections.xml) The following configuration properties are supported by this driver. Connection name : bigsql_filter Driver : IBM Data Server (DB2, Informix, Big SQL) JDBC URL : jdbc:db2://${server}:${port}/${db} Connection URL Variables ------------------------ 1 db : bigsql 2 port : 50000 3 server : <proxyHost> 4 user : bigsql 5 password : ****** 6 Autoconnect : false JDBC Driver Properties ------------------------ None Enter a number to change a given configuration property, or (T)est, (D)elete, (B)ack, (Q)uit, Add (P)roperty, or (S)ave:
-
Enter
T
to test whether the connection details are correct. Example:
Enter a number to change a given configuration property, or (T)est, (D)elete, (B)ack, (Q)uit, Add (P)roperty, or (S)ave: T Attempting connection... Succeeded! Hit enter to continue:
4.3.6. Installer Help
Running the help command on the installer will generate the following output:
./bigreplicate-live-bigsql-installer-<version>.sh help
General options:
help Print this message and exit
Component installation control:
only-fusion-ui-server-plugin Only install the plugin's fusion-ui-server component
only-fusion-server-plugin Only install the plugin's fusion-server component
only-proxy-server Only install the plugin's proxy component
skip-fusion-ui-server-plugin Do not install the plugin's fusion-ui-server component
skip-fusion-server-plugin Do not install the plugin's fusion-server component
skip-proxy-server Do not install the plugin's proxy component
extract-ds Extract the DataSunrise component
extract-ds=<target> Extract the DataSunrise component to specified <target>
Note that if a full path is not given, the artifact will be placed in /tmp/<target>
install-ds Run the DataSunrise installer component
Validation control:
skip-java-version-checks Do not run tests to check the java version meets current criteria
Component configuration control:
Post Install service restart control:
These options if not set will result in questions in interactive script use.
restart-fusion-server Request fusion-server restart (if fusion-server component installed)
restart-proxy-server Request proxy-server restart (if proxy component installed)
skip-restart-fusion-server Skip fusion-server restart
skip-restart-proxy-server Skip proxy-server restart
restart-all-servers Request all servers where relevant components have been installed
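These options can be combined for unattended installs. The following is a minimal sketch only: the installer filename and version are placeholders, and the command is printed rather than executed so you can review it first.

```shell
# Sketch: compose a non-interactive, proxy-only installer invocation.
# The installer filename/version below is a placeholder - substitute your own.
INSTALLER="./bigreplicate-live-bigsql-installer-1.0.sh"
OPTS="only-proxy-server skip-restart-proxy-server skip-java-version-checks"

# Print the command that would be run; remove the echo to execute it.
echo "$INSTALLER $OPTS"
```

Dropping `skip-restart-proxy-server` would instead prompt for (or with `restart-proxy-server`, perform) a restart after installation.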
4.4. Validation
Once installation and configuration are complete, perform the following steps.
-
Log in to the Big Replicate UI and confirm that the Big SQL Plugin is displayed as Active on the Plugins screen or dashboard.
Figure 2. Big SQL Plugin Status - Dashboard -
Use a database client (e.g. DBeaver or jsqsh) to open a command shell. You can type Big SQL commands or statements into this shell and view the output of Big SQL queries.
4.4.1. Validate the connection using jsqsh
Validate the connection set up in Set up the connection using jsqsh.
-
Run the jsqsh command on the node where Big SQL is installed, e.g.
# jsqsh
-
To see the connections that have been created, enter the following command.
\connect --list
Example
# \connect --list
+--------+--------+--------------+-------+----------+--------+----------+----------+--------+--------+--------+------------+
| Name   | Driver | Server       | Port  | Database | SID    | Username | Password | Domain | Class  | URL    | Properties |
+--------+--------+--------------+-------+----------+--------+----------+----------+--------+--------+--------+------------+
| bigsql | db2    | 10.6.120.102 | 50000 | BIGSQL   | [NULL] | bigsql   | *******  | [NULL] | [NULL] | [NULL] | [NULL]     |
+--------+--------+--------------+-------+----------+--------+----------+----------+--------+--------+--------+------------+
-
Connect to the database using the created Filter Proxy connection (e.g. bigsql_filter) and create a test table.
Example
# \connect bigsql
# create hadoop table
# product_cat(Id INTEGER,Name CHAR(25),Price DECIMAL(8,2));
0 rows affected (total: 10.543s)
# insert into
# product_cat values(1, 'abc', 10.5);
1 row affected (total: 14.960s)
-
Display a record from the test table.
# select * from
# product_cat;
+----+------+-------+
| ID | NAME | PRICE |
+----+------+-------+
| 1  | abc  | 10.50 |
+----+------+-------+
1 row in results(first row: 2.822s; total: 2.827s)
#
4.5. Upgrade
The following steps provide a basic sequence for upgrading an existing Big SQL Plugin installation.
-
Stop the Big Replicate server
service fusion-server stop
-
Stop the Big SQL Plugin proxy server
service bigsql-server stop
-
Upgrade the plugin RPM.
rpm -U fusion-bigsql-plugin-<version>.noarch.rpm
-
Upgrade the proxy RPM.
rpm -U fusion-bigsql-proxy-<version>.noarch.rpm
-
Start the Big Replicate server
service fusion-server start
-
Start the Big SQL Plugin proxy server
service bigsql-server start
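The upgrade steps above can be collected into a single script. This is an illustrative sketch only: the RPM versions are placeholders, and with DRY_RUN=1 (the default here) each command is printed instead of executed.

```shell
#!/bin/sh
# Illustrative upgrade sequence; set DRY_RUN=0 to actually execute.
set -e
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"          # dry run: show the command only
  else
    "$@"
  fi
}

run service fusion-server stop
run service bigsql-server stop
run rpm -U fusion-bigsql-plugin-2.0.noarch.rpm   # placeholder version
run rpm -U fusion-bigsql-proxy-2.0.noarch.rpm    # placeholder version
run service fusion-server start
run service bigsql-server start
```

The `set -e` ensures the script stops at the first failing step, so the servers are not restarted against a partially upgraded install.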
4.6. Uninstallation
There are four parts to uninstalling the Big SQL Plugin.
4.6.1. Uninstall the Filter Proxy
Please perform these tasks in each zone and node that you wish to uninstall from.
-
Navigate to /opt/datasunrise/cmdline and run the commands:
./executecommand.sh connect -host 127.0.0.1 -login admin
./executecommand.sh delRule -name "extDispQueryBased"
(where 'extDispQueryBased' is the name/alias of the rule created during Filter Proxy installation)
-
To validate the removal, run the following command on each node where the Filter Proxy is installed, on each zone or cluster in your environment.
./executecommand.sh showRule -name "extDispQueryBased"
-
To validate <querygroup> is removed successfully, run the following command on each node where the Filter Proxy is installed, on each zone or cluster in your environment.
./executecommand.sh showQueryGroup -name <querygroup>
-
To uninstall DataSunrise, perform the following on the node where Filter Proxy is installed.
./DataSunrise_Suite_4_4_0_49719.linux.64bit.run remove
4.6.2. Uninstall the Big SQL Plugin service
Uninstall the Big SQL Plugin service using the Ambari UI.
-
Go to Services > Big Replicate - Big SQL > Service Actions > Stop.
Figure 3. Stop service -
After the service has stopped, click Service Actions > Delete Service.
-
Confirm that you wish to delete the service.
4.6.3. Uninstall the Big SQL Plugin stack
-
On the node where the Ambari Server is installed:
service ambari-server stop
ambari-server uninstall-mpack --mpack-name=livebigsql-mpack
service ambari-server start
-
Also remove the following:
rm -rf /var/lib/ambari-server/resources/common-services/LIVE_BIGSQL
rm -rf /var/lib/ambari-server/resources/mpacks/livebigsql-mpack-<version>
rm -rf /var/lib/ambari-server/resources/stacks/HDP/<version>/services/LIVE_BIGSQL
rm -rf /var/lib/ambari-server/resources/mpacks/cache/fusion-bigsql-proxy_<version>*
rm -rf /var/lib/ambari-server/resources/mpacks/livebigsql-mpack-<version>*
-
On the node where the Big SQL Plugin Service was installed:
rm -rf /var/lib/ambari-agent/data/LIVEBIGSQL_SERVER_config.json
rm -rf /var/lib/ambari-agent/cache/common-services/LIVE_BIGSQL
4.6.4. Remove the Big SQL Plugin packages
When the Big SQL Plugin service is deleted from Ambari, the RPM for the Big SQL Plugin needs to be uninstalled manually.
Please perform these tasks on each zone and node that you wish to uninstall from.
-
Stop the Big Replicate server on the node where the Big Replicate Server is installed, e.g.
service fusion-server stop
Check the status of the server to confirm it has been stopped.
service fusion-server status
-
Check the version of the installed rpm on the node where the plugin and proxy Server resides with the following command:
rpm -qa | grep fusion-bigsql*
fusion-bigsql-plugin-<version>.noarch
fusion-bigsql-proxy-<version>.noarch
-
Run the yum remove command on the node where the plugin and the proxy server, respectively, are installed:
yum remove fusion-bigsql-proxy-<version>.noarch
yum remove fusion-bigsql-plugin-<version>.noarch
To verify that the RPMs have been successfully uninstalled, run the following command again on the node where the plugin and the proxy server, respectively, were installed.
rpm -qa | grep fusion-bigsql
-
Manually remove the UI directory where the Big SQL Plugin is installed.
rm -rf "/opt/wandisco/fusion-ui-server/plugins/fusion-ui-bigsql-plugin-<version>/"
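To confirm nothing is left behind, the rpm query from the earlier step can be wrapped in a small check. A sketch; the helper name check_removed is ours, not part of the product:

```shell
# check_removed takes the output of `rpm -qa` and reports whether any
# fusion-bigsql packages are still installed.
check_removed() {
  left=$(printf '%s\n' "$1" | grep -c '^fusion-bigsql')
  if [ "$left" -eq 0 ]; then
    echo "fusion-bigsql packages removed"
  else
    echo "WARNING: $left fusion-bigsql package(s) still installed"
  fi
}

# On the node itself you would run: check_removed "$(rpm -qa)"
```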
5. Operation
5.1. Configuration
Replication of Big SQL catalog information is driven through Big Replicate for Hive’s metadata replication. Currently, there’s no Big SQL Plugin configuration available through the Big Replicate UI.
For setting up replication rules, see Setting up Hive metadata replication.
5.1.1. Configure DB2 CLP CLI
If you are using the DB2 CLP CLI client provided with Big SQL, you need to configure it to work with Big Replicate for IBM Db2 Big SQL. The client must be configured to redirect queries to the Filter Proxy Server, so that queries whose replication the Filter Proxy Server supports are replicated and the data remains in sync.
To configure the DB2 CLP CLI client for the Filter Proxy Server, follow the steps below:
-
On a node where the bigsql user exists, for example the Big SQL head node, switch user using su bigsql.
-
Move to the folder /usr/ibmpacks/bigsql/5.0.4.0/db2/bin.
-
To connect to the Filter Proxy Server, do the following.
-
First, create a database instance:
db2cli writecfg add -database bigsql -host <Filter Proxy node hostname - FQDN> -port <Filter Proxy Server Port>
-
Create a DSN for the Filter Proxy Server:
db2cli writecfg add -dsn <DSN instance name> -database bigsql -host <Filter Proxy node hostname - FQDN> -port <Filter Proxy Server Port>
-
Validate the DSN connectivity:
db2cli validate -dsn <DSN instance name> -connect -user bigsql -passwd bigsql
Example output:
===============================================================================
Client information for the current copy:
===============================================================================
Client Package Type       : IBM DB2 Enterprise Server Edition
Client Version (level/bit): DB2 v11.1.9.0 (s181101/64-bit)
Client Platform           : Linux/X8664
Install/Instance Path     : /usr/ibmpacks/bigsql/5.0.4.0/db2
DB2DSDRIVER_CFG_PATH value: <not-set>
db2dsdriver.cfg Path      : /home/bigsql/sqllib/cfg/db2dsdriver.cfg
DB2CLIINIPATH value       : <not-set>
db2cli.ini Path           : /home/bigsql/sqllib/cfg/db2cli.ini
db2diag.log Path          : /var/ibm/bigsql/diag/DIAG0000/db2diag.log
===============================================================================
db2dsdriver.cfg schema validation for the entire file:
===============================================================================
Success: The schema validation completed successfully without any errors.
===============================================================================
db2cli.ini validation for data source name "democlient":
===============================================================================
Note: The validation utility could not find the configuration file db2cli.ini.
The file is searched at "/home/bigsql/sqllib/cfg/db2cli.ini".
===============================================================================
db2dsdriver.cfg validation for data source name "democlient":
===============================================================================
[ Parameters used for the connection ]

Keywords       Valid For        Value
---------------------------------------------------------------------------
DATABASE       CLI,.NET,ESQL    bigsql
HOSTNAME       CLI,.NET,ESQL    spx01-vm2.bdauto.wandisco.com
PORT           CLI,.NET,ESQL    50000
===============================================================================
Connection attempt for data source name "democlient":
===============================================================================
[SUCCESS]
===============================================================================
The validation is completed.
===============================================================================
-
-
Once the instance is created, connect to the database by executing db2cli on the CLI.
-
Now, connect to the DSN using the username and password.
db2 connect to democlient USER bigsql using bigsql

   Database Connection Information

 Database server        = DB2/LINUXX8664 11.1.9.0
 SQL authorization ID   = BIGSQL
 Local database alias   = BIGSQL
Once the connection is established, execute the DDL commands that will be replicated on the remote cluster via the Big SQL Plugin.
-
Create Schema
db2 create schema school
DB20000I  The SQL command completed successfully.
-
Use Schema
db2 use school
DB20000I  The SQL command completed successfully.
-
Create Table
db2 create hadoop table teacher\(emp_no int, sname varchar\(25\)\)
DB20000I  The SQL command completed successfully.
-
Alter Table
db2 alter hadoop table teacher add column pincode int
DB20000I  The SQL command completed successfully.
-
Drop Table
db2 drop hadoop table teacher
DB20000I  The SQL command completed successfully.
-
Describe Table
db2 describe table student;

                                Data type                     Column
Column name                     schema    Data type name      Length     Scale Nulls
------------------------------- --------- ------------------- ---------- ----- ------
ROLL_NO                         SYSIBM    INTEGER             4          0     Yes
SNAME                           SYSIBM    VARCHAR             25         0     Yes

2 record(s) selected.
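Rather than typing each DDL command interactively, the same statements can be batched into a file and run with `db2 -tf` (statement-terminated mode). A sketch; the file path and statements are illustrative:

```shell
# Write an illustrative DDL batch; each statement ends with ';'.
cat > /tmp/school_ddl.sql <<'EOF'
create schema school;
create hadoop table school.teacher(emp_no int, sname varchar(25));
alter hadoop table school.teacher add column pincode int;
EOF

# On the Big SQL node, as the bigsql user, you would then run:
#   db2 connect to democlient USER bigsql using bigsql
#   db2 -tf /tmp/school_ddl.sql
echo "wrote $(grep -c ';' /tmp/school_ddl.sql) statements"
```

Because each statement goes through the same connection, each DDL operation in the batch is replicated by the Big SQL Plugin just as if it had been typed interactively.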
5.1.2. Supported Big SQL Statements
The Big SQL Plugin replicates DDL operations, not DML operations. Big Replicate for Hadoop replicates the changes to data made by DML operations on Hadoop tables, while the Big SQL Plugin replicates the changes made to table definitions by DDL operations.
Examples of such DML statements include:
-
INSERT statement: To insert one or more rows into a table
-
MERGE statement: To update a target table or view using data from a source
-
TRUNCATE statement: To delete all rows from a table
-
UPDATE statement: To update the values of one or more columns in one or more rows of a table.
In each case, the resulting modifications to file system content apply as normal, and where this content is replicated by Big Replicate for Hadoop, queries against this content will return the same results.
For more information about Big SQL DDL statements, see the following subsections.
Create statements
The statements that are supported are detailed in the reference section: Create statements.
Create table
In Big SQL, data is stored in tables. Tables are composed of rows and columns, as in a relational database. Table definitions are created using CREATE statements, and those definitions are recorded in the Big SQL catalog. The definitions can be changed or dropped. The following examples use jsqsh commands.
- Hive Table
Create hadoop table test.hive_table(col1 bigint);
- Hive Partitioned Table
Create hadoop TABLE test.partition_table (id smallint,name varchar(20),dept varchar(10),yoj integer) PARTITIONED BY (year integer);
- HBase View
Create view test.hbase_view as SELECT k0, k1, k2, k3 FROM test.hbase_table WHERE k3 > 1;
Alter statements
The statements that are supported are detailed in the reference section: Alter statements.
For example:
ALTER TABLE employee ADD id int;
Drop statements
The statements that are supported are detailed in the reference section: Drop statements.
For example:
Drop table employee;
Analyze statements
ANALYZE is used to gather statistics about the data in Hadoop and HBase tables.
ANALYZE replication is disabled by default. To replicate ANALYZE statements, set the analyze.command.replicate property in the proxy-server-site.properties file to YES. This can be done via the Ambari UI.
The following also needs to be added to the Filter Proxy:
./executecommand.sh addQueryOfGroup -sql "(?i)ANALYZE .*" -name <query-group> -regExp true
Both of these changes must be made for replication to occur.
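As a quick sanity check after making the property change, you can grep the proxy configuration for it. A sketch using an illustrative path; on a real deployment the file lives under the proxy server's configuration directory and is normally edited via Ambari:

```shell
# Illustrative: write and then verify the ANALYZE replication property.
PROPS=/tmp/proxy-server-site.properties          # placeholder path
echo "analyze.command.replicate=YES" > "$PROPS"  # normally set via Ambari

if grep -q '^analyze.command.replicate=YES' "$PROPS"; then
  echo "ANALYZE replication enabled"
else
  echo "ANALYZE replication disabled"
fi
```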
Rename statements
The statements that are supported are detailed in the reference section: Rename statements.
5.1.3. High Availability (HA)
High Availability is supported for the Filter Proxy, but not for the Big SQL Plugin and the Big SQL database.
To enable HA for the Filter Proxy, you need to add properties to JSqsh. These properties are added on the client side only (in JSqsh); no changes need to be made to the Filter Proxy itself.
To do this follow the steps below.
-
First, run JSqsh in setup mode, e.g.
# jsqsh --setup
-
Enter C for the Connection wizard.
-
Select your Filter Proxy connection to edit.
JSQSH CONNECTION WIZARD - (edits $HOME/.jsqsh/connections.xml)

The following connections are currently defined:

     Name                 Driver     Host                           Port
 --- -------------------- ---------- ------------------------------ ------
 1   bigsql               db2        rpxy01-vm0.bdfrem.wandisco.com 18051
 2   bigsql_datasunrise   db2        10.10.214.122                  50000

Enter a connection number above to edit the connection, or:
(B)ack, (Q)uit, or (A)dd connection: 2
-
Enter P to add a property.

JSQSH CONNECTION WIZARD - (edits $HOME/.jsqsh/connections.xml)

The following configuration properties are supported by this driver.

Connection name : bigsql_datasunrise
Driver          : IBM Data Server (DB2, Informix, Big SQL)
JDBC URL        : jdbc:db2://${server}:${port}/${db}

Connection URL Variables
------------------------
1 db          : bigsql
2 port        : 50000
3 server      : 10.10.214.122
4 user        : bigsql
5 password    : ******
6 Autoconnect : false

JDBC Driver Properties
------------------------
None

Enter a number to change a given configuration property, or (T)est, (D)elete, (B)ack, (Q)uit, Add (P)roperty, or (S)ave: P
-
Enter M to manually add the properties.

DRIVER PROPERTIES
The following properties are published by the driver. Note that not all properties may be published by your driver, and you may manually enter a property if needed.

1 user
2 password

Enter a property number to edit that property. A question mark after the property name (e.g. "2?") will display a description, if available:
(M)anually enter, or (B)ack: M
-
Add the following:
Property name: clientRerouteAlternateServerName
Property value: <IP address for Filter Proxy Node 1>,<IP address for Filter Proxy Node 2>

Property name: clientRerouteAlternatePortNumber
Property value: <Filter Proxy Node 1 port>,<Filter Proxy Node 2 port>

Property name: enableClientAffinitiesList
Property value: 1

Property name: blockingReadConnectionTimeout
Property value: <timeout in ms>
-
Enter S to save.
-
Your set up is now configured for Filter Proxy HA.
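For reference, the same reroute properties can be expressed directly in an IBM Data Server JDBC URL, where the property list follows the database name and each property is semicolon-terminated. The hosts, ports and timeout below are illustrative only:

```
jdbc:db2://10.10.214.122:50000/bigsql:clientRerouteAlternateServerName=10.10.214.123,10.10.214.124;clientRerouteAlternatePortNumber=50000,50000;enableClientAffinitiesList=1;blockingReadConnectionTimeout=10000;
```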
Automatic failover - known issue
There is a known issue with automatic failover.
If you execute an SQL command while the Filter Proxy is stopped, you may see an exception.
SQL Exception(s) Encountered: [State: 08506][Code: -4498]: [jcc][t4][2027][11212][3.72.24] A connection failed but has been re-established. Special register settings have been replayed if necessary. Host name or IP address of the connection: 10.10.214.121. Service name or port number of the connection: 50,000. Reason code: 1. Failure code: 2, Error code: zero byte read from the server.. ERRORCODE=-4498, SQLSTATE=08506
Execute the command again and it will work.
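That single retry can be automated with a small wrapper. A sketch (the retry_once name is ours; in practice you would pass the db2 or jsqsh invocation as the command):

```shell
# retry_once runs a command and, if it fails, retries it exactly once.
retry_once() {
  "$@" && return 0
  echo "first attempt failed - retrying after failover..." >&2
  "$@"
}

# Illustrative usage:
#   retry_once db2 "select * from school.student"
```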
5.2. Troubleshooting
-
Check the Release notes
-
Check the log files
6. Reference
6.1. Supported Statements
The following tables list the supported Big SQL Statements. For more information about these see the Supported Big SQL Statements section.
6.1.1. Create statements
The following create statements are supported.
| Statement type | Description |
|---|---|
| CREATE ALIAS | Defines an alias for a module, nickname, sequence, table, view, or another alias. |
| CREATE AUDIT POLICY | Defines an auditing policy at the current server. |
| CREATE DATABASE PARTITION GROUP | Defines a new database partition group within the database, assigns database partitions to the database partition group, and records the database partition group definition in the system catalog. |
| CREATE EVENT MONITOR | Defines a monitor that records certain events that occur when you use the database. The definition of each event monitor also specifies where the database records the events. |
| CREATE EVENT MONITOR (activities) | Defines a monitor that will record activity events that occur when using the database. |
| CREATE EVENT MONITOR (change history) | Creates an event monitor that can record events for changes to configuration parameters, registry variables, and the execution of DDL statements and utilities. |
| CREATE EVENT MONITOR (locking) | Creates an event monitor that will record lock-related events that occur when using the database. |
| CREATE EVENT MONITOR (package cache) | Creates an event monitor that will record events when the cache entry for a section is flushed from the package cache. |
| CREATE FUNCTION (external scalar) | Registers a user-defined external scalar function. |
| CREATE FUNCTION (external table) | Registers a user-defined external table function. |
| CREATE FUNCTION (sourced or template) | Registers a user-defined sourced function or a function template. |
| CREATE FUNCTION (SQL scalar, table, or row) | Defines a user-defined SQL scalar, table, or row function. A scalar function returns a single value each time it is invoked, and is generally valid wherever an SQL expression is valid. A table function can be used in a FROM clause and returns a table. A row function can be used as a transform function and returns a row. |
| CREATE FUNCTION MAPPING | Defines a mapping between a federated database function or function template and a data source function, or disables a default mapping between a federated database function and a data source function. |
| CREATE GLOBAL TEMPORARY TABLE | Creates a description of a temporary table at the current server. Each session that selects from a created temporary table retrieves only rows that the same session has inserted. When the session terminates, the rows of the table associated with the session are deleted. |
| CREATE HISTOGRAM TEMPLATE | Defines a template describing the type of histogram that can be used to override one or more of the default histograms of a service class or a work class. |
| CREATE INDEX | Defines an index on a database table. |
| CREATE MASK | Creates a column mask at the current server. A column mask specifies the value to be returned for a specified column. |
| CREATE MODULE | Defines a module. |
| CREATE PERMISSION | Creates a row permission at the current server. |
| CREATE PROCEDURE (SQL) | Defines an SQL procedure at the current server. |
| CREATE ROLE | Defines a role at the current server. |
| CREATE SCHEMA | Defines a schema. It is also possible to create some objects and grant privileges on objects within the statement. |
| CREATE SECURITY LABEL | Defines a security label. |
| CREATE SECURITY POLICY | Defines a security policy. |
| CREATE SEQUENCE | Defines a sequence. |
| CREATE SERVER | Defines a data source to a federated database. |
| CREATE SERVICE CLASS | Defines a service class. |
| CREATE SYNONYM | SYNONYM is a synonym for ALIAS. |
| CREATE TABLE | Defines a table. |
| CREATE Temporary TABLESPACE | Defines a table space. |
| CREATE THRESHOLD | Defines a threshold. |
| CREATE TRUSTED CONTEXT | Defines a trusted context at the current server. |
| CREATE TYPE | Defines a user-defined structured type. |
| CREATE TYPE (array) | Defines an array type. |
| CREATE TYPE (cursor) | Defines a cursor type. |
| CREATE TYPE (distinct) | Defines a distinct data type. |
| CREATE TYPE (row) | Defines a row type. |
| CREATE TYPE (structured) | Defines a structured data type. |
| CREATE USAGE LIST | Defines a usage list. A usage list is a database object for monitoring all unique sections (DML statements) that have referenced a particular table or index during their execution. |
| CREATE VARIABLE | Defines a session global variable. |
| CREATE VIEW | Defines a view on one or more tables, views or nicknames. |
| CREATE WORK CLASS SET | Defines a work class set and work classes within the work class set. |
| CREATE WORKLOAD | Defines a workload. |
| CREATE WRAPPER | Registers a wrapper with a federated server. A wrapper is a mechanism by which a federated server can interact with certain types of data sources. |
| CREATE WORK ACTION SET | Defines a work action set and work actions within the work action set. |
6.1.2. Alter statements
The following alter statements are supported.
| Statement type | Description |
|---|---|
| ALTER AUDIT POLICY | Modifies the definition of an audit policy at the current server. |
| ALTER FUNCTION | Modifies the properties of an existing function. |
| ALTER HISTOGRAM TEMPLATE | Modifies the template describing the type of histogram that can be used to override one or more of the default histograms of a service class or a work class. |
| ALTER INDEX | Changes the definition of an index. |
| ALTER MASK | Alters a column mask that exists at the current server. |
| ALTER MODULE | Changes the definition of a module. |
| ALTER PACKAGE | Alters bind options for a package at the current server without having to bind or rebind the package. |
| ALTER SEQUENCE | Changes a sequence. |
| ALTER SERVICE CLASS | Alters the definition of a service class. |
| ALTER TABLE | Changes the definition of a table. |
| ALTER TRIGGER | Changes the description of a trigger at the current server. |
| ALTER TRUSTED CONTEXT | Modifies the definition of a trusted context at the current server. |
| ALTER USAGE LIST | Alters the definition of a usage list. |
| ALTER WORKLOAD | Alters a workload. |
| ALTER WRAPPER | Updates the properties of a wrapper. |
| ALTER WORK CLASS SET | Adds, alters, or drops work classes within a work class set. |
6.1.3. Drop statements
The following drop statements are supported.
| Statement type | Description |
|---|---|
| DROP TABLE | Drops a table. |
| DROP VIEW | Drops a view. |
| DROP ALIAS | Drops an alias. |
| DROP FUNCTION | Drops a function in the BIGSQL schema. |
| DROP TRIGGER | Drops a trigger. |
| DROP TYPE | Drops a type. |
| DROP EVENT MONITOR | Drops an event monitor. |
| DROP SCHEMA | Drops a schema; with RESTRICT, the schema must contain no objects. |
| DROP PROCEDURE | Drops a procedure in the BIGSQL schema. |
| DROP INDEX | Drops an index or an index specification that was created to let the optimizer know about an index. |
| DROP SEQUENCE | Drops a sequence, with restrictions. |
| DROP PERMISSION | Drops a row permission. |
| DROP MASK | Drops a column mask. |
| DROP USAGE LIST | Drops a usage list. |