Db2 Big SQL operations that are managed by Ranger
After Ranger integration has been enabled for Db2® Big SQL, there are two authorization control systems: Ranger and native Db2 Big SQL. Operations that are managed by Ranger are no longer subject to Db2 Big SQL native authorization controls, which are used exclusively for operations that are not managed by Ranger.
For details on how to enable or disable Ranger integration with Db2 Big SQL, see Enabling and disabling Ranger security support for Db2 Big SQL.
Ranger policies that are defined in the Db2 Big SQL plugin provide access control rules for native Db2 tables, views, and nicknames. Db2 Big SQL obeys policies that are defined in the Hadoop SQL (Hive) plugin or the HBase plugin when performing access control for Hadoop or HBase tables, respectively.
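Policies are typically created in the Ranger admin UI, but they can also be created programmatically through Ranger's public REST API. The following sketch assembles a resource-based policy payload for a schema and table; the Ranger host, the service name (`bigsql`), and the user and access-type values are assumptions to adapt for your cluster.

```python
import json
import urllib.request

# Assumed Ranger admin endpoint for creating policies (REST API v2).
RANGER_URL = "https://ranger.example.com:6182/service/public/v2/api/policy"

def build_policy(schema, table, users, accesses):
    """Assemble the JSON payload for a resource-based Ranger policy."""
    return {
        "service": "bigsql",  # assumed name of the Db2 Big SQL plugin service
        "name": f"{schema}.{table}-access",
        "resources": {
            "schema": {"values": [schema]},
            "table": {"values": [table]},
        },
        "policyItems": [{
            "users": users,
            "accesses": [{"type": a, "isAllowed": True} for a in accesses],
        }],
    }

# Example: allow a user to select from and insert into SALES.ORDERS.
policy = build_policy("sales", "orders", ["analyst1"], ["select", "insert"])
request = urllib.request.Request(
    RANGER_URL,
    data=json.dumps(policy).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request)  # uncomment to submit (requires authentication)
```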
The database operations that Ranger controls are listed in the following tables:
| Operations on Db2 Big SQL tables¹ | Permissions required in Ranger policy |
|---|---|
| Select from table | SELECT on schema or table |
| Select from view | SELECT on schema or view. The view definer must still have SELECT permission on each table that is referenced in the view definition. |
| Insert | INSERT on schema or table |
| Delete | DELETE on schema or table |
| Update | UPDATE on schema or table |
| Load (LOAD command) | INSERT on schema or table. DELETE is also required for the REPLACE option. If an exception table is being used, INSERT on the exception table is also required. |
| Import | INSERT on schema or table |
| Export | SELECT on schema or table |
| Alter | ALTER on schema or table² |
| Rename | ALTER on schema or table |
| Create table | CREATE on schema or table |
| Create view | CREATE on schema or table. SELECT on schema or table for each table that is referenced in the view definition. |
| Create index | INDEX on schema or table |
| Drop | DROP on schema or table |

¹ For any operation that is valid on both a table and a nickname, the permission requirements are the same for both object types.

² Alter operations that require the REFERENCES privilege, CONTROL privilege, or the SECADM database authority are not managed by Ranger. Native Db2 Big SQL permissions must be granted to perform these operations.
| Operations on Hadoop SQL tables | Permissions required in Ranger policy |
|---|---|
| Select from table | SELECT on schema or table |
| Insert | UPDATE on schema or table |
| Delete | UPDATE on schema or table |
| Update | UPDATE on schema or table |
| Truncate | UPDATE on schema or table |
| Load (LOAD HADOOP statement) | UPDATE on schema or table |
| Alter | ALTER on schema or table¹ |
| Rename | ALTER on schema or table |
| Create table | CREATE on schema or table. CREATE on schema is required for HBase tables. |
| Drop | DROP on schema or table |
| Analyze | ALTER on schema or table |
| Msck Repair | ALTER on schema or table |

¹ Alter operations that require the REFERENCES privilege, CONTROL privilege, or the SECADM database authority are not managed by Ranger. Native Db2 Big SQL permissions must be granted to perform these operations.
| Operations on HBase tables | Permissions required in Ranger policy |
|---|---|
| Select from table | Read |
| Insert | Write |
| Delete | Write |
| Update | Write |
| Load (LOAD HADOOP statement) | Write |
| Alter | Create¹ |
| Rename | Admin |
| Create table | Create |
| Create index | Create |
| Drop | Create |
| Analyze | Admin |
| Msck Repair | Admin |

¹ Alter operations that require the REFERENCES privilege, CONTROL privilege, or the SECADM database authority are not managed by Ranger. Native Db2 Big SQL permissions must be granted to perform these operations.
Note:
- After Ranger integration has been enabled for Db2 Big SQL, object owners are not automatically granted full access to the tables and views that they create. Policies for all users, including object owners, must be explicitly set up in the Db2 Big SQL Ranger plugin. By using the {OWNER} variable, you can create a single policy that grants all users a level of access to the objects that they own.
- In addition to an access policy in Ranger, operations against nicknames require the appropriate permissions on objects at the remote data source.
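As a sketch of the {OWNER} approach mentioned above, the following builds a Ranger policy payload whose policy item applies to the object owner via the {OWNER} user variable. The service name (`bigsql`), wildcard resources, and the particular access types granted are assumptions; pick the access levels that fit your environment.

```python
# Hypothetical policy payload: grant every user SELECT, INSERT, UPDATE, and
# DELETE on the objects they own. Ranger resolves the {OWNER} variable to the
# owner of the matched resource at evaluation time.
owner_policy = {
    "service": "bigsql",  # assumed Db2 Big SQL plugin service name
    "name": "owner-access",
    "resources": {
        "schema": {"values": ["*"]},  # wildcard: applies to all schemas
        "table": {"values": ["*"]},   # wildcard: applies to all tables
    },
    "policyItems": [{
        "users": ["{OWNER}"],  # matches the owner of each object
        "accesses": [
            {"type": t, "isAllowed": True}
            for t in ("select", "insert", "update", "delete")
        ],
    }],
}
```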