Interpartition parallelism with queries that reference nicknames - examples of access plans
You can use the Db2® Explain facility to view the access plan that the optimizer uses during query processing. The following examples show how the optimizer accesses nickname data in an interpartition parallelism environment.
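For example, you can populate the explain tables for a statement and then format the captured plan with the db2exfmt command. The following is a minimal sketch; it assumes that the explain tables already exist, and the database name SAMPLE is a placeholder:

   -- Capture the access plan for the next statement without executing it
   SET CURRENT EXPLAIN MODE EXPLAIN;
   SELECT * FROM ORDERS A, ITEMS B WHERE A.ID1 = B.ID1 AND B.ITEM = 3;
   SET CURRENT EXPLAIN MODE NO;
   -- Format the most recently captured plan from the command line:
   --   db2exfmt -d SAMPLE -1 -o plan.txt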
Example 1: Trusted mode
In this example, the nickname uses a trusted wrapper. The database performs the join between the local table and the nickname serially at the coordinator partition: the local data, which is distributed over two partitions, is brought to the coordinator partition, where the federated server joins it with the nickname data. Nicknames that are defined by using a trusted wrapper are always joined serially at the coordinator partition; the database cannot distribute the data across multiple partitions to create a parallel join.
SELECT *
FROM ORDERS A, ITEMS B
WHERE A.ID1 = B.ID1 AND B.ITEM = 3
         RETURN
         (  1)
           |
         HSJOIN
         (  2)
       /----+---\
    DTQ          SHIP
    (  3)        (  5)
      |            |
   TBSCAN    NICKNM: NEWTON
    (  4)        ITEMS
      |
TABLE: NEWTON
   ORDERS
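A wrapper operates in trusted mode when it is registered with the DB2_FENCED wrapper option set to 'N'. The following sketch shows one way the federated objects behind a nickname such as NEWTON.ITEMS might be defined; the wrapper, server, and authentication values are placeholders:

   CREATE WRAPPER drda OPTIONS (DB2_FENCED 'N');    -- 'N' = trusted mode
   CREATE SERVER remsrv TYPE DB2/UDB VERSION '11.5' WRAPPER drda
      AUTHORIZATION "remuser" PASSWORD "rempass" OPTIONS (DBNAME 'REMDB');
   CREATE USER MAPPING FOR USER SERVER remsrv
      OPTIONS (REMOTE_AUTHID 'remuser', REMOTE_PASSWORD 'rempass');
   CREATE NICKNAME NEWTON.ITEMS FOR remsrv.NEWTON.ITEMS;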
Example 2: Fenced mode
In this example, the nickname uses a fenced wrapper. The federated server distributes the nickname data to the other partitions and performs the join with the local data in parallel. The DTQ (Distributed Table Queue) operator above the SHIP operator indicates that the nickname data is distributed to the local partitions by using hash partitioning to achieve a collocated parallel join. In a collocated parallel join, the nickname data is distributed so that the matching nickname and local rows for the join are always located on the same partition.
SELECT *
FROM ORDERS A, ITEMS B
WHERE A.ID1 = B.ID1 AND B.ITEM = 3
          RETURN
          (  1)
            |
           DTQ
          (  2)
            |
          MSJOIN
          (  3)
        /---+---\
    TBSCAN        FILTER
    (  4)         (  7)
      |             |
     SORT         TBSCAN
    (  5)         (  8)
      |             |
    TBSCAN         SORT
    (  6)         (  9)
      |             |
TABLE: NEWTON      DTQ
   ORDERS         ( 10)
                    |
                   SHIP
                  ( 11)
                    |
             NICKNM: NEWTON
                  ITEMS
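A wrapper operates in fenced mode when DB2_FENCED is set to 'Y', which allows the optimizer to consider distributing nickname data for parallel joins. A minimal sketch, reusing the placeholder wrapper name from the previous example:

   CREATE WRAPPER drda OPTIONS (DB2_FENCED 'Y');    -- 'Y' = fenced mode
   -- or change an existing wrapper:
   ALTER WRAPPER drda OPTIONS (SET DB2_FENCED 'Y');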
Example 3: Fenced mode without a computational partition group
In this example, the two nicknames use a fenced wrapper, and a computational partition group is not defined. The federated server performs the join at the coordinator partition and does not distribute the data to the other partitions for processing. The absence of TQ operators above the SHIP operators indicates that the nickname data is not distributed across the partitions.
SELECT *
FROM ITEMS A, LOCATIONS B
WHERE A.ID1 = B.ID1
          RETURN
          (  1)
            |
          MSJOIN
          (  2)
        /---+---\
    TBSCAN        FILTER
    (  3)         (  7)
      |             |
     SORT         TBSCAN
    (  4)         (  8)
      |             |
     SHIP          SORT
    (  5)         (  9)
      |             |
NICKNM: NEWTON     SHIP
   LOCATIONS      ( 10)
                    |
             NICKNM: NEWTON
                  ITEMS
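The computational partition group is named by the DB2_COMPPARTITIONGROUP registry variable, so a plan like this one typically means that the variable is not set. You can check it from the command line; an empty result means that no computational partition group is defined:

   db2set DB2_COMPPARTITIONGROUP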
Example 4: Fenced mode with a computational partition group
In this example, the nicknames use fenced wrappers, and a computational partition group is defined. The optimizer selects a plan that distributes the data from the coordinator partition to the other partitions in the computational partition group. The DTQ operators above both nicknames hash-partition the incoming remote data so that matching join keys are located on the same partition of the computational partition group. The join takes place in parallel on each partition, and the results are then collected at the coordinator partition.
SELECT *
FROM ITEMS A, LOCATIONS B
WHERE A.ID = B.ID
          RETURN
          (  1)
            |
           DTQ
          (  2)
            |
          MSJOIN
          (  3)
        /---+---\
    TBSCAN        FILTER
    (  4)         (  9)
      |             |
     SORT         TBSCAN
    (  5)         ( 10)
      |             |
     DTQ           SORT
    (  6)         ( 11)
      |             |
     SHIP          DTQ
    (  7)         ( 12)
      |             |
NICKNM: NEWTON     SHIP
   LOCATIONS      ( 13)
                    |
             NICKNM: NEWTON
                  ITEMS
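To make a plan like this one possible, create a database partition group that spans the partitions you want to use and designate it as the computational partition group. A minimal sketch; the group name COMPGROUP and the partition numbers are placeholders:

   -- Create a partition group spanning the partitions to use for the join
   CREATE DATABASE PARTITION GROUP COMPGROUP ON DBPARTITIONNUMS (0 TO 3);
   -- Designate it as the computational partition group (command line;
   -- the setting takes effect after the instance is restarted):
   --   db2set DB2_COMPPARTITIONGROUP=COMPGROUP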