LOG_ENTRY table function

When a Db2® Big SQL statement fails, the returned SQL message might contain a log entry identifier that you can use to find more details about the error in the bigsql.log file. By using the SYSHADOOP.LOG_ENTRY table function, you can get log file information for a particular log entry identifier.

Syntax

LOG_ENTRY ( 'identifier', lines-before, lines-after )

The schema is SYSHADOOP. The function returns a single-row table with one column.

Table 1. Information returned by the LOG_ENTRY function

  Column name   Data type       Description
  LINE          VARCHAR(8192)   Log file information for the specified log entry identifier.

Description

identifier
Specifies the log entry identifier that was reported in an error message. A log entry identifier is a three-part name. For more information about log entry identifiers, see Db2 Big SQL log management on CDP.
lines-before
Specifies the number of lines to be returned before the identifier. The root cause for an error condition might precede the specified identifier. The default is 10.
lines-after
Specifies the number of lines to be returned after the identifier. The default is 25.
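If the root cause appears well before or after the reported line, you can widen the window by passing explicit values for lines-before and lines-after. The following call is a sketch: the identifier is the one from the example later in this topic, and the window sizes of 20 and 50 are arbitrary choices for illustration.

```sql
-- Return 20 lines before and 50 lines after the log entry
-- instead of the defaults (10 and 25).
select cast (line as varchar(200))
  from table (syshadoop.log_entry('BSL-2-732c5697b', 20, 50));
```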

Usage

  • If the contents of the log file have rolled over since the error occurred, an identifier might no longer be available and no data is returned.
  • You can use the PD_GET_DIAG_HIST Db2 table function to query the db2diag logs across the cluster. The following example shows how to call this function in Db2 Big SQL:
    
    SELECT LEVEL, DBPARTITIONNUM, TIMESTAMP, IMPACT, FULLREC
      FROM TABLE (PD_GET_DIAG_HIST(
        'ALL', 'DI+EI', '', '2016-04-07-06.00.00.00000', NULL)
      ) AS T
      WHERE LEVEL IN ('S')
        AND DBPARTITIONNUM <> 0
      LIMIT 3;

Example

Suppose that the definition of MY_TABLE prevents the inclusion of null values. However, the following SELECT statement running against MY_TABLE encounters a null value.
select * from my_table

C1
-----------
SQL5104N  The query failed because the data in HDFS cannot be mapped to the
Hadoop table definition.  Hadoop table name: "bigsql.my_table".  Column index:
"0".  Log entry identifier: "[BSL-2-732c5697b]".

The query fails and returns a log entry identifier (BSL-2-732c5697b). The default local log file location is /var/ibm/bigsql/logs/bigsql.log.

The following SELECT statement shows how to use the SYSHADOOP.LOG_ENTRY function to get error information from the bigsql.log file. The function returns the default number of lines before and after the identifier; the WHERE clause then keeps only the lines that contain the error code 5104.
select cast (line as varchar(200))
  from table (syshadoop.log_entry('BSL-2-732c5697b'))
  where line like '%5104%';
The output might look like the following text:

1                                                                                                                   
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Exception Label: DFS_UNEXPECTED_NULL sqlcode= -5104(NULL in non-nullable column. Index in proj list: 0)                            
kind=DFS_UNEXPECTED_NULL sqlcode= -5104 msg=NULL in non-nullable column. Index in proj list: 0 tokens=[bigsql.my_table, 0]         

  2 record(s) selected.