IBM Support

PI89179: BIGSQL JAVA READER HITS ARRAYINDEXOUTOFBOUNDSEXCEPTION WHEN THE NUMBER OF BYTES IN CHAR/VARCHAR COLUMN EXCEEDS COLUMN LENGTH


APAR status

  • Closed as program error.

Error description

  • On a Big SQL instance you have defined a Hadoop table with a
    CHAR or VARCHAR column of a given length N and have loaded it
    with data whose length in bytes exceeds the length in the
    column definition.
       This is likely to occur when the data includes multibyte
    UTF-8 characters and was loaded via tools such as MapReduce or
    Hive.
    If the table format calls for the Big SQL Java readers to be
    used to read the files associated with the table, then a SELECT
    statement on the table might fail, producing a message similar
    to the following in the bigsql.log:
    
    2017-10-06 12:55:02,882 ERROR
    com.ibm.biginsights.bigsql.dfsrw.reader.DfsBaseReader [Master
    S:5.1001.1.0.0.2386] : [BSL-4-6f189f162]
    Exception raised by Reader at node: 4 Scan ID:
    S:5.1001.1.0.0.2386
    Table: my_schema.my_table Spark: false VORC: false Exception
    Label: UNMAPPED(java.lang.ArrayIndexOutOfBoundsException: Array
    index out of range: 113)
    java.lang.ArrayIndexOutOfBoundsException: Array index out of
    range: 113
      at com.ibm.biginsights.bigsql.dfsrw.reader.DfsRowBufferSerializer.utf8TruncatedLength(DfsRowBufferSerializer.java:366)
      at com.ibm.biginsights.bigsql.dfsrw.reader.DfsRowBufferSerializer.truncatetUTF8AndLog(DfsRowBufferSerializer.java:441)
      at com.ibm.biginsights.bigsql.dfsrw.reader.DfsHiveRowBufferSerializer.serializeRow(DfsHiveRowBufferSerializer.java:567)
      at com.ibm.biginsights.bigsql.dfsrw.reader.DfsHiveReader.processData(DfsHiveReader.java:286)
      at com.ibm.biginsights.bigsql.dfsrw.reader.DfsHiveReader.access$400(DfsHiveReader.java:46)
      at com.ibm.biginsights.bigsql.dfsrw.reader.DfsHiveReader$ReaderRunnable.run(DfsHiveReader.java:374)
      at java.lang.Thread.run(Thread.java:785)
    
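    The mismatch behind this failure is character count versus byte
    count: a CHAR(N)/VARCHAR(N) value of N characters can occupy
    more than N bytes once encoded in UTF-8. A minimal illustration
    (not taken from the APAR; the string value is hypothetical):

    ```java
    import java.nio.charset.StandardCharsets;

    public class Utf8LengthDemo {
        public static void main(String[] args) {
            // "é" is one character but two bytes in UTF-8, so this
            // 6-character string needs 8 bytes of storage.
            String value = "résumé";
            int charLength = value.length();
            int byteLength = value.getBytes(StandardCharsets.UTF_8).length;
            System.out.println(charLength); // 6
            System.out.println(byteLength); // 8
        }
    }
    ```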

Local fix

  • Truncate data before inserting it so that its length in bytes
    does not exceed the size of the CHAR/VARCHAR column.
    
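    One way to apply this workaround is to truncate values by byte
    length while taking care not to split a multibyte UTF-8
    sequence. A minimal sketch (the helper below is hypothetical,
    not part of Big SQL or the APAR fix):

    ```java
    import java.nio.charset.StandardCharsets;

    public class Utf8Truncate {
        // Truncate s so that its UTF-8 encoding fits in maxBytes,
        // without cutting a multibyte character in half.
        static String truncateUtf8(String s, int maxBytes) {
            byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
            if (bytes.length <= maxBytes) {
                return s;
            }
            int end = maxBytes;
            // Back up past UTF-8 continuation bytes (10xxxxxx) so the
            // cut lands on a character boundary.
            while (end > 0 && (bytes[end] & 0xC0) == 0x80) {
                end--;
            }
            return new String(bytes, 0, end, StandardCharsets.UTF_8);
        }

        public static void main(String[] args) {
            // "résumé" is 8 bytes in UTF-8; a 7-byte limit would land
            // inside the final "é", so the cut backs up to 6 bytes.
            System.out.println(truncateUtf8("résumé", 7)); // résum
        }
    }
    ```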

Problem summary

  • See error description
    

Problem conclusion

  • The problem is fixed in Version 5.0.2.0 and later fix packs.
    

Temporary fix

Comments

APAR Information

  • APAR number

    PI89179

  • Reported component name

    BIG SQL 4 BI

  • Reported component ID

    5725C09BQ

  • Reported release

    425

  • Status

    CLOSED PER

  • PE

    NoPE

  • HIPER

    NoHIPER

  • Special Attention

    NoSpecatt / Xsystem

  • Submitted date

    2017-10-20

  • Closed date

    2018-03-01

  • Last modified date

    2018-03-01

  • APAR is sysrouted FROM one or more of the following:

  • APAR is sysrouted TO one or more of the following:

Modules/Macros

  • Unknown
    

Fix information

  • Fixed component name

    BIG SQL 4 BI

  • Fixed component ID

    5725C09BQ

Applicable component levels

  • R425 PSY

       UP


Document Information

Modified date:
24 August 2020