Netezza Performance Server 11.2.2.4 release notes

Deployment options: Netezza Performance Server for Cloud Pak for Data System, Netezza Performance Server for Cloud Pak for Data, and Netezza Performance Server for Cloud Pak for Data as a Service

Important:
  1. 11.2.2.4 is compatible with Netezza Performance Server for Cloud Pak for Data System 2.0.X, Netezza Performance Server for Cloud Pak for Data, and Netezza Performance Server for Cloud Pak for Data as a Service.
  2. Netezza Performance Server clients older than 11.2.2.0 do not work with version 11.2.2.4 because a strict TLS 1.2 cipher set is used to conform to security standards. If you want to use clients older than 11.2.2 (including PureData System for Analytics clients), apply the workaround that is specified in the Netezza Performance Server 11.2.2.1 release notes.
  3. Ensure that you have Java 1.7 or later.

    For more information, see Installing and configuring JDBC.

  4. Ensure that you have the Netezza Performance Server Go driver 1.15 or later.

    For more information, see Installing and validating the Go Driver.

  5. Starting from Netezza Performance Server 11.2.2.0, the name of the host container is ipshost-0. You can access the host container by running the oc command instead of the docker command.

New features

Querying data from data lakes
You can use the technology preview of the Netezza Performance Server external tables to access and query Parquet files that are stored outside of your database in data lakes (on AWS S3) with Netezza Performance Server for Cloud Pak for Data as a Service. For more information, see Querying data.
Kafka
You can use Netezza Performance Server as a data source or a data sink. For more information, see Using Netezza Performance Server for Cloud Pak for Data as a Service as a data source and Using Netezza Performance Server for Cloud Pak for Data as a Service as a data sink.
Backup and restore
The enablesplitdelete option is now enabled by default.
For more information, see Backup and restore best practices.
Kerberos
Added Kerberos with SSL support for the nzsql client.

Resolved issues

  1. Fixed the issue with tables being broadcast on all SPUs from correlated subqueries, which resulted in the following error: SPU swap partition : Disk temporary work space is full.
  2. Fixed the issue with upgrade failing with the following error: Invalid UPGRADEKIT.
  3. Fixed the issue with frequent host node failover that was caused by an incorrect value of fs.file-max.
  4. Fixed the issue with the Kerberos user domain name getting removed after you logged in with the JDBC driver.
  5. Fixed the issue with users not being able to drop the default value for a column of a groom table.
  6. Fixed the issue with Postgres crashing, which was caused by the Go runtime not being able to handle signals.

Known issues

Running the CREATE EXTERNAL TABLE command through nzdumpschema
If you run the CREATE EXTERNAL TABLE command through nzdumpschema, the command does not work.
WORKAROUND:
Run CREATE EXTERNAL TABLE manually in the target database.
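The workaround can be sketched as follows; the table name, data object path, and delimiter are hypothetical examples, not values produced by nzdumpschema:

```sql
-- Run in the target database; names and path are hypothetical examples.
-- SAMEAS copies the column definitions of the existing base table.
CREATE EXTERNAL TABLE my_table_ext SAMEAS my_table
  USING (DATAOBJECT ('/tmp/my_table.dat') DELIMITER '|');
```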
Adding columns on tables that were created from version tables
If you add a column on a table that was created from a version table, the command results in the following error: ERROR: Attribute 'C1' is repeated. Must have an appropriate alias.
WORKAROUND:
Run the groom command on the version table before you create a CTAS table.
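A minimal sketch of the workaround, with hypothetical table names; GROOM TABLE ... VERSIONS merges the versioned rows before the CTAS table is created:

```sql
-- Hypothetical names; groom the version table first, then create the CTAS table.
GROOM TABLE sales_versioned VERSIONS;
CREATE TABLE sales_copy AS SELECT * FROM sales_versioned;
```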
datasource
Starting from 11.2.2.4, datasource is a reserved keyword. If you want to use datasource as an identifier, enclose it in double quotation marks ("datasource").
Example:
SYSTEM.ADMIN(ADMIN)=> create table t1 ( "datasource" int);
CREATE TABLE
nzrestore fails if the backup was taken with a BUCKET_URL that contains slashes
If nzrestore fails with the following error:
[Restore Server] : Starting the restore process
[Restore Server] : Operation aborted
Error: Connector exited with error: 'The specified key does not exist. with address : xx.xxx.xx.xxx'.
check the backup command and verify whether BUCKET_URL contains slashes, as in the following example:
BUCKET_URL=backupbucket/subdir/
UNIQUE_ID=2024-01-12
WORKAROUND:
Modify BUCKET_URL and UNIQUE_ID as shown:
BUCKET_URL=backupbucket
UNIQUE_ID=subdir/2024-01-12
Note: The new UNIQUE_ID is the portion of BUCKET_URL after the first slash, followed by the original UNIQUE_ID. Pass these two arguments to the nzrestore command.