Example: unloading data and generating a transfer command for an Amazon S3 destination, using the cURL tool

In this example, Optim High Performance Unload unloads the data and generates a transfer command for an Amazon S3 destination, using the cURL tool, signature version 4, and 'AES256' server-side encryption.

Execution report:

[i1058@lat111 ~]$ db2hpu -i i1058 -f sysin
INZM031I Optim High Performance Unload for Db2 06.01.00.001(180124) 
         64 bits 01/24/2018 (Linux 3.10.0-327.36.1.el7.x86_64 #1 SMP Wed Aug 17 03:02:37 EDT 2016 x86_64) 
INZI473I Memory limitations: 'unlimited' for virtual memory and 'unlimited' for data segment 
      ----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8----+ 
000001 GLOBAL CONNECT TO SAMPLE; 
000002  
000003 UNLOAD  TABLESPACE 
000004 LOCK NO QUIESCE NO 
000005  
000006 SELECT * FROM EMPLOYEE; 
000007 OUTFILE("outfile") 
000008 LOADFILE("loadfile") 
000009 LOADDEST(OBJECT_STORAGE AWS_S3 "s3_v4_alias")
000010  
000011 FORMAT DEL; 

INZU462I HPU control step start: 01/24/2018 15:56:16.778. 
INZU463I HPU control step end  : 01/24/2018 15:56:16.797. 
INZU464I HPU run step start    : 01/24/2018 15:56:16.798. 
INZU410I HPU utility has unloaded 42 rows on lat111 host for I1058.EMPLOYEE in outfile. 
INZU684I HPU utility has generated an upload command for the AMAZON S3 destination in the loadfile file. 
INZU465I HPU run step end      : 01/24/2018 15:56:16.803. 
INZI441I HPU successfully ended: Real time -> 0m0.025078s 
User time -> 0m0.096511s : Parent -> 0m0.096511s, Children -> 0m0.000000s 
Syst time -> 0m0.024381s : Parent -> 0m0.024381s, Children -> 0m0.000000s

Associated Amazon S3 section in the db2hpu.dest file:
[AWS_S3] 
alias=s3_v4_alias
bucket=dev-hpum-test 
encrypt=AES256
relative_path=folder1 
curl=yes 
accesskey=ACCESKEY_5YU2XNMDHQA
secretkey=SECRETKEY_B+yXklvtqW86gQmIgMuPMuyuzKuYUk
version=4
region=eu-west-1
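
The keys of this section are carried over into the generated transfer command shown further below; the reading of the example values sketched here is inferred from the variables of that generated script rather than from a formal reference:

# alias=s3_v4_alias        name referenced by LOADDEST(OBJECT_STORAGE AWS_S3 "s3_v4_alias")
# bucket=dev-hpum-test     target bucket (-> s3Bucket)
# relative_path=folder1    key prefix inside the bucket (-> relativePath "/folder1/")
# encrypt=AES256           server-side encryption header value (-> s3Encrypt)
# curl=yes                 generate a cURL-based upload command (-> curlBin "curl")
# accesskey / secretkey    AWS credentials embedded in the script (-> s3AccessKey / s3SecretKey)
# version=4                AWS Signature Version 4 signing
# region=eu-west-1         region used in the signature scope (-> s3Region)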

Excerpt from the generated output file:
[i1058@lat111 ~]$ cat outfile
"000010","CHRISTINE","I","HAAS","A00","3978",1995-01-01,"PRES    ",18,"F",1963-08-24,+0152750.00,+0001000.00,+0004220.00
...
"200340","ROY","R","ALONZO","E21","5698",1997-07-05,"FIELDREP",16,"M",1956-05-17,+0031840.00,+0000500.00,+0001907.00

Generated transfer command:
[i1058@lat111 ~]$ cat loadfile  
#!/bin/sh 

s3AccessKey="ACCESKEY_5YU2XNMDHQA" 
s3SecretKey="SECRETKEY_B+yXklvtqW86gQmIgMuPMuyuzKuYUk" 
s3Bucket="dev-hpum-test" 
relativePath="/folder1/" 
service="s3" 
contentType="text/plain; charset=UTF-8" 
curlBin="curl" 
s3Region="eu-west-1" 
s3Encrypt="AES256" 
s3EncryptHeader="x-amz-server-side-encryption" 
completeEncryptHeader="\n${s3EncryptHeader}:${s3Encrypt}" 
signedEncryptHeader=";${s3EncryptHeader}" 
curlEncrypt="-H \"${s3EncryptHeader}: ${s3Encrypt}\"" 

request="aws4_request" 
algorithm="AWS4-HMAC-SHA256" 
function _s3PrepareUpload 
{ 
   filePath=$1 
   fileErr=$2 
   fileName=`basename "$1"` 
   stat ${filePath} >/dev/null 2>"${fileErr}" 
   RC=$? 
   if [ $RC -eq 0 ] 
   then 
       targetFile="${relativePath}${fileName}" 
       dateFormatted=`date -u +%Y%m%dT%H%M%SZ` 
       dateStamp=`date -u +%Y%m%d` 
       fileContentHash=`openssl dgst -sha256 < "${filePath}" | sed 's/(stdin)= //'` 
       completeHeaders="content-type:${contentType}\nhost:${s3Bucket}.${service}.amazonaws.com\nx-amz-content-sha256:..." 
       signedHeaders="content-type;host;x-amz-content-sha256;x-amz-date${signedEncryptHeader}" 
       canonicalRequest="PUT\n${targetFile}\n\n${completeHeaders}\n\n${signedHeaders}\n${fileContentHash}" 
       canonicalRequestHash=`echo -en ${canonicalRequest} | openssl dgst -sha256 | sed 's/(stdin)= //'` 
       stringToSign="${algorithm}\n${dateFormatted}\n${dateStamp}/${s3Region}/${service}/${request}\n${canonicalRequestHash}"
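       # Derive the AWS Signature Version 4 signing key: an HMAC-SHA256 chain seeded with
       # "AWS4" + the secret key, applied in turn to the date stamp, region, service and
       # request type, then used to sign the string-to-sign built above.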
       signingKey=`printf "${dateStamp}" | openssl sha256 -hmac "AWS4$s3SecretKey" -hex | sed 's/(stdin)= //'` 
       signingKey=`printf "${s3Region}" | openssl sha256 -mac HMAC -macopt hexkey:"${signingKey}" -hex | sed 's/(stdin)= //'` 
       signingKey=`printf "${service}" | openssl sha256 -mac HMAC -macopt hexkey:"${signingKey}" -hex | sed 's/(stdin)= //'` 
       signingKey=`printf "${request}" | openssl sha256 -mac HMAC -macopt hexkey:"${signingKey}" -hex | sed 's/(stdin)= //'` 
       signature=`printf "${stringToSign}" | openssl sha256 -mac HMAC -macopt hexkey:"${signingKey}" -hex | sed 's/(stdin)= //'` 

       curlArgs="--write-out "%{http_code}" -X PUT -T \"${filePath}\" \ 
                 -H \"x-amz-date: ${dateFormatted}\" \ 
                 -H \"content-type: ${contentType}\" \ 
                 -H \"x-amz-content-sha256: ${fileContentHash}\" \ 
                 ${curlEncrypt} \ 
                 -H \"Authorization: ${algorithm} Credential=${s3AccessKey}/${dateStamp}/${s3Region}/${service}/${request}, ...\" \ 
                 \"https://${s3Bucket}.${service}.amazonaws.com${targetFile}\"" 
   fi 
} 


echo Start uploading ... 
_s3PrepareUpload "outfile" "EMPLOYEE.msg.err" 
if [ $RC -eq 0 ] 
then 
   sh -c "${curlBin} ${curlArgs} -o "EMPLOYEE.msg" > "EMPLOYEE.msg.http" 2> \"EMPLOYEE.msg.err\"" 
   RC=$(cat "EMPLOYEE.msg.http") 
   rm -f "EMPLOYEE.msg.http" 
   if [ $RC -ge 300 -o $RC -lt 100 ] 
   then 
       RC=1 
   else 
       RC=0 
       rm -f "EMPLOYEE.msg.err" 
   fi 
fi 
if [ $RC -ne 0 ] 
then 
   echo "An error occurred while processing the 'outfile' file. The 
         'EMPLOYEE.msg' and 'EMPLOYEE.msg.err' files contain the associated execution reports." 
else 
   echo "The 'outfile' file has been processed successfully. The 
         'EMPLOYEE.msg' file contains the associated execution report." 
fi 
exit $RC
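
Once generated, the transfer command can be run manually with a POSIX shell from the directory that contains the unloaded outfile. A minimal sketch of such a run (the file names come from the listings above; the final check with the AWS CLI is an assumption and requires the CLI to be installed and configured with the same credentials):

# Run the generated upload command; it exits 0 on success, 1 on failure.
sh loadfile
echo "upload return code: $?"

# On success, EMPLOYEE.msg holds the HTTP response body returned by Amazon S3.
cat EMPLOYEE.msg

# Optional check (assumption: AWS CLI available); the object metadata should
# report "ServerSideEncryption": "AES256".
aws s3api head-object --bucket dev-hpum-test --key folder1/outfile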