Example: Unloading data by generating an upload command for an S3-compatible destination based on use of the cURL tool

In this example, Optim™ High Performance Unload unloads data by generating an upload command for an S3-compatible destination, based on use of the cURL tool and 'aws:kms' server-side encryption.

Execution report:

[i1150@lat117 ]$ db2hpu -f SYSIN -i i1150
INZM031I Optim High Performance Unload for Db2 06.05.00.004.01(240326) 
         64 bits 03/27/2024 (Linux  3.10.0-957.21.3.el7.x86_64 #1 SMP Fri Jun 14 02:54:29 EDT 2019 x86_64)
INZI473I Memory limitations: 'unlimited' for virtual memory and 'unlimited' for data segment
       ----+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8----+
000001 GLOBAL CONNECT TO SAMPLE;
000002 UNLOAD TABLESPACE
000003 DB2 NO
000004 LOCK NO
000005 FLUSH BUFFERPOOLS NO
000006 SELECT * FROM EMPLOYEE;
000007 OUTFILE("outfile")
000008 LOADFILE("loadfile")
000009 LOADDEST(OBJECT_STORAGE AWS_S3 "s3_storage_proxy")
000010 FORMAT DEL
000011 ;

INZU462I HPU control step start: 03/27/2024 16:57:35.742.
INZU463I HPU control step end  : 03/27/2024 16:57:35.935.
INZU464I HPU run step start    : 03/27/2024 16:57:36.550.
INZU410I HPU utility has unloaded 42 rows on lat117 host for I1150.EMPLOYEE in outfile.
INZU684I HPU utility has generated an upload command for the AMAZON S3 destination in the loadfile file.
INZU465I HPU run step end      : 03/27/2024 16:57:36.559.
INZI441I HPU successfully ended: Real time -> 0m0.816952s
User time -> 0m0.064613s : Parent -> 0m0.064613s, Children -> 0m0.000000s
Syst time -> 0m0.019486s : Parent -> 0m0.019486s, Children -> 0m0.000000s
Relevant AWS_S3 section of the db2hpu.dest file:
[AWS_S3]
alias=s3_storage_proxy
bucket=upload_container
relativepath=files
curl=yes
accesskey=ARFYM6XCP2OGDZ1GLET1
region=eu-west-1
version=4
encrypt=aws:kms
url=http://lat117:8590
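The `encrypt=aws:kms` setting above is what drives the encryption-related variables in the generated script. A minimal sketch of that mapping, assuming (as the generated script does) that a non-empty encrypt value must be reflected in three places at once: the actual request header, the canonical headers, and the signed-headers list, or the Signature Version 4 signature will not match:

```shell
#!/bin/sh
# Hypothetical illustration of how encrypt=aws:kms surfaces in the generated
# script; variable names mirror the script above.
s3Encrypt="aws:kms"
s3EncryptHeader="x-amz-server-side-encryption"

if [ -n "${s3Encrypt}" ]
then
    completeEncryptHeader="\n${s3EncryptHeader}:${s3Encrypt}"   # joins the canonical headers
    signedEncryptHeader=";${s3EncryptHeader}"                   # joins the signed-headers list
    curlEncrypt="-H \"${s3EncryptHeader}: ${s3Encrypt}\""       # sent on the request itself
else
    completeEncryptHeader=""
    signedEncryptHeader=""
    curlEncrypt=""
fi

echo "${curlEncrypt}"
```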
Excerpt from the generated output file:
[i1150@lat117 ]$ cat outfile
"000010","CHRISTINE","I","HAAS","A00","3978",19950101,"PRES   ",18,"F",19630824,+0152750.00,+0001000.00,+0004220.0
...
"200340","ROY","R","ALONZO","E21","5698",19970705,"FIELDREP",16,"M",19560517,+0031840.00,+0000500.00,+0001907.00
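Before signing, the generated upload command hashes this output file with openssl, because Signature Version 4 requires the payload's hex-encoded SHA-256 digest in the x-amz-content-sha256 header. A standalone sketch of that hashing idiom, using a throwaway file (the path is illustrative) and `awk '{print $NF}'` instead of the script's `sed 's/(stdin)= //'`, since the awk form also tolerates the digest-name prefix printed by newer openssl releases:

```shell
#!/bin/sh
# Compute the hex SHA-256 of a payload file, as the generated script does
# for the x-amz-content-sha256 header. Assumes openssl is on PATH.
printf 'abc' > /tmp/hpu_hash_demo.txt
fileContentHash=`openssl dgst -sha256 < /tmp/hpu_hash_demo.txt | awk '{print $NF}'`
echo "${fileContentHash}"
```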
Generated upload command:
[i1150@lat117 ]$ cat loadfile
#!/bin/sh

s3AccessKey="ARFYM6XCP2OGDZ1GLET1"
echo "Enter secret key:"
read -s s3SecretKey
s3Bucket="upload_container"
relativePath="/files/"
service="s3"
contentType="text/plain; charset=UTF-8"
curlBin="curl"
s3Region="eu-west-1"
s3Encrypt="aws:kms"
s3EncryptHeader="x-amz-server-side-encryption"
completeEncryptHeader="\n${s3EncryptHeader}:${s3Encrypt}"
signedEncryptHeader=";${s3EncryptHeader}"
curlEncrypt="-H \"${s3EncryptHeader}: ${s3Encrypt}\""

request="aws4_request"
algorithm="AWS4-HMAC-SHA256"
function _s3PrepareUpload
{
    filePath=$1
    fileErr=$2
    fileName=`basename "$1"`
    stat ${filePath} >/dev/null 2>"${fileErr}"
    RC=$?
    if [ $RC -eq 0 ]
    then
        targetFile="/${s3Bucket}${relativePath}${fileName}"
        dateFormatted=`date -u +%Y%m%dT%H%M%SZ`
        dateStamp=`date -u +%Y%m%d`
        fileContentHash=`openssl dgst -sha256 < "${filePath}" | sed 's/(stdin)= //'`
        completeHeaders="content-type:${contentType}\nhost:lat117:8590\nx-amz-content-sha256:${fileContentHash}\nx-amz-date:${dateFormatted}${completeEncryptHeader}"
        signedHeaders="content-type;host;x-amz-content-sha256;x-amz-date${signedEncryptHeader}"
        canonicalRequest="PUT\n${targetFile}\n\n${completeHeaders}\n\n${signedHeaders}\n${fileContentHash}"
        canonicalRequestHash=`echo -en ${canonicalRequest} | openssl dgst -sha256 | sed 's/(stdin)= //'`
        stringToSign="${algorithm}\n${dateFormatted}\n${dateStamp}/${s3Region}/${service}/${request}\n${canonicalRequestHash}"
        signingKey=`printf "${dateStamp}" | openssl sha256 -hmac "AWS4$s3SecretKey" -hex | sed 's/(stdin)= //'`
        signingKey=`printf "${s3Region}" | openssl sha256 -mac HMAC -macopt hexkey:"${signingKey}" -hex | sed 's/(stdin)= //'`
        signingKey=`printf "${service}" | openssl sha256 -mac HMAC -macopt hexkey:"${signingKey}" -hex | sed 's/(stdin)= //'`
        signingKey=`printf "${request}" | openssl sha256 -mac HMAC -macopt hexkey:"${signingKey}" -hex | sed 's/(stdin)= //'`
        signature=`printf "${stringToSign}" | openssl sha256 -mac HMAC -macopt hexkey:"${signingKey}" -hex | sed 's/(stdin)= //'`

        curlArgs="--write-out "%{http_code}" -X PUT -T \"${filePath}\" \
                  -H \"x-amz-date: ${dateFormatted}\" \
                  -H \"content-type: ${contentType}\" \
                  -H \"x-amz-content-sha256: ${fileContentHash}\" \
                  ${curlEncrypt} \
                  -H \"Authorization: ${algorithm} Credential=${s3AccessKey}/${dateStamp}/${s3Region}/${service}/${request}, SignedHeaders=${signedHeaders}, Signature=${signature}\" \
                  \"http://lat117:8590${targetFile}\""
    fi
}


echo Start uploading ...
_s3PrepareUpload "outfile" "EMPLOYEE.msg.err"
if [ $RC -eq 0 ]
then
    eval ${curlBin} ${curlArgs} -o "EMPLOYEE.msg" > "EMPLOYEE.msg.http" 2> "EMPLOYEE.msg.err"
    RC=$?
    if [ $RC -eq 0 ]
    then
        RC=$(cat "EMPLOYEE.msg.http")
        if [ $RC -ge 300 -o $RC -lt 100 ]
        then
            RC=1
        else
            RC=0
            rm -f "EMPLOYEE.msg.err"
        fi
    else
        RC=1
    fi
    rm -f "EMPLOYEE.msg.http" >/dev/null 2>&1
fi
if [ $RC -ne 0 ]
then
    echo "An error occurred while processing the 'outfile' file. The 'EMPLOYEE.msg' and 'EMPLOYEE.msg.err' files contain the associated execution reports."
else
    echo "The 'outfile' file has been processed successfully. The 'EMPLOYEE.msg' file contains the associated execution report."
fi
exit $RC
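The four chained `openssl` calls in the script above implement the standard AWS Signature Version 4 signing-key derivation: HMAC-SHA256 is applied successively to the date stamp, the region, the service, and the literal string "aws4_request", starting from the raw key "AWS4" + secret key, with each intermediate result fed in as a hex key. A self-contained sketch of just that chain, with purely illustrative credential and date values (not taken from the example run):

```shell
#!/bin/sh
# AWS SigV4 signing-key derivation chain, mirroring the generated script.
# All values are illustrative; EXAMPLEKEY is not a real credential.
s3SecretKey="EXAMPLEKEY"
dateStamp="20240327"
s3Region="eu-west-1"
service="s3"
request="aws4_request"

# HMAC-SHA256 of $1 under the hex-encoded key $2, printed as hex.
hmac_hex()
{
    printf "$1" | openssl sha256 -mac HMAC -macopt hexkey:"$2" -hex | awk '{print $NF}'
}

# The first step keys with the raw string "AWS4<secret>"; later steps
# key with the hex output of the previous step.
signingKey=`printf "${dateStamp}" | openssl sha256 -hmac "AWS4${s3SecretKey}" -hex | awk '{print $NF}'`
signingKey=`hmac_hex "${s3Region}" "${signingKey}"`
signingKey=`hmac_hex "${service}" "${signingKey}"`
signingKey=`hmac_hex "${request}" "${signingKey}"`
echo "${signingKey}"
```

Scoping the key to date, region, and service in this way means a leaked signature cannot be replayed against another day, region, or service.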