Getting started with user accounts and buckets
To start using Deep Archive, you must first create user accounts and then S3 buckets. This topic shows the commands that you need to run to create the default admin account and other user accounts. It then covers commands to create S3 buckets and other management activities.
Default "admin" account
First, log in to the system with SSH as the user "admin" by using the IP address that you provided to IBM. The default password for the "admin" user account is adm1n@install. You must change the password of the "admin" user account at the first login. The following example shows how to change the password.
$ ssh admin@node
admin@node's password:
Password expired. Change your password now.
Last login: Thu Apr 4 09:48:26 2024 from ::1
WARNING: Your password has expired.
You must change your password now and login again!
Changing password for user admin.
Current Password:
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Connection to node closed.
After the password is successfully changed, the connection is closed to keep the system secure. Then, you can log in to the system by using the new password.
$ ssh admin@node
admin@node's password:
Last login: Tue Apr 4 17:14:22 2024 from ::1
[admin@node ~]$
Create a self-signed certificate
Create a TLS certificate so that the S3 service can be used over encrypted communication. Specify the S3 endpoint FQDN as <S3_FQDN> for the SAN (Subject Alternative Name). Note that the certificate is common to all nodes.
$ tcmgr cert create --san DNS:<S3_FQDN> --alg EC --pkeyopt ec_paramgen_curve:P-256 -v 3650
Install the certificate by entering the following command.
$ tcmgr cert install
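To confirm that the certificate was created as expected, you can inspect it with openssl. This is an optional check, not part of the product procedure; it assumes OpenSSL 1.1.1 or later and uses the certificate path /gpfs/tapecloud/tls.pem that appears later in the Download the certificate section.
$ openssl x509 -in /gpfs/tapecloud/tls.pem -noout -subject -ext subjectAltName -enddate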
Create a user account that can manage data through the S3 interface
To send requests to the S3 interface, you need to create a new user account by using the tcmgr user create command. The programmatic access of the user account must be set to yes. When the account is created, an access key and a secret access key are generated for it. You can send requests to the S3 interface by specifying these keys. The following example shows the creation of a user "new_admin".
$ tcmgr user create -n new_admin --management-access admin --programmatic-access yes
Date/Time: April 09, 2024, 05:09:24 PM +09:00; IBM TapeCloud Version: 1.0.0.0-00001
Enter password for new_admin:
Confirm password:
> Creating user account [ OK ]
User account new_admin created.
Access key = 52rdhI4EcBxZ1TwL5oL0
Secret access key = tzKneUDuxtj4yWoQih2884EDAfKIY98IAGKuzbS2
NOTE: Record these keys. The keys can be viewed only when the account is created; you cannot recover them later. However, you can have an administrator reset your access keys at any time.
Create an S3 bucket
You can create S3 buckets by using the tcmgr bucket create command. User accounts with programmatic access enabled can create S3 buckets. The owner of the bucket is automatically set to the user account that issues the command. The following example shows the creation of a bucket that is called test-bucket.
$ tcmgr bucket create -n test-bucket
Date/Time: April 09, 2024, 05:14:18 PM +09:00; IBM TapeCloud Version: 1.0.0.0-00001
> Creating bucket test-bucket [ OK ]
Bucket test-bucket created.
Download the certificate
Download the TLS certificate file /gpfs/tapecloud/tls.pem from the node.
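For example, you can copy the file to your S3 client machine with scp. This is a minimal sketch; the destination /path/to/tls.pem is a placeholder that matches the path used in the next command.
$ scp admin@node:/gpfs/tapecloud/tls.pem /path/to/tls.pem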
Configure the default AWS profile and set the path to the certificate.
$ aws configure set default.ca_bundle /path/to/tls.pem
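If you prefer not to change the default profile, the AWS CLI also accepts the certificate on each command through its global --ca-bundle option. The following optional variant lists buckets without touching the default configuration, assuming credentials are supplied, for example through the environment variables shown later in this topic.
$ aws --ca-bundle /path/to/tls.pem --endpoint https://my-domain-name:443 s3 ls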
Register the generated AWS keys to a profile by using the aws configure command
In some of the following examples on this page, the generated AWS keys are specified directly. However, you can register the AWS keys by using the aws configure command and save the effort of specifying them again. By using the --profile option, you can specify the profile that you want to register the keys to. The following example shows how to register AWS keys for the new_admin profile. The Default region name and the Default output format can be left as None.
$ aws configure --profile new_admin
AWS Access Key ID [None]: 52rdhI4EcBxZ1TwL5oL0
AWS Secret Access Key [None]: tzKneUDuxtj4yWoQih2884EDAfKIY98IAGKuzbS2
Default region name [None]:
Default output format [None]:
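The keys are now stored under the new_admin profile. As a convenience that the AWS CLI supports, you can also export the AWS_PROFILE environment variable instead of passing --profile to every command; a small optional example:
$ export AWS_PROFILE=new_admin
$ aws --endpoint https://my-domain-name:443 s3 ls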
You can check the configured profile by using the aws configure list command with the --profile option.
$ aws configure list --profile new_admin
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                new_admin           manual    --profile
access_key     ****************5oL0 shared-credentials-file
secret_key     ****************zbS2 shared-credentials-file
    region                <not set>             None    None
The following example shows how to check the list of S3 buckets by using the new_admin profile after configuration.
$ aws --endpoint https://my-domain-name:443 s3 ls --profile new_admin
2024-04-09 05:14:18 test-bucket
Upload a file to the S3 bucket
You can upload a file to the S3 bucket that you created in the Create an S3 bucket section by using the aws s3 cp command. You also need to specify the keys that were created in the Create a user account that can manage data through the S3 interface section. After a file is uploaded, it is backed up to tape storage within 10 minutes. The following example shows how to upload a file to an S3 bucket.
$ AWS_ACCESS_KEY_ID=52rdhI4EcBxZ1TwL5oL0 AWS_SECRET_ACCESS_KEY=tzKneUDuxtj4yWoQih2884EDAfKIY98IAGKuzbS2 aws --endpoint https://my-domain-name:443 s3 cp /home/my_laptop_user/file1G s3://test-bucket --storage-class GLACIER
upload: ../my_laptop_user/file1G to s3://test-bucket/file1G
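To confirm that the object arrived in the bucket, you can list the bucket contents; an optional check that uses the new_admin profile configured earlier.
$ aws --endpoint https://my-domain-name:443 s3 ls s3://test-bucket --profile new_admin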
Restore and get a file from the S3 bucket
When you want to get a file from an S3 bucket, you need to call the aws s3api restore-object command first. Then, you can confirm the status by using the aws s3api head-object command.
$ AWS_ACCESS_KEY_ID=52rdhI4EcBxZ1TwL5oL0 AWS_SECRET_ACCESS_KEY=tzKneUDuxtj4yWoQih2884EDAfKIY98IAGKuzbS2 aws --endpoint https://my-domain-name:443 s3api restore-object --bucket test-bucket --key file1G --restore-request '{"Days": 1}'
$ AWS_ACCESS_KEY_ID=52rdhI4EcBxZ1TwL5oL0 AWS_SECRET_ACCESS_KEY=tzKneUDuxtj4yWoQih2884EDAfKIY98IAGKuzbS2 aws --endpoint https://my-domain-name:443 s3api head-object --bucket test-bucket --key file1G
{
"AcceptRanges": "bytes",
"Restore": "ongoing-request=\"true\"",
"LastModified": "Fri, 12 Apr 2024 03:02:57 GMT",
"ContentLength": 1073741824,
"ETag": "\"mtime-d186l7tdaww0-ino-5n30\"",
"ContentType": "application/octet-stream",
"Metadata": {},
"StorageClass": "GLACIER"
}
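While the restore is in progress, the Restore field shows ongoing-request=\"true\". The following loop is a minimal sketch, not part of the product procedure; it assumes a POSIX shell, standard grep, and the new_admin profile configured earlier, and it polls head-object once a minute until the flag changes to false.
$ until aws --endpoint https://my-domain-name:443 s3api head-object --bucket test-bucket --key file1G --profile new_admin | grep -q 'ongoing-request=\\"false\\"'; do sleep 60; done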
After the restore operation completes, the target file is read from tape storage. Then, you can get the file to your S3 client by using the aws s3 mv or aws s3 cp command.
$ AWS_ACCESS_KEY_ID=52rdhI4EcBxZ1TwL5oL0 AWS_SECRET_ACCESS_KEY=tzKneUDuxtj4yWoQih2884EDAfKIY98IAGKuzbS2 aws --endpoint https://my-domain-name:443 s3 cp s3://test-bucket/file1G /home/my_laptop_user/
download: s3://test-bucket/file1G to ./file1G
Monitoring events
When an issue occurs on the system, it is recorded as an event. For example, if an uploaded file fails to be archived to tape storage, a Data Write Failure on Cartridge event is created. You can check these events by using the tcmgr event list command. The following example shows the output of the tcmgr event list command and the output of the tcmgr event show command.
$ tcmgr event list
Date/Time: Apr 13, 2024, 11:12:19 AM +09:00; IBM TapeCloud Version: 1.0.0.0-00001
ID  Severity  Name                             Time                              Description
================================================================================================================================================
...
13  Warn      Data Write Failure on Cartridge  2024-04-13T11:09:36.096461+09:00  Node node1 failed to write data to a cartridge.
$ tcmgr event show 13
Date/Time: Apr 13, 2024, 11:23:02 AM +09:00; IBM TapeCloud Version: 1.0.0.0-00001
ID               13
Severity         Warn
Name             Data Write Failure on Cartridge
Time             2024-04-13T11:09:36.096461+09:00
Description      Node node1 failed to write data to a cartridge.
Fix description  The write operation might succeed next time. If the problem persists, contact IBM for support.
Event type       ETCW002W