Getting credentials for High-Speed Transfer Server
Before you can transfer files with High-Speed Transfer Server (HSTS), you need to get the access key and secret from your HSTS instance.
You can get the preexisting credentials for your HSTS instance (Locating the HSTS Node credentials in the OpenShift web console) or create new, personalized Aspera HSTS Node credentials (Creating Aspera HSTS Node credentials). Then, you need to create an access key that is associated with your storage to allow transfers to it (Setting up access to your storage).
Locating the HSTS Node credentials in the OpenShift web console
When you deploy an HSTS instance, the operator automatically creates one set of Node API credentials. You can locate these credentials in the OpenShift web console. To locate the credentials:
- In the OpenShift console, in the left panel click Workloads > Secrets.
- On the Secrets page, locate the user credentials that were automatically created during the HSTS deployment. The secret name is your deployment name followed by asperanoded-admin, for example: quickstart-test-asperanoded-admin.
- Get the user credentials for HSTS, which are the Node API username and password.
Click your_deployment_name-asperanoded-admin. Go to the Data section and click Reveal Values to display the username and password. Copy the values and save them according to your local security practices.
- Verify that you can connect with the HSTS Node API.
- Get the external URL for HSTS with one of the following methods:
- Run oc get routes.
- From the console, select Networking > Routes. The URL is listed to the right of your project_name-http-proxy entry, under Location.
- From the command line, use the following syntax, where user and pass are the Node API username and password, and hsts_url is the HSTS external URL (a scripted version of this check follows this list):
curl -ik -u "user:pass" https://hsts_url:443/info
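If you prefer the command line, you can read the same secret and run the verification in one pass. The following is a minimal sketch that assumes the aspera namespace, a deployment named quickstart-test, and that the secret stores its values under the NODE_USER and NODE_PASS keys; adjust these names to match your environment.
# Assumed names: namespace aspera, deployment quickstart-test, secret keys NODE_USER and NODE_PASS
NODE_USER=$(oc get secret quickstart-test-asperanoded-admin -n aspera -o jsonpath='{.data.NODE_USER}' | base64 -d)
NODE_PASS=$(oc get secret quickstart-test-asperanoded-admin -n aspera -o jsonpath='{.data.NODE_PASS}' | base64 -d)
# Pick the host of the HTTP proxy route; the route name is assumed to end in -http-proxy
HSTS_URL=$(oc get routes -n aspera -o jsonpath='{.items[?(@.metadata.name=="quickstart-test-http-proxy")].spec.host}')
curl -ik -u "$NODE_USER:$NODE_PASS" "https://$HSTS_URL:443/info"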
Creating Aspera HSTS Node credentials
If you want to personalize your credentials, you can define your Node API username and password after the HSTS deployment, by using the CLI. Follow these steps to create your credentials:
- Log in to OpenShift Container Platform by using the oc login command and your token value. For example:
oc login --token=sha256~tagrhs --server=https://your-http-server-url
- Create the Node API user credentials as a secret. Run the following command and store the resulting values according to your local security practices. If you copy and paste the command, make sure to edit the example values for NODE_USER and NODE_PASS. Important: The example password is not secure, as it does not comply with security best practices.
oc create secret generic asperanoded-creds --from-literal=NODE_USER=nodeuser --from-literal=NODE_PASS=`uuidgen` -n aspera
If you skip this step, a default credential with a random password is created. The default secret is saved under the key <instance_name>-asperanoded-admin.
You can assign your instance name to the variable $INSTANCE_NAME by running this command:
INSTANCE_NAME=`oc get IbmAsperaHsts -n aspera -o jsonpath='{.items[0].metadata.name}'`
If you don't change your instance name, the name of the instance defaults to quickstart.
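To confirm the result, you can read the credentials back from the secret, as in this sketch (the NODE_USER and NODE_PASS key names in the default secret are an assumption; check the secret in the console if they differ):
# Custom secret created in the previous step
oc get secret asperanoded-creds -n aspera -o jsonpath='{.data.NODE_USER}' | base64 -d
# Default secret created by the operator (assumed key names)
oc get secret ${INSTANCE_NAME}-asperanoded-admin -n aspera -o jsonpath='{.data.NODE_PASS}' | base64 -d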
Setting up access to your storage
To enable a transfer to your storage, create an access key and secret that grant access to your storage by calling the POST /access_keys endpoint of the HSTS Node API. For more information about the Node API, see Node API Documentation.
curl -ik -u "user:pass" https://hsts_url:443/access_keys -X POST -d @my_access_key_config_file.json
The command output contains the access key ID and the secret; store them according to your local security practices. Alternatively, instead of letting HSTS generate random values for the ID and secret, you can specify your own values in the JSON configuration file, as in the sketch that follows.
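For illustration, a minimal configuration that pins both values for local storage might look like the following; the id and secret field names are an assumption here, so confirm the exact schema in the Node API Documentation:
{ "id": "my_access_key_id", "secret": "my_access_key_secret", "storage": { "type": "local", "path": "/data" } }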
- To use local storage, you must use the following syntax:
{"storage": {"type":"local","path":"/pathname"}}
The path value must match your HSTS mount point. The default is /data.
- To use Cloud Object Storage, specify the information about your storage type, along with your credentials, in the JSON configuration file.
The following are examples of the syntax that you must use in the my_access_key_config_file.json configuration file for each of the supported Cloud Object Storage providers:
- IBM Cloud Object Storage (ICOS)
For ICOS, your JSON file must use this syntax:
{ "storage": { "type": "ibm-s3", "path": "/", "endpoint": "endpoint_name", "bucket": "bucket_name", "credentials": { "access_key_id": "ibm_cloud_id", "secret_access_key": "ibm_cloud_secret" } } }
Note: For ICOS, be sure to use private regional endpoints. For a list of regional endpoints, see Regional Endpoints.
- S3 Object Storage
For S3 storage, your JSON file must use this syntax:
{ "storage": { "type": "aws_s3", "path": "/", "bucket": "bucket_name", "credentials": { "assume_role_arn": "arn:aws:iam::account_id:role/role_id", "assume_role_external_id": "external_id", "assume_role_session_name": "session_name" } } }
- Azure Blob Storage
For Azure storage, your JSON file must use this syntax:
{ "storage": { "type": "azure", "path": "/", "api": "BLOCK", "container": "container_name", "credentials": { "storage_endpoint": "storage_endpoint", "account": "account_name", "key": "xxx" } } }
- Google Cloud Storage
For Google storage, your JSON file must use this syntax:
{ "storage": { "type": "google-gcs", "path": "/bucket-node-tests", "storage_endpoint": "storage.googleapis.com", "max_segments_per_compose": 10000, "credentials": { "type": "service_account", "project_id": "project_id-71649", "private_key_id": "58b1372et5e60f57652b0e7et16955ea857f7b15", "private_key": "-----BEGIN PRIVATE KEY-----\nXXXXXX\n-----END PRIVATE KEY-----\n", "client_email": "client_email@resource-71649.iam.gserviceaccount.com", "client_id": "client_id", "auth_uri": "https://accounts.google.com/o/oauth2/auth", "token_uri": "https://oauth2.googleapis.com/token", "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/-resource-71649.iam.gserviceaccount.com" } } }
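As an end-to-end sketch, the following commands post one of the configurations above and read back the generated credentials. The use of jq, and the assumption that the response exposes the new key under id and secret fields, are illustrative; check the Node API Documentation for the exact response format.
# Create the access key from the configuration file and save the JSON response
curl -sk -u "user:pass" https://hsts_url:443/access_keys -X POST -d @my_access_key_config_file.json > access_key_response.json
# Extract the generated access key ID and secret from the response (assumed field names)
jq -r '.id, .secret' access_key_response.json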
Next
You are now ready to transfer files. See Testing transfers.
For more information about the Node API and access keys, see API for Aspera Node Management and Transfer Support.