Connect Amazon Web Services

You can connect your Amazon Web Services (AWS) account to Cloudability to enable the ingestion of usage data.
Note:

It takes 4 to 24 hours before your initial cost and usage data appears in Cloudability. The length of time depends on how long it takes AWS to generate your first billing reports. In the meantime, you’ll see a message indicating the process is not yet complete.

Steps for integration

To connect your AWS account to Cloudability, perform the following steps:
  1. Configure consolidated account credentials
  2. Set up access to AWS API

Configure consolidated account credentials

Create an AWS Cost and Usage report
  1. From the AWS Management Console, select your account name at the top right and select My Billing Dashboard.
  2. From the Billing & Cost Management Dashboard page, select Data Exports (under Cost Analysis).
    1. Select Create .
    2. On the Export Type, choose one of the following.
      Select Legacy CUR export
      1. On the Export content, select the following.
        • Select Include resource IDs.

        • Select Refresh automatically.

      2. On the Data export delivery options, select the following.
        • Select Hourly for Report data time granularity.

        • Select Create new report version for Report versioning.

        • Ensure that the compression type is set to gzip or parquet.

      3. On the Data export storage settings, follow the steps below.
        • Select Configure.

        • Select an existing S3 bucket or create a new one.
          Your Cost and Usage report files will reside in this bucket.

        • Select Create Bucket (or Select Bucket if already created).

        • Enter an S3 path prefix.

        • (Optional) Add any tags if necessary.

        • Click Create Report.

      4. Select either Overwrite existing data export file or Create new data export file for file versioning.

      Select Standard data export, CUR 2.0
      1. Provide your Export name .
      2. On the Data Table and Content setting, select the following.
        • Select Include resource IDs .

        • Select Refresh automatically .

        • Column Selection – Leave as default (all columns).

          Note:

          Do not select Split cost allocation data.

      3. On the Data export delivery options , select the following.

        • Ensure that the compression type is set to gzip

        • Select Overwrite existing data export file for file versioning.

      4. On the Data export storage settings, follow the steps below.
        • Select Configure .

        • Select an existing S3 bucket or create a new one.
          Your Cost and Usage report files will reside in this bucket.

        • Select Create Bucket (or Select Bucket if already created).

        • Enter an S3 path prefix .

        • (Optional) Add any tags if necessary.

        • Click Create Report .

        Note:

        Existing customers using the AWS CUR can remain on the Legacy CUR; everything will continue to work as is.

  3. From the Report content page, do the following:
    • In the Report name - required field, enter a name for your report.
    • Select Include resource IDs .
    • Select Refresh Automatically under Data Refresh settings .
  4. From the Data table delivery options page, do the following:
    • Keep the default settings.
    • Ensure Hourly is selected.
    • Ensure Create new report version is selected.
    • Ensure that the compression type is set to gzip.
    • Under Data export storage settings:
      • Select Configure. The Configure S3 Bucket dialog opens.
      • Select an existing S3 bucket or create a new one. Your Cost and Usage report files will reside in this bucket.
      • Select Create Bucket (or Select Bucket if already created).
      • Enter an S3 path prefix.
      • (Optional) Add any tags if necessary.
      • Click Create Report.
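The bucket name, path prefix, and report name you enter above determine where AWS delivers the report files. The sketch below shows the expected key layout; the exact billing-period folder format (YYYYMMDD-YYYYMMDD) and part numbering are assumptions based on standard CUR delivery, so verify against your own bucket.

```python
# Sketch of the S3 key layout AWS typically uses for Cost and Usage Report
# delivery. The billing-period folder format and part numbering are
# assumptions; confirm against the files in your own bucket.

def cur_report_key(prefix: str, report_name: str, period: str, part: int = 1) -> str:
    """Build the expected S3 key for one CUR file."""
    return f"{prefix}/{report_name}/{period}/{report_name}-{part}.csv.gz"

key = cur_report_key("cur", "cloudability-report", "20240101-20240201")
print(key)  # cur/cloudability-report/20240101-20240201/cloudability-report-1.csv.gz
```

Cloudability asks for exactly these three values (bucket, prefix, report name) when you credential the payer account, so keeping them consistent here saves a verification round-trip later.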
Enable cost allocation tags in AWS Management Console
  1. From the Billing & Cost Management Dashboard , navigate to Cost Allocation Tags .
  2. Select the tags that you want to include.
  3. Select Activate .
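If you manage many tag keys, the same activation can be scripted. The sketch below uses boto3's Cost Explorer client; the update_cost_allocation_tags_status call is available in recent boto3 versions, and the tag keys shown are placeholders. The AWS call is wrapped in a function that is never invoked at import time, so the helper can be inspected without credentials.

```python
# Hedged sketch: activate cost allocation tags programmatically instead of
# clicking through the console. Tag keys below are placeholders.

def activation_request(tag_keys):
    """Build the payload for CostExplorer.update_cost_allocation_tags_status."""
    return [{"TagKey": key, "Status": "Active"} for key in tag_keys]

def activate_tags(tag_keys, region="us-east-1"):
    """Call Cost Explorer to activate the given tag keys (requires a recent
    boto3 and payer-account credentials); not invoked here."""
    import boto3  # imported lazily so the module loads without boto3 installed
    ce = boto3.client("ce", region_name=region)
    return ce.update_cost_allocation_tags_status(
        CostAllocationTagsStatus=activation_request(tag_keys)
    )

print(activation_request(["team", "project"]))
```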
Generate access credentials in Cloudability
Note:

You must have Cloudability administrator rights to complete this procedure. If you don’t have administrator rights, contact your organization’s primary Cloudability administrator for assistance.

Create and download the credential template in Cloudability
  1. In Cloudability , navigate to Settings > Vendor Credentials > AWS . The Add AWS Account panel opens.

    or

    In Cloudability , navigate to Settings > Vendor Credentials > AWS . Select Add a Credential .

  2. Under CREATE IAM ROLE , enter the following:
    • PAYER ACCOUNT ID
    • S3 BUCKET NAME : The bucket that contains your Cost and Usage report
    • COST AND USAGE REPORT NAME
    • COST AND USAGE REPORT PREFIX
    Note:

    To use Cloudability Automation features, select the Include Automation permissions checkbox. If you are not planning to use Cloudability Automation, leave it unchecked.

  3. If you want automated credentialing of AWS linked accounts, select the Automated credentialing of linked accounts checkbox; otherwise, leave it unchecked.
    Note:

    To automate credentialing of your AWS linked accounts, refer to AWS Creating and Managing an Organization .

  4. Select Generate Template .

    This option is only visible if you select the pencil icon of an account that has been verified (a green check mark with no background).

  5. Select Download .
  6. An AWS CloudFormation template (CFT) is downloaded. You will upload it to the AWS console in the next step.
Upload the credential template to AWS Management Console
  1. In AWS Management Console , select the Services list at the top left.

  2. Search for 'CloudFormation' and select the result.
  3. From the AWS CloudFormation page, select Create stack .

    The Create stack page opens.

  4. Under Select template , do the following:
    • Select Upload a template file .
    • Select Choose file and upload the template you downloaded from Cloudability.
    • Select Next .
  5. From the Specify stack details page, do the following:
    • Enter a Stack name (for example, something beginning with ' Cloudability ').
    • Verify the populated Parameters .
    • Select Next .
  6. Proceed through the Configure stack options page.
  7. From the Review page, do the following:
    • Check the acknowledgment box.
    • Select Create stack .

Your new stack initially has a status of CREATE_IN_PROGRESS . When the status changes to CREATE_COMPLETE , you can verify your credential in Cloudability.
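If you script this rollout across many accounts, the status transition above is easy to encode. The helpers below are a sketch: the pure functions capture the CREATE_IN_PROGRESS to CREATE_COMPLETE transition, and the boto3 lookup (not invoked here) shows how the live status would be fetched.

```python
# Helpers mirroring the stack lifecycle described above: wait for
# CREATE_COMPLETE, and surface terminal failure states instead of polling
# forever.

def stack_ready(status: str) -> bool:
    """True once the stack has finished creating successfully."""
    return status == "CREATE_COMPLETE"

def stack_failed(status: str) -> bool:
    """True for terminal failure states worth surfacing to the user."""
    return status in {"CREATE_FAILED", "ROLLBACK_COMPLETE", "ROLLBACK_FAILED"}

def current_status(stack_name: str) -> str:
    """Fetch the live status with boto3 (requires AWS credentials); shown for
    illustration and not called here."""
    import boto3  # lazy import; illustration only
    cfn = boto3.client("cloudformation")
    return cfn.describe_stacks(StackName=stack_name)["Stacks"][0]["StackStatus"]

print(stack_ready("CREATE_IN_PROGRESS"))  # False
```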

Verify the consolidated account credential
  1. In Cloudability , navigate to Settings > Vendor Credentials > AWS .
  2. Select Add a Credential . The Add a Credential panel opens.
  3. Under CREATE IAM ROLE , enter the following:
    • PAYER ACCOUNT ID
    • S3 BUCKET NAME : the bucket that contains your Cost and Usage report
    • COST AND USAGE REPORT NAME
    • COST AND USAGE REPORT PREFIX
    • AWS SCP ENABLED REGION : If you have applied an AWS service control policy in an AWS region, indicate that region (for example, us-east-2).
  4. If you want automated credentialing of AWS linked accounts, select the checkbox; otherwise, leave it unchecked.
    Note:

    To automate credentialing of your AWS linked accounts, refer to AWS Creating and Managing an Organization .

  5. At the bottom of the Add a Credential panel, select Verify Credentials .

Your payer account is displayed on the Vendor Credentials > AWS page.

Wait until the linked accounts are displayed.

Set up access to AWS API

After configuring your consolidated account for data ingestion, Cloudability displays linked accounts, but they don't contain any AWS API data.

To enable access to AWS API endpoints for each linked account, Cloudability generates CloudFormation templates for you to upload to AWS.

Download a CloudFormation template
  1. In Cloudability , navigate to Settings > Vendor Credentials > AWS .
    Note:

    This page requires admin permissions to access it.

  2. Hover your cursor over the icon of the account for which you want to download the template.

    Additional options are displayed.

  3. Select the icon to open the Edit a Credential panel.

    The Edit a Credential panel opens.

    Note:

    AWS SCP ENABLED REGION: If you have applied an AWS service control policy in an AWS region, indicate that region (for example, us-east-2).

  4. Select Generate Template.
  5. Select Download .

Regenerate a CloudFormation template
  1. In Cloudability , navigate to Settings > Vendor Credentials > AWS .
    Note:

    This page requires admin permissions to access it.

  2. Hover your cursor over the icon of the account for which you want to regenerate the CF template.

    Additional options are displayed.

  3. Select the icon to open the Edit a Credential panel.
  4. In the S3 BUCKET NAME (REQUIRED) field, enter the S3 bucket name of the payer account.
    Note:

    AWS SCP ENABLED REGION: If you have applied an AWS service control policy in an AWS region, indicate that region (for example, us-east-2).

  5. Select Generate Template to download your CF template.

Upload the CloudFormation template

Complete the steps in Upload the credential template to AWS Management Console.

Verify linked account credentials
  1. In Cloudability , navigate to Settings > Vendor Credentials > AWS .

    The linked account has a green check box under Advanced Features .

  2. Hover your cursor over the icon of the linked account for which you want to verify credentials.

    Additional options are displayed.

  3. Select the icon to open Edit a Credential .
  4. At the bottom of the panel, select Verify Credentials .
    Note:

    AWS SCP ENABLED REGION: If you have applied an AWS service control policy in an AWS region, indicate that region (for example, us-east-2).

On the AWS tab of the Vendor Credentials page, the linked account has a green check box under Advanced Features .

Repeat these instructions for each linked account you want to enable.

Troubleshooting

If you still don’t see any cost data after completing the account credentialing steps, here are several troubleshooting tips.

Verify the correct billing files are in your S3 bucket

Navigate to your Programmatic Billing Access S3 bucket on the AWS Billing Page , and make sure you see files similar to the following:
  • acctnumber-aws-cost-allocation-date.csv
  • acctnumber-aws-billing-detailed-line-items-with-resources-and-tags-date.csv.zip

If you don’t see files like these, then you need to ensure your Cost and Usage report is enabled for resource IDs and tags. For more information, see Configure consolidated account credentials .
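When checking the bucket, a small pattern match can confirm the files follow the expected shape. In the sketch below, the 12-digit account ID and the YYYY-MM date format are assumptions inferred from the sample names above; adjust the patterns if your files differ.

```python
import re

# Patterns for the billing file names listed above. "acctnumber" is assumed
# to be the 12-digit payer account ID and "date" the YYYY-MM billing month.

COST_ALLOCATION = re.compile(r"^\d{12}-aws-cost-allocation-\d{4}-\d{2}\.csv$")
DETAILED_LINE_ITEMS = re.compile(
    r"^\d{12}-aws-billing-detailed-line-items-with-resources-and-tags-"
    r"\d{4}-\d{2}\.csv\.zip$"
)

def looks_like_billing_file(name: str) -> bool:
    """True when an S3 object name matches either expected billing file."""
    return bool(COST_ALLOCATION.match(name) or DETAILED_LINE_ITEMS.match(name))

print(looks_like_billing_file("123456789012-aws-cost-allocation-2024-01.csv"))  # True
```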

Did you recently turn on the AWS Cost and Usage report?

It may take up to 24 hours for your first Cost and Usage report to generate in AWS.

Are you using the Consolidated Billing Master Payer account?

Programmatic Billing Access rolls up to the master payer account, so linked accounts don’t receive the necessary files in the S3 Bucket. You need to add the master payer account with Programmatic Billing Access enabled to Cloudability.

Still not working? Contact Support and we’d be happy to help you get up and running.

Migrating existing user credentials to Roles?

Notice the yellow line under the accounts that have existing user credentials. The setup process for migrating these credentials is the same as for adding new credentials, but you need to edit your existing credentials instead of creating new ones.
Note:

Migrate to Role is a one-way operation and cannot be undone. Make sure you are ready to complete all the setup steps for the account before proceeding.

Cloudability: Manage AWS credentials
Keep your AWS credentials up to date in Cloudability with the following instructions.

Regenerate a CloudFormation template

To regenerate the CloudFormation template that configures your AWS access, do the following:
  1. In Cloudability , navigate to Settings > Vendor Credentials > AWS .
    Note:

    The Vendor Credentials page requires admin permissions to access it.

  2. In the AWS console, go to the CloudFormation Stacks service.
  3. Find the stack you previously ran to install Cloudability's CloudFormation template.
  4. Select that stack and click Update.
  5. Select Replace current template, and then upload the new CloudFormation template file.
  6. After the update succeeds, re-verify the account in Cloudability.

Update AWS IAM (Identity and Access Management) policy

This IAM user policy was last updated on November 27, 2019.

If you haven’t updated your AWS IAM policy for Cloudability since then, you’re not getting the full benefit of our cloud cost management capabilities.

Here’s how you can update to the latest version for all of your organization's payer and linked accounts.

For reference, here is the latest IAM policy for payer accounts (needs S3 bucket access):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "masterpayerblock",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::add-your-s3-bucket",
        "arn:aws:s3:::add-your-s3-bucket/*"
      ]
    },
    {
      "Sid": "linkedaccountblock",
      "Effect": "Allow",
      "Action": [
        "organizations:ListAccounts",
        "cloudwatch:GetMetricStatistics",
        "dynamodb:DescribeTable",
        "dynamodb:ListTables",
        "ec2:DescribeImages",
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "ec2:DescribeReservedInstances",
        "ec2:DescribeReservedInstancesModifications",
        "ec2:DescribeSnapshots",
        "ec2:DescribeVolumes",
        "ec2:GetReservedInstancesExchangeQuote",
        "ecs:DescribeClusters",
        "ecs:DescribeContainerInstances",
        "ecs:ListClusters",
        "ecs:ListContainerInstances",
        "elasticache:DescribeCacheClusters",
        "elasticache:DescribeReservedCacheNodes",
        "elasticache:ListTagsForResource",
        "elasticmapreduce:DescribeCluster",
        "elasticmapreduce:ListClusters",
        "elasticmapreduce:ListInstances",
        "rds:DescribeDBClusters",
        "rds:DescribeDBInstances",
        "rds:DescribeReservedDBInstances",
        "rds:ListTagsForResource",
        "redshift:DescribeClusters",
        "redshift:DescribeReservedNodes",
        "redshift:DescribeTags",
        "savingsplans:DescribeSavingsPlans"
      ],
      "Resource": "*"
    }
  ]
}
And here is the latest IAM policy for linked accounts:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "linkedaccountblock",
      "Effect": "Allow",
      "Action": [
        "cloudwatch:GetMetricStatistics",
        "dynamodb:DescribeTable",
        "dynamodb:ListTables",
        "ec2:DescribeImages",
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "ec2:DescribeReservedInstances",
        "ec2:DescribeReservedInstancesModifications",
        "ec2:DescribeSnapshots",
        "ec2:DescribeVolumes",
        "ec2:GetReservedInstancesExchangeQuote",
        "ecs:DescribeClusters",
        "ecs:DescribeContainerInstances",
        "ecs:ListClusters",
        "ecs:ListContainerInstances",
        "elasticache:DescribeCacheClusters",
        "elasticache:DescribeReservedCacheNodes",
        "elasticache:ListTagsForResource",
        "elasticmapreduce:DescribeCluster",
        "elasticmapreduce:ListClusters",
        "elasticmapreduce:ListInstances",
        "rds:DescribeDBClusters",
        "rds:DescribeDBInstances",
        "rds:DescribeReservedDBInstances",
        "rds:ListTagsForResource",
        "redshift:DescribeClusters",
        "redshift:DescribeReservedNodes",
        "redshift:DescribeTags",
        "savingsplans:DescribeSavingsPlans"
      ],
      "Resource": "*"
    }
  ]
}
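Before applying a policy, it can help to sanity-check it programmatically. The sketch below parses a policy document and confirms the three S3 read actions Cloudability needs on the payer account are granted; the embedded sample mirrors the payer-account statement above, with the placeholder bucket ARNs left as-is.

```python
import json

# Quick sanity check: parse an IAM policy document and report any of the S3
# read actions (needed for CUR ingestion) that are not granted.

REQUIRED_S3_ACTIONS = {"s3:ListBucket", "s3:GetObject", "s3:GetObjectVersion"}

def missing_s3_actions(policy_json: str) -> set:
    """Return the required S3 actions absent from the policy's Allow statements."""
    policy = json.loads(policy_json)
    granted = set()
    for stmt in policy["Statement"]:
        if stmt.get("Effect") == "Allow":
            granted.update(stmt.get("Action", []))
    return REQUIRED_S3_ACTIONS - granted

payer_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "masterpayerblock",
        "Effect": "Allow",
        "Action": ["s3:ListBucket", "s3:GetObject", "s3:GetObjectVersion"],
        "Resource": ["arn:aws:s3:::add-your-s3-bucket",
                     "arn:aws:s3:::add-your-s3-bucket/*"],
    }],
})
print(missing_s3_actions(payer_policy))  # set()
```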
     


Cloudability: Important utilization metrics and how to leverage memory data

Cloudability's Rightsizing Engine evaluates the underlying resource utilization for each EC2 instance and recommends instance types that are well matched to each utilization profile. The end goal is to keep your costs down while being mindful of operational risks.

To get the most accurate recommendations, there are four utilization metrics that need to be assessed: CPU, Disk IOPS (in the case of instances that have local disks), Network Bandwidth, and Memory Utilization. For a number of very good reasons, we’ve taken the approach of pulling this data directly from CloudWatch. The key reason being that hypervisor level metrics are saved to CloudWatch by default for each instance without you doing anything. This leaves Memory Utilization as the only metric that requires a little extra effort on your end to publish to CloudWatch. This is done using what AWS calls Custom Metrics which you can read about here .

The good news is that AWS has come up with standard formats for memory data publication and we’ve taken a streamlined approach to ingest that data. We require only one custom metric to be published using either the older perl based agent or the current unified agent (recommended).

Steps for integration

AWS Unified Agent

This is the preferred method for memory data collection going forward. Cloudability supports the standard location and naming conventions that the unified agent uses when writing custom metrics to Cloudwatch. These details are:
  • metric-name : "mem_used_percent" (for Linux)
  • metric-name : "Available Mbytes" (for Windows)
  • namespace : CWAgent
  • dimensions : InstanceId (it’s important to only add this one dimension)
  • unit : Percent (for mem_used_percent; Available Mbytes is published in Megabytes)

Instructions

AWS provides a number of options for installing the agent which you can find at https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-on-EC2-Instance.html . The unified agent provides capabilities to publish many types of custom metrics.

The following is an example of a minimal Linux configuration to publish just the memory information:


{
  "metrics": {
    "metrics_collected": {
      "mem": {
        "measurement": [
          "mem_used_percent"
        ],
        "resources": [
          "*"
        ]
      }
    },
    "append_dimensions": {
      "InstanceId": "${aws:InstanceId}"
    }
  }
}
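A misconfigured agent file is a common reason memory data never reaches Cloudability. The sketch below validates the minimal Linux config above before deployment, checking for the one metric and the single InstanceId dimension that this guide calls for.

```python
import json

# Validate a CloudWatch unified-agent config before deploying it, checking
# for the one metric (mem_used_percent) and single dimension (InstanceId)
# this guide expects.

def check_linux_memory_config(config_text: str) -> list:
    """Return a list of problems found; an empty list means it looks right."""
    cfg = json.loads(config_text)
    problems = []
    mem = cfg.get("metrics", {}).get("metrics_collected", {}).get("mem", {})
    if "mem_used_percent" not in mem.get("measurement", []):
        problems.append("mem_used_percent is not being collected")
    dims = cfg.get("metrics", {}).get("append_dimensions", {})
    if set(dims) != {"InstanceId"}:
        problems.append("append_dimensions should contain only InstanceId")
    return problems

minimal = """{
  "metrics": {
    "metrics_collected": {"mem": {"measurement": ["mem_used_percent"], "resources": ["*"]}},
    "append_dimensions": {"InstanceId": "${aws:InstanceId}"}
  }
}"""
print(check_linux_memory_config(minimal))  # []
```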
The following is an example of a minimal Windows configuration to publish just the memory information:

{
  "metrics": {
    "metrics_collected": {
      "Memory": {
        "measurement": [
          {"name": "Available Mbytes", "unit": "Megabytes"}
        ],
        "resources": [
          "*"
        ]
      }
    },
    "append_dimensions": {
      "InstanceId": "${aws:InstanceId}"
    }
  }
}
				
Note:

If the Available Mbytes metric is not available, the legacy % Committed Bytes In Use will be used if available. For example, {"name": "% Committed Bytes In Use", "unit":"Percent"}

Note:

When the custom metric is published, it's important that it has only the InstanceId dimension. As AWS describes in https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html#dimension-combinations , each dimension combination constitutes a completely separate metric, one that must be queried with all of its dimensional data.

If you do wish to publish multiple dimensions, you can rely on the aggregation_dimensions keyword as shown in the Linux example below. The extra dimension in this case is AutoScalingGroupName. In this example, you'll end up with three published memory metrics, each with a different dimension combination: 1) just InstanceId, 2) just AutoScalingGroupName, and 3) both InstanceId and AutoScalingGroupName.

The following is a Linux example:

{
  "metrics": {
    "metrics_collected": {
      "mem": {
        "measurement": [
          "mem_used_percent"
        ],
        "resources": [
          "*"
        ]
      }
    },
    "append_dimensions": {
      "InstanceId": "${aws:InstanceId}",
      "AutoScalingGroupName": "${aws:AutoScalingGroupName}"
    },
    "aggregation_dimensions": [
      ["InstanceId"],
      ["AutoScalingGroupName"],
      ["InstanceId", "AutoScalingGroupName"]
    ]
  }
}

Perl Based Agent

Please note that AWS has deprecated the Perl-based agent, but it remains available for download. Cloudability will continue to support this method; however, we recommend you switch to the newer unified agent if possible.

As stated above only one custom metric is required. The perl based metric looks like this:

  • metric-name : MemoryUtilization
  • namespace : System/Linux or Windows/Default
  • dimensions : InstanceId (it’s important to only add this one dimension)
  • unit : percent

Instructions

Install the Perl CloudWatch monitoring scripts as described on this AWS webpage. There you'll find instructions for installing the prerequisite packages on machines running the Amazon Linux AMI and other popular operating systems such as Red Hat.

Here is the crontab configuration we use at Cloudability for Linux machines:

* * * * * ~/aws-scripts-mon/mon-put-instance-data.pl --mem-util --from-cron

This publishes exactly what Cloudability requires and nothing more. You should be able to confirm that the memory metric is reported at 5-minute intervals.

Other options : Other options include creating your own agent which integrates with the AWS SDK. Here is an example using Golang that some of our customers have had success with.

How to confirm success

Normally within 24 hours of configuring your EC2 instance, this memory information will become available within Cloudability rightsizing.

Here is an example of the resultant memory data when viewed in the AWS CloudWatch console. This can be helpful for debugging purposes.

Note:

This is example data from the perl based agent. Unified agent data will appear similar but with the relevant namespace and metric name.

Having this memory data within CloudWatch is going to provide benefits well beyond Cloudability and we’d highly recommend going down this path. For example, you could use the memory data to trigger autoscaling events or trigger alarms. Once you find what method works for you, it’d be a good idea to roll that up into your configuration.


Getting Recommendations Based on GPU Data

Cloudability enables users to view recommendations based on GPU data from AWS EC2 instances.

Cloudability ingests GPU processing and memory utilization data from your AWS instances. This allows Cloudability to make rightsizing recommendations that also consider this data. For example, if you are not using your GPUs, you could be recommended to move to an instance type that does not include GPUs so you can save money.

Cloudability provides two options for collecting GPU data:

Option 1: Use AWS CloudWatch Agent (Linux only)

To begin ingesting GPU utilization data using the AWS CloudWatch Agent, you must first set up your agent to provide this data: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-NVIDIA-GPU.html

The following is an example of a minimal Linux configuration to publish just the memory and GPU information:
{
  "metrics": {
    "metrics_collected": {
      "mem": {
        "measurement": [
          "mem_used_percent"
        ]
      },
      "nvidia_gpu": {
        "measurement": [
          "utilization_gpu",
          "utilization_memory"
        ]
      }
    },
    "append_dimensions": {
      "InstanceId": "${aws:InstanceId}"
    }
  }
}
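When rolling this out across a fleet, generating the config programmatically keeps GPU and non-GPU instances consistent. The sketch below builds the combined mem + GPU config above from one function; the field names follow the unified agent schema as shown in this guide.

```python
import json

# Generate the combined mem + nvidia_gpu agent config programmatically, so
# GPU and non-GPU fleets can share one template.

def agent_config(gpu: bool = True) -> str:
    """Return the unified-agent config JSON, optionally without GPU metrics."""
    collected = {"mem": {"measurement": ["mem_used_percent"]}}
    if gpu:
        collected["nvidia_gpu"] = {
            "measurement": ["utilization_gpu", "utilization_memory"]
        }
    cfg = {"metrics": {"metrics_collected": collected,
                       "append_dimensions": {"InstanceId": "${aws:InstanceId}"}}}
    return json.dumps(cfg, indent=2)

cfg = json.loads(agent_config())
print(sorted(cfg["metrics"]["metrics_collected"]))  # ['mem', 'nvidia_gpu']
```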

Option 2: Use GPU Monitoring Agent (both Linux & Windows)

In order to begin ingesting the GPU utilization data, you must use the GPU monitoring agent .

Before you start

Your VM should have the following:
  • GPU enabled
  • Nvidia drivers installed
  • Python 2.7 or higher installed

Steps for integration

Instructions for Linux VMs

Run the Python script
  1. Install Python 2.7 or higher if it is not available already and make it the default Python version.
  2. Install pip >= 20.3.4 if it is not available already.
  3. Given that your instance is already running on the GPU Enabled AMI, you need to create an IAM role that grants your instance the permission to push metrics to Amazon CloudWatch. Create an EC2 service role that allows for the following policy:

    { "Version": "2012-10-17", "Statement": [ { "Action": [ "cloudwatch:PutMetricData", ], "Effect": "Allow", "Resource": "*" } ] }

  4. Get the custom gpumon script from https://community.apptio.com/viewdocument/custom-gpumon-python-script-to-coll and place it in the targeted VM.
  5. Install all the dependencies.

    For Python 2.7:

    sudo pip2.7 install Nvidia-ml-py boto3

    For Python 3 or higher:

    sudo pip install Nvidia-ml-py boto3

  6. Run the following command in the background:

    python gpumon.py <AWS-Region> <log file path>

    For example: nohup python gpumon.py <AWS-Region> <log file path> &
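What gpumon-style publication boils down to is a CloudWatch PutMetricData call carrying GPU utilization with the InstanceId dimension. The sketch below is a hypothetical illustration, not the gpumon script itself: the metric names mirror the unified agent's utilization_gpu/utilization_memory, and the CWAgent namespace is an assumption (the community script may use its own). The AWS call is in a function that is never invoked here.

```python
# Hypothetical sketch of publishing GPU utilization to CloudWatch. Metric
# names and the namespace are assumptions mirroring the unified agent's
# conventions; the actual gpumon script may differ.

def gpu_metric_data(instance_id: str, gpu_pct: float, mem_pct: float):
    """Build the MetricData payload for cloudwatch.put_metric_data."""
    dims = [{"Name": "InstanceId", "Value": instance_id}]
    return [
        {"MetricName": "utilization_gpu", "Dimensions": dims,
         "Unit": "Percent", "Value": gpu_pct},
        {"MetricName": "utilization_memory", "Dimensions": dims,
         "Unit": "Percent", "Value": mem_pct},
    ]

def publish(region: str, instance_id: str, gpu_pct: float, mem_pct: float):
    """Push one sample to CloudWatch (requires boto3 and credentials);
    shown for illustration and not called here."""
    import boto3  # lazy import; illustration only
    cw = boto3.client("cloudwatch", region_name=region)
    cw.put_metric_data(Namespace="CWAgent",
                       MetricData=gpu_metric_data(instance_id, gpu_pct, mem_pct))

print(len(gpu_metric_data("i-0abc123", 40.0, 55.0)))  # 2
```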

Install and run Datadog agent
  1. Follow the given doc to set up the Datadog agent at https://docs.datadoghq.com/integrations/nvml/ .

    Run the following Agent integration install command in place of the command provided in the Datadog documentation:

    sudo -u dd-agent datadog-agent integration install -t datadog-nvml==<INTEGRATION_VERSION>

Instructions for Windows VMs
  1. Install Python 2.7 or higher.
  2. Given that your instance is already running on the GPU Enabled AMI, you need to create an IAM role that grants your instance the permission to push metrics to Amazon CloudWatch. Create an EC2 service role that allows for the following policy:

    { "Version": "2012-10-17", "Statement": [ { "Action": [ "cloudwatch:PutMetricData", ], "Effect": "Allow", "Resource": "*" } ] }

  3. Set up the path of Python to Windows environment variables.
  4. Get the custom gpumon Python script from https://community.apptio.com/viewdocument/custom-gpumon-python-script-to-coll and update the log file path according to the Windows file directory.
  5. Run the command in administrator mode to install all the dependencies.

    For Python 2.7:

    pip2.7 install Nvidia-ml-py boto3

    For Python 3 and higher:

    pip install Nvidia-ml-py boto3

  6. Run the script to collect the GPU utilization data and push it to CloudWatch using the following command in administrator mode:

    python gpumon.py <AWS-Region> <log file path>

Install and run the Datadog agent
  1. Install the Datadog agent in your Windows VM.
  2. Set the path ( C:\Program Files\Datadog\Datadog Agent\bin ) to the environment variables of Windows.
  3. Run this command:

    agent integration install -t datadog-nvml==<INTEGRATION_VERSION>

  4. Install pip into the Python installation used by Datadog ( C:\Program Files\Datadog\Datadog Agent\embedded3 ).
  5. In a command prompt in administrator mode, navigate to C:\Program Files\Datadog\Datadog Agent\embedded3\Scripts , then run the following command to install the packages.

    pip3 install grpcio pynvml

  6. Edit the nvml.d/conf.yaml file in the conf.d/ directory at the root of your Agent’s configuration directory to start collecting your NVML performance data. See the sample nvml.d/conf.yaml for all available configuration options.
  7. Restart the Agent.