Installing Infrastructure Automation consists of the following steps:
Downloading the appliance for your environment as a virtual machine snapshot template.
Setting up a virtual machine based on the appliance.
Configuring the Infrastructure Automation appliance.
After you have completed all the procedures in this guide, you will have a working environment on which additional customizations and configurations can be performed.
Installing Infrastructure Automation on Amazon EC2 has two sets of requirements: resources for the appliance virtual machine, and AWS prerequisites for importing the appliance image.
44 GB of space on the chosen datastore.
12 GB RAM.
4 vCPUs.
An Amazon S3 bucket to store the disk image that will be imported to AWS as a snapshot.
A VM Import service role named vmimport.
For information on creating an Amazon S3 bucket and a VM Import service role, see the Amazon EC2 documentation.
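For example, assuming the AWS CLI is already installed and configured, a bucket can be created as follows; the bucket name is illustrative and must be globally unique:
$ aws s3 mb s3://my-appliance-images --region us-east-1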
From your local file system, upload the Infrastructure Automation appliance VHD image obtained in Obtaining the appliance to the Amazon S3 bucket. Use your tool of choice, such as the AWS CLI, or upload the appliance image directly in the AWS Management Console.
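For example, an upload with the AWS CLI might look like the following; the local file name and bucket name are illustrative:
$ aws s3 cp ./infrastructure-automation-appliance.vhd s3://my-appliance-images/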
Important:
These are the procedural steps as of the time of writing. For the latest information on importing a virtual machine as an image, see the Amazon EC2 documentation.
Install the AWS client on the computer you want to interact with the AWS API from.
$ pip install awscli
Configure and download your AWS secret/access key by following the steps in the Managing Access Keys for Your AWS Account documentation.
Configure the AWS client with your access/secret key. For example:
$ aws configure
AWS Access Key ID [******]: ACCESS_KEY
AWS Secret Access Key [******]: SECRET_KEY
Default region name [None]:
Default output format [None]:
Create the trust-policy.json file for the vmimport role. For example:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "Service": "vmie.amazonaws.com" },
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals":{
"sts:Externalid": "vmimport"
}
}
}
]
}
Create the vmimport role using the trust-policy.json file that you just created.
$ aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json
Note:
This user must have permissions to create and modify IAM roles.
Create the role-policy.json file. Be sure to use the exact S3 bucket name. For example:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:CreateBucket",
"s3:DeleteBucket",
"s3:DeleteObject",
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
"s3:PutObject",
"s3:GetBucketAcl"
],
"Resource": ["arn:aws:s3:::BUCKET_TO_UPLOAD_IMAGE","arn:aws:s3:::BUCKET_TO_UPLOAD_IMAGE/*"]
},
{
"Effect": "Allow",
"Action": [
"iam:CreateRole",
"iam:PutRolePolicy"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:ModifySnapshotAttribute",
"ec2:CopySnapshot",
"ec2:RegisterImage",
"ec2:Describe*"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CancelConversionTask",
"ec2:CancelExportTask",
"ec2:CreateImage",
"ec2:CreateInstanceExportTask",
"ec2:CreateTags",
"ec2:DeleteTags",
"ec2:DescribeConversionTasks",
"ec2:DescribeExportTasks",
"ec2:DescribeExportImageTasks",
"ec2:DescribeInstanceAttribute",
"ec2:DescribeInstanceStatus",
"ec2:DescribeInstances",
"ec2:DescribeTags",
"ec2:ExportImage",
"ec2:ImportInstance",
"ec2:ImportVolume",
"ec2:StartInstances",
"ec2:StopInstances",
"ec2:TerminateInstances",
"ec2:ImportImage",
"ec2:ImportSnapshot",
"ec2:DescribeImportImageTasks",
"ec2:DescribeImportSnapshotTasks",
"ec2:CancelImportTask",
"ec2:DescribeImages",
"ec2:DescribeSnapshots"
],
"Resource": "*"
}
]
}
Attach the policy to the vmimport role. This grants the role access to the S3 bucket containing the uploaded Infrastructure Automation appliance image.
$ aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json
To import the appliance:
Create a containers.json file:
{
"Description": "NAME OF IMPORTED SNAPSHOT IN AWS",
"Format": "vhd",
"UserBucket": {
"S3Bucket": "BUCKET WITH UPLOADED .VHD IMAGE",
"S3Key": "PATH OF .VHD IMAGE"
}
}
See the AWS documentation on VM import and export requirements, such as image formats, instances, volume and file system types, and using regions.
Use the AWS CLI tools to import the disk as a snapshot. See the AWS documentation on using VM Import/Export to import a disk as a snapshot.
Note:
Either specify a region, or ensure that the S3 bucket is in the same region where you want to import the snapshot.
$ aws ec2 import-snapshot --disk-container file://containers.json
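The response includes an ImportTaskId that you use to track progress. An abridged, illustrative response might look like:
{
    "ImportTaskId": "import-snap-0123456789abcdef0",
    "SnapshotTaskDetail": {
        "Format": "VHD",
        "Progress": "19",
        "Status": "active",
        "StatusMessage": "pending"
    }
}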
Check the progress of your snapshot import by running the following command:
$ aws ec2 describe-import-snapshot-tasks --import-task-ids IMPORT_TASK_ID_FROM_RESPONSE
Create an AMI from the snapshot. See the AWS documentation on the options for the following command to create and register a Linux AMI from a snapshot.
$ aws ec2 register-image
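For example, a registration command might look like the following; the image name, snapshot ID, and device mapping are illustrative and must match your imported snapshot:
$ aws ec2 register-image \
    --name "infrastructure-automation-appliance" \
    --architecture x86_64 \
    --virtualization-type hvm \
    --ena-support \
    --root-device-name /dev/sda1 \
    --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=snap-0123456789abcdef0}"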
After installing Infrastructure Automation and running it for the first time, you must perform some basic configuration by completing these steps:
Add a disk to the infrastructure that is hosting your appliance.
Configure the database.
Configure messaging.
Configure the appliance by using the internal appliance console.
Start the appliance and open a terminal console.
Enter the appliance_console command. The Infrastructure Automation appliance summary screen displays.
Press Enter to manually configure settings.
Press the number for the item that you want to change, and press Enter.
The options for your selection are displayed.
Follow the prompts to make the changes.
Press Enter to accept a setting where applicable.
Note: The Infrastructure Automation appliance console automatically logs out after five minutes of inactivity.
Infrastructure Automation uses a database to store information about the environment. Before using Infrastructure Automation, configure the database options. Infrastructure Automation provides the following two options for database configuration:
Before installing an internal database, add a disk to the infrastructure hosting your appliance. See the documentation specific to your infrastructure for instructions on adding a disk. Because a storage disk usually cannot be added while a virtual machine is running, Red Hat recommends adding the disk before starting the appliance. Infrastructure Automation supports installing an internal VMDB only on blank disks; installation fails if the disks are not blank.
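You can confirm that the new disk is visible and unpartitioned with a tool such as lsblk. The output below is illustrative; device names and sizes vary by infrastructure:
# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda    252:0    0   44G  0 disk
├─vda1 252:1    0    1G  0 part /boot
└─vda2 252:2    0   43G  0 part /
vdb    252:16   0   20G  0 disk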
Start the appliance and open a terminal console.
Enter the appliance_console command. The Infrastructure Automation appliance summary screen displays.
Press Enter to manually configure settings.
Select Configure Application from the menu.
You are prompted to create or fetch an encryption key.
If this is the first Infrastructure Automation appliance, choose Create key.
If this is not the first Infrastructure Automation appliance, choose Fetch key from remote machine to fetch the key from the first appliance. For worker and multi-region setups, use this option to copy the key from another appliance.
Note: All Infrastructure Automation appliances in a multi-region deployment must use the same key.
Choose Create Internal Database for the database location.
In the Configure Messaging menu, select Make No messaging changes. If you see the message Configuration failed: Internal database require a volume mounted at /var/lib/pgsql. Please add an unpartitioned disk and try again., add a second disk for the database as described above.
Choose a disk for the database. This can be either a disk you attached previously, or a partition on the current disk.
If there is an unpartitioned disk attached to the virtual machine, the dialog will show options similar to the following:
1) /dev/vdb: 20480
2) Don't partition the disk
Enter 1 to choose /dev/vdb for the database location. This option creates a logical volume using this device and mounts the volume to the appliance in a location appropriate for storing the database. The default location is /var/lib/pgsql, which can be found in the environment variable $APPLIANCE_PG_MOUNT_POINT.
Enter 2 to continue without partitioning the disk. A second prompt will confirm this choice. Selecting this option results in using the root filesystem for the data directory (not advised in most cases).
Enter Y or N for Should this appliance run as a standalone database server?
Select Y to configure the appliance as a database-only appliance. As a result, the appliance is configured as a basic PostgreSQL server, without a user interface.
Select N to configure the appliance with the full administrative user interface.
When prompted, enter a unique number to create a new region.
Create and confirm a password for the database.
Infrastructure Automation then configures the internal database. This takes a few minutes. After the database is created and initialized, you can log in to Infrastructure Automation.
Depending on your setup, you might need to configure the appliance to use an external PostgreSQL database. For example, you can have only one database in a single region. However, a region can be segmented into multiple zones, such as a database zone, a user interface zone, and a reporting zone, where each zone provides a specific function. The appliances in these zones must be configured to use the external database.
The postgresql.conf file requires specific settings for correct operation. For example, it must correctly reclaim table space, control session timeouts, and format the PostgreSQL server log for improved system support. It is recommended that external databases use a postgresql.conf file based on the standard file used by the Infrastructure Automation appliance.
Ensure you configure the settings in postgresql.conf to suit your system. For example, customize the shared_buffers setting according to the amount of memory available in the external system hosting the PostgreSQL instance. In addition, depending on the aggregate number of appliances expected to connect to the PostgreSQL instance, it might be necessary to alter the max_connections setting.
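As a sketch only, such tuning in postgresql.conf might look like the following; the values are illustrative and must be sized for your own host:
shared_buffers = 1GB        # commonly sized relative to the host's available memory
max_connections = 1000      # scale with the number of connecting appliances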
Note:
- Infrastructure Automation requires PostgreSQL version 13.
- `postgresql.conf` controls the operation of all databases managed by the PostgreSQL instance, therefore it is not recommended to run other databases on this PostgreSQL instance.
Use the following steps to configure an external PostgreSQL database:
Start the appliance and open a terminal console.
Enter the appliance_console command. The appliance console summary screen is displayed.
Press Enter to manually configure settings.
Select Configure Application from the menu.
Choose Create Region in External Database for the database location.
Enter the database hostname or IP address when prompted.
Enter the database name or leave blank for the default (vmdb_production).
Enter the database username or leave blank for the default (root).
Enter the chosen database user’s password.
Confirm the configuration if prompted.
Infrastructure Automation configures an external database.
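To verify connectivity to the external database from another machine, assuming a PostgreSQL client is installed, you can run a quick check such as the following; the hostname and credentials are illustrative:
$ psql -h db.example.com -p 5432 -U root -d vmdb_production -c 'SELECT version();'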
Configuring messaging is required for the appliance setup. It is recommended that you configure the broker on the same appliance where your database is configured.
Note: You can only have one Kafka broker per region.
You can either configure the current appliance as a Kafka broker, or point the appliance to an existing external Kafka broker.
Select the appropriate option: Configure this appliance as a messaging server, or Connect to an external messaging system to connect to an external Kafka broker. Fill in the required Message Client Parameters, such as the IP address, username, and password.
Select Proceed. appliance_console applies the configuration that you requested.
Restart evmserverd to pick up the changes.
Note: Use your fully qualified domain name (FQDN) as the messaging hostname rather than localhost, and ensure that a resolvable and reachable non-localhost name entry is present in /etc/hosts.
You can use multiple appliances to facilitate horizontal scaling, as well as for dividing up work by roles. Accordingly, configure an appliance to handle work for one or many roles, with workers within the appliance carrying out the duties for which they are configured. You can configure a worker appliance through the terminal. The following steps demonstrate how to join a worker appliance to an appliance that already has a region configured with a database and messaging.
Start the appliance and open a terminal console.
Enter the appliance_console command. The Infrastructure Automation appliance summary screen displays.
Press Enter to manually configure settings.
Select Configure Application from the menu.
You are prompted to create or fetch a security key. Since this is not the first Infrastructure Automation appliance, choose 2) Fetch key from remote machine. For worker and multi-region setups, use this option to copy the security key from another appliance.
Note: All Infrastructure Automation appliances in a multi-region deployment must use the same key.
Choose Join Region in External Database for the database location.
Enter the database hostname or IP address when prompted.
Enter the port number or leave blank for the default (5432).
Enter the database name or leave blank for the default (vmdb_production).
Enter the database username or leave blank for the default (root).
Enter the chosen database user’s password.
Confirm the configuration if prompted.
Choose Connect to an external messaging system to connect to the external Kafka broker located on the appliance with the external database.
Note: You can only have one Kafka broker per region.
Enter the necessary Message Client Parameters, such as the hostname or IP address and the username and password.
Confirm the configuration if prompted.
Note: When configuring messaging, refer to the Amazon EC2 documentation on instance hostname types for hostname guidelines. For example, the hostname might include .ec2.internal or .compute.internal, which you can determine by running hostname -f.
Once Infrastructure Automation is installed, you can log in and perform administration tasks.
Browse to the URL for the login screen, for example, https://xx.xx.xx.xx.
Currently, the appliance_console_cli feature is a subset of the full functionality of appliance_console itself, and covers the functions most likely to be scripted by using the command-line interface (CLI).
After starting the Infrastructure Automation appliance, log in with a user name of root and the default password of smartvm. This displays the Bash prompt for the root user.
Enter the appliance_console_cli or appliance_console_cli --help command to see a list of options available with the command, or enter appliance_console_cli --option <argument> directly to use a specific option.
| Option | Description |
| ------ | ----------- |
| --region (-r) | region number (create a new region in the database; requires database credentials to be passed) |
| --internal (-i) | internal database (create a database on the current appliance) |
| --dbdisk | database disk device path (for configuring an internal database) |
| --hostname (-h) | database hostname |
| --port | database port (defaults to 5432) |
| --username (-U) | database username (defaults to root) |
| --password (-p) | database password |
| --dbname (-d) | database name (defaults to vmdb_production) |
| Option | Description |
| ------ | ----------- |
| --message-server-config | configure appliance as a messaging server (boolean) |
| --message-server-unconfig | unconfigure appliance as a messaging server (boolean) |
| --message-client-config | configure appliance as a messaging client (boolean) |
| --message-client-unconfig | unconfigure appliance as a messaging client (boolean) |
| --message-keystore-username | messaging keystore username (defaults to admin) |
| --message-keystore-password | messaging keystore password |
| --message-server-username | messaging server username (defaults to admin) |
| --message-server-password | messaging server password |
| --message-server-port | messaging server port (defaults to 9093) |
| --message-server-use-ipaddr | use IP address, not hostname, for the messaging server |
| --message-server-host | messaging server hostname or IP address |
| --message-truststore-path-src | path for the messaging server truststore |
| --message-ca-cert-path-src | path to a certificate authority for messaging SSL |
| --message-persistent-disk | configure a new volume for storing messaging data |
| Option | Description |
| ------ | ----------- |
| --key (-k) | create a new v2_key |
| --fetch-key (-K) | fetch the v2_key from the given host |
| --force-key (-f) | create or fetch the key even if one exists |
| --sshlogin | ssh username for fetching the v2_key (defaults to root) |
| --sshpassword | ssh password for fetching the v2_key |
| Option | Description |
| ------ | ----------- |
| --host (-H) | set the appliance hostname to the given name |
| --ipaserver (-e) | IPA server FQDN |
| --ipaprincipal (-n) | IPA server principal (default: admin) |
| --ipapassword (-w) | IPA server password |
| --ipadomain (-o) | IPA server domain (optional); based on the appliance domain name if not specified |
| --iparealm (-l) | IPA server realm (optional); based on the domain name of the ipaserver if not specified |
| --uninstall-ipa (-u) | uninstall IPA client |
Note:
To configure authentication through an IPA server, in addition to using Configure External Authentication (httpd) in appliance_console, external authentication can optionally be configured via appliance_console_cli (the command-line interface).
Specifying --host updates the hostname of the appliance. If this step was already performed via appliance_console, along with the updates made to /etc/hosts when DNS is not properly configured, the --host option can be omitted.
| Option | Description |
| ------ | ----------- |
| --ca (-c) | CA name used for certmonger (default: ipa) |
| --postgres-client-cert (-g) | install certs for postgres client |
| --postgres-server-cert | install certs for postgres server |
| --http-cert | install certs for http server (to create certs/httpd* values for a unique key) |
| --extauth-opts (-x) | external authentication options |
Note: The certificate options augment the functionality of the certmonger tool, enabling you to create a certificate signing request (CSR) and to specify the directories in which certmonger stores the keys.
| Option | Description |
| ------ | ----------- |
| --logdisk (-l) | log disk path |
| --tmpdisk | initialize the given device for temp storage (volume mounted at /var/www/miq_tmp) |
| --verbose (-v) | print more debugging info |
Example usage:
$ ssh root@appliance.test.company.com
Set up an all-in-one appliance as a local database and messaging appliance and start the MIQ Server:
# appliance_console_cli --internal --dbdisk /dev/sdb --region 0 --password smartvm --message-server-config --message-keystore-password smartvm --server=start
Join a worker appliance to an existing region:
# appliance_console_cli --fetch-key db.example.com --sshlogin root --sshpassword smartvm --hostname db.example.com --password mydatabasepassword --message-client-config --message-server-host db.example.com --message-server-password smartvm --message-keystore-password smartvm --server start
To create a new database locally on the server by using /dev/sdb:
# appliance_console_cli --internal --dbdisk /dev/sdb --region 0 --password smartvm
To copy the v2_key from the host some.example.com to the local machine:
# appliance_console_cli --fetch-key some.example.com --sshlogin root --sshpassword smartvm
You could combine the two to join a region where db.example.com is the appliance hosting the database:
# appliance_console_cli --fetch-key db.example.com --sshlogin root --sshpassword smartvm --hostname db.example.com --password mydatabasepassword
To configure an appliance as a messaging server:
# appliance_console_cli --message-server-config --message-keystore-password="smartvm"
To configure an appliance as a messaging client:
# appliance_console_cli --message-client-config --message-server-host db.example.com --message-server-password smartvm --message-keystore-password smartvm
To configure external authentication:
# appliance_console_cli --host appliance.test.company.com \
    --ipaserver ipaserver.test.company.com \
    --ipadomain test.company.com \
    --iparealm TEST.COMPANY.COM \
    --ipaprincipal admin \
    --ipapassword smartvm1
To uninstall external authentication:
# appliance_console_cli --uninstall-ipa