# Creating the monitoring Virtual Servers
You can monitor a wide range of components with the monitoring infrastructure provided by IBM Hyper Protect Virtual Servers.
**Note**:
* The monitoring metrics are collected from Secure Service Container partitions.
* Only Hyper Protect hosting appliance and Secure Service Container partition level metrics are supported for IBM Hyper Protect Virtual Servers 1.2.x.
For more information about collection of metrics, see Metrics collected by the monitoring infrastructure.
This procedure is intended for users with the cloud administrator role.
## Before you begin
- Refer to the checklist that you prepared for the Hyper Protect Virtual Server in the topic Planning for the environment.
- Ensure that ports `8443` and `25826` are available for the monitoring infrastructure on the Secure Service Container partition.
- Ensure that the IBM Hyper Protect Virtual Servers CLI is ready for use. For more information, see Setting up the environment by using the setup script.
- Ensure that running the `setup.sh` script has created the folder structure for monitoring container deployment under `/root/hpvs/config/monitoring`.
- Ensure that you do not specify any external IP details for the monitoring or collectd containers, because they use the Secure Service Container partition's IP address with port mapping to get the Secure Service Container LPAR metrics.
## Procedure
On your x86 or Linux on IBM Z/LinuxONE (i.e., s390x architecture) management server, complete the following steps with root user authority.
1. Generate certificates for the secure communication between the Hyper Protect monitoring infrastructure (the server) and the monitoring client. The monitoring client invokes the `collectd-exporter` endpoint on the server to show the collected metrics. When you generate certificates, use `collectdhost-<METRIC_DN_SUFFIX>.<COMMON_NAME>` or `*.<COMMON_NAME>` as the common name. A wildcard certificate with the `*.<COMMON_NAME>` common name can be used across multiple partitions. To generate CA signed certificates, see Creating CA signed certificates for the monitoring infrastructure.
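   The complete steps are in Creating CA signed certificates for the monitoring infrastructure. As a rough illustration only, a server certificate with the required common name might be produced with `openssl` along these lines, assuming a private CA, the placeholder domain example.com, and a METRIC_DN_SUFFIX of first (the file names match those used in the next step):
```
# Create a private CA (placeholder file names)
openssl genrsa -out myrootCA.key 4096
openssl req -x509 -new -key myrootCA.key -sha256 -days 365 -subj "/CN=monitoring-ca" -out myrootCA.crt

# Create the server key and a CSR whose CN follows collectdhost-<METRIC_DN_SUFFIX>.<COMMON_NAME>
openssl genrsa -out server.key 4096
openssl req -new -key server.key -subj "/CN=collectdhost-first.example.com" -out server.csr

# Sign the server certificate with the private CA
openssl x509 -req -in server.csr -CA myrootCA.crt -CAkey myrootCA.key -CAcreateserial -sha256 -days 365 -out server-certificate.pem
```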
2. Copy the certificate and key files for the monitoring infrastructure into the `./keys` directory. The certificate and key are used by the monitoring infrastructure to encrypt the metric data in transit. If you created a client certificate to enable client authentication, also copy the client certificate to the `./keys` directory.
```
cp -p server.key $HOME/hpvs/config/monitoring/keys/server.key
cp -p server-certificate.pem $HOME/hpvs/config/monitoring/keys/server-certificate.crt
cp -p client-certificate.pem $HOME/hpvs/config/monitoring/keys/client-certificate.crt
cp -p myrootCA.crt $HOME/hpvs/config/monitoring/keys/myrootCA.crt
```
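   As a quick check before you continue, you can list the keys directory (the path used in the commands above) to confirm that the expected file names are in place:
```
ls -l $HOME/hpvs/config/monitoring/keys/
```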
3. Choose one of the following options to provision the instance:

### By using the yaml configuration files and the `hpvs deploy` command

This is the recommended option to provision the instance because of its ease of use; it is also an easier method of creating multiple instances quickly.
1. Update the template file `$HOME/hpvs/config/templates/virtualserver.template.yml` based on the networking configuration of the Hyper Protect Virtual Server instance, if necessary. The `vs_monitoring.yml` file that has the configuration details for the virtual server refers to the corresponding sections of `virtualserver.template.yml` when you run the `hpvs deploy` command.
```
version: v1
type: virtualserver-template
networktemplates:
- name: external_network
  subnet: "10.20.4.0/22"
  gateway: "10.20.4.1"
  parent: encf900
  driver: macvlan
- name: internal_network
  subnet: "192.168.40.0/24"
  gateway: "192.168.40.1"
  parent: encf900
  driver: bridge
quotagrouptemplates:
# Passthrough quotagroup templates - A quotagroup will be dynamically created based
# on the template and attached as a single volume mount point to the virtual server.
# Allowed filesystem types for the passthrough type quotagroup are btrfs, ext4, xfs
- name: p-small
  size: 20GB
  filesystem: ext4
  passthrough: true
- name: p-medium
  size: 50GB
  filesystem: ext4
  passthrough: true
- name: p-large
  size: 100GB
  filesystem: ext4
  passthrough: true
- name: p-xlarge
  size: 200GB
  filesystem: ext4
  passthrough: true
- name: p-xxlarge
  size: 400GB
  filesystem: ext4
  passthrough: true
# Non passthrough quotagroup definitions - These quotagroups can be shared by
# creating multiple volume mount points with the same virtual server or multiple
# virtual servers. A non passthrough quotagroup will be dynamically created based
# on the template and attached as volume mount points to the virtual server.
# Only the btrfs filesystem is supported in non passthrough quotagroups;
# mount points attached to the virtual server can have the filesystem btrfs, ext4, xfs
- name: np-small
  size: 20GB
  passthrough: false
- name: np-medium
  size: 50GB
  passthrough: false
- name: np-large
  size: 100GB
  passthrough: false
- name: np-xlarge
  size: 200GB
  passthrough: false
- name: np-xxlarge
  size: 400GB
  passthrough: false
resourcedefinitiontemplates:
- name: default
  cpu: 1
  memory: 1024
- name: small
  cpu: 2
  memory: 2048
- name: large
  cpu: 4
  memory: 4096
- name: xl
  cpu: 8
  memory: 8192
- name: xxl
  cpu: 12
  memory: 12288
```
For more information about the template file for a Hyper Protect Virtual Server instance, see Virtual server template file.
2. Create the configuration yaml file `$HOME/hpvs/config/monitoring/demo_monitoring.yml` for the instance by referring to the example file `$HOME/hpvs/config/monitoring/vs_monitoring.yml`. The following is an example of a `vs_monitoring.yml` file.
```
version: v1
type: virtualserver
virtualservers:
- name: test-monitoring
  host: SSC_LPAR_NAME
  hostname: monitoring-host-container
  repoid: Monitoring
  imagetag: 1.2.7.2
  imagefile: Monitoring.tar.gz
  imagecache: true
  environment:
  - key: "PRIVATE_KEY_SERVER"
    value: "@/root/hpvs/config/monitoring/keys/server.key"
  - key: "PUBLIC_CERT_SERVER"
    value: "@/root/hpvs/config/monitoring/keys/server-certificate.crt"
  - key: "PUBLIC_CERT_CLIENT"
    value: "@/root/hpvs/config/monitoring/keys/myrootCA.crt"
  - key: "METRIC_DN_SUFFIX"
    value: "first"
  - key: "COMMON_NAME"
    value: "example.com"
  ports:
  - hostport: 8443
    protocol: tcp
    containerport: 8443
  - hostport: 25826
    protocol: udp
    containerport: 25826
- name: test-collectd
  host: SSC_LPAR_NAME
  hostname: collectd-host-container
  repoid: CollectdHost
  imagetag: 1.2.7.2
  imagefile: CollectdHost.tar.gz
  imagecache: true
```
**Note**: Because an external IP address is not specified for the monitoring container, the container can be reached by using the Secure Service Container partition's IP address over port 8443. If you want to customize the network, resources, or storage settings, see the parameters and examples in [Virtual server configuration file](../topics/configfiles.html#vs_configfile_readme_yml).
For more information on other network configurations, see [Network requirements for Hyper Protect Virtual Server](../topics/hpvsnetwork_howto.html).
3. Create the instance by using the configurations in the yaml file.
```
hpvs deploy --config $HOME/hpvs/config/monitoring/demo_monitoring.yml
```
**Note**:
* You can use the `hpvs undeploy` command to delete this virtual server. For more information, see [Undeploying virtual servers](../topics/hpvs_undeploy.html).
* You can update the resources or configuration of a virtual server after the deploy operation completes by using the `-u` or `--update` flag of the `hpvs deploy` command. For more information, see [Updating virtual servers](../topics/hpvs_update.html).
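After the deployment completes, you can verify that both virtual servers are up. This is a minimal sketch that assumes your CLI build provides the `hpvs vs list` and `hpvs vs show` commands; the server name matches the example `vs_monitoring.yml` above.
```
hpvs vs list
hpvs vs show --name test-monitoring
```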
### By using the `hpvs vs create` command {: #create-monitoring-server}
1. Upload the collectd image to the Secure Service Container partition by using the `hpvs image load` command.
```
hpvs image load --file=~/hpvs/config/monitoring/images/CollectdHost.tar.gz
```
2. Upload the monitoring image to the Secure Service Container partition by using the `hpvs image load` command.
```
hpvs image load --file=~/hpvs/config/monitoring/images/Monitoring.tar.gz
```
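   Optionally, you can confirm that both images are now registered on the partition; this assumes that your CLI build includes the `hpvs image list` command:
```
hpvs image list
```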
3. Create the collectd container by running the `hpvs vs create` command.
```
hpvs vs create --name collectd-host --repo CollectdHost --tag 1.2.7.2 --hostname collectd-host-container
```
4. Create the `env.json` file as shown below.
```
{
  "PRIVATE_KEY_SERVER": "@/$HOME/hpvs/config/monitoring/keys/server.key",
  "PUBLIC_CERT_SERVER": "@/$HOME/hpvs/config/monitoring/keys/server-certificate.crt",
  "PUBLIC_CERT_CLIENT": "@/$HOME/hpvs/config/monitoring/keys/myrootCA.crt",
  "METRIC_DN_SUFFIX": "first",
  "COMMON_NAME": "example.com"
}
```
**Note**: The COMMON_NAME (CN) value must coincide with the CN value that was used when you created the certificates. For example, if you set the common name of the server certificate to collectdhost-first.example.com or `*.example.com`, the COMMON_NAME value in the `env.json` file must be set to "example.com".
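   If you are unsure which CN your server certificate carries, you can inspect the certificate with `openssl` before filling in `env.json`, for example:
```
openssl x509 -in $HOME/hpvs/config/monitoring/keys/server-certificate.crt -noout -subject
```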
5. Create the monitoring container by running the `hpvs vs create` command.
```
hpvs vs create --name monitoring-host --repo Monitoring --tag 1.2.7.2 --hostname monitoring-host-container --ports "{containerport = 8443, protocol = tcp, hostport = 8443}" --ports "{containerport = 25826, protocol = udp, hostport = 25826}" --envjsonpath ~/hpvs/config/env.json
```
**Note**:
- Because an external IP address is not specified for the monitoring container, the container can be reached by using the Secure Service Container partition's IP address over port 8443. For more information on other network configurations, see [Network requirements for Hyper Protect Virtual Server](../topics/hpvsnetwork_howto.html).
- You can update the resources or configuration of a virtual server after the virtual server is created by using the `hpvs vs update` command. For more information, see [Updating Hyper Protect Virtual Server containers](../topics/hpvs_update_vs.html).
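As with the `hpvs deploy` option, you can verify the result after both containers are created. This sketch assumes that your CLI build provides the `hpvs vs show` command; the names are the ones used in the steps above.
```
hpvs vs show --name collectd-host
hpvs vs show --name monitoring-host
```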
## Next
You can configure any client tools that use the `collectd-exporter` endpoint to collect the monitoring metrics from the monitoring infrastructure.
* The following example file `prometheus.yml` shows how you can configure [Prometheus](https://prometheus.io/docs/prometheus/latest/configuration/configuration/) to use the metrics collected by the monitoring infrastructure in IBM Hyper Protect Virtual Servers. Ensure that you copy the required keys and certificates to the file path mentioned in the `prometheus.yml` file.
```
global:
  scrape_interval: 10s
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['collectdhost-first.example.com:8443']
    scheme: https
    tls_config:
      ca_file: /etc/prometheus/keys/server-certificate.pem
      cert_file: /etc/prometheus/keys/client-certificate.pem
      key_file: /etc/prometheus/keys/client.key
      server_name: collectdhost-first.example.com
```
**Note:**
* With a properly configured `prometheus.yml` file, and properly configured, created, and running `monitoring-host` and `collectd-host` containers on the Secure Service Container partition, the targets view of the Prometheus server shows the "State" of the target Secure Service Container partition as "UP", displayed in green by default.
* To access the targets view of the Prometheus server, enter the following link with the actual IP address or the hostname of the Prometheus server in your browser. `http://<prometheus_server_IP_address_or_hostname>:9090/targets`
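As an alternative to the browser view, you can query the same target state through the Prometheus HTTP API (standard Prometheus behaviour, not specific to Hyper Protect Virtual Servers); replace the placeholder with your Prometheus server address:
```
curl -s http://<prometheus_server_IP_address_or_hostname>:9090/api/v1/targets
```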
* The following example shows how you can view the current monitoring metrics for the `collectdhost-first.example.com` target Secure Service Container partition by using the `wget` utility. In this example, the `wget` command is executed from the directory that contains the keys referenced in the `prometheus.yml` file, and the output is written to the `metrics` file (or to a derivative file if the `metrics` file already exists). Add an entry for collectdhost-first.example.com to the `/etc/hosts` file, pointing to the server IP address (the Secure Service Container LPAR IP).
```
wget https://collectdhost-first.example.com:8443/metrics --ca-certificate=myrootCA.crt --certificate=client-certificate.crt --private-key=client.key
```
**Note**: You can also use the `wget` utility with the `--no-check-certificate` option to skip the SSL certificate validation when retrieving the monitoring metrics from the target Secure Service Container partition.
```
wget https://collectdhost-first.example.com:8443/metrics --ca-certificate=myrootCA.crt --certificate=client-certificate.crt --private-key=client.key --no-check-certificate
```