FAQ of IBM Cloud Infrastructure Center

General

You can use the **`root`** user and its password of the Linux operating system on the management node to log in to IBM Cloud Infrastructure Center after installation. For more information, see [UI login](../admintasks/configuring/configuring_security/CIC_UI_login.html).

Auth methods inside IBM Cloud Infrastructure Center

There are two primary authentication methods inside IBM Cloud Infrastructure Center: Linux OS authentication and IBM Cloud Infrastructure Center authentication.

Linux OS authentication is primarily used by an administrator of IBM Cloud Infrastructure Center to log in to the management node or compute nodes for daily operations such as configuration updates. If Ansible is used, it also authenticates through the Linux OS.

The IBM Cloud Infrastructure Center auth is primarily used to interact with the IBM Cloud Infrastructure Center API/CLI/UI. It can be used by administrators and end users. For more information about IBM Cloud Infrastructure Center auth, refer to configuring LDAP and IBM Cloud Infrastructure Center login.

How do I add a host by using a user other than **`root`**?

Both root and non-root users are supported in IBM Cloud Infrastructure Center to add a host. You can find the required setups in [preparing the management and compute node](../setup/setup_mgmt_and_cmp_node/index.html).

Can I update the quota for a specific project, for example, restrict the number of virtual machines that one project can create?

Yes. IBM Cloud Infrastructure Center supports quota management with pre-defined default quota values. For more information, see [quota update](../admintasks/configuring/configuring_security/managing_projects/quota_update.html).

Can I change the OpenStack policies to enable more OpenStack actions?

**Warning**: This is for advanced users only; enabling a policy unnecessarily may lead to a potential security issue.

IBM Cloud Infrastructure Center reuses and enhances **`policy`** management from OpenStack. By default, IBM Cloud Infrastructure Center disables a set of policies that could lead to a potential security issue. If you need to enable them for a special purpose, you can:
1. Find **`/opt/ibm/icic/policy/<component>/policy.yaml`** on the management node, where **`<component>`** is the name of the component whose policy you want to change.
2. Edit the policy.yaml file, then save the file.

Can I manage the IBM Cloud Infrastructure Center users using an external LDAP?

Yes. Please refer to [Configuring LDAP](../admintasks/configuring/configuring_security/configuring_LDAP.html).

Can I have more than one management node to have the management node in HA (High Availability)?

You can have multiple management nodes installed in Active-Passive HA mode by leveraging the backup/restore capability; Active-Active HA mode is not supported yet. More details here: [High availability and disaster recovery](../admintasks/backing_up/standalone/High_availability_and_disaster_recovery.html).

I want to do an integration with the IBM Cloud Infrastructure Center. Where is the cacert file located for the communication?

By default, the cacert file is located at **`/etc/pki/tls/certs/icic.crt`** on the management node of the IBM Cloud Infrastructure Center.
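As a sketch, a client on another machine can validate the management node's certificate by pointing its TLS tooling at a copy of that file (the hostname below is a placeholder, not a real endpoint):

```
# Default CA certificate location on the management node.
CACERT=/etc/pki/tls/certs/icic.crt
# After copying the file to the client, an API call can validate the server
# certificate against it, for example (placeholder hostname, do not run as-is):
# curl --cacert "$CACERT" https://<management-node>/
```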

How to share an image with multiple projects?

There are two ways to share an image with other projects:

1. Change the image to **`public`** so that all the projects in the IBM Cloud Infrastructure Center can access it: create the image in the **`ibm-default`** project (other projects do not have this capability), then on the image details page in the UI, switch the **`Visibility`** from **`private`** to **`public`**. All the other projects are then able to access the image. Setting it back to **`private`** makes the image accessible only from the **`ibm-default`** project.

2. Share the image with one or more specified projects:
- Update the policy file: `/opt/ibm/icic/policy/glance/policy.yaml` on the management node to enable the image share related APIs:
    ```
        "add_member": ""
        "delete_member": ""
        "get_member": ""
        "get_members": ""
        "modify_member": ""
    ```

- Check the image's current shared projects:
    ```
    openstack image member list <your_image_name_or_id>
    ```

- Share the image with a specified project:
    ```
    openstack image add project <your_image_name_or_id> <target_project>
    ```

- Check the image's current shared projects. The image stays in "pending" status until it is accepted by the target project:
    ```
    # openstack image member list rhel85_eckd
    +--------------------------------------+----------------------------------+---------+
    | Image ID                             | Member ID                        | Status  |
    +--------------------------------------+----------------------------------+---------+
    | b8c9e456-b56e-47fa-a854-6addbfc8afdf | da83d841cb954a75a4b5818291a7a42b | pending |
    +--------------------------------------+----------------------------------+---------+
    ```

- Refer to [Setting environment variables](../cli/setting_variables.html) to switch to the target project, then accept the image to finish the sharing process:
    ```
    # openstack image set --accept <your_image_name_or_id>
    # openstack image member list <your_image_name_or_id>
    +--------------------------------------+----------------------------------+----------+
    | Image ID                             | Member ID                        | Status   |
    +--------------------------------------+----------------------------------+----------+
    | b8c9e456-b56e-47fa-a854-6addbfc8afdf | da83d841cb954a75a4b5818291a7a42b | accepted |
    +--------------------------------------+----------------------------------+----------+
    ```

Why does the instance's id index increase by 3 instead of 1 in the multi-node cluster environment?

When IBM Cloud Infrastructure Center is installed as a multi-node cluster, the backend MariaDB on the management nodes runs as a Galera cluster, and in a Galera cluster, all nodes may write data to the tables. Imagine a situation in which all nodes in the cluster try to insert rows into the same table at the same time. The result could potentially be duplicate values for any columns that use auto_increment. To avoid such conflicts, Galera increments values for those columns based on the number of nodes in the cluster. You can also refer to the Galera Cluster auto-increment documentation for more details.
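The effect can be sketched with a small simulation: with auto_increment_increment set to the cluster size (3) and a distinct auto_increment_offset per node, each node hands out a disjoint id sequence, which is why consecutive ids assigned through one node differ by 3:

```
# Simulate Galera's collision avoidance on a 3-node cluster: every node uses
# auto_increment_increment=3 plus its own auto_increment_offset (1, 2 or 3).
increment=3
for offset in 1 2 3; do
  ids=""
  for i in 0 1 2; do
    ids="$ids $((offset + i * increment))"
  done
  # node 1 assigns 1 4 7; node 2 assigns 2 5 8; node 3 assigns 3 6 9
  echo "node $offset assigns ids:$ids"
done
```

No id can ever be produced by two nodes at once, at the cost of gaps in the sequence.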

Compute

How to SSH into my deployed virtual machines?

We suggest using a **`keypair`** to log on to the virtual machine (a user/password can also be used). To use a **`keypair`**:
  1. Go to KeyPairs and, via Import Public Key, import your public key to IBM Cloud Infrastructure Center through the UI.
  2. When deploying a virtual machine, select the keypair you created.
  3. After the virtual machine is running, use the private key to log in to the virtual machine via SSH.
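As a sketch of step 1, a key pair can be generated locally with standard OpenSSH tooling; the generated icic_key.pub is what you would import through the UI (the file names and directory here are arbitrary examples):

```
# Generate a key pair in a scratch directory; the .pub file is imported
# in the UI, the private key stays on your workstation.
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -f "$keydir/icic_key" -N "" -q
ls "$keydir"
# After deployment, log in with the private key (<vm-ip> is a placeholder):
# ssh -i "$keydir/icic_key" root@<vm-ip>
```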

How do I achieve clone functions in IBM Cloud Infrastructure Center?

Some system management tools have a clone function, whereby you can create an identical virtual machine from an existing virtual machine. IBM Cloud Infrastructure Center doesn't provide clone functionality, but you can achieve the same results by using the snapshot functions. To do this, use the snapshot action to create an image of a running virtual machine, and use that image to deploy a new virtual machine. See working with images for more information.

Can I create multiple partitions using persistent volumes?

You can only create 2 partitions (disks) through the GUI, which means you can only have a root disk mounted as `/` and one additional partition. You can create more than 2 partitions through the command line interface (CLI); see [create multiple partitions](../troubleshooting/known_problems_and_solutions/general/create_multiple_partitions.html) for the steps.

What's the impact of performing life cycle actions on a virtual machine directly on the hypervisor?

IBM Cloud Infrastructure Center strongly recommends avoiding life cycle actions (start, stop, delete, etc.) directly on the hypervisor, because they make the status in IBM Cloud Infrastructure Center out of sync with the virtual machine's real status. If you do perform life cycle actions directly on the hypervisor, IBM Cloud Infrastructure Center syncs the real status of the virtual machine to its database status; for example, if you shut down a virtual machine through a CP command or virsh command, IBM Cloud Infrastructure Center marks the virtual machine as SHUTOFF shortly after.

What happens if the hypervisor is rebooted?

If a hypervisor is rebooted, all of its virtual machines are shut down. Because the compute node is down, the virtual machines' status becomes **`Unknown`**. After a successful reboot, when the compute node is up and running, it performs initialization and brings all the virtual machines back to the status they had before the hypervisor shut down; that is, the status in IBM Cloud Infrastructure Center's database is treated as the **`truth`**, and the virtual machines are set back to ACTIVE, SHUTOFF, and so on.

How to change the virtual machines' location directory on a compute node?

Ensure there are no existing virtual machines deployed by the IBM Cloud Infrastructure Center on the compute node before changing the virtual machines' default location directory.

  • For KVM Compute Nodes

    The virtual machines' default location directory on KVM compute nodes is /var/lib/libvirt/images/nova/instances. To change this directory, follow these steps:

    1. Change the owner of the new directory:
      ```
      chown -R nova:nova <new-directory-of-instances>
      ```
      
    2. Change the SELinux context type of the new directory to virt_var_lib_t:
      ```
      chcon -R -t virt_var_lib_t <new-directory-of-instances>
      ```
      
    3. Verify that the owner and SELinux context type of the new directory are changed correctly:
      ```
      ls -laZ <new-directory-of-instances>
      ```
      
    4. Edit the /etc/nova/nova.conf file on the compute node to update the instances_path configuration to the new directory. If this entry does not exist, add it:
      ```
      instances_path = <new-directory-of-instances>
      ```
      
    5. Restart the Nova service on the compute node:
    ```
    /opt/ibm/icic/bin/icic-services nova restart
    ```
    
  • For z/VM Compute Nodes

    The virtual machines' default location directory on z/VM compute nodes is /var/lib/nova/instances. To change this directory, follow these steps:

    1. Change the owner of the new directory:
      ```
      chown -R nova:nova <new-directory-of-instances>
      ```
      
    2. Edit the /etc/nova/nova.conf file on the compute node to update the instances_path configuration to the new directory. If this entry does not exist, add it:
      ```
      instances_path = <new-directory-of-instances>
      ```
      
    3. Restart the Nova service on the compute node:
    ```
    /opt/ibm/icic/bin/icic-services nova restart
    ```
    

Network

Does IBM Cloud Infrastructure Center support VxLAN?

Currently, IBM Cloud Infrastructure Center does not support VxLAN. You can create a VLAN, FLAT, or Geneve type network; see [planning network](../planning/planning_networks/planning_networks.html) for more information.

What's the maximum number of supported networks that can be created by IBM Cloud Infrastructure Center?

With FLAT mode, you can only create one network for all projects. With VLAN mode, each project can create one or more networks, and the maximum number of VLAN networks that can be created across all projects is 4094.
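The 4094 limit follows from the 802.1Q VLAN tag format, which can be checked with a little arithmetic:

```
# The VLAN ID field in an 802.1Q tag is 12 bits, giving 2^12 = 4096 values;
# IDs 0 and 4095 are reserved, leaving 4094 usable VLAN networks in total.
usable=$(( 4096 - 2 ))
echo "usable VLAN networks: $usable"
```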

Persistent Storage

How can the two be mapped together, to know which LUN in Linux is which volume in the IBM Cloud Infrastructure Center?

In the IBM® Cloud Infrastructure Center, only WWID of the volume can be seen on the **volume** details page;

Inside the guest VM, the LUN ID from lsluns command output can be seen, but not the WWID. How can the two be mapped together, to know which lun in Linux is which volume in the IBM® Cloud Infrastructure Center?

When the user uses the volume inside Linux, the LUN ID does not matter. On a production server, the multipathd service needs to be enabled in the VM, as indicated in the [steps to make images](../admintasks/working_with_images/create_image/zvm/index.html). With the multipathd service enabled, the user is advised to use the multipath device (e.g., mpatha, mpathb, or the WWID, depending on the multipath configuration) instead of the single path device (e.g., sda, sdb). So, the user needs to find the mapping between the volume WWID and the multipath device name.

Inside the VM, the user can get the WWID and the multipath device mapping with the multipath -ll command; more details can be found in the [RedHat knowledgebase](https://access.redhat.com/solutions/474593).

How to get the relationship of volumes created from a consistency group or consistency group snapshot?

Shown beneath is an example of how to clone a consistency group from a consistency group snapshot or an existing consistency group.

1. Create a consistency group, named cg1.
2. Create 2 volumes, named volume1 and volume2, and add them to cg1.
3. Create a group snapshot of cg1, named cg1-snapshot.
4. Create a consistency group from group snapshot cg1-snapshot, named clone_cg1-snapshot, which has the following 2 cloned volumes:
    - clone_cg1-snapshot-1
    - clone_cg1-snapshot-2
5. Create a consistency group from cg1, named clone-cg1, which has the following 2 cloned volumes:
    - clone-cg1-1
    - clone-cg1-2

When checking the details of the cloned volume clone_cg1-snapshot-1 or clone-cg1-1, there is no information showing the source volume. How do you find the source volume of a cloned volume? Currently, there is no mapping information between cloned and source volumes on the IBM Cloud Infrastructure Center UI. Instead, you can get the mapping information by the following steps.

  1. Run the command beneath on the management node to get the volume's source snapshot id. There are many fields listed: `snapshot_id` is the source consistency group snapshot id, and `source_volid` is its source volume id.
   ```
   openstack volume show <cloned volume name or id>
   ```

  2. You can check the source volume details using the command:
   ```
   openstack volume show <source_volid>
   ```
  3. You can check which VM the source volume is attached to with the command:
   ```
   openstack volume list | grep <source_volid>
   ```