Enabling distributed virtual routing (DVR)
Review the following instructions to enable DVR in IBM® Cloud Manager with OpenStack.
Before you begin
- For DVR, the neutron-l3-agent service must run on each compute node, which means the L3 agent recipe creates an external network bridge on the compute node. To ensure that the compute node connection does not break during IBM Cloud Manager with OpenStack deployment, at least two working NICs are required on each compute node.
- To use DVR, ensure that the Linux kernel on the compute node supports network namespaces.
- If you use DVR only for east-west (E-W) traffic forwarding improvement, there are no extra considerations. However, for north-south (N-S) traffic forwarding, when you plan to associate floating IP addresses with instances, consider the following:
- The compute node NIC that hosts the Neutron external network bridge must have access to an external network.
- Associating one floating IP consumes two external (public) IP addresses: an additional address is used implicitly to support the DVR implementation. This can quickly exhaust the IP resources in your environment and increases the risk of IP address conflicts.
- If you plan to deploy an HA environment, a DVR router is not recommended because N-S traffic forwarding for DVR is not supported in an HA environment.
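As a quick sanity check before deployment, you can verify network namespace support on a compute node with a sketch like the following (this check is not part of the product tooling; it simply tests whether the running kernel exposes a network namespace for the current process):

```shell
# Check (on each compute node) that the kernel provides network
# namespace support, which the DVR L3 agent requires.
if [ -e /proc/self/ns/net ]; then
    echo "network namespace support: OK"
else
    echo "network namespace support: MISSING" >&2
    exit 1
fi
```

On any reasonably modern Linux kernel this reports OK; if it does not, DVR cannot run on that node.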
Procedure
Update your environment as normal, but make the following changes in the override_attributes section of the environment file.
- 'openstack'.'network'.'l3'.'router_distributed' = option: Possible values for option are "auto" and true (or the string "true"). "auto" sets all attributes to use DVR. true sets only the basic attributes under /etc/neutron to use DVR; in that case, you must still append "l2population" to "mechanism_drivers" in /etc/neutron/plugin.ini on the network node (controller) after deployment.
Important: It is not recommended to use DVR if you are planning to deploy an HA environment.