IBM Cloud Private endpoints
An endpoint is a network destination address that is exposed by Kubernetes resources, such as services and ingresses. The following sections describe the external and internal endpoints that are created in an IBM® Cloud Private cluster.
The master and proxy endpoints are the external endpoints that are used for access from outside the cluster. You define these endpoints in the config.yaml file during installation. Typically, you create a fully qualified domain name (FQDN) with a corresponding DNS entry and CA-signed certificate, and apply it to the IBM Cloud Private master node. The master node then applies the FQDN to all master endpoints.
The external endpoints can be defined in one of the following ways:
- Single node endpoint: The IP address or FQDN of a single node
- HA Endpoints that use a virtual IP address (VIP): The VIP address or FQDN of multiple HA nodes
- HA Endpoints that use a cluster load balancer: The load balancer IP address or FQDN of multiple HA nodes
All platform APIs are accessed through the master node or nodes, either directly, through the management ingress, or through the management load balancer.
Following is the format of the URL to access the master endpoint:
https://<Cluster Master Host>:<Cluster Master API Port>/<API path>
- Cluster Master Host is one of the following values:
- Cluster Master API Port is one of the following values:
- Kubernetes API Port is one of the following values:
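As a concrete illustration, the master endpoint URL can be assembled from those values. The host, port, and API path below are hypothetical placeholders, not values from your cluster; substitute the settings from your config.yaml.

```shell
# Hypothetical placeholders; substitute the values from your config.yaml.
CLUSTER_MASTER_HOST="mycluster.icp.example.com"
CLUSTER_MASTER_API_PORT="8443"
API_PATH="kubernetes/api/v1"   # example path only

# Compose the master endpoint URL.
MASTER_URL="https://${CLUSTER_MASTER_HOST}:${CLUSTER_MASTER_API_PORT}/${API_PATH}"
echo "${MASTER_URL}"

# A call would then look like this (token acquisition not shown; -k is needed
# only when the certificate is self-signed):
# curl -k -H "Authorization: Bearer ${TOKEN}" "${MASTER_URL}/namespaces"
```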
The proxy endpoint consists of one or more ingress proxies that expose workloads that are deployed on IBM Cloud Private through the ingress resource.
Following is the format of the URL:
https://<Cluster Proxy Host>:<Cluster Proxy API Port>/<API path>
- Cluster Proxy Host is one or more of the following values:
- Cluster Proxy API Port is either of the following values:
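The proxy URL is assembled the same way. The host and port below are hypothetical placeholders, and the path is whatever path the workload's ingress resource defines.

```shell
# Hypothetical placeholders for the proxy host and HTTPS ingress port.
CLUSTER_PROXY_HOST="myproxy.icp.example.com"
CLUSTER_PROXY_PORT="443"
INGRESS_PATH="myapp"   # the path that the workload's ingress resource defines

# Compose the proxy endpoint URL for the workload.
PROXY_URL="https://${CLUSTER_PROXY_HOST}:${CLUSTER_PROXY_PORT}/${INGRESS_PATH}"
echo "${PROXY_URL}"
```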
Workloads can define services that are exposed as NodePorts. If a service uses the NodePort type, it bypasses the proxy endpoint.
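A NodePort URL is formed from any cluster node's address and the port that Kubernetes assigned to the service; the values below are hypothetical.

```shell
# Hypothetical values: any cluster node's IP and the nodePort that Kubernetes
# assigned to the service (find it with: kubectl get svc <name> -o yaml).
NODE_IP="203.0.113.10"
NODE_PORT="30080"

# Traffic to this URL reaches the service directly, bypassing the proxy endpoint.
NODEPORT_URL="http://${NODE_IP}:${NODE_PORT}/"
echo "${NODEPORT_URL}"
```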
Your IBM Cloud Private cluster has an internal network for workloads. Services must communicate with the workloads on the internal cluster network.
Unless the service API documentation specifies otherwise, services that need to communicate with platform services inside the cluster do so by using the internal management ingress service on the internal cluster network.
The endpoint to access platform services is
https://icp-management-ingress.kube-system:8443. This endpoint is the internal endpoint for the management ingress and is available from all namespaces.
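From inside any pod, a call to a platform service goes through that internal endpoint. The API path shown in the comment is a placeholder, not a documented route.

```shell
# The internal management ingress endpoint, reachable from any namespace.
MGMT_ENDPOINT="https://icp-management-ingress.kube-system:8443"
echo "${MGMT_ENDPOINT}"

# From inside a pod, a platform API call might look like this (the path and
# token variable are placeholders; -k is needed only for self-signed certs):
# curl -k -H "Authorization: Bearer ${TOKEN}" "${MGMT_ENDPOINT}/<API path>"
```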
For other services, following are the formats to access the service by using the service name in the local cluster:
- If the service is in the same namespace, the format is https://<service name>:<service port>
- If the service is in a different namespace, the format is https://<service name>.<namespace>:<service port>
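These names follow standard Kubernetes service DNS resolution. A sketch with a hypothetical service, namespace, and port:

```shell
# Hypothetical service, namespace, and port.
SERVICE="my-service"
NAMESPACE="my-namespace"
PORT="8443"

# Same namespace: the bare service name resolves on the cluster DNS.
SAME_NS_URL="https://${SERVICE}:${PORT}"
# Different namespace: qualify the service name with the namespace.
CROSS_NS_URL="https://${SERVICE}.${NAMESPACE}:${PORT}"
echo "${SAME_NS_URL}"
echo "${CROSS_NS_URL}"
```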
For more information, see the following articles:
Custom ingress URLs
By default, the platform-auth ingresses use localhost for Cross-Origin Resource Sharing (CORS). You can specify additional domains that the ingresses can use.
Add the domains to the ingress. See the following example:
add_header "Access-Control-Allow-Origin" "http://test.com, https://example.com";
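One way that directive might be attached to an ingress is through an NGINX configuration-snippet annotation. This is a sketch: the annotation key (nginx.ingress.kubernetes.io/configuration-snippet) and the target ingress name are assumptions about the ingress controller in use, so check your controller's documentation for the exact key.

```shell
# Write a patch that carries the add_header directive as an ingress annotation.
# The annotation key and target ingress name are assumptions, not documented
# IBM Cloud Private values.
cat > /tmp/cors-patch.yaml <<'EOF'
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      add_header "Access-Control-Allow-Origin" "https://example.com";
EOF
cat /tmp/cors-patch.yaml

# Apply with, for example:
# kubectl -n kube-system patch ingress platform-auth -p "$(cat /tmp/cors-patch.yaml)"
```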