Installing the Developer Portal subsystem
Install the Developer Portal subsystem.
About this task
- To enable effective high availability for your portal service, you need latency of less than 50 ms between all portal-db pods to avoid the risk of performance degradation. Servers with uniform specifications are required, because write actions are synchronous across the cluster of portal-db pods and therefore occur at the speed of the slowest pod. Three servers are recommended in each cluster of portal-db pods for the high availability configuration. The three servers can be situated in the same availability zone, or across three availability zones for the best availability. However, you can also configure high availability with two availability zones.
- The portal does not perform certain operations if the free space on certain volumes falls below predefined limits. In particular, the databaseVolumeClaimTemplate needs at least 4 GB of free space to create new sites, restore site backups, or upgrade or change the URL of existing sites, and the webVolumeClaimTemplate needs at least 256 MB of free space for the same operations. A volume sizing sketch is shown after this list.
- Ensure that your kernel or Kubernetes node has the value of its inotify watches set high enough so that the Developer Portal can monitor and maintain the files for each Developer Portal site. If the value is set too low, the Developer Portal containers might fail to start or go into a non-ready state when this limit is reached. If you have many Developer Portal sites, or if your sites contain a lot of content, for example, many custom modules and themes, then a larger number of inotify watches is required. You can start with a value of 65,000, but for large deployments this value might need to be as high as 1,000,000. The Developer Portal containers take inotify watches only when they need them; the full number is not reserved or held, so it is acceptable to set this value high. A kernel limits sketch covering this and the next two items is shown after this list.
- Ensure that your kernel or Kubernetes node has a value of nproc (maximum number of processes) that applies to the user ID that is assigned to the portal pods, and that is high enough to allow all of the portal processes to run. For smaller installations this might be as low as 16384, but for larger installations that have more concurrent web calls, you might need as many as 125205. If the number is too low, you will see errors like "fork: retry: Resource temporarily unavailable" in the portal logs. This value might need to be even higher if other, non-portal, pods share the same user ID.
- Ensure that your kernel or Kubernetes node has a value of nofiles (maximum number of open file descriptors) that applies to the user ID that is assigned to the portal pods, and that is high enough to allow the portal to open all of the files that it requires. For smaller installations this might be as low as 16384, but for larger installations that have more concurrent web calls and portal websites, you might need as many as 1048576. If the number is too low, you will see errors like "too many open files" in the portal logs. This value might need to be even higher if other, non-portal, pods share the same user ID.
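The following is a minimal volume sizing sketch. The databaseVolumeClaimTemplate and webVolumeClaimTemplate names come from this section; the apiVersion, kind, and the storageClassName and volumeSize fields and values are assumptions for illustration only, so check the custom resource reference for your release and size the volumes well above the free-space floors described above.

    # Illustrative Portal subsystem CR excerpt; structure and values are
    # assumptions, not a definitive specification for your release.
    apiVersion: portal.apiconnect.ibm.com/v1beta1
    kind: PortalCluster
    metadata:
      name: portal
    spec:
      # Needs at least 4 GB free to create, restore, or upgrade sites.
      databaseVolumeClaimTemplate:
        storageClassName: local-storage   # assumed storage class name
        volumeSize: 15Gi                  # example size, well above the 4 GB floor
      # Needs at least 256 MB free for the same operations.
      webVolumeClaimTemplate:
        storageClassName: local-storage
        volumeSize: 8Gi                   # example size, well above the 256 MB floor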
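The following is a minimal kernel limits sketch, assuming a Linux worker node that you can administer directly. The fs.inotify.max_user_watches parameter is the usual kernel setting behind inotify watches, and nproc and nofile are usually enforced per user ID through ulimit or /etc/security/limits.conf; the file name 99-portal.conf and the user name portaluser are illustrative placeholders. Verify the exact mechanism for your distribution and Kubernetes platform.

    # Check the current inotify watch limit on the node.
    sysctl fs.inotify.max_user_watches

    # Raise it for the running kernel; start at 65,000 and increase for
    # large deployments (up to 1,000,000 as described above).
    sudo sysctl -w fs.inotify.max_user_watches=65000

    # Persist the setting across reboots (file name is illustrative).
    echo 'fs.inotify.max_user_watches = 65000' | sudo tee /etc/sysctl.d/99-portal.conf
    sudo sysctl --system

    # Check the process and open-file limits that apply to the current user ID.
    ulimit -u   # nproc: maximum number of processes
    ulimit -n   # nofile: maximum number of open file descriptors

    # Example /etc/security/limits.conf entries for the portal pods' user
    # (portaluser is a hypothetical account name):
    #   portaluser  soft  nproc   16384
    #   portaluser  hard  nproc   125205
    #   portaluser  soft  nofile  16384
    #   portaluser  hard  nofile  1048576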
The portal endpoint values are used when you configure a portal service in the Cloud Manager. See Registering a Portal service.
- portalAdminEndpoint - The Management Endpoint that is defined in Cloud Manager, which is used for communicating with API Manager.
- portalUIEndpoint - The portal website URL that is defined in Cloud Manager. It determines the URL for the site that is created for each catalog, and is used for public access to the portal from a browser.
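As a minimal sketch, the two endpoints are typically declared as hostnames in the Portal subsystem CR. Only the portalAdminEndpoint and portalUIEndpoint names come from this section; the hosts, name, and secretName structure and the example hostnames and secret names are assumptions to adapt to your environment.

    # Illustrative Portal subsystem CR excerpt; adapt hostnames and secrets.
    spec:
      # Management Endpoint registered in Cloud Manager; used by API Manager.
      portalAdminEndpoint:
        hosts:
        - name: api.portal.example.com   # illustrative hostname
          secretName: portal-admin       # illustrative TLS secret
      # Portal website URL registered in Cloud Manager; public browser access.
      portalUIEndpoint:
        hosts:
        - name: portal.example.com       # illustrative hostname
          secretName: portal-web         # illustrative TLS secret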
Procedure
What to do next
If you are creating a new deployment of API Connect, install other subsystems as needed.
When you have completed the installation of all the required API Connect subsystems, you can proceed to define your API Connect configuration by using the API Connect Cloud Manager; refer to the Cloud Manager configuration checklist.