As a market-leading full lifecycle API technology, IBM API Connect is used by many large enterprises to drive very large digital transformation projects.

For those customers, performance and scale matter enormously, especially when hundreds of people may be hitting their API Catalog at any one time.

Considering those enormous growth requirements from our IBM API Connect customers, we’ve spent time in Fix Pack 8 allowing these customers to scale out even further.

High numbers of items fetched and rendered were causing slower page performance.


During our root cause analysis, we determined that application performance was dictated by two key factors: the number of items fetched and the number of items rendered. The former was causing slow API response times, and the latter was causing poor rendering performance in the page itself.

The initial implementation did not have the following:

  • The capability to request a subset of results (i.e., “limit” criteria in queries).
  • The capability to “jump” to a specific result set (i.e., “offset” criteria in queries).
  • Datasets large enough to impact performance in a meaningful way.

As customers increased their API Connect usage, the performance of the application deteriorated in proportion to their data growth. Since the impact was felt on both the client and server sides of the product, we needed to coordinate with the backend teams to decide on a unified solution. On the client side, we decided to use the pagination component from the Carbon Design System, an IBM solution we already use in API Connect. On the server side, specific APIs were enhanced to support the limit and offset criteria in request queries.
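To illustrate what the limit/offset enhancement buys us, here is a minimal sketch of slicing a result set on the server. The function name, field names, and example endpoint are our own illustrative assumptions, not the actual API Connect request format:

```python
def paginate(items, limit=10, offset=0):
    """Serve one page of results plus the total count, mimicking a
    request such as GET /catalogs/{id}/apis?limit=10&offset=20."""
    return {
        "total_results": len(items),
        "limit": limit,
        "offset": offset,
        # Only the requested window is serialized and sent to the client.
        "results": items[offset:offset + limit],
    }

page = paginate(list(range(95)), limit=10, offset=20)
# page["results"] now holds items 20 through 29
```

The key point is that response size, and therefore transfer and render time, is bounded by the limit rather than by the total size of the dataset.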


Using the Carbon Pagination component, we were able to add page controls, next/previous buttons, and an items-per-page selector to various list pages. Modifying the request queries to fetch only a subset of database entries from the server instantly improved both the response and render times of each paginated page.

Basic pagination: Shows “total pages” and “total results,” and allows for “page jumps.”
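The mapping behind those controls is simple arithmetic. A sketch of it, assuming 1-based page numbers (the helper names are ours, not the product’s):

```python
import math

def page_to_query(page, page_size):
    """Translate the component's 1-based page number and items-per-page
    selection into the limit/offset criteria sent to the server."""
    return {"limit": page_size, "offset": (page - 1) * page_size}

def total_pages(total_results, page_size):
    """The server's total count is what lets the component display
    "total pages"/"total results" and offer page jumps."""
    return math.ceil(total_results / page_size)

page_to_query(3, 10)   # {"limit": 10, "offset": 20}
total_pages(95, 10)    # 10
```

This also shows why basic pagination depends on the enhanced APIs: without a total count from the server, neither the page total nor page jumps can be rendered.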

For pages where basic pagination wasn’t possible, we decided to create new pages that supported the pieces that needed paginating. For example, the application page in the API Manager previously used an accordion that listed data for each subscription. As the number of subscriptions increased for each application, the performance of the entire component would also degrade. To resolve this, the subscriptions feature was “split” into a new paginated page to keep the application page’s performance consistent.

New page: Subscriptions “split” from Applications into a new paginated page.

For pages with only partial pagination support, we created a custom “pseudo”-pagination solution. It allows paging using only “back” and “forward” buttons that compute where you should be. The algorithm recursively fetches results and caps the number of items returned at what we deem “acceptable” for maintaining performance. As a minimal working solution, the goal was to ensure the page remained usable when the “peak” number of results was returned.
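One way to support back/forward-only paging without asking the server for totals is to over-fetch by a single item. This is a simplified sketch of that idea under our own assumptions (the real implementation fetches recursively; the names and page size here are illustrative):

```python
PAGE_SIZE = 10  # the number of items we deem "acceptable" per page

def fetch_page(fetch, offset=0):
    """fetch(limit, offset) stands in for the underlying API call.
    Requesting one item beyond the page size reveals whether a
    "forward" page exists, without ever computing a total count."""
    rows = fetch(PAGE_SIZE + 1, offset)
    has_next = len(rows) > PAGE_SIZE
    return {
        "results": rows[:PAGE_SIZE],
        "has_back": offset > 0,
        "has_next": has_next,
        "back_offset": max(offset - PAGE_SIZE, 0),
        "next_offset": offset + PAGE_SIZE if has_next else None,
    }

# Example against an in-memory stand-in for the server:
data = list(range(25))
fake_fetch = lambda limit, offset: data[offset:offset + limit]
```

Because the client only ever learns whether one more page exists, totals and page jumps are inherently unavailable, which is exactly the trade-off described below.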

Due to the limitations of this approach, we had to remove “page jumps,” “total pages,” and “total results” from pages using this method. This is a temporary measure, and affected pages will be upgraded to basic pagination once the relevant API enhancements are available.

“Pseudo”-pagination: Does not show “total pages” or “total results” and doesn’t allow for “page jumps.”


Before pagination, the time it took to load a page ranged from slow to effectively infinite: more data in the database meant slower load times. After pagination, performance improved substantially because far fewer items are returned per request. Most importantly, page rendering was decoupled from the total number of items in the database.

The following table represents the fruits of our labour:

Learn more about IBM API Connect.

