Post-release quality must be gauged sooner: at Day 1, Week 1, and Month 1. In the on-premises segment, we take much longer, typically comparing release performance when a release reaches one year of age. We monitor quality all year, but the formal release-to-release comparison happens a year after release, because we wait until a significant percentage of the customer base has upgraded to the new release.

That wait is unnecessary in the cloud, where all users upgrade simultaneously and only one release is in production at any given time. We can and should compare releases after just one week, and again after one month. Do we receive fewer support calls per user? What do subscribers call about?

That quicker turnaround provides much better feedback to the product team, which still has the release project fresh in mind and can design more meaningful improvements as part of the continuous improvement effort. The rapid feedback cycle is a tremendous advance for teams that used to deliver on-premises software: getting almost instant feedback, rather than waiting a year, lets them respond far more rapidly to customer needs.
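As a rough illustration, the week-one and month-one comparison can be as simple as normalizing support-call volume per user so releases of different sizes compare fairly. This is a minimal sketch; the release names, checkpoints, and numbers below are all made up:

```python
# Hypothetical sketch: comparing release quality by support calls per user
# at fixed checkpoints (day 1, week 1, month 1). All data is illustrative.

def calls_per_thousand_users(support_calls: int, active_users: int) -> float:
    """Normalize call volume so releases with different user counts compare fairly."""
    return support_calls / active_users * 1000

# Illustrative checkpoint data: (support calls, active users) per release.
checkpoints = {
    "release_A": {"day_1": (120, 50_000), "week_1": (800, 50_000), "month_1": (2_400, 50_000)},
    "release_B": {"day_1": (90, 52_000), "week_1": (610, 52_000), "month_1": (2_050, 52_000)},
}

for period in ("day_1", "week_1", "month_1"):
    a = calls_per_thousand_users(*checkpoints["release_A"][period])
    b = calls_per_thousand_users(*checkpoints["release_B"][period])
    trend = "improved" if b < a else "regressed"
    print(f"{period}: A={a:.1f}, B={b:.1f} calls per 1,000 users -> {trend}")
```

In the cloud, the same query can run a week after release; on-premises, the same comparison is only meaningful once enough customers have upgraded.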
PS: To filter the blog to just the ‘Cloud Difference’ series, click the “cloud_difference” tag below the title of any post in the series.