This blog post is contributed by Jungwhan Cho, a mobile architect from IBM with 10 years of experience in the mobile industry.
It’s a well-known fact that iterating releases for bug fixes and improvements is critical to an app’s success. Using monitoring tools and methods such as Google Analytics and A/B testing, developers find ways to improve the quality of their apps and to make it easier for users to do what the developers want them to do. However, these types of feedback don’t tell the whole story: how do users actually feel about your application?
This is where user reviews come in. In both the short and the long run, you can’t ignore user reviews; after all, users are spending their time rating and commenting on your app, and while some of those comments can be unpleasant, all of them are critical to the success of your application. Much of the time you don’t get that many reviews a day, which lets you determine the areas of improvement on which you should focus. However, when your app finally gets the attention it deserves, it becomes difficult to get an overview of how users feel about your latest release, and keying in on one area without prioritization makes you uneasy.
There is also a good chance that you are ignoring signs of a seriously weak area. If you are a techie like me, you may have a natural tendency to focus on technical issues and features while discarding comments of a subjective nature, such as poor usability or inconsistent UI design, that probably deserve more attention. So you hire someone to constantly monitor the quality of the app by testing it before each release and by collecting crash and analytics data. There is a good chance that your new hire, an aspiring graduate from an eminent university, will end up sacrificing his or her evenings going through hundreds or thousands of reviews, reorganizing them in an ever-evolving Excel file that refuses to yield to simplicity.
If you’ve ever wondered, amid all the hype around the semantic web and big data, why there isn’t a tool out there that gives you an overview of your user reviews, then you’ve come to the right place.
Having developed numerous applications, a few with relative success and others not so much, I believe my experience speaks for many developers out there who have found that iterating releases based on what users are saying is a critical factor in building a successful app.
Below is a snapshot I took while using IBM MQA during the open beta. It delivers exactly what it promises: it distills the large amount of information available in user reviews into scores on the areas that really do matter and that users tend to care about.
The report shows that while usability, content, and interoperability scored high, there isn’t sufficient information on elegance, pricing, and privacy to provide a quantified score.
I have not disclosed the name of the app, but you can tell from the report that it is free, has no in-app purchasable items, has little in the way of UI, and raises few concerns about privacy or security (which may mean a network connection is not required). Usability, performance, and interoperability, however, scored high relative to other apps. If you guessed that the app is a video player, you guessed right. Having gone through the reviews manually in the past, I am quite surprised at how accurately the report reflects the overall user reviews, and at how well it mirrors the iterations we had to go through in order to focus on what mattered in a video player.
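To make the idea concrete, here is a minimal sketch of how per-area review scoring could work. This is purely hypothetical and not IBM MQA’s actual implementation (which presumably uses proper sentiment and topic models rather than keyword matching); the quality areas, the keyword lists, and the MIN_MENTIONS threshold are all assumptions made up for illustration:

```python
from collections import defaultdict

# Hypothetical keyword lists per quality area. A real system would use
# trained sentiment/topic models, not naive substring matching.
AREA_KEYWORDS = {
    "usability": ["easy", "intuitive", "confusing", "hard to use"],
    "performance": ["fast", "slow", "lag", "crash"],
    "interoperability": ["chromecast", "airplay", "format", "codec"],
    "pricing": ["price", "expensive", "in-app purchase"],
    "privacy": ["privacy", "permissions", "tracking"],
}

# Keywords treated as positive signals; everything else counts as negative.
POSITIVE = {"easy", "intuitive", "fast", "chromecast", "airplay"}

# Below this many mentions, the area is reported as having
# insufficient information rather than a misleading score.
MIN_MENTIONS = 3

def score_reviews(reviews):
    """Bucket reviews into quality areas and compute a naive 0-100 score per area."""
    mentions = defaultdict(list)
    for review in reviews:
        text = review.lower()
        for area, keywords in AREA_KEYWORDS.items():
            for kw in keywords:
                if kw in text:
                    mentions[area].append(1.0 if kw in POSITIVE else 0.0)
    report = {}
    for area in AREA_KEYWORDS:
        hits = mentions[area]
        if len(hits) < MIN_MENTIONS:
            report[area] = "insufficient information"
        else:
            report[area] = round(100 * sum(hits) / len(hits))
    return report

reviews = [
    "Plays every codec I throw at it, super fast",
    "Easy to use and intuitive interface",
    "Crashes when I seek, so slow on large files",
    # ...in practice, hundreds or thousands more reviews...
]
print(score_reviews(reviews))
```

Even this toy version shows the two behaviors visible in the snapshot above: areas that users mention often get a quantified score, while rarely mentioned areas are explicitly flagged as lacking sufficient information.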
Having seen a few other apps’ user sentiment reports, it’s clear that once you are used to this feature, you will not want to go back to a world without it. Check it out for yourself at the IBM Mobile Quality Assurance site.