As developers, we're always looking for ways to make our lives easier: better tools, new SDKs, new services, and cognitive computing. Each offers a way to make tasks more efficient, more scalable, or simply easier to complete. This post introduces some new command-line tools for interacting with IBM Watson services.
In the first post of this series, we covered the first tutorial, which extends a Swift-based cognitive mobile app using AlchemyAPI. Now we turn to the second tutorial and show how you can use the Visual Recognition service.
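To make the Visual Recognition flow concrete, here is a minimal sketch of classifying an image over the service's v3 REST interface. This is an illustration, not the tutorial's own code: the endpoint URL, the `version` date, and the placeholder API key are assumptions, and the request is only prepared locally (not sent) so you can inspect what would go over the wire.

```python
# Sketch: building a Visual Recognition v3 classify request with requests.
# The endpoint, version date, and API key below are illustrative placeholders.
import requests

def build_classify_request(api_key, image_url, version="2016-05-20"):
    """Prepare (but do not send) a GET to the /v3/classify endpoint."""
    req = requests.Request(
        "GET",
        "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classify",
        params={"api_key": api_key, "url": image_url, "version": version},
    )
    return req.prepare()

prepared = build_classify_request("YOUR_API_KEY", "https://example.com/dog.jpg")
print(prepared.url)
# To actually send it: requests.Session().send(prepared).json()
```

Preparing the request separately from sending it makes the example easy to test and keeps your credentials handling in one place.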
Today, we’re introducing an update to our algorithms that significantly improves the performance of our Visual Recognition general tagging feature. On average, users will see more than three times as many tags returned per image, as well as greater specificity in those tags, thanks to a 150% increase in the system's active vocabulary.
The Custom Classifiers feature of the Watson Visual Recognition API enables users to adapt Watson to any custom visual content. Explore the examples and best practices below to help you get the most out of your custom classifiers.
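As a rough illustration of what training a custom classifier involves, the sketch below prepares (without sending) a "create classifier" request against the v3 REST interface: a form field naming the classifier plus zipped archives of positive and negative example images. The endpoint URL, field names, and file names are assumptions for illustration; consult the service's API reference for the exact shape.

```python
# Sketch: preparing (not sending) a create-classifier request for the
# Visual Recognition v3 API. Endpoint, key, and zip contents are placeholders.
import io
import requests

def build_create_classifier_request(api_key, name, positive_zip, negative_zip,
                                    version="2016-05-20"):
    """Prepare a multipart POST to /v3/classifiers.

    positive_zip / negative_zip are file-like objects holding zip archives
    of example images for and against the class being trained.
    """
    return requests.Request(
        "POST",
        "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classifiers",
        params={"api_key": api_key, "version": version},
        data={"name": name},
        files={
            # One "<class>_positive_examples" part per class being trained.
            "{0}_positive_examples".format(name): ("positive.zip", positive_zip),
            "negative_examples": ("negative.zip", negative_zip),
        },
    ).prepare()

prepared = build_create_classifier_request(
    "YOUR_API_KEY", "dogs",
    io.BytesIO(b"zip bytes here"), io.BytesIO(b"zip bytes here"))
print(prepared.method, prepared.url)
```

In practice each zip would contain a representative set of training images; the quality and variety of those examples is what the best-practices guidance below is about.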