IBM Mobile Foundation is excited to introduce a feature to manage the distribution of CoreML models.
Under-the-hood improvements to the general tagging feature of Visual Recognition have expanded its ability to accurately recognize the overall scene of an image.
Introducing the IBM Watson Visual Recognition food model, which provides enhanced specificity and accuracy for food items, drawing on over 2,000 food-specific tags.
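As a rough sketch of how you might call the food model, the snippet below hits the Visual Recognition v3 `classify` endpoint with `classifier_ids=food` and picks out the top-scoring tag. The endpoint URL, the version date, and the `YOUR_API_KEY` placeholder are assumptions based on the v3 REST API at the time of writing; check the service documentation for your instance's current credentials and version.

```python
import json
import urllib.parse
import urllib.request

# Assumed v3 classify endpoint for Watson Visual Recognition.
VR_URL = "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classify"

def classify_food(image_url, api_key):
    """Ask the food model to classify the image at image_url (returns parsed JSON)."""
    params = urllib.parse.urlencode({
        "api_key": api_key,            # placeholder: supply your own key
        "version": "2016-05-20",       # assumed v3 version date
        "classifier_ids": "food",      # select the food model instead of the default
        "url": image_url,
    })
    with urllib.request.urlopen(f"{VR_URL}?{params}") as resp:
        return json.load(resp)

def top_food_class(response):
    """Return (class, score) for the highest-scoring tag in a classify response."""
    classes = response["images"][0]["classifiers"][0]["classes"]
    best = max(classes, key=lambda c: c["score"])
    return best["class"], best["score"]
```

For a photo of a pizza, `top_food_class` on the service's response would typically surface a specific tag such as `"pizza"` rather than a generic one like `"food"`.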
This year, with the opportunity of a new working location within the new press building, our team undertook an exercise to re-imagine the client, member, and guest experience for the IBM at The Masters program. Our focus was to bring IBM’s cognitive ambition and capabilities to the forefront, demonstrating that Watson can be applied to every activity and touchpoint of an experience and improve it. The idea that formed from this approach became known as our “Cognitive Room”.
The Aero Expo, the Global Show for General Aviation, is running in my hometown of Friedrichshafen from today until the weekend. One of the expo and conference topics is drones of the future (AERODrones UAS Expo). Drones, or UAVs (Unmanned Aerial Vehicles), have been and remain a hot topic for IBM and its customers. Let me give a brief overview of some interesting work where drones, artificial intelligence, analytics, database systems, the Internet of Things (IoT), and the IBM Cloud come together.
The IBM Watson Visual Recognition service can tag images for content, recognize faces, and find similar images, but that’s not all it can do. If the condition you want to identify occupies only a small region of a larger image, the image as a whole might not be classified with high enough confidence, and a positive result could be missed. This post shows you how to improve Watson Visual Recognition’s ability to detect finer details.
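One common way to attack this problem (a sketch, not necessarily the post's exact method) is to split a large image into overlapping tiles and classify each tile separately, so a small region of interest fills a much larger share of the frame it is scored in. The tile size and overlap below are illustrative defaults, not values from the post.

```python
def tile_boxes(width, height, tile=640, overlap=64):
    """Compute (left, upper, right, lower) crop boxes that cover an image.

    Consecutive tiles overlap by `overlap` pixels so a detail straddling a
    tile boundary still appears whole in at least one tile.
    """
    step = tile - overlap
    boxes = []
    for top in range(0, max(height - overlap, 1), step):
        for left in range(0, max(width - overlap, 1), step):
            boxes.append((left, top,
                          min(left + tile, width),    # clamp to image edge
                          min(top + tile, height)))
    return boxes
```

Each box can then be cropped out (for example with Pillow's `Image.crop`) and submitted to the classify endpoint on its own; a positive score on any tile flags the whole image.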