The Watson Internet of Things Platform Lite Plan's new quota is now live! Free of charge, you get:
The IBM Watson Visual Recognition service can tag images for content, recognize faces, and find similar images, but that's not all it can do. If the condition you want to identify appears only within a small region of a larger image, the entire image might not be classified with high enough confidence, and a positive result could be missed. This post shows you how to improve Watson Visual Recognition's ability to detect finer details.
This post guides you through integrating Watson Speech-to-Text and Mobile Analytics into a native Android app. Both are available as services on IBM Cloud (Bluemix). You will integrate them into an existing chatbot using the Watson Developer Android SDK and the Mobile Analytics client SDK, with a minimal amount of code.
The Chinese proverb above expresses, in three lines, what multisensory learning is all about. The human brain has evolved to learn and operate optimally in multisensory environments: when we engage more senses, the brain forms more cognitive connections and associations, which helps it process new information. In fact, we learn better when using multiple senses at once. It's logical! Let's use Dragon Creative Enterprise Solutions Ltd. (DCESL) to help us build some new cognitive connections of our own.
Use the OpenWhisk serverless platform and Watson cognitive services such as Visual Recognition, Speech to Text, and Natural Language Understanding to automatically discover dark data hidden in videos.