Recently, we announced new tutorials to help you get started on IBM Cloud. Continuing our effort to deliver structured, well-defined tutorials, we are adding a mobile section.
Imagine you are in a conversation with a chatbot and feel that the human touch is completely missing because the bot starts its dialog with the usual "Hi" or "Hello". You may want to personalize the conversation by adding the name of the logged-in user to that plain "Hi" or "Hello". Ever thought of this? It's not just personalization: how about greeting the user appropriately based on the time of day they invoke your chat application? And how about passing values back and forth during a conversation, between dialog nodes or from the application to a node?
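As a rough illustration (not from the original post), here is a minimal Node.js sketch using the watson-developer-cloud SDK. The credentials, workspace ID, and context variable names are placeholders: the application passes the logged-in user's name and a time-of-day greeting through the context object, so a dialog node can respond with text like "$greeting, $username!".

```javascript
// Minimal sketch: pass the user's name and a time-of-day greeting to the
// Conversation service via the context object. Credentials and workspace
// ID are placeholders.
const ConversationV1 = require('watson-developer-cloud/conversation/v1');

const conversation = new ConversationV1({
  username: '<service-username>',
  password: '<service-password>',
  version_date: '2017-05-26'
});

// Derive a greeting from the server's local time.
function greetingForHour(hour) {
  if (hour < 12) return 'Good morning';
  if (hour < 17) return 'Good afternoon';
  return 'Good evening';
}

conversation.message({
  workspace_id: '<workspace-id>',
  input: { text: 'Hi' },
  context: {
    username: 'Jane',                                // name of the logged-in user
    greeting: greetingForHour(new Date().getHours()) // e.g. "Good morning"
  }
}, (err, response) => {
  if (err) return console.error(err);
  // A dialog node can reference $username and $greeting in its response text.
  console.log(response.output.text.join('\n'));
});
```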
The IBM Mobile Foundation adapter auto-generation framework can automatically generate adapters for a back-end system or microservice from its OpenAPI specification. The OpenAPI specification of the microservice can be provided either as a JSON file or as a YAML file. The adapter-generation framework takes the OpenAPI specification file as input, then generates and downloads the adapter, which can be deployed to Mobile Foundation.
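For a sense of what such an input file looks like, here is a minimal, illustrative Swagger 2.0 YAML sketch; the host, path, and schema are placeholders and not taken from any specific back end.

```yaml
# Illustrative Swagger 2.0 snippet; host, paths, and schema are placeholders.
swagger: "2.0"
info:
  title: Inventory microservice
  version: "1.0.0"
host: my-microservice.example.com
basePath: /api
schemes:
  - https
paths:
  /items:
    get:
      summary: List inventory items
      produces:
        - application/json
      responses:
        "200":
          description: A JSON array of items
          schema:
            type: array
            items:
              type: object
```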
In this post, you will take a more in-depth look at the investment insights application: how it is built, the request and response of each service, and how the financial services are chained with Watson Discovery to capture how news articles affect the shock value. The shock value is then used with the Simulated Instrument Analytics service on IBM Cloud to predict the stressed price.
In this post, you will learn how to model and generate an OpenAPI (Swagger 2.0) specification using API Connect on IBM Cloud. You will also draft, secure, and publish an API that talks to a NoSQL database, in this case Cloudant.
Distinguishing between two speakers in a conversation is difficult, especially when you are hearing them remotely or for the first time. The same can be true when multiple voices interact with AI/cognitive systems, virtual assistants, and home assistants like Alexa or Google Home. To address this, Watson's Speech to Text API has been enhanced to support real-time speaker diarization.
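As a hedged illustration, a Node.js request for speaker labels might look like the sketch below, using the watson-developer-cloud SDK. The credentials, audio file, and model are placeholders, and speaker labels are available only for certain models and languages. Each returned label attributes a time range of the audio to a speaker ID.

```javascript
// Minimal sketch: request speaker labels so each time range of the
// transcript is attributed to a speaker. Placeholders throughout.
const fs = require('fs');
const SpeechToTextV1 = require('watson-developer-cloud/speech-to-text/v1');

const speechToText = new SpeechToTextV1({
  username: '<service-username>',
  password: '<service-password>'
});

speechToText.recognize({
  audio: fs.createReadStream('conversation.wav'),
  content_type: 'audio/wav',
  model: 'en-US_NarrowbandModel',
  speaker_labels: true,   // enables diarization
  timestamps: true
}, (err, result) => {
  if (err) return console.error(err);
  // Each entry maps a time range to a speaker id,
  // e.g. { from, to, speaker, confidence }.
  console.log(JSON.stringify(result.speaker_labels, null, 2));
});
```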
This three-part series of posts helps you understand the features of serverless computing in depth via OpenWhisk. OpenWhisk offers an easy way to chain services in a sequence, where the output of the first action acts as the input to the second action, and so on. This post describes the resource requirements for completing this lab.
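For a flavor of how that chaining looks with the OpenWhisk CLI, the sketch below creates two actions and combines them into a sequence; the action and file names are placeholders.

```bash
# Create two ordinary actions from local JavaScript files (placeholder names).
wsk action create myTransform transform.js
wsk action create myAnalyze analyze.js

# Chain them: the output of myTransform becomes the input of myAnalyze.
wsk action create mySequence --sequence myTransform,myAnalyze

# Invoke the sequence as if it were a single action.
wsk action invoke mySequence --blocking --result --param text "hello"
```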
Serverless computing and Watson service chaining via OpenWhisk, Part 3 of 3: Expose an action or sequence
By now, you should know what OpenWhisk is and how to leverage an OpenWhisk sequence to chain Watson services. You should also have created Swift and NodeJS actions for transforming the JSON into the required formats. In this post, you will learn how to expose an action or a sequence (a chain of actions) as a RESTful endpoint via the OpenWhisk API Gateway and the OpenWhisk CLI.
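At a glance, the CLI flow might look like the following sketch; the base path, endpoint path, and action name are placeholders.

```bash
# Make the sequence callable over HTTP by turning it into a web action.
wsk action update mySequence --web true

# Expose it through the OpenWhisk API Gateway (placeholder paths).
wsk api create /watson /analyze post mySequence --response-type json

# Verify the endpoint that was created.
wsk api list /watson
```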
In Part 1 of this series, you learned the basics of serverless computing and the building blocks behind OpenWhisk. In this post, you will create Watson services and add them to an OpenWhisk sequence on IBM Bluemix. Because this series is all about chaining Watson services using OpenWhisk, you will start by creating three Watson services.
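One way to create service instances from the command line is with the Cloud Foundry CLI; the service names and plans below are examples only, so check the catalog (cf marketplace) for the exact entries and plans your sequence needs.

```bash
# List the services and plans available in the catalog.
cf marketplace

# Create example Watson service instances (names and plans are illustrative).
cf create-service conversation free my-conversation
cf create-service tone_analyzer lite my-tone-analyzer
cf create-service language_translator lite my-translator
```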