Working with NLP models

The easiest way to get started with Watson NLP is to run a pre-trained model. Once the runtime is running with the models mounted, you can send an inference request to it using one of the client libraries or tools.
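As a minimal sketch of such a request: the helper below builds a REST inference call against a locally running runtime. The host, port, endpoint path (`SyntaxPredict`), model ID, and the `Grpc-Metadata-mm-model-id` header are assumptions based on a common deployment layout; check your runtime's configuration and the individual model topics for the exact values.

```python
import json

def build_inference_request(host, model_id, text):
    """Return (url, headers, body) for a single-model inference call.

    Endpoint path, port, and header name are assumptions and may
    differ in your deployment.
    """
    url = f"http://{host}:8080/v1/watson.runtime.nlp.v1/NlpService/SyntaxPredict"
    headers = {
        "Content-Type": "application/json",
        # The runtime routes each request to exactly one model via this header.
        "Grpc-Metadata-mm-model-id": model_id,
    }
    body = json.dumps({"raw_document": {"text": text}})
    return url, headers, body

# Example: target a stock English syntax model (model ID is illustrative).
url, headers, body = build_inference_request(
    "localhost", "syntax_izumo_lang_en_stock", "Hello, Watson NLP!"
)
# Send with any HTTP client, e.g.:
#   urllib.request.urlopen(urllib.request.Request(url, body.encode(), headers))
```

Because each request carries exactly one model ID in its header, calling several models means issuing several requests, one per model.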

NOTE: Each inference request (API call) is tied to a single model; the requests that each model supports are described in the individual model topics. For billing purposes, each request counts as an independent API call.

Available models

The available model images are listed in the models catalog section.