Accessing client libraries and tools
The runtime exposes both a gRPC server and a REST server. The REST server is created with gRPC Gateway, which adds a REST layer in front of the gRPC server with minimal customization, along with Swagger definitions conforming to OpenAPI 2.0.
To make gRPC calls, clients can be generated in any language, or a command-line utility such as grpcurl can be used. Currently, gRPC client libraries for the watson_nlp runtime are available in two languages: Python and Node.js.
Note: The Model ID listed on the models catalog page corresponds to the mm-model-id value that clients must set when making requests.
Connect using built-in client libraries
When you have the runtime server up and running with the models you need loaded, you can use one of the built-in client libraries to run inferences against the runtime. See Watson Embedded AI runtime client libraries for more information.
Python
To use the Python client library:
pip install watson-nlp-runtime-client
Once the Python client is installed, make the following example call:
# If FIPS compliance is required, run the following before running
# this script
#
# export GRPC_SSL_CIPHER_SUITES="HIGH+ECDSA"
#
# NOTE: Setting os.environ["GRPC_SSL_CIPHER_SUITES"] will not
# correctly enable FIPS ciphers within the server
import grpc
from watson_nlp_runtime_client import (
    common_service_pb2,
    common_service_pb2_grpc,
    syntax_types_pb2,
)
# if no TLS
channel = grpc.insecure_channel("localhost:8085")
# if TLS (uncomment the lines below)
# with open('tls.crt', 'rb') as f:
#     root_certificate = f.read()
# channel = grpc.secure_channel("localhost:8085", credentials=grpc.ssl_channel_credentials(root_certificates=root_certificate))
stub = common_service_pb2_grpc.NlpServiceStub(channel)
request = common_service_pb2.SyntaxRequest(
    raw_document=syntax_types_pb2.RawDocument(text="This is a test"),
    parsers=("sentence", "token", "part_of_speech", "lemma", "dependency"),
)
response = stub.SyntaxPredict(
    request, metadata=[("mm-model-id", "syntax_izumo_lang_en_stock")]
)
print(response)
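As a variant of the same call, grpc channels also support the context-manager protocol, which closes the connection cleanly when the block exits. A minimal sketch, reusing the imports above:
# Same call as above, with the channel managed as a context manager
# so it is closed automatically when the block exits.
with grpc.insecure_channel("localhost:8085") as channel:
    stub = common_service_pb2_grpc.NlpServiceStub(channel)
    response = stub.SyntaxPredict(
        common_service_pb2.SyntaxRequest(
            raw_document=syntax_types_pb2.RawDocument(text="This is a test"),
            parsers=("token",),
        ),
        metadata=[("mm-model-id", "syntax_izumo_lang_en_stock")],
    )
    print(response)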
Node
To use the Node.js client library:
npm i @ibm/watson-nlp-runtime-client
Once the Node client is installed, make the following example call:
let messages = require("@ibm/watson-nlp-runtime-client/common-service_pb");
let services = require("@ibm/watson-nlp-runtime-client/common-service_grpc_pb");
let syntaxTypes = require("@ibm/watson-nlp-runtime-client/syntax-types_pb");
let grpc = require("@grpc/grpc-js");

function main() {
  let target = "localhost:8085";
  let client = new services.NlpServiceClient(
    target,
    grpc.credentials.createInsecure()
  );
  let rawDocument = new syntaxTypes.RawDocument();
  rawDocument.setText("We have a working runtime woohoo");
  let request = new messages.SyntaxRequest();
  request.setRawDocument(rawDocument);
  request.setParsersList(["token"]);
  let meta = new grpc.Metadata();
  meta.add("mm-model-id", "syntax_izumo_lang_en_stock");
  client.syntaxPredict(request, meta, function (err, response) {
    if (err) {
      console.error(err);
      return;
    }
    console.log(JSON.stringify(response.toObject(), null, 2));
  });
}

main();
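If the runtime is served over TLS, you can build SSL credentials from the server certificate instead of using insecure credentials. A minimal sketch, assuming the certificate has been fetched and saved locally as tls.crt:
// TLS variant: build SSL credentials from the server certificate.
// Assumes the certificate was fetched and saved locally as tls.crt.
let fs = require("fs");
let secureClient = new services.NlpServiceClient(
  "localhost:8085",
  grpc.credentials.createSsl(fs.readFileSync("tls.crt"))
);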
Connect using grpcurl
To connect to the runtime using a tool like grpcurl, or from an application in any language, you need to generate a protobuf stack from scratch; this includes the service RPC definitions as well as the protobuf data models for watson-core and the watson library.
Gathering Proto files
The Watson Runtime image comes with the full stack of generated protobuf files, and you can copy them from the image into your local machine:
docker create --name watson-runtime-protos cp.icr.io/cp/ai/watson-nlp-runtime:1.0 && \
  docker cp watson-runtime-protos:/app/protos/. protos && \
  docker rm watson-runtime-protos
This command creates a protos directory in your current working directory and copies all of the .proto files into it.
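To confirm the extraction succeeded, list the directory; common-service.proto, which is used below, should be among the files:
ls protos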
Once the .proto files have been gathered, you can use a Command Line Interface (CLI) tool or proceed with generating a client for your language of choice.
Example: Using grpcurl
To use grpcurl, point it at common-service.proto and pass a JSON representation of the data model input.
First, switch into the protos directory created previously:
cd protos
An example syntax Predict RPC would be:
grpcurl -plaintext -proto common-service.proto -d '{
  "raw_document": {
    "text": "This is a sample text"
  },
  "parsers": ["token"]
}' -H 'mm-model-id: syntax_izumo_lang_en_stock' localhost:8085 watson.runtime.nlp.v1.NlpService.SyntaxPredict
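If the server is running with TLS, drop -plaintext and point grpcurl at the server certificate instead. A sketch, assuming the certificate is saved locally as tls.crt:
grpcurl -cacert tls.crt -proto common-service.proto -d '{
  "raw_document": {
    "text": "This is a sample text"
  },
  "parsers": ["token"]
}' -H 'mm-model-id: syntax_izumo_lang_en_stock' localhost:8085 watson.runtime.nlp.v1.NlpService.SyntaxPredict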
Generating the full data model and a client
The gRPC documentation has plenty of good information about generating clients in different languages.
Example: Python
You will need to install grpcio-tools, which provides the protoc compiler and the gRPC plugin for Python:
pip install grpcio-tools
Then, with ${PROTOS_DIR} pointing at the protos directory from the previous section, create an output directory and generate the Python sources:
mkdir -p generated
python -m grpc_tools.protoc -I ${PROTOS_DIR} --python_out=generated --grpc_python_out=generated ${PROTOS_DIR}/*.proto
Once the Python files have been generated, you can use them to set up a connection and make a gRPC call.
NOTE: If the server is configured with TLS, fetch the tls.crt file and uncomment the TLS section shown below.
# If FIPS compliance is required, run the following before running
# this script
#
# export GRPC_SSL_CIPHER_SUITES="HIGH+ECDSA"
#
# NOTE: Setting os.environ["GRPC_SSL_CIPHER_SUITES"] will not
# correctly enable FIPS ciphers within the server
import os, sys, grpc
# Make the proto files importable for each other
sys.path.insert(0, os.path.realpath("generated"))
from generated import common_service_pb2, common_service_pb2_grpc, syntax_types_pb2
# if no TLS
channel = grpc.insecure_channel("localhost:8085")
# if TLS (uncomment the lines below)
# with open('tls.crt', 'rb') as f:
#     root_certificate = f.read()
# channel = grpc.secure_channel("localhost:8085", credentials=grpc.ssl_channel_credentials(root_certificates=root_certificate))
stub = common_service_pb2_grpc.NlpServiceStub(channel)
request = common_service_pb2.SyntaxRequest(
    raw_document=syntax_types_pb2.RawDocument(text="This is a test"),
    parsers=("sentence", "token", "part_of_speech", "lemma", "dependency"),
)
response = stub.SyntaxPredict(
    request, metadata=[("mm-model-id", "syntax_izumo_lang_en_stock")]
)
print(response)
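The response is a SyntaxPrediction message defined in the generated protobuf files. As an illustration, you could pull out just the token text and offsets like this; the field names (tokens, span.begin/end/text) are an assumption based on the stock syntax-types.proto, so verify them against your generated code:
# Sketch: iterate the predicted tokens and print their offsets and text.
# Field names (tokens, span.begin/end/text) assume the stock
# syntax-types.proto definitions; verify against your generated code.
for token in response.tokens:
    print(token.span.begin, token.span.end, token.span.text)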
Connect using REST
To see the available REST APIs, visit the Swagger page at localhost:8080/swagger/.
The example curl requests are in the respective model sections within NLP models.
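As an illustration, a REST call mirroring the gRPC syntax example above might look like the following. The path and header names here assume standard gRPC Gateway conventions (metadata is passed via Grpc-Metadata-* headers), so verify them against the Swagger page:
curl -X POST "http://localhost:8080/v1/watson.runtime.nlp.v1/NlpService/SyntaxPredict" \
  -H "accept: application/json" \
  -H "content-type: application/json" \
  -H "grpc-metadata-mm-model-id: syntax_izumo_lang_en_stock" \
  -d '{ "rawDocument": { "text": "This is a test" }, "parsers": ["token"] }'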