Find answers to frequently asked questions about Cloud Pak for Data as a Service.
See Service plan changes and deprecations.
Go to Cloud Pak for Data as a Service.
For the URLs in other regions, see Which regions can I provision Cloud Pak for Data as a Service in?.
Yes, when you sign up for Cloud Pak for Data as a Service, you automatically provision Lite versions of the core services, which are free. Go to Cloud Pak for Data as a Service.
Currently, you can provision Cloud Pak for Data as a Service and its core services, Watson Studio, Watson Machine Learning, and Watson Knowledge Catalog, in these IBM Cloud regions:
However, some regions other than the Dallas region have limitations. See Regional limitations.
See IBM Cloud Regions.
You can provision other services to use with Watson Studio in any region. See Creating and managing IBM Cloud services.
Cloud Pak for Data as a Service is translated into multiple languages. The language you see when using the product or viewing the documentation depends on the locale setting of your browser. To change the language for the product user interface and this documentation, change the locale for your browser in the browser settings or preferences.
Language support for Cloud Pak for Data as a Service and this documentation includes these languages:
The translated documentation has these limitations:
For some offering plans, you can provision Watson Studio and Watson Knowledge Catalog services in multiple IBM Cloud service regions. However, your projects, catalogs, and data are specific to the region in which they were saved and can be accessed only from your services in that region. You must switch your region to see the projects, catalogs, and data from that region.
These are the recommended web browsers for Cloud Pak for Data as a Service:
When you're ready to upgrade Cloud Pak for Data as a Service or any of the services that you created in Cloud Pak for Data as a Service, you can upgrade in place without losing any of your work or data.
You must be the owner or administrator of the IBM Cloud account for a service to upgrade it.
If you try to provision an instance of a service, for example, the Visual Recognition service, you might get this error message:
You do not have the required permission to create an instance. You must be assigned the IAM Editor role or Operator role or higher. Contact the account owner to update your access.
To get the IAM Editor role:
If you have an enterprise account or work in an IBM Cloud that you don't own, you might need to ask an account owner to give you the Watson Knowledge Catalog service Admin role or the IBM Cloud account administrator role.
To find your IBM Cloud account owner:
The account owner has the word owner displayed next to their name. To understand roles, see Roles for Cloud Pak for Data as a Service. To determine your roles, see Determine your roles.
Cloud Pak for Data as a Service provides a single, unified interface for a set of core IBM Cloud services and their related services. The core services are Watson Studio, Watson Machine Learning, and Watson Knowledge Catalog. You can add other services to supplement Watson Studio, store your data, or develop Watson applications.
See Overview of Cloud Pak for Data as a Service.
Your product name changed to Cloud Pak for Data as a Service because you have Watson Studio, Watson Machine Learning, or Watson Knowledge Catalog, plus another service in the Cloud Pak for Data as a Service services catalog, such as Cognos Dashboards Embedded. The features, plans, and costs of your services did not change.
Watson Studio is a single service, while Cloud Pak for Data as a Service is a set of services, which includes Watson Studio as one of its core services. The features of Watson Studio are the same in both cases.
See Overview of Cloud Pak for Data as a Service.
Cloud Pak for Data 4.0 is software that you must install and maintain, while Cloud Pak for Data as a Service is a set of IBM Cloud services that are fully managed by IBM.
See Comparison between Cloud Pak for Data deployments.
Yes, Cloud Pak for Data as a Service has a subscription plan. See Upgrading to a Cloud Pak for Data as a Service subscription account.
Cloud Pak for Data as a Service supports many data sources. See Connection types.
Log in at Cloud Pak for Data as a Service to go to your home page, then click New Project. Watch the video about creating a project to see how to create both a blank project and a project from a file.
If you are seeing an error that says that the ZIP file doesn't contain a Watson Studio project, you might be trying to import a ZIP file from a different platform.
You can import a project from a file on your local system only if the ZIP file that you select was exported from a Cloud Pak for Data as a Service project as a compressed file. You cannot import a compressed file that was exported from a project in IBM Cloud Pak for Data.
Spark environments are provided under Watson Studio. A Spark environment offers Spark kernels as a service (SparkR, PySpark and Scala) and is based on Armada/Kubernetes. The underlying Armada is shared across multiple users. However each kernel gets a dedicated Spark cluster and Spark executors. You can change the Spark configurations, and can specify the size of the executors and the number of executors per kernel. A Spark environment is more serverless in nature. In contrast, IBM Analytics Engine offers Hortonworks Data Platform on IBM Cloud. You get one VM per cluster node and your own local HDFS. You get Spark and the entire Hadoop ecosystem. You are given shell access and can also create notebooks.
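For example, a minimal sketch of running Spark code in such a notebook, assuming the standard PySpark setup in which a session is pre-created for the kernel:

```python
# In a Watson Studio Spark environment, each notebook kernel gets its own
# dedicated Spark cluster. getOrCreate() reuses the session that the
# environment provides for the kernel (assumption: standard PySpark
# notebook behavior).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
print(spark.version)                          # Spark version for this kernel
print(spark.sparkContext.defaultParallelism)  # reflects your executor sizing
```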
No. Compute, data resources, and billing can't be shared in a Spark environment. Whereas with an Anaconda environment you can open a notebook, stop its kernel, and then start a second notebook in the same environment that shares the still-running runtime, you cannot do this with a Spark environment. Spark environment runtimes can't be shared. Every notebook kernel has its own dedicated Spark cluster. If you create two notebooks that use the same environment definition, two runtimes, each with its own kernel, are started, which means that two clusters, each with its own set of Spark executors, are created.
You can't load data files larger than 5 GB to your project from Watson Studio. If your files are larger, you must use the Cloud Object Storage API and load the data in multiple parts. See the curl commands for working with Cloud Object Storage directly on IBM Cloud.
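As an alternative to the curl commands, here is a minimal sketch of a multipart upload with the IBM COS SDK for Python (ibm-cos-sdk); the endpoint, credentials, bucket, and file names are placeholders:

```python
# Upload a large file to IBM Cloud Object Storage in multiple parts.
import ibm_boto3
from ibm_boto3.s3.transfer import TransferConfig
from ibm_botocore.client import Config

cos = ibm_boto3.client(
    "s3",
    ibm_api_key_id="YOUR_API_KEY",                       # placeholder
    ibm_service_instance_id="YOUR_SERVICE_INSTANCE_ID",  # placeholder
    config=Config(signature_version="oauth"),
    endpoint_url="https://s3.us.cloud-object-storage.appdomain.cloud",
)

# Files larger than the threshold are split into parts and uploaded in parallel.
transfer_config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # start multipart uploads at 100 MB
    multipart_chunksize=100 * 1024 * 1024,  # 100 MB per part
)

cos.upload_file(
    Filename="large_dataset.csv",
    Bucket="your-project-bucket",            # placeholder
    Key="large_dataset.csv",
    Config=transfer_config,
)
```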
The machine learning job you submitted is still in the "Pending" state because it is waiting for enough resources to start running. This can happen if resources are currently in high demand or if you submitted a large number of concurrent requests and your newer requests are waiting for the ones submitted earlier to complete execution.
The tool you need depends on your type of data, what you want to do with your data, and how much automation you want. To find the right tool, see Choosing a tool.
When you create an IBM Analytics Engine service from Watson Studio and try to associate the service with your Watson Studio project, a message appears telling you that the selected Analytics Engine service doesn't have a resource key.
Follow these steps to create a resource key to enable associating an Analytics Engine service with a project:
Create a wdp-writer service credential in IBM Cloud for your Analytics Engine service:
Name the credential wdp-writer, give it the Writer role, and click Add. The cluster user name is clsadmin by default. Now you can select the service in your project, for example, to run a notebook.
Note: If an admin resets the cluster password, you will need to delete the associated service from all the projects, reset the cluster password, and then re-associate the service.
When you create a project or catalog, you specify an IBM Cloud Object Storage instance and create a bucket that is dedicated to that project or catalog. These types of objects are stored in the IBM Cloud Object Storage bucket for the project or catalog:
You must upgrade your IBM Cloud Object Storage instance only when you run out of storage space. Core services can use any IBM Cloud Object Storage plan and you can upgrade any core service or your IBM Cloud Object Storage service independently.
IBM Cloud Object Storage requires an extra step for users who do not have administrative privileges for it. The account administrator must enable non-administrative users to create projects.
If you have administrator privileges and do not see the latest IBM Cloud Object Storage information, try again later because server-side caching might cause a delay in rendering the latest values.
Watson Knowledge Catalog is a cloud-based enterprise metadata repository that lets you catalog your knowledge and analytics assets, including structured and unstructured data wherever they reside, so that they can be easily accessed and used to fuel data science and AI. For selected source types, Watson Knowledge Catalog can automatically discover and register data assets at the provided connection. As assets are added to the catalog, they are automatically indexed and classified, making it easy for users such as data engineers, data scientists, data stewards, and business analysts to find, understand, share, and use the assets. AI-powered search and recommendations guide users to the most relevant assets in the catalog based on understanding of relationships between assets, how those assets are used, and social connections between users.
Watson Knowledge Catalog also provides an intelligent and robust governance framework that lets you define and enforce data and access policies to ensure that the right data goes to the right people.
Through Watson Knowledge Catalog's business glossary, you can create a common business vocabulary and associate it with your assets, policies, and rules, providing the bridge between the business domain and your technical assets.
The new governance artifacts experience includes more types of governance artifacts, more relationships between artifacts and assets, and fine-grained control of user permissions to view and manage governance artifacts with categories.
See Upgrading to the new version of governance artifacts.
A catalog is where you share assets across the enterprise. A project is where you work with assets within smaller teams. An enterprise catalog can have thousands of assets shared with hundreds of users. Projects are designed for a team of collaborators to work with a small number of assets for a specific goal, such as developing an artificial intelligence model or preparing data, using Watson Studio.
Watson Knowledge Catalog supports over 50 connectors to cloud or on-premises data source types. See Connection types.
Watson Knowledge Catalog also supports other asset types, such as structured data, unstructured data, models, and notebooks.
You can't load data files larger than 5 GB to your catalog from Watson Knowledge Catalog. To add a file that is larger than 5 GB to a catalog, upload the file to IBM Cloud Object Storage and then add it as a connected data asset.
No, you can keep all your data in their existing repositories or you can upload local files to the IBM Cloud Object Storage associated with the catalog. The choice is yours.
Watson Knowledge Catalog stores and manages only the metadata of your assets.
The number of assets you can have across all catalogs depends on your plan:
See Watson Knowledge Catalog offering plans.
Watson Knowledge Catalog includes an automated policy enforcement engine that determines outcomes based on the defined policies and the action being taken. Watson Knowledge Catalog lets you set up policies within the system and restricts access to data based on those policies.
For governed catalogs that are created with data protection rule enforcement, Watson Knowledge Catalog automatically classifies the columns in your relational data assets when they are added to the catalog. Over 160 data classes for columns are provided, including names, emails, postal addresses, credit card numbers, driver's licenses, government identification numbers, date of birth, demographic information, DUNS number, and more. For ungoverned catalogs that do not enforce data protection rules, a user can choose to classify, or profile, a relational data asset, but assets are not automatically classified. Catalogs also profile unstructured data assets. See Profile assets.
Yes, data preparation capabilities are available in Data Refinery, which is part of Watson Knowledge Catalog. Data Refinery provides a rich set of capabilities that not only let you discover, cleanse, and transform your data with built-in operations, but also include powerful profiling and visualization tools, such as charts and graphs, to help you interact with and understand your data.
Data access and transform policies that are defined in Watson Knowledge Catalog are also enforced in Data Refinery to ensure that sensitive data that originated from governed catalogs remains protected.
You can set up access groups through your IBM Cloud account in the Identity and Access Management (IAM) area.
After you set up the access groups, on the Access Control page of a catalog, you can add the access group so that all members of the access group can access the catalog with the same permissions. See Add access groups.
Watson Knowledge Catalog uses its own local store for metadata.
Watson Knowledge Catalog runs on a cloud native persistence store that can meet the platform needs for performance, up-time, and scalability.
When you add assets from a catalog to a project, or publish assets from a project to a catalog, both the project and the catalog must satisfy these criteria:
In the catalog screen, the drop-down list of target projects for adding assets shows only the projects that satisfy all these criteria.
Data protection rules are scoped to the IBM Cloud account and will be enforced on assets in all governed catalogs that belong to the same IBM Cloud account as the data protection rules.
No, Watson Knowledge Catalog is a data catalog for searching for data.
Policies affect only how data appears within the catalog. Policies do not affect users who access external data sources directly.
You must have special permissions to create governance artifacts, such as policies, business terms, data classes, rules, and reference data sets. You must also be a member of a category with a role that provides permission to create artifacts in that category. See Managing governance artifacts.
You can install Python and Scala libraries and R packages through a notebook, and those libraries and packages will be available to all your notebooks that use the same environment definition. For instructions, see Import custom or third-party libraries. If you get an error about missing operating system dependencies when you install a library or package, notify IBM by clicking the chat icon. To see the preinstalled libraries and packages and the libraries and packages that you installed, from within a notebook, run the appropriate command:
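For example, in a Python notebook (the package name is only an example):

```python
# Install a library into the user space of the current environment runtime.
!pip install --user nltk

# List the preinstalled libraries and packages plus any that you installed.
!pip list
```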
No, there is no way to call one notebook from another notebook in Watson Studio. However, you can put your common code into a library outside of Watson Studio and then install it.
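A minimal sketch, assuming your common code is packaged in a hypothetical Git repository:

```python
# Install shared code from a (hypothetical) repository so that any notebook
# using this environment can import it.
!pip install --user git+https://github.com/your-org/your-shared-lib.git

import your_shared_lib  # placeholder package name
```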
No, you can't extend your notebook capabilities by adding arbitrary extensions as a customization because all notebook extensions must be preinstalled. The only notebook extension that is preinstalled is the Esri ArcGIS extension, which you can select when you create a runtime environment definition and select the Python 3.7 software configuration. This selection enables widgetsnbextension for ipywidgets.
After you load a CSV file into object storage, choose one of the options to create a DataFrame or other data structure from the Insert to code menu under the file name. For instructions, see Load and access data.
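The generated snippet for a pandas DataFrame typically resembles the following sketch (the credentials, endpoint, bucket, and file names are placeholders, not the exact generated code):

```python
# Read a CSV file from the project's Cloud Object Storage bucket into a
# pandas DataFrame, similar to what "Insert to code" generates.
import ibm_boto3
import pandas as pd
from ibm_botocore.client import Config

cos = ibm_boto3.client(
    "s3",
    ibm_api_key_id="YOUR_API_KEY",               # placeholder
    ibm_service_instance_id="YOUR_INSTANCE_ID",  # placeholder
    config=Config(signature_version="oauth"),
    endpoint_url="https://s3.us.cloud-object-storage.appdomain.cloud",
)

body = cos.get_object(Bucket="your-project-bucket", Key="data.csv")["Body"]
df = pd.read_csv(body)
df.head()
```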
After you load the compressed file to object storage, get the file credentials by using the Insert to code menu under the file name. Then use this function to save the file from object storage in GPFS.
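The original helper is not reproduced here; the following is a minimal sketch of such a function, assuming the ibm-cos-sdk package and credentials keys (ENDPOINT, APIKEY, BUCKET, FILE) that stand in for whatever your generated snippet contains:

```python
# Save a file from object storage to the local GPFS storage of the runtime.
import ibm_boto3
from ibm_botocore.client import Config

def save_file_to_local(credentials, local_file_name):
    # The credentials dictionary comes from the Insert to code menu;
    # the key names below are assumptions; adjust them to your snippet.
    cos = ibm_boto3.client(
        "s3",
        ibm_api_key_id=credentials["APIKEY"],
        config=Config(signature_version="oauth"),
        endpoint_url=credentials["ENDPOINT"],
    )
    cos.download_file(
        Bucket=credentials["BUCKET"],
        Key=credentials["FILE"],
        Filename=local_file_name,
    )
```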
The credentials argument is the dictionary that was inserted to code in your notebook.
Cloud Pak for Data as a Service is highly secure and resilient. See Security of Cloud Pak for Data as a Service.
The data that is loaded into your Spark service and notebooks is secure. Only the collaborators in your project can access your data or notebooks. Each Watson Studio account acts as a separate tenant of the Spark and Object Storage services. Tenants cannot access other tenants' data.
If you want to share your notebook with the public, then hide your data service credentials in your notebook. For the Python, R, and Scala languages, enter the following syntax: # @hidden_cell
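For example, a minimal sketch in Python (the credential values are placeholders):

```python
# @hidden_cell
# This cell is hidden when the notebook is shared because it starts with
# the @hidden_cell comment. Keep sensitive credentials in cells like this.
credentials = {
    "username": "YOUR_USERNAME",  # placeholder
    "password": "YOUR_PASSWORD",  # placeholder
}
```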
Be sure to save your notebook immediately after you enter the syntax to hide cells with sensitive data.
Only then should you share your work.
No. Your notebooks are stored in IBM Cloud Object Storage, which provides resiliency in case of an outage.
When you share a notebook, the permalink never changes. Any person with the link can view your notebook. You can unshare the notebook by clearing the check box to share it. Updates are not automatically shared. When you update your notebook, you can sync the shared notebook by reselecting the check box to share it.
One way of sharing your work outside of RStudio in Watson Studio is connecting it to a shared GitHub repository that you and your collaborators can work from. Read this blog post for more information.
However, the best method to share your work with the members of a project in Watson Studio is to use notebooks with the R kernel in the project.
RStudio is a great environment to work in for prototyping and working individually on R projects, but it is not yet integrated with Watson Studio projects.
By design, modeler flows can be used only in the project where the flow is created or imported. If you need to use a modeler flow in a different project, download the flow from the current project (the source project) to your local environment and then import it into the other project (the target project).
Go to Creating an AutoAI experiment from sample data to watch a short video to see how to create and run an AutoAI experiment and then follow a tutorial to set up your own sample.
The AutoAI graphical tool in Watson Studio automatically analyzes your data and generates candidate model pipelines customized for your predictive modeling problem. These model pipelines are created iteratively as AutoAI analyzes your dataset and discovers data transformations, algorithms, and parameter settings that work best for your problem setting. Results are displayed on a leaderboard, showing the automatically generated model pipelines ranked according to your problem optimization objective. For details, see AutoAI overview.
You can use popular tools, libraries, and frameworks to train and deploy machine learning models using IBM Watson Machine Learning. The supported frameworks topic lists supported versions and features, as well as deprecated versions scheduled to be discontinued.
API keys allow you to easily authenticate when using the CLI or APIs that can be used across multiple services. API Keys are considered confidential since they are used to grant access. Treat all API keys as you would a password since anyone with your API key can impersonate your service.
Yes, we encourage feedback as we continue to develop this exciting array of services. Click the chat icon, type a comment, and press Return.
IBM Watson OpenScale tracks and measures outcomes from your AI models, and helps ensure that they remain fair, explainable, and compliant wherever your models were built or are running. Watson OpenScale also detects and helps correct drift in accuracy when an AI model is in production.
The Standard pricing plan charges a flat fee per model, with no restrictions on the number of payload rows, feedback rows, or Explainability transactions. Up-to-date information is available in the IBM Cloud catalog.
Watson OpenScale offers a free trial plan. To sign up, go to the Watson OpenScale web page and click Get started now. You can use the free plan as long as you want, subject to monthly usage limits that refresh every month.
Watson OpenScale is one of the included services for IBM Cloud Pak for Data.
For fairness monitoring, the prediction column allows only an integer numerical value even though the prediction label is categorical. How do I configure a categorical feature that is not an integer? Is a manual conversion required?
The training data might have class labels such as "Loan Denied" and "Loan Granted". The prediction value that is returned by the IBM Watson Machine Learning scoring endpoint has values such as "0.0" and "1.0". The scoring endpoint also has an optional column that contains the text representation of the prediction. For example, if prediction=1.0, the predictionLabel column might have the value "Loan Granted". If such a column is available, when you configure the favorable and unfavorable outcomes for the model, specify the string values "Loan Granted" and "Loan Denied". If such a column is not available, you need to specify the integer or double values of 1.0 and 0.0 for the favorable and unfavorable classes.
IBM Watson Machine Learning has a concept of an output schema that defines the schema of the output of the IBM Watson Machine Learning scoring endpoint and the role of the different columns. The roles are used to identify which column contains the prediction value, which column contains the prediction probability, which column contains the class label value, and so on. The output schema is automatically set for models that are created by using the model builder. It can also be set by using the IBM Watson Machine Learning Python client. You can use the output schema to define a column that contains the string representation of the prediction by setting the modeling_role for that column to 'decoded-target'. The documentation for the IBM Watson Machine Learning Python client is available at http://wml-api-pyclient-dev.mybluemix.net/#repository. Search for "OUTPUT_DATA_SCHEMA" to understand the output schema; the API to use is the store_model API, which accepts OUTPUT_DATA_SCHEMA as a parameter.
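A minimal sketch of how this might look with the Watson Machine Learning Python client (the model name, credentials, and field names are placeholders; verify the meta property names against your client version):

```python
# Store a model with an output schema whose predictionLabel column carries
# the string representation of the prediction.
# Assumptions: the legacy watson-machine-learning-client package, plus
# `wml_credentials` and `model` defined earlier in your notebook.
from watson_machine_learning_client import WatsonMachineLearningAPIClient

client = WatsonMachineLearningAPIClient(wml_credentials)

output_data_schema = {
    "id": "output_schema",
    "fields": [
        {"name": "prediction", "type": "double",
         "metadata": {"modeling_role": "prediction"}},
        {"name": "probability", "type": "array",
         "metadata": {"modeling_role": "probability"}},
        # String representation of the prediction, e.g. "Loan Granted":
        {"name": "predictionLabel", "type": "string",
         "metadata": {"modeling_role": "decoded-target"}},
    ],
}

meta_props = {
    client.repository.ModelMetaNames.NAME: "credit-risk-model",  # placeholder
    client.repository.ModelMetaNames.OUTPUT_DATA_SCHEMA: output_data_schema,
}

stored_model = client.repository.store_model(model=model, meta_props=meta_props)
```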
You must either provide Watson OpenScale access to training data that is stored in Db2 or IBM Cloud Object Storage, or you must run a Notebook to access the training data.
Watson OpenScale needs access to your training data for the following reasons:
In the Notebook-based approach, you are expected to upload the statistics and other information when you configure a deployment in Watson OpenScale. Watson OpenScale no longer has access to the training data outside of the Notebook, which is run in your environment. It has access only to the information uploaded during the configuration.
Depending on your fairness configuration, your fairness score can exceed 100 percent. This means that your monitored group is getting relatively more "fair" outcomes than the reference group. Technically, it means that the model is unfair in the opposite direction.
Use this Watson OpenScale notebook to read the data from Netezza and generate the training statistics and the drift detection model.
Watson OpenScale works on a deployment of a model, not on the model itself. You must create a new deployment and then configure this new deployment as a new subscription in Watson OpenScale. With this arrangement, you are able to compare the two versions of the model.
Machine learning models are associated with multiple kinds of risk. For example, a change in input data, also known as drift, can cause the model to make inaccurate decisions, impacting business predictions. And although training data can be cleaned to be free from bias, runtime data might induce biased behavior in the model.
Traditional statistical models are simpler to interpret and explain, but the inability to explain the outcome of a machine learning model can pose a serious threat to the usage of the model.
For more information, see Manage model risk.
No. You can set up email alerts for your production model deployments in Watson OpenScale so that you receive an email whenever a risk evaluation test fails, and then you can check and address the issues.
IBM offers an end-to-end model risk management solution with IBM Watson OpenScale and IBM OpenPages with Watson. IBM OpenPages MRG offers model risk governance to store and manage a comprehensive model inventory. IBM Watson OpenScale monitors and measures outcomes from AI models across their lifecycle and validates models.
For more information, see Configure model governance with IBM OpenPages MRG.
Quality metrics are calculated by using manually labeled feedback data and the monitored deployment's responses to that data.
No, currently, the threshold can be set only for the 'Area under ROC' metric.
Before setting up alerts, you must configure an SMTP server in Cloud Pak for Data. For more information, see Enabling email notifications.