IBM Cloud Pak® for Data Version 4.8 will reach end of support (EOS) on 31 July, 2025. For more information, see the Discontinuance of service announcement for IBM Cloud Pak for Data Version 4.X.
Upgrade to IBM Software Hub Version 5.1 before IBM Cloud Pak for Data Version 4.8 reaches end of support. For more information, see Upgrading from IBM Cloud Pak for Data Version 4.8 to IBM Software Hub Version 5.1.
Quick start tutorials
Take quick start tutorials to learn how to perform specific tasks, such as refining data or building a model. Each tutorial helps you quickly learn a specific task or a set of related tasks.
If you want to learn how to implement specific use cases, consider taking the use case tutorials instead. Those tutorials help you try out data fabric use cases, such as Data integration, and AI use cases, such as Data Science and MLOps.
The quick start tutorials are categorized by task as follows:
- Preparing data
- Analyzing and visualizing data
- Building, deploying, and trusting models
- Working with generative AI
- Governing AI
- Curating and governing data
Each tutorial requires one or more service instances, and some services are used in multiple tutorials. You can start with any task. Each tutorial provides a description of the tool, a video, step-by-step instructions, and additional learning resources.
The tags for each tutorial describe the level of expertise and the amount of coding that is required.
After completing these tutorials, see the Other learning resources section to continue your learning.
Preparing data
To get started with preparing, transforming, and integrating data, understand the overall workflow, choose a tutorial, and check out other learning resources for working on the platform.
Your data preparation workflow has these basic steps:
- Create a project.
- Add data to your project. You can add data files from your local system, data from a remote data source that you connect to, or data from a catalog.
- Choose a tool to prepare your data. Each of the tutorials describes a tool.
- Run or schedule a job to prepare your data.
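The refinement step in this workflow can also be done programmatically in a notebook rather than in a graphical tool. A minimal sketch using pandas, where the inline data and column names are illustrative stand-ins for a data asset in your project:

```python
import pandas as pd
from io import StringIO

# Illustrative raw data standing in for a file that was added to a project
raw = StringIO(
    "customer,age,state\n"
    "alice,34,ny\n"
    "bob,,CA\n"
    "carol,29,Ny\n"
)
df = pd.read_csv(raw)

# Typical refinement operations: normalize text and fill missing values
df["state"] = df["state"].str.strip().str.upper()
df["age"] = df["age"].fillna(df["age"].median())

print(df)
```

In the Data Refinery tutorial, operations like these are selected from a menu and recorded as a flow that a job can run on a schedule.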
Tutorials for preparing data
Each of these tutorials provides a description of the tool, a video, the instructions, and additional learning resources:
| Tutorial | Description | Expertise for tutorial |
|---|---|---|
| Refine and visualize data with Data Refinery | Prepare and visualize tabular data with a graphical flow editor. | Select operations to manipulate data. |
| Generate synthetic tabular data | Generate synthetic tabular data using a graphical flow editor. | Select operations to generate data. |
Analyzing and visualizing data
To get started with analyzing and visualizing data, understand the overall workflow, choose a tutorial, and check out other learning resources for working with other tools.
Your analyzing and visualizing data workflow has these basic steps:
- Create a project.
- Add data to your project. You can add data files from your local system, data from a remote data source that you connect to, or data from a catalog.
- Choose a tool to analyze your data. Each of the tutorials describes a tool.
Tutorials for analyzing and visualizing data
Each of these tutorials provides a description of the tool, a video, the instructions, and additional learning resources:
| Tutorial | Description | Expertise for tutorial |
|---|---|---|
| Tell a story with a dashboard | Create a dashboard on a graphical builder. | Drop elements on a canvas and select options. |
| Analyze data in a Jupyter notebook | Load data, run, and share a notebook. | Understand generated Python code. |
| Refine and visualize data with Data Refinery | Prepare and visualize tabular data with a graphical flow editor. | Select operations to manipulate data. |
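The notebook-based analysis that these tutorials cover typically starts by loading data and computing summary statistics. A small sketch with pandas, using made-up sales figures in place of a project data asset:

```python
import pandas as pd

# Illustrative sales data standing in for a data asset in a project
df = pd.DataFrame({
    "region": ["East", "West", "East", "West"],
    "revenue": [1200.0, 950.0, 1430.0, 1010.0],
})

# A common first analysis step in a notebook: group and summarize
summary = df.groupby("region")["revenue"].agg(["count", "mean", "sum"])
print(summary)
```

A dashboard or Data Refinery visualization presents the same kind of aggregation graphically instead of as a table of numbers.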
Building, deploying, and trusting models
To get started with building, deploying, and trusting models, understand the overall workflow, choose a tutorial, and check out other learning resources for working on the platform.
The model workflow has three main steps: build a model asset, deploy the model, and build trust in the model.
Tutorials for building, deploying, and trusting models
Each tutorial provides a description of the tool, a video, the instructions, and additional learning resources:
| Tutorial | Description | Expertise for tutorial |
|---|---|---|
| Build and deploy a machine learning model with AutoAI | Automatically build model candidates with the AutoAI tool. | Build, deploy, and test a model without coding. |
| Build and deploy a machine learning model in a notebook | Build a model by updating and running a notebook that uses Python code and the Watson Machine Learning APIs. | Build, deploy, and test a scikit-learn model that uses Python code. |
| Build and deploy a machine learning model with SPSS Modeler | Build a C5.0 model that uses the SPSS Modeler tool. | Drop data and operation nodes on a canvas and select properties. |
| Build and deploy a Decision Optimization model | Automatically build scenarios with the Modeling Assistant. | Solve and explore scenarios, then deploy and test a model without coding. |
| Evaluate a machine learning model | Deploy a model, configure monitors for the deployed model, and evaluate the model. | Run a notebook to configure the models and use Watson OpenScale to evaluate. |
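The "build" step of the model workflow can be sketched with scikit-learn, which the notebook tutorial uses. This example only trains and tests a model locally; publishing and deploying it would use the Watson Machine Learning APIs, which are not shown here:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Build step: train a simple scikit-learn classifier on sample data
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Local test step; deployment through Watson Machine Learning is omitted
score = model.score(X_test, y_test)
print(f"holdout accuracy: {score:.2f}")
```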
Working with generative AI
To get started with generative AI, understand the overall workflow, choose a tutorial, and check out other learning resources for working on the platform.
Your prompt engineering workflow has these basic steps:
- Create a project.
- If necessary, create the service instance that provides the tool you want to use and associate it with the project.
- Choose a tool to prompt foundation models. Each of the tutorials describes a tool.
- Save and share your best prompts.
Tutorials for working with generative AI
Each tutorial provides a description of the tool, a video, the instructions, and additional learning resources:
| Tutorial | Description | Expertise for tutorial |
|---|---|---|
| Prompt a foundation model using Prompt Lab | Experiment with prompting different foundation models, explore sample prompts, and save and share your best prompts. | Prompt a model using Prompt Lab without coding. |
| Prompt a foundation model with the retrieval-augmented generation pattern | Prompt a foundation model by leveraging information in a knowledge base. | Use the retrieval-augmented generation pattern in a Jupyter notebook that uses Python code. |
| Tune a foundation model | Tune a foundation model to enhance model performance. | Use the Tuning Studio to tune a model without coding. |
| Try the watsonx.ai end-to-end use case | Follow a use case from data preparation through prompt engineering. | Use various tools, such as notebooks and Prompt Lab. |
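The retrieval-augmented generation pattern that one of these tutorials follows has two parts: retrieve relevant passages from a knowledge base, then include them in the prompt. A minimal sketch of the retrieval and prompt-assembly steps, using TF-IDF similarity as a stand-in for a real vector index; the documents and question are made up, and the call to the foundation model itself is omitted:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A tiny knowledge base standing in for indexed documents
docs = [
    "Data Refinery shapes and cleanses tabular data.",
    "AutoAI automatically builds candidate model pipelines.",
    "Prompt Lab lets you experiment with foundation model prompts.",
]

def retrieve(question, k=1):
    """Return the k passages most similar to the question."""
    vec = TfidfVectorizer().fit(docs + [question])
    scores = cosine_similarity(vec.transform([question]), vec.transform(docs))[0]
    top = scores.argsort()[::-1][:k]
    return [docs[i] for i in top]

# Grounding step: put the retrieved context into the prompt; sending the
# prompt to a foundation model is not shown here
question = "Which tool builds model pipelines automatically?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```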
Governing AI
To get started with governing AI, understand the overall workflow, choose a tutorial, and check out other learning resources for working on the platform.
Your AI governance workflow has these basic steps:
- Create a project.
- If necessary, create the service instance that provides the tool you want to use and associate it with the project.
- Choose a tool to govern AI. Each of the tutorials describes a tool.
Tutorials for governing AI
Each tutorial provides a description of the tool, a video, the instructions, and additional learning resources:
| Tutorial | Description | Expertise for tutorial |
|---|---|---|
| Evaluate and track a prompt template | Evaluate a prompt template to measure the performance of a foundation model and track the prompt template through its lifecycle. | Use the evaluation tool and an AI use case to track the prompt template. |
| Evaluate a machine learning model | Deploy a model, configure monitors for the deployed model, and evaluate the model. | Run a notebook to configure the models and use Watson OpenScale to evaluate. |
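The kinds of metrics that evaluation monitors automate can be illustrated with a hand computation. This sketch shows a quality metric (accuracy) and a fairness metric (disparate impact, the ratio of favorable-outcome rates between two groups); the labels, predictions, and group names are made up:

```python
# Made-up predictions for a deployed model, split into two groups
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Quality metric: fraction of predictions that match the labels
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def favorable_rate(g):
    """Rate of favorable outcomes (prediction 1) for one group."""
    preds = [p for p, grp in zip(y_pred, group) if grp == g]
    return sum(preds) / len(preds)

# Fairness metric: disparate impact; values near 1.0 suggest parity
disparate_impact = favorable_rate("b") / favorable_rate("a")
print(accuracy, disparate_impact)
```

A configured monitor computes metrics like these continuously on live payload data and alerts when they cross thresholds.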
Curating and governing data
To get started with curating and governing data, understand the overall workflows, choose a tutorial, and check out other learning resources for working in Cloud Pak for Data.
Your data curation workflow has these basic steps:
- Add data assets to a catalog:
- Add data assets one at a time in a project and then publish them to a catalog.
- Add all data assets from a connection in a project by importing metadata, and then publish them to a catalog.
- Add data assets one at a time from within a catalog.
- Enrich the data assets by assigning governance artifacts, such as business terms.
Your governing data workflow has these basic steps:
- For a data protection rule, specify how to identify the type of data to mask and the masking method. The rule is enforced immediately.
- For all other types of governance artifacts:
- Create the draft governance artifacts in a category.
- Publish the governance artifacts.
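The idea behind a data protection rule, pairing a way to identify sensitive data with a masking method, can be sketched in a few lines. Here the column names stand in for the data classes that a real rule would match, and the two masking methods are simplified examples:

```python
import hashlib

def redact(value):
    """Redaction: replace every character with X."""
    return "X" * len(value)

def obfuscate(value):
    """Obfuscation: replace the value with a consistent opaque token."""
    return hashlib.sha256(value.encode()).hexdigest()[:10]

# Illustrative rule set: which columns to mask, and how
rules = {"email": redact, "ssn": obfuscate}

def enforce(row):
    """Apply each matching rule to a row before the data is served."""
    return {col: rules[col](val) if col in rules else val
            for col, val in row.items()}

row = {"name": "Alice", "email": "alice@example.com", "ssn": "123-45-6789"}
print(enforce(row))
```

On the platform, rules like this are enforced automatically as soon as they are created, for every user who accesses the governed data.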
Tutorials for curating and governing data
Choose a data fabric tutorial in the Data governance use case.
Other learning resources
General
Preparing data
Analyzing and visualizing data
Building, deploying, and trusting models
Curating and governing data
Working with generative AI
Governing AI
Videos
- A comprehensive set of videos that show many common tasks in Cloud Pak for Data.
Training
- Take a data fabric tutorial to try out a data fabric use case, such as AI governance, Data Science and MLOps, Data governance, or Data integration.
- Take control of your data with Watson Studio is a learning path that consists of step-by-step tutorials that explain the process of working with data using Watson Studio.
Parent topic: Getting started