Billing details for watsonx.ai Studio tools
Learn how compute usage is measured in capacity unit hours (CUH), which are consumed by active environment runtimes in watsonx.ai Studio.
Capacity units per hour for notebooks
| Capacity type | Language | Capacity units per hour |
|---|---|---|
| 1 vCPU and 4 GB RAM | Python, R | 0.5 |
| 2 vCPU and 8 GB RAM | Python, R | 1 |
| 4 vCPU and 16 GB RAM | Python, R | 2 |
| 8 vCPU and 32 GB RAM | Python, R | 4 |
| 16 vCPU and 64 GB RAM | Python, R | 8 |
| Driver: 1 vCPU and 4 GB RAM; 1 Executor: 1 vCPU and 4 GB RAM | Spark with Python, Spark with R | 1; each additional executor: 0.5 |
| Driver: 1 vCPU and 4 GB RAM; 1 Executor: 2 vCPU and 8 GB RAM | Spark with Python, Spark with R | 1.5; each additional executor: 1 |
| Driver: 2 vCPU and 8 GB RAM; 1 Executor: 1 vCPU and 4 GB RAM | Spark with Python, Spark with R | 1.5; each additional executor: 0.5 |
| Driver: 2 vCPU and 8 GB RAM; 1 Executor: 2 vCPU and 8 GB RAM | Spark with Python, Spark with R | 2; each additional executor: 1 |
| Driver: 3 vCPU and 12 GB RAM; 1 Executor: 3 vCPU and 12 GB RAM | Spark with Python, Spark with R | 2; each additional executor: 1 |
The rate of capacity units per hour consumed is determined for:

- Default Python or R environments: by the hardware size and the number of users in a project using one or more runtimes.

  For example, the IBM Runtime 24.1 on Python 3.10 XS environment with 2 vCPUs consumes 1 CUH if it runs for one hour. If a project has 7 users working on notebooks 8 hours a day, 5 days a week, all using the IBM Runtime 24.1 on Python 3.10 XS environment, and everyone shuts down their runtimes when they leave in the evening, runtime consumption is 5 x 7 x 8 = 280 CUH per week. The CUH calculation becomes more complex when different environments are used to run notebooks in the same project and when users have multiple active runtimes, each consuming its own CUH. Additionally, there might be notebooks that are scheduled to run during off-hours, and long-running jobs, which likewise consume CUH.

- Default Spark environments: by the hardware configuration size of the driver, and by the number of executors and their size. The Spark driver manages the overall execution of a Spark application, including task scheduling and communication with the cluster manager, while the Spark executors are distributed processes on worker nodes that run the tasks assigned by the driver and process data in parallel.
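The arithmetic above can be sketched as a small helper. This is an illustrative estimate only, with rates taken from the tables in this section; the function names are not part of any watsonx.ai API.

```python
# Illustrative CUH estimation; rates come from the tables in this section.
# These helper functions are examples, not part of any watsonx.ai API.

def notebook_cuh(rate_per_hour, hours):
    """CUH for a notebook runtime at a given hourly rate."""
    return rate_per_hour * hours

def spark_cuh(base_rate, per_extra_executor, extra_executors, hours):
    """CUH for a Spark runtime: the base rate covers the driver plus
    one executor; each additional executor adds its own hourly rate."""
    return (base_rate + per_extra_executor * extra_executors) * hours

# Worked example from the text: 7 users, 8 hours a day, 5 days a week,
# each on an XS runtime (2 vCPU -> 1 CUH per hour).
print(notebook_cuh(rate_per_hour=1, hours=7 * 8 * 5))  # 280

# Spark example: driver 1 vCPU + 1 executor 1 vCPU (1 CUH per hour),
# plus 3 additional executors at 0.5 CUH each, running for 2 hours.
print(spark_cuh(base_rate=1, per_extra_executor=0.5,
                extra_executors=3, hours=2))  # 5.0
```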
Capacity units per hour for notebooks with Decision Optimization
The rate of capacity units per hour consumed is determined by the hardware size and the price for Decision Optimization.
| Capacity type | Language | Capacity units per hour |
|---|---|---|
| 1 vCPU and 4 GB RAM | Python + Decision Optimization | 0.5 + 5 = 5.5 |
| 2 vCPU and 8 GB RAM | Python + Decision Optimization | 1 + 5 = 6 |
| 4 vCPU and 16 GB RAM | Python + Decision Optimization | 2 + 5 = 7 |
| 8 vCPU and 32 GB RAM | Python + Decision Optimization | 4 + 5 = 9 |
| 16 vCPU and 64 GB RAM | Python + Decision Optimization | 8 + 5 = 13 |
Capacity units per hour for notebooks with Watson Natural Language Processing
The rate of capacity units per hour consumed is determined by the hardware size and the price for Watson Natural Language Processing.
| Capacity type | Language | Capacity units per hour |
|---|---|---|
| 1 vCPU and 4 GB RAM | Python + Watson Natural Language Processing | 0.5 + 5 = 5.5 |
| 2 vCPU and 8 GB RAM | Python + Watson Natural Language Processing | 1 + 5 = 6 |
| 4 vCPU and 16 GB RAM | Python + Watson Natural Language Processing | 2 + 5 = 7 |
| 8 vCPU and 32 GB RAM | Python + Watson Natural Language Processing | 4 + 5 = 9 |
| 16 vCPU and 64 GB RAM | Python + Watson Natural Language Processing | 8 + 5 = 13 |
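The add-on pricing in the two tables above follows one pattern: the base hardware rate plus a flat 5 CUH per hour for the add-on (Decision Optimization or Watson Natural Language Processing). A minimal sketch of that pattern, with rates taken from the tables and illustrative names:

```python
# Illustrative: base hardware rate plus a flat add-on rate, as in the
# Decision Optimization and Watson NLP tables above. Names are examples.

HARDWARE_RATES = {1: 0.5, 2: 1, 4: 2, 8: 4, 16: 8}  # vCPUs -> CUH per hour
ADD_ON_RATE = 5  # CUH per hour for Decision Optimization or Watson NLP

def runtime_rate(vcpus, with_add_on=False):
    """Hourly CUH rate for a runtime, optionally including the add-on."""
    return HARDWARE_RATES[vcpus] + (ADD_ON_RATE if with_add_on else 0)

print(runtime_rate(2, with_add_on=True))   # 6
print(runtime_rate(16, with_add_on=True))  # 13
```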
Capacity units per hour for Pipelines Bash script
| Size | Capacity type | Capacity units per hour |
|---|---|---|
| XXS | 1 vCPU and 2 GB RAM | 0.5 |
| XS | 1 vCPU and 4 GB RAM | 0.5 |
| S | 2 vCPU and 8 GB RAM | 1 |
| M | 4 vCPU and 16 GB RAM | 2 |
| ML | 4 vCPU and 32 GB RAM | 2 |
Notes:
- The runtimes for Bash scripts stop automatically when processing is complete.
- Resource consumption is measured in seconds, with a minimum of 1 second. For example, if the execution time is 52.1 seconds, the charge is for 53 seconds.
CUH consumption in pipelines
Pipelines are used to run various assets, such as notebooks and scripts. While building a pipeline does not consume compute resources, running the pipeline does.
One specific case where CUH is explicitly charged is when running Bash script nodes in a pipeline. These scripts are executed using a selected hardware configuration, and the CUH charge is based on the execution time and the configuration used.
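The per-second metering described in the notes above can be sketched as follows. This is an illustrative calculation only, assuming the billed amount is the configuration's hourly rate prorated over the rounded-up execution time; the function name is not part of any watsonx.ai API.

```python
import math

# Illustrative per-second CUH metering for pipeline Bash scripts, per the
# notes above: execution time is rounded up to whole seconds (minimum
# 1 second) and charged at the configuration's hourly rate. Assumes
# simple proration; this function is an example, not a real API.

def bash_script_cuh(rate_per_hour, execution_seconds):
    billed_seconds = max(1, math.ceil(execution_seconds))
    return rate_per_hour * billed_seconds / 3600

# Example from the notes: 52.1 seconds is billed as 53 seconds.
# On an S configuration (2 vCPU and 8 GB RAM -> 1 CUH per hour):
print(bash_script_cuh(rate_per_hour=1, execution_seconds=52.1) * 3600)  # 53.0
```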
Capacity units per hour for Synthetic Data Generator
| Capacity type | Capacity units per hour |
|---|---|
| 2 vCPU and 8 GB RAM | 7 |
When you use Synthetic Data Generator to generate unstructured synthetic data, costs are not calculated through capacity unit hours. Costs are determined by the underlying usage of tokens by the LLM whenever it processes input data and generates synthetic data. The following models are certified for use with the Synthetic Data Generator service:
- ibm/granite-3-8b
- mistralai/mistral-large
For more information about pricing, see IBM foundation models and Third-party foundation models.
Capacity units per hour for SPSS Modeler flows
| Name | Capacity type | Capacity units per hour |
|---|---|---|
| Default SPSS Modeler S | 2 vCPU and 8 GB RAM | 1 |
| Default SPSS Modeler M | 4 vCPU and 16 GB RAM | 2 |
| Default SPSS Modeler L | 6 vCPU and 24 GB RAM | 3 |
Capacity units per hour for Data Refinery and Data Refinery flows
As of 4 September 2025, the Default Spark 3.4 & R 4.2 runtime in watsonx.ai Studio is deprecated. From 3 October 2025, you cannot create new notebooks or custom environments that use the Default Spark 3.4 & R 4.2 runtime. You can still run code that uses the deprecated runtime until 3 November 2025. To avoid disruption, update your code to use the Default Spark 3.4 & R 4.3 runtime.
- For information about changing environments, see Changing notebook environments.
| Name | Capacity type | Capacity units per hour |
|---|---|---|
| Default Data Refinery XS runtime | 3 vCPU and 12 GB RAM | 1.5 |
| Default Spark 3.4 & R 4.3 | 2 Executors each: 1 vCPU and 4 GB RAM; Driver: 1 vCPU and 4 GB RAM | 1.5 |
| Default Spark 3.4 & R 4.2 | 2 Executors each: 1 vCPU and 4 GB RAM; Driver: 1 vCPU and 4 GB RAM | 1.5 |
Capacity units per hour for RStudio
| Name | Capacity type | Capacity units per hour |
|---|---|---|
| Default RStudio XS | 2 vCPU and 8 GB RAM | 1 |
| Default RStudio M | 8 vCPU and 32 GB RAM | 4 |
| Default RStudio L | 16 vCPU and 64 GB RAM | 8 |
Capacity units per hour for GPU environments
| Capacity type | GPUs | Language | Capacity units per hour |
|---|---|---|---|
| 1 x NVIDIA Tesla V100 | 1 | Python with GPU | 68 |
| 2 x NVIDIA Tesla V100 | 2 | Python with GPU | 136 |
Learn more
- For information on monitoring your account's resource usage, see Monitoring account resource usage.
- For details on computing resource allocation and consumption, see watsonx.ai Studio environments compute usage.