Compute usage is calculated by the number of capacity unit hours (CUH) consumed by an active environment runtime in Watson Studio. Watson Studio plans govern how you are billed monthly for the resources you consume.
Each plan includes a set amount of capacity units per month. With the Standard and Enterprise plans, if you exceed that amount in a month, you pay for the additional compute usage.
| Feature | Lite | Standard | Enterprise |
|---|---|---|---|
| Processing usage | 50 CUH per month | 50 CUH per month + pay for more | 5000 CUH per month + pay for more |
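If you want to estimate where a month's usage lands relative to a plan, the calculation is the consumed CUH minus the plan's included allowance. The following Python sketch uses the allowances from the table above; the names and function are illustrative only, and the Lite plan has no pay-for-more option, so usage simply stops at its cap.

```python
# Minimal sketch: estimate monthly overage CUH against a plan's included allowance.
# Allowances are taken from the table above; prices per extra CUH are not modeled.
PLAN_INCLUDED_CUH = {
    "Lite": 50,        # no pay-for-more: usage stops at the cap
    "Standard": 50,    # extra usage is billed
    "Enterprise": 5000,  # extra usage is billed
}

def overage_cuh(plan: str, consumed_cuh: float) -> float:
    """Return the CUH consumed beyond the plan's monthly allowance (0 if within it)."""
    included = PLAN_INCLUDED_CUH[plan]
    return max(0.0, consumed_cuh - included)

# Example: 280 CUH consumed in a month on the Standard plan -> 230 CUH of extra usage.
print(overage_cuh("Standard", 280))  # 230.0
```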
| Capacity type | Language | Capacity units per hour |
|---|---|---|
| 1 vCPU and 4 GB RAM | Python, R | 0.5 |
| 2 vCPU and 8 GB RAM | Python, R | 1 |
| 4 vCPU and 16 GB RAM | Python, R | 2 |
| 8 vCPU and 32 GB RAM | Python, R | 4 |
| 16 vCPU and 64 GB RAM | Python, R | 8 |
| Driver: 1 vCPU and 4 GB RAM; 1 Executor: 1 vCPU and 4 GB RAM | Spark with Python, R, or Scala | 1 (CUH per additional executor is 0.5) |
| Driver: 1 vCPU and 4 GB RAM; 1 Executor: 2 vCPU and 8 GB RAM | Spark with Python, R, or Scala | 1.5 (CUH per additional executor is 0.5) |
| Driver: 2 vCPU and 8 GB RAM; 1 Executor: 1 vCPU and 4 GB RAM | Spark with Python, R, or Scala | 1.5 (CUH per additional executor is 0.5) |
| Driver: 2 vCPU and 8 GB RAM; 1 Executor: 2 vCPU and 8 GB RAM | Spark with Python, R, or Scala | 2 (CUH per additional executor is 0.5) |
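The Spark rows in the table combine a base rate for the driver plus one executor with a listed rate for each additional executor. The sketch below shows one way to estimate CUH for a Spark runtime under that reading of the table; the dictionary keys and function names are illustrative, not part of any Watson Studio API.

```python
# Minimal sketch of how a Spark runtime's CUH could be estimated from the table above:
# a base rate for the driver plus one executor, and 0.5 CUH for each additional executor.
SPARK_BASE_RATE = {
    ("driver-1vcpu", "executor-1vcpu"): 1.0,
    ("driver-1vcpu", "executor-2vcpu"): 1.5,
    ("driver-2vcpu", "executor-1vcpu"): 1.5,
    ("driver-2vcpu", "executor-2vcpu"): 2.0,
}
ADDITIONAL_EXECUTOR_RATE = 0.5  # CUH per additional executor, per the table

def spark_cuh(driver: str, executor: str, executors: int, hours: float) -> float:
    """Estimate the CUH consumed by a Spark runtime with the given driver/executor sizes."""
    rate = SPARK_BASE_RATE[(driver, executor)] + ADDITIONAL_EXECUTOR_RATE * (executors - 1)
    return rate * hours

# Example: 2 vCPU driver with three 1 vCPU executors, running for 4 hours.
print(spark_cuh("driver-2vcpu", "executor-1vcpu", executors=3, hours=4))  # (1.5 + 2*0.5) * 4 = 10.0
```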
The rate of capacity units per hour consumed is determined for:

* Default Python or R environments: by the hardware size and the number of users in a project using one or more runtimes.

    For example: The Default Python 3.7 XS environment with 2 vCPUs consumes 1 CUH if it runs for one hour. If a project has 7 users working on notebooks 8 hours a day, 5 days a week, all using the Default Python 3.7 XS environment, and everyone stops their runtimes when they leave in the evening, runtime consumption is 7 users x 8 hours x 5 days x 1 CUH = 280 CUH per week.

* Default Spark environments: by the hardware configuration of the driver and the executors, and by the number of executors.
The CUH calculation becomes more complex when different environments are used to run notebooks in the same project and when users have multiple active runtimes, each consuming its own CUH. Additionally, notebooks might be scheduled to run during off-hours, and long-running jobs likewise consume CUH.
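As a rough guide, the worked example above reduces to a simple multiplication: users x hours per day x days per week x the environment's CUH rate. The sketch below reproduces that calculation; it is only an estimate and does not account for mixed environments, scheduled notebooks, or long-running jobs.

```python
# Minimal sketch that reproduces the example above: weekly CUH for a project in which
# every user runs the same environment for the same number of hours.
def weekly_project_cuh(users: int, hours_per_day: float, days_per_week: int,
                       rate_cuh_per_hour: float) -> float:
    return users * hours_per_day * days_per_week * rate_cuh_per_hour

# 7 users, 8 hours a day, 5 days a week, Default Python 3.7 XS (2 vCPU -> 1 CUH per hour).
print(weekly_project_cuh(7, 8, 5, 1.0))  # 280.0
```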
The rate of capacity units per hour consumed is determined by the hardware size plus the rate for Decision Optimization.
| Capacity type | Language | Capacity units per hour |
|---|---|---|
| 1 vCPU and 4 GB RAM | Python + Decision Optimization | 0.5 + 5 = 5.5 |
| 2 vCPU and 8 GB RAM | Python + Decision Optimization | 1 + 5 = 6 |
| 4 vCPU and 16 GB RAM | Python + Decision Optimization | 2 + 5 = 7 |
| 8 vCPU and 32 GB RAM | Python + Decision Optimization | 4 + 5 = 9 |
| 16 vCPU and 64 GB RAM | Python + Decision Optimization | 8 + 5 = 13 |
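The table values follow one formula: the Python rate for the hardware size plus a flat 5 CUH per hour for Decision Optimization. A small sketch of that arithmetic, with illustrative names only:

```python
# Minimal sketch of the Decision Optimization rates in the table above:
# the Python rate for the hardware size plus a flat 5 CUH per hour for Decision Optimization.
PYTHON_RATE_BY_VCPU = {1: 0.5, 2: 1, 4: 2, 8: 4, 16: 8}  # from the Python/R table above
DECISION_OPTIMIZATION_RATE = 5  # flat CUH-per-hour add-on, per the table

def decision_optimization_rate(vcpu: int) -> float:
    return PYTHON_RATE_BY_VCPU[vcpu] + DECISION_OPTIMIZATION_RATE

print(decision_optimization_rate(4))  # 2 + 5 = 7
```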
| Name | Capacity type | Capacity units per hour |
|---|---|---|
| Default SPSS XS | 4 vCPU and 16 GB RAM | 2 |
| Name | Capacity type | Capacity units per hour |
|---|---|---|
| Default Data Refinery XS runtime | 3 vCPU and 12 GB RAM | 1.5 |
| Default Spark 3.0 & R 3.6 | 2 Executors each: 1 vCPU and 4 GB RAM; Driver: 1 vCPU and 4 GB RAM | 1.5 |
| Default Spark 2.4 & R 3.6 | 2 Executors each: 1 vCPU and 4 GB RAM; Driver: 1 vCPU and 4 GB RAM | 1.5 |
| Default Spark 2.3 & R 3.4 | 2 Executors each: 1 vCPU and 4 GB RAM; Driver: 1 vCPU and 4 GB RAM | 1.5 |
| Name | Capacity type | Capacity units per hour |
|---|---|---|
| Default RStudio XS | 2 vCPU and 8 GB RAM | 1 |
| Default RStudio M | 8 vCPU and 32 GB RAM | 4 |
| Default RStudio L | 16 vCPU and 64 GB RAM | 8 |
| Capacity type | GPUs | Language | Capacity units per hour |
|---|---|---|---|
| 1/2 x NVIDIA Tesla K80 | 1 | Python with GPU | 4 |
| 1 x NVIDIA Tesla K80 | 2 | Python with GPU | 8 |
| 2 x NVIDIA Tesla K80 | 4 | Python with GPU | 12 |
You are notified when you're about to reach the monthly runtime capacity limit for your Watson Studio service plan. When this happens, you can upgrade your plan or stop the runtimes that you aren't using.
Remember: The CUH counter continues to increase while a runtime is active, so stop the runtimes you aren't using. If you don't explicitly stop a runtime, it is stopped after an idle timeout. During the idle time, you continue to consume CUH, for which you are billed.
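To see why an idle runtime still matters for billing, note that CUH accrues until the runtime actually stops, whether you stop it yourself or the idle timeout does. The sketch below illustrates this; the idle-timeout length is a placeholder parameter, not a documented value.

```python
# Minimal sketch of why idle runtimes still cost CUH: billing runs until the runtime stops,
# either because you stop it or because the idle timeout stops it.
def billed_cuh(rate_cuh_per_hour: float, active_hours: float,
               idle_hours_until_timeout: float, stopped_manually: bool) -> float:
    idle_hours = 0.0 if stopped_manually else idle_hours_until_timeout
    return rate_cuh_per_hour * (active_hours + idle_hours)

# A 1 CUH/hour runtime used for 6 hours, then left idle until a (hypothetical) 3-hour timeout.
print(billed_cuh(1.0, active_hours=6, idle_hours_until_timeout=3, stopped_manually=False))  # 9.0
```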
You can view the environment runtimes that are currently active in a project, and monitor usage for the project from the project's Environments page.
The CUH consumed by the active runtimes in a project is billed to the account that the project creator selected in their profile settings at the time the project was created. This can be the project creator's own account or another account that the project creator has access to. If other users are added to the project and use runtimes, their usage is also billed to the account that the project creator chose when the project was created.
You can track the runtime usage for an account on the Environment Runtimes page if you are the IBM Cloud account owner or administrator.
To view the total runtime usage across all of the projects and see how much of your plan you have currently used, choose Administration > Environment runtimes.
A list of the active runtimes billed to your account is displayed. You can see who created the runtimes, when, and for which projects, as well as the capacity units that were consumed by the active runtimes at the time you view the list.