# Pricing

## How Pipeline Billing Works
TIR Pipelines are billed based on the resource plan selected for each run. Key billing principles:
- **Per-hour billing** — You are charged for the duration of active run execution, billed by the hour.
- **No idle charges** — There is no cost while a pipeline has no active runs; you pay only during execution.
- **Resource-plan based** — Pricing varies with the CPU or GPU plan selected when creating a run.
- **Scheduled runs** — Each triggered execution is billed independently, based on its duration and resource plan.
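The billing model above can be sketched as a small cost function. This is a minimal sketch: the plan names and hourly rates are hypothetical placeholders (real rates come from the E2E Calculator), and rounding partial hours up is an assumption — the examples below note that some runs may instead be pro-rated.

```python
import math

# Hypothetical hourly rates -- real rates come from the E2E Calculator.
HOURLY_RATES = {"CPU Standard": 5.0, "GPU Standard": 150.0}

def run_cost(plan: str, duration_hours: float) -> float:
    """Cost of one run: active execution time billed against the plan's
    hourly rate. Rounding up to whole hours is an assumption here."""
    return math.ceil(duration_hours) * HOURLY_RATES[plan]

# No idle charges: a pipeline with zero active runs costs nothing.
```

Because each run is billed independently, the same function applies to one-off and scheduled executions alike.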
## Resource Plans

### CPU Plans
CPU plans are suitable for data preprocessing, lightweight inference, and orchestration tasks.
| Plan Type | Use Case |
|---|---|
| CPU Basic | Simple data processing, file transfers, orchestration |
| CPU Standard | Medium workloads, data transformation, batch processing |
| CPU High | CPU-intensive tasks, large-scale data preprocessing |
### GPU Plans
GPU plans are designed for training, fine-tuning, and GPU-accelerated workloads.
| Plan Type | Use Case |
|---|---|
| GPU Basic | Small model training, quick experiments |
| GPU Standard | Full model training, fine-tuning |
| GPU High | Large-scale training, multi-step GPU workflows |
> **Info:** For specific plan pricing and configurations, use the E2E Calculator to estimate your costs.
## Pricing Examples

### Example 1: Short Training Job
Scenario: A single-step training pipeline running for 2 hours on a GPU plan.
- Resource: GPU Standard plan
- Duration: 2 hours
- Billing: 2 hours x GPU Standard hourly rate
- Result storage: Pipeline artifacts stored in EOS (billed separately under Datasets pricing)
### Example 2: Scheduled Batch Processing
Scenario: A daily scheduled pipeline for data preprocessing, running 30 minutes each time on a CPU plan.
- Resource: CPU Standard plan
- Frequency: Once daily (30 runs/month)
- Duration per run: 30 minutes (billed as a 1-hour minimum, or pro-rated)
- Monthly billing: 30 x CPU Standard hourly rate
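Under the 1-hour-minimum assumption, the monthly cost for this schedule works out as follows. The rate used here is a hypothetical placeholder, not a real price:

```python
# Hypothetical CPU Standard rate -- real rates come from the E2E Calculator.
cpu_standard_rate = 5.0  # per hour

runs_per_month = 30
billable_hours_per_run = 1  # 30 minutes rounded up under a 1-hour minimum

monthly_cost = runs_per_month * billable_hours_per_run * cpu_standard_rate
print(monthly_cost)  # 150.0
```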
### Example 3: Multi-step Argo Workflow
Scenario: A 3-step pipeline — data preprocessing (CPU, 1 hour), training (GPU, 4 hours), evaluation (CPU, 30 minutes).
- Step 1: CPU plan x 1 hour
- Step 2: GPU plan x 4 hours
- Step 3: CPU plan x 1 hour (30 minutes rounded up)
- Total: Sum of each step's resource-plan cost
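Summing the per-step costs can be sketched as below. The rates are hypothetical placeholders (use the E2E Calculator for real prices), and partial hours are rounded up per step, as in the example above:

```python
import math

# Hypothetical hourly rates -- real prices come from the E2E Calculator.
rates = {"CPU Standard": 5.0, "GPU Standard": 150.0}

# (plan, wall-clock hours) for each step; partial hours round up.
steps = [
    ("CPU Standard", 1.0),  # data preprocessing
    ("GPU Standard", 4.0),  # training
    ("CPU Standard", 0.5),  # evaluation -> 1 billable hour
]

total = sum(math.ceil(hours) * rates[plan] for plan, hours in steps)
print(total)  # 610.0
```

Because each step bills against its own plan, moving the CPU-bound steps off the GPU plan is where most of the savings in the next section come from.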
## Cost Optimization Tips

### 1. Right-size Resources
- Use CPU plans for data preprocessing, file transfers, and evaluation steps.
- Reserve GPU plans only for steps that require GPU acceleration (training, inference).
- Avoid selecting GPU plans for orchestration-only steps.
### 2. Use Scheduling Wisely
- Set an appropriate `max_concurrency` on scheduled runs to prevent overlapping executions that consume duplicate resources.
- Remove or disable scheduled runs that are no longer needed.
### 3. Optimize Pipeline Steps
- Leverage the retry mechanism to resume failed jobs without re-running from scratch — this avoids paying for already-completed work.
- Store intermediate results in EOS buckets to avoid recomputation if a later step fails.
- Break large pipelines into smaller steps so that only failed steps need to be retried.
### 4. Clean Up Unused Resources
- Delete pipelines and versions that are no longer needed.
- Delete completed or failed runs to keep your workspace clean.