
Pricing

How Pipeline Billing Works

TIR Pipelines are billed based on the resource plan selected for each run. Key billing principles:

  • Per-hour billing — You are charged hourly for the duration of active run execution.
  • No idle charges — There is no cost when pipelines have no active runs. You only pay during execution.
  • Resource-plan based — Pricing varies by the CPU or GPU plan selected when creating a run.
  • Scheduled runs — Each triggered execution is billed independently based on its duration and resource plan.
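Taken together, these principles reduce to a simple cost model. The sketch below is illustrative only: the hourly rate is made up, and rounding partial hours up to the next hour is an assumption (billing may instead be pro-rated); real plan rates come from the E2E Calculator.

```python
import math

# Hypothetical hourly rate for illustration only; actual plan rates
# come from the E2E Calculator.
GPU_STANDARD_RATE = 50.0  # currency units per hour (assumed)

def run_cost(duration_hours, hourly_rate):
    # Per-hour billing: you are charged only while the run executes.
    # Partial hours are rounded up here -- an assumption; actual
    # billing may be pro-rated instead.
    return math.ceil(duration_hours) * hourly_rate

print(run_cost(2, GPU_STANDARD_RATE))  # 2 active hours
print(run_cost(0, GPU_STANDARD_RATE))  # no active runs -> no idle charge
```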

Resource Plans

CPU Plans

CPU plans are suitable for data preprocessing, lightweight inference, and orchestration tasks.

| Plan Type    | Use Case                                                |
| ------------ | ------------------------------------------------------- |
| CPU Basic    | Simple data processing, file transfers, orchestration   |
| CPU Standard | Medium workloads, data transformation, batch processing |
| CPU High     | CPU-intensive tasks, large-scale data preprocessing     |

GPU Plans

GPU plans are designed for training, fine-tuning, and GPU-accelerated workloads.

| Plan Type    | Use Case                                       |
| ------------ | ---------------------------------------------- |
| GPU Basic    | Small model training, quick experiments        |
| GPU Standard | Full model training, fine-tuning               |
| GPU High     | Large-scale training, multi-step GPU workflows |
Info: For specific plan pricing and configurations, use the E2E Calculator to estimate your costs.


Pricing Examples

Example 1: Short Training Job

Scenario: A single-step training pipeline running for 2 hours on a GPU plan.

  • Resource: GPU Standard plan
  • Duration: 2 hours
  • Billing: 2 hours x GPU Standard hourly rate
  • Result storage: Pipeline artifacts stored in EOS (billed separately under Datasets pricing)

Example 2: Scheduled Batch Processing

Scenario: A daily scheduled pipeline for data preprocessing, running 30 minutes each time on a CPU plan.

  • Resource: CPU Standard plan
  • Frequency: Once daily (30 runs/month)
  • Duration per run: 30 minutes (billed as 1 hour minimum or pro-rated)
  • Monthly billing: 30 x CPU Standard hourly rate
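This monthly figure can be sketched as below. The rate is hypothetical, and the 1-hour minimum per run is one of the two billing behaviors mentioned above; under pro-rated billing the result would be lower.

```python
import math

# Hypothetical hourly rate for illustration; actual CPU Standard
# pricing comes from the E2E Calculator.
CPU_STANDARD_RATE = 5.0  # currency units per hour (assumed)

def monthly_cost(runs_per_month, minutes_per_run, hourly_rate,
                 minimum_billing_hours=1):
    # Each run is billed for at least the minimum, then whole hours.
    billed_hours = max(minimum_billing_hours,
                       math.ceil(minutes_per_run / 60))
    return runs_per_month * billed_hours * hourly_rate

# 30 daily runs of 30 minutes each, billed at the 1-hour minimum:
print(monthly_cost(30, 30, CPU_STANDARD_RATE))
```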

Example 3: Multi-step Argo Workflow

Scenario: A 3-step pipeline — data preprocessing (CPU, 1 hour), training (GPU, 4 hours), evaluation (CPU, 30 minutes).

  • Step 1: CPU plan x 1 hour
  • Step 2: GPU plan x 4 hours
  • Step 3: CPU plan x 1 hour (30 minutes, rounded up)
  • Total: Sum of each step's resource-plan cost
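Summing per-step costs can be sketched as follows. The rates are placeholders (real ones come from the E2E Calculator), and rounding each step's partial hours up follows the rounding shown in this example.

```python
import math

# Hypothetical hourly rates for illustration only.
RATES = {"cpu_standard": 5.0, "gpu_standard": 50.0}

def pipeline_cost(steps):
    # Each step bills against its own resource plan; partial hours
    # round up, matching the rounding in the example above.
    return sum(math.ceil(hours) * RATES[plan] for plan, hours in steps)

steps = [
    ("cpu_standard", 1.0),  # Step 1: data preprocessing
    ("gpu_standard", 4.0),  # Step 2: training
    ("cpu_standard", 0.5),  # Step 3: evaluation, rounds up to 1 hour
]
print(pipeline_cost(steps))
```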

Cost Optimization Tips

1. Right-size Resources

  • Use CPU plans for data preprocessing, file transfers, and evaluation steps.
  • Reserve GPU plans only for steps that require GPU acceleration (training, inference).
  • Avoid selecting GPU plans for orchestration-only steps.

2. Use Scheduling Wisely

  • Set appropriate max_concurrency on scheduled runs to prevent overlapping executions that consume duplicate resources.
  • Remove or disable scheduled runs that are no longer needed.
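To see why max_concurrency matters for cost, consider a run that outlasts its trigger interval: each overlapping execution bills independently. The sketch below is a rough model; the exact semantics of max_concurrency on the platform (whether excess triggers queue or are skipped) are an assumption here.

```python
import math

def peak_concurrent_runs(interval_hours, run_hours, max_concurrency=None):
    # Without a cap, a run that outlasts the trigger interval overlaps
    # the next one, and every overlapping run bills independently.
    uncapped = math.ceil(run_hours / interval_hours)
    if max_concurrency is None:
        return uncapped
    return min(uncapped, max_concurrency)

# Hourly trigger, 3-hour runs: three runs bill at once without a cap.
print(peak_concurrent_runs(1, 3))
print(peak_concurrent_runs(1, 3, max_concurrency=1))
```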

3. Optimize Pipeline Steps

  • Leverage the retry mechanism to resume failed jobs without re-running from scratch — this avoids paying for already-completed work.
  • Store intermediate results in EOS buckets to avoid recomputation if a later step fails.
  • Break large pipelines into smaller steps so that only failed steps need to be retried.
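The savings from retrying only failed steps can be illustrated with hypothetical per-step costs (the step names and figures below are made up for illustration, not actual pricing):

```python
# Per-step costs for a finished-but-for-one-step pipeline (hypothetical).
step_costs = {"preprocess": 5.0, "train": 200.0, "evaluate": 5.0}
failed_steps = ["evaluate"]

# Re-running the whole pipeline pays for every step again;
# retrying only the failed step pays for that step alone.
full_rerun = sum(step_costs.values())
retry_only = sum(step_costs[s] for s in failed_steps)
print(full_rerun - retry_only)  # cost avoided by retrying, not re-running
```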

4. Clean Up Unused Resources

  • Delete pipelines and versions that are no longer needed.
  • Delete completed or failed runs to keep your workspace clean.