Training Cluster

TIR Training Cluster provides a dedicated compute environment for running distributed AI model training workloads. Get fixed-price allocations of GPU, CPU, and RAM with support for PyTorch, PyTorch Lightning, Slurm, and OpenMPI frameworks — no extra charges for deployments running inside the cluster.

Multi-Node GPU Training · PyTorch / Slurm / MPI · Fixed Pricing · SFS / PFS Storage

Quick Start

Explore Training Cluster

API Reference

REST API

Training Cluster API Reference

Programmatically create, manage, and monitor TIR Training Clusters and deployments. Automate cluster provisioning, retrieve job status, and control lifecycle via REST.

Explore REST APIs
Authentication & Endpoints
Request and Response Schemas
Open API Reference →
Base URL: tir.e2enetworks.com/api/v1

GET     /projects/{id}/distributed_jobs/cluster/        List all training clusters
POST    /projects/{id}/distributed_jobs/cluster/        Create a training cluster
GET     /projects/{id}/distributed_jobs/cluster/{id}/   Get cluster details
DELETE  /projects/{id}/distributed_jobs/cluster/{id}/   Delete a training cluster
POST    /projects/{id}/distributed_jobs/jobs/           Create a deployment
GET     /projects/{id}/distributed_jobs/jobs/           List all deployments
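As a minimal sketch, the "list all training clusters" endpoint above can be called from Python's standard library. The HTTPS scheme, bearer-token authentication, and the API_TOKEN / PROJECT_ID placeholders are assumptions for illustration, not confirmed by this page; consult the full API reference for the actual authentication scheme.

```python
"""Sketch: list training clusters via the TIR REST API.

Assumptions (hypothetical, not confirmed above): the API is served over
HTTPS and authenticated with a bearer token.
"""
import json
import urllib.request

BASE_URL = "https://tir.e2enetworks.com/api/v1"


def cluster_list_url(project_id: str) -> str:
    """Build the 'list all training clusters' endpoint URL."""
    return f"{BASE_URL}/projects/{project_id}/distributed_jobs/cluster/"


def list_clusters(project_id: str, token: str) -> dict:
    """GET the cluster list and return the decoded JSON response."""
    req = urllib.request.Request(
        cluster_list_url(project_id),
        headers={"Authorization": f"Bearer {token}"},  # assumed auth scheme
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# The other endpoints follow the same pattern: POST to .../cluster/ to
# create, DELETE .../cluster/{id}/ to tear down, and so on.
print(cluster_list_url("1234"))
```

The same URL-building pattern applies to the deployment endpoints, swapping cluster/ for jobs/.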

Billing & Plans

Billing & Credits

Training Clusters are billed at a fixed rate based on your cluster plan. Pricing does not vary with resource utilization, and deployments running inside the cluster incur no additional charges.

View Billing Docs →

Fixed-rate billing

Billed per hour based on cluster plan — costs do not change with GPU utilization.

No per-deployment charges

Run multiple training deployments inside your cluster at no extra cost.

Plan-based pricing

Choose a cluster plan that matches your RAM, CPU, and GPU workload requirements.
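Because billing is fixed-rate, the cost estimate reduces to plan rate times wall-clock hours. A small sketch, where the hourly rate is a hypothetical placeholder (actual rates depend on the chosen plan):

```python
# Illustrative fixed-rate cost estimate. HOURLY_RATE is a hypothetical
# placeholder; actual rates depend on the cluster plan you select.
HOURLY_RATE = 2.50   # assumed plan rate, per hour
HOURS = 24 * 30      # one month of continuous running

# Fixed-rate billing: cost depends only on the plan rate and elapsed
# time, not on GPU utilization or the number of deployments inside.
monthly_cost = HOURLY_RATE * HOURS
print(monthly_cost)  # 1800.0
```

Running ten deployments in the cluster for that month would cost the same as running one, since deployments inside the cluster incur no additional charges.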