Instances

TIR Instances are high-performance, containerized compute environments for AI/ML development. Launch GPU-backed workspaces with pre-installed ML frameworks, JupyterLab access, and persistent storage, all ready in minutes.

GPU: H200 / H100 / A100 / L40S · CPU Instances · Private Cluster · JupyterLab & SSH

Quick Start

Explore Instances

Instance Capabilities

CPU, GPU, Spot & Private Cluster


Customization

Images, containers & integrations


Add-Ons

UI tools & visualizations


API Reference

REST API

Instances API Reference

Programmatically create, manage, and monitor TIR Instances. Automate launches, retrieve status, and control lifecycle via REST.

Explore REST APIs
Authentication & Endpoints
Request and Response Schemas
Open API Reference →
Base URL: tir.e2enetworks.com/api/v1

GET     /projects/{id}/nodes                     List all instances
POST    /projects/{id}/nodes                     Launch a new instance
GET     /projects/{id}/nodes/{node_id}           Get instance details
POST    /projects/{id}/nodes/{node_id}/start     Start a stopped instance
POST    /projects/{id}/nodes/{node_id}/stop      Stop a running instance
DELETE  /projects/{id}/nodes/{node_id}           Delete an instance

Billing & Plans

Billing & Credits

Instances are billed per hour for active compute. Stopped instances incur only storage charges.

View Billing Docs →

Hourly billing

Stopped instances are not charged for compute; only storage applies.

Real-time usage tracking

Analyze spending by project, service category, and custom date range.

Committed plans

Reserve and launch dedicated compute resources with flexible commitment plans of 1, 6, or 12 months.