Instances
TIR Instances are high-performance, containerized compute environments for AI/ML development. Launch GPU-backed workspaces with pre-installed ML frameworks, JupyterLab access, and persistent storage, ready in minutes.
Quick Start
Create Your First Instance
Launch a GPU-backed notebook with pre-installed ML frameworks in minutes.
Instance Features
SSH access, JupyterLab, persistent storage, and more built-in capabilities.
Use Your Own Image
Bring a custom Docker image and deploy it as a TIR Instance.
Troubleshooting & FAQs
Resolve common issues and get answers to frequently asked questions.
Explore Instances
Storage & Persistence
Volumes, datasets & mounts
Customization
Images, containers & integrations
Guides
Step-by-step tutorials
API Reference
Instances API Reference
Programmatically create, manage, and monitor TIR Instances. Automate launches, retrieve status, and control lifecycle via REST.
/projects/{id}/nodes - List all instances
/projects/{id}/nodes - Launch a new instance
/projects/{id}/nodes/{node_id} - Get instance details
/projects/{id}/nodes/{node_id}/start - Start a stopped instance
/projects/{id}/nodes/{node_id}/stop - Stop a running instance
/projects/{id}/nodes/{node_id} - Delete an instance

Billing & Plans
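These routes can be exercised with any HTTP client. A minimal Python sketch follows; the base URL, the bearer-token auth scheme, and the HTTP method are placeholders and assumptions, not confirmed details of the TIR API, so check the full API reference before use:

```python
import urllib.request

BASE_URL = "https://api.example.com"  # placeholder; substitute the real TIR API host
API_TOKEN = "YOUR_API_TOKEN"          # hypothetical auth scheme; see the API reference

def endpoint(project_id, node_id=None, action=None):
    """Build an instance-API path from the route patterns listed above."""
    path = f"/projects/{project_id}/nodes"
    if node_id is not None:
        path += f"/{node_id}"
    if action is not None:
        path += f"/{action}"
    return path

def list_instances(project_id):
    """Fetch the node list for a project (GET method assumed, not confirmed)."""
    req = urllib.request.Request(
        BASE_URL + endpoint(project_id),
        headers={"Authorization": f"Bearer {API_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Path construction only; no network call is made here.
print(endpoint(42))            # /projects/42/nodes
print(endpoint(42, 7))         # /projects/42/nodes/7
print(endpoint(42, 7, "stop")) # /projects/42/nodes/7/stop
```

The same `endpoint` helper covers every route in the table, since they all share the `/projects/{id}/nodes` prefix.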
Billing & Credits
Instances are billed per hour for active compute. Stopped instances incur only storage charges.
Hourly billing
Stopped instances are not charged for compute; only storage applies.
Real-time usage tracking
Analyze spending by project, service category, and custom date range.
Committed plans
Reserve and launch dedicated compute resources with flexible commitment plans of 1, 6, or 12 months.
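The billing model above (compute billed per running hour, storage billed regardless of instance state) can be sketched as a simple cost estimate. The rates below are hypothetical placeholders, not TIR's actual pricing:

```python
def estimate_monthly_cost(running_hours, storage_gb,
                          gpu_rate_per_hour=2.50,          # hypothetical rate
                          storage_rate_per_gb_month=0.10): # hypothetical rate
    """Compute charges accrue only while running; storage accrues either way."""
    compute = running_hours * gpu_rate_per_hour
    storage = storage_gb * storage_rate_per_gb_month
    return compute + storage

# An instance run for 200 hours in a month with a 100 GB volume:
print(estimate_monthly_cost(200, 100))  # 200*2.50 + 100*0.10 = 510.0

# The same instance stopped all month still pays for its storage:
print(estimate_monthly_cost(0, 100))    # 10.0
```

Stopping idle instances therefore reduces the bill to the storage term alone, which is the behavior described under "Hourly billing" above.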