TIR
TIR is E2E Networks' managed AI platform. Launch GPU-powered notebooks, deploy production-ready model endpoints, manage datasets, and run distributed training jobs on India’s fastest cloud GPU infrastructure.
Quick Start
Launch a Notebook
Spin up a GPU-backed Jupyter notebook with pre-installed ML frameworks.
Quick Start: Model Inference
Deploy your first model endpoint and make an inference call in under 5 minutes.
Training Cluster
High-performance GPU cluster designed for large-scale AI and ML model training.
Try Gen AI API
Call any foundation model via an OpenAI-compatible endpoint in seconds.
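Because the Gen AI API is OpenAI-compatible, any OpenAI client or plain HTTP request works against it. The sketch below builds such a request with only the standard library; the base URL, API key, and model name are placeholders, so substitute the values shown in your TIR console.

```python
import json

# Placeholders: copy the real base URL and API key from your TIR console.
BASE_URL = "https://YOUR_TIR_GENAI_BASE_URL/v1"
API_KEY = "YOUR_TIR_API_KEY"

def chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible POST request for /chat/completions."""
    return {
        "url": f"{BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = chat_request("llama-3-8b-instruct", "Hello!")
print(req["url"])
```

Sending this with any HTTP client (or pointing the official OpenAI SDK's `base_url` at the same endpoint) returns a standard chat-completion response.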
Nemotron Models Now Available
NVIDIA's Nemotron-4-340B and Nemotron-3-8B are now live in Model Endpoints. Optimised for enterprise workloads with context windows up to 4096 tokens.
Platform Capabilities
Notebooks
Managed Jupyter environments
Inference
One-click inference serving
Training Cluster
Deployments & distributed training
Storage & Data
Datasets, file systems & registry
AI Integration and Deployment
Workflows, RAG & vector search
API Reference
TIR API Reference
Complete reference for every TIR endpoint. Programmatically manage notebooks, inference, datasets, training clusters, and pipelines.
/projects/{id}/nodes: List notebook nodes
/projects/{id}/endpoints: Create model endpoint
/projects/{id}/datasets: List datasets
/projects/{id}/nodes: Launch a node
/projects/{id}/nodes/{node_id}: Stop & delete node
/projects/{id}/pipelines: List pipelines
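Each endpoint above is a path template with parameters like {id} and {node_id}. A minimal sketch of filling those templates into full request URLs, assuming a hypothetical base URL (take the real one from the TIR API Reference):

```python
# Placeholder base URL; the actual host is documented in the TIR API Reference.
TIR_API_BASE = "https://YOUR_TIR_API_HOST/api/v1"

def endpoint(template: str, **params) -> str:
    """Fill path parameters such as {id} and {node_id} into a template."""
    return TIR_API_BASE + template.format(**params)

# List notebook nodes in project 42
print(endpoint("/projects/{id}/nodes", id=42))
```

The same helper covers every template in the list, e.g. `endpoint("/projects/{id}/nodes/{node_id}", id=1, node_id=7)` for stopping or deleting a node.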
Billing & Credits
Track your spending and optimize GPU costs across all TIR resources.
Per-hour billing
Granular per-hour billing ensures precise and fair cost control.
Committed Resources
Commit to a plan to receive discounts on select resources.
Real-time usage tracking
Analyze spending by project, service category, and custom date range.
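The per-hour model above makes cost estimation simple arithmetic: hours of runtime times the resource's hourly rate. A small sketch, using an illustrative rate rather than an actual TIR price:

```python
def usage_cost(hours: float, rate_per_hour: float) -> float:
    """Total cost for a resource billed per hour of runtime."""
    return round(hours * rate_per_hour, 2)

# e.g. a GPU node run for 6.5 hours at an assumed rate of 250 per hour
print(usage_cost(6.5, 250.0))  # 1625.0
```

Real rates per GPU type are listed on the TIR pricing page; the usage dashboard applies the same arithmetic per project and service category.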