
Model Repositories

TIR Model Repositories give you a central, scalable place to store your AI/ML model weights and configuration files. They run on E2E Object Storage (EOS) with S3-compatible APIs, so it's easy to version models, collaborate with your team, and connect them to TIR inference.




What are Model Repositories?

Model Repositories are managed storage systems designed specifically for storing and organizing AI/ML model artifacts. They serve as the foundation for model deployment workflows, allowing you to:

Store model weights and configuration files in a centralized location

Version models using flexible folder structures (v1, v2, etc.)

Share models across team members within a project

Integrate seamlessly with Model Endpoints for automated model loading

Access models from Instances and Inference services

Key Characteristics

Structure

Flexible Model Definition

A model in TIR is simply a directory on EOS. There is no rigid structure, format, or framework required. Use subfolders like v1 and v2 to track versions.

API

S3-Compatible Storage

Built on E2E Object Storage (EOS) with full S3 API support. Use standard tools such as the MinIO Client (mc) and s3cmd, with no vendor lock-in.
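As a sketch, model files can be pushed to a repository bucket with the MinIO Client; the endpoint, credentials, and bucket name below are placeholders, not real TIR values:

```shell
# Placeholder values -- substitute the EOS endpoint, keys, and bucket
# shown for your repository in the TIR console.
EOS_ENDPOINT="https://objectstore.e2enetworks.net"
BUCKET="my-model-repo"

# Register the endpoint once as an mc alias:
#   mc alias set eos "$EOS_ENDPOINT" "$ACCESS_KEY" "$SECRET_KEY"

# Upload local weights into a versioned prefix, then list them:
#   mc cp --recursive ./checkpoints/ "eos/$BUCKET/v1/"
#   mc ls "eos/$BUCKET/v1/"
echo "target prefix: eos/$BUCKET/v1/"
```

The same alias works for any S3-compatible tool, since EOS speaks the standard S3 API.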

Flexibility

Multiple Storage Options

Use a new EOS bucket, link an existing EOS bucket, or connect an external S3-compatible bucket for your model repository.


Use Cases

01. Model Versioning and Management

  • Store multiple versions of the same model (v1, v2, production, staging)
  • Track model iterations and roll back to previous versions
  • Maintain model lineage and metadata
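Because versions are just folder prefixes, finding the newest one is a matter of version-aware sorting; a minimal sketch, assuming prefixes named vN:

```shell
# Hypothetical listing of version prefixes in a repository.
versions="v1 v2 v10"

# sort -V orders version strings numerically, so v10 ranks above v2.
latest=$(printf '%s\n' $versions | sort -V | tail -n 1)
echo "$latest"   # v10
```

Plain lexicographic sorting would wrongly rank v2 above v10, which is why version-aware sorting matters here.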
02. Model Deployment Pipeline

  • Store models before deploying to Model Endpoints
  • Support CI/CD workflows for model deployment
03. Fine-tuning Workflows

  • Store base models for fine-tuning
  • Save fine-tuned model checkpoints
  • Download models to Instances for training
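To pull a stored base model onto an Instance, any S3-compatible tool works; a hedged sketch with s3cmd, where the bucket and prefix are made-up examples:

```shell
# Hypothetical source bucket/prefix and local destination.
SRC="s3://my-model-repo/base-model/v1/"
DEST="./models/base-model/"

# Recursive download (run on the Instance, after s3cmd --configure):
#   s3cmd get --recursive "$SRC" "$DEST"
echo "$SRC -> $DEST"
```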
04. Multi-Framework Support

  • Store PyTorch models (.pth, .pt files)
  • Store TensorFlow models (SavedModel format)
  • Store Triton model repositories
  • Store custom model formats
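For reference, a Triton model repository expects one directory per model containing a config.pbtxt plus numbered version subdirectories; a minimal sketch with a hypothetical model name:

```shell
# Create a minimal Triton-style layout (model name is hypothetical).
mkdir -p triton-repo/my_model/1
touch triton-repo/my_model/config.pbtxt   # model configuration
touch triton-repo/my_model/1/model.pt     # version 1 weights (e.g. TorchScript)
find triton-repo | sort
```

Uploading this directory tree as-is to an EOS bucket yields a repository that Triton-based endpoints can load directly.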
05. Direct Deployment from Model Repository

  • Use the Deploy Model option in the Model Repository table to deploy as a Model Endpoint
  • Select a framework (for example, vLLM, SGLang, NVIDIA Triton) and link the repository to the endpoint

API Reference

REST API

Model Repository API Reference

Programmatically create, list, and delete model repositories in TIR. The full reference also covers authentication, endpoints, and request and response schemas.
Base URL: tir.e2enetworks.com/api/v1

GET    /teams/{Team_Id}/projects/{Project_Id}/serving/model/                  List model repositories
POST   /teams/{Team_Id}/projects/{Project_Id}/serving/model/                  Create model repository
GET    /teams/{Team_Id}/projects/{Project_Id}/serving/model/model_types/      Get model types
DELETE /teams/{Team_Id}/projects/{Project_Id}/serving/model/{Model_repo_id}/  Delete model repository
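Putting the pieces together, the list endpoint can be called with curl; the IDs, token variable, and https scheme below are assumptions for illustration:

```shell
# Hypothetical team/project IDs; the https scheme is assumed.
TEAM_ID="1234"
PROJECT_ID="5678"
BASE="https://tir.e2enetworks.com/api/v1"

URL="$BASE/teams/$TEAM_ID/projects/$PROJECT_ID/serving/model/"
echo "$URL"

# List repositories (requires a valid API token):
#   curl -H "Authorization: Bearer $API_TOKEN" "$URL"
```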