Model Endpoints
TIR provides two methods for deploying containers that serve model API endpoints for AI inference services:
- Deploy Using Pre-built Containers (Provided by TIR)
Before launching a service with TIR's pre-built containers, you need to create a TIR Model and upload the necessary model files. These containers are configured to automatically download model files from an EOS (E2E Object Storage) Bucket and start the API server. Once the endpoint is ready, you can make synchronous requests to the endpoint for inference. Learn more in this tutorial.
- Deploy Using Your Own Container
You can launch an inference service using your own Docker image, either public or private. Once the endpoint is ready, you can make synchronous requests for inference (a request sketch follows this list). Optionally, you can attach a TIR model to automate the download of model files from an EOS bucket to the container. Learn more in this tutorial.
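As a rough illustration of a synchronous inference request (not TIR's exact API), the sketch below POSTs a JSON payload to an endpoint using Python's requests library. The endpoint URL, authorization header, and payload shape are placeholders; use the actual values and request schema shown for your endpoint in the TIR console.

```python
import requests

# Placeholders: take the actual endpoint URL and API token from the TIR console.
ENDPOINT_URL = "https://<your-endpoint-url>/predict"  # the path depends on the framework you deploy
API_TOKEN = "<your-api-token>"

# Hypothetical payload; the expected request schema depends on the model/framework.
payload = {"inputs": ["What is the capital of France?"]}

response = requests.post(
    ENDPOINT_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```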
Create Model Endpoints
To create a model endpoint:
- Navigate to Inference: Click on "Model Endpoints."
- Create an Endpoint: Click on the "CREATE ENDPOINT" button.
- Choose a Framework: You can use the search bar to explore the available frameworks.
- Click on Link with Model Repository and select a model from the Model Repository.
- Alternatively, click on Download from Huggingface and select a token from HF Token Integration.
- If no HF Token is integrated, click on Click Here to create a new integration.
- Add the Integration Name and Hugging Face Token, and then click Create.
Plan Details
Machine
Here you can select a machine type, either GPU or CPU. While selecting the plan, you can choose between Committed and Hourly Billed pricing for the inference.
You can apply filters to the available machines to refine your search results.
Serverless Configuration
For Hourly Billed Inference, Active Workers can be set to any value from 0 up to Max Workers. When Active Workers is set to 0 and there are no incoming requests, the number of running workers automatically scales down to 0, ensuring that no billing is incurred for that idle period.
For Committed Inference, Active Workers cannot be set to 0; the number of active workers must always remain greater than 0, reflecting the committed nature of the inference. You can set Active Workers to any value from 1 up to Max Workers.
Note: All additional workers added automatically by the autoscaling feature are charged at the hourly billing rate. Similarly, if the Active Worker count is increased through the Serverless Configuration after the inference is created, those additional workers are also billed at the hourly rate.
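For example, if a Committed Inference is created with Active Workers set to 1 and Max Workers set to 3, the single committed worker is billed at the committed rate, while the up to two extra workers that autoscaling may add are billed at the hourly rate only for the time they run.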
Replicas
The number of replicas you specify in the input field below will always be ready, and you will be charged continuously for them.
Enable Autoscaling
Here you have the option to enable Autoscaling.
Environment Variables
Add Variable
You can add a variable by clicking the ADD VARIABLE button.
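Variables added here are exposed to the serving container as standard environment variables. As a minimal sketch (the variable name MODEL_NAME is purely illustrative, not a setting TIR requires), custom serving code could read such a variable like this:

```python
import os

# "MODEL_NAME" is a hypothetical variable added via the ADD VARIABLE button;
# substitute whatever names you actually define for your endpoint.
model_name = os.environ.get("MODEL_NAME", "default-model")
print(f"Serving model: {model_name}")
```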
Summary
You can view the endpoint summary on the summary page, and if needed, modify any section of the inference creation process by clicking the edit button.
- In the case of Hourly Billed Inference with Active Worker not equal to Max Worker, the billing is shown as:
- In the case of Committed Inference with Active Worker not equal to Max Worker, the billing is shown as:
If the committed inference option is selected, a dialog box will appear asking for your preference regarding the next cycle.
- You can change the serving model for the inference at any stage of the inference creation process.
Model Endpoint Dashboard
After successfully creating a model endpoint, you will see the following screen.
Overview
In the overview tab, you can see the Endpoint Details and Plan Details.
Logs
In the Logs tab, you can view the logs generated by the endpoint.
Deployment Events
The Deployment Events tab provides information about deployment events for the endpoint.
Access Logs
In the Access Logs tab, you can view the access logs generated for the endpoint.
Monitoring
In the Monitoring tab, you can choose between Hardware and Service as metric types. Under Hardware, you'll find details like GPU Utilization, GPU Memory Utilization, GPU Temperature, GPU Power Usage, CPU Utilization, and Memory Usage.
Serverless Configuration
In the Serverless Configuration tab, you can modify both the Active Worker and Max Worker counts. If the Active Worker count is not equal to the Max Worker count, autoscaling will be enabled.
Auto Scaling
In the Auto Scaling tab, you can view the current and desired number of replicas, increase the number of desired replicas, and disable auto-scaling if needed.
Manage Replicas
In the Replica Management tab, you can view the current replicas and delete any of them by clicking the delete icon.
Search Model Endpoint
In the search model endpoint tab, you can search for a model endpoint by name and apply advanced filters to refine your search results.
Update Model Endpoint
- To update a Model Endpoint, click on the Edit option.
When it is Committed Inference
When it is Hourly Billed Inference
Model Download
Here you can select Link with Model Repository or Download from Huggingface.
Machine
Here you can select a machine type, either GPU or CPU.
When updating an Hourly Billed Inference, only the Storage per Worker can be modified. All other aspects, such as scaling and worker adjustments, are managed through the Serverless Configuration tab.
Disk Size per Replica
You have the option to specify the disk size for each replica, measured in gigabytes (GB), to match your storage requirements.
Replicas
The number of replicas you specify in the input field below will always be ready, and you will be charged continuously for them.
If Autoscaling is enabled, the replicas cannot be managed manually.
Environment Variable
Add Variable