TIR AI PLATFORM Release Notes

Release Updates

  • TIR AI (PBAC) Release Update - 26 July, 2024 08:30 PM

We are delighted to introduce Policy-Based Access Control (PBAC), seamlessly integrated into our TIR AI Platform for an enhanced user experience. TIR PBAC is a comprehensive system designed to manage user roles and permissions within the TIR platform. This feature ensures that the right people have access to the right resources, streamlining collaboration and enhancing security across teams and projects. With TIR PBAC, organizations can efficiently assign roles such as Owner, Admin, Team Lead, Team Member, Project Lead, and Member, ensuring clear and controlled access to critical functions and information. This structured approach not only simplifies user management but also fosters a more organized and productive working environment.

  • Node and Inference (Advanced Filter) Release Update - 12 July, 2024 6:15 PM

We are thrilled to announce the addition of an advanced filter feature on the Node and Model Endpoint Listing page. This enhancement enables users to perform comprehensive filtering based on multiple criteria and combinations.

  • Finetune jobs (New Model) Release Update - 12 July, 2024 03:45 PM

We are thrilled to announce the addition of a new model to our Fine-Tuning Jobs: stabilityai/stable-diffusion-xl-base-1.0. This model is now available for fine-tuning, enhancing the versatility and capabilities of your machine learning projects.

  • Node (Restart Option) Release Update - 3 July, 2024 4:30 PM

We are excited to announce that we have introduced a restart option in the Notebook section. Users can now restart their notebooks. For more info, see Containers.

  • Vector Database (Qdrant) Release Update - 28 Jun, 2024 6:30 PM

We are thrilled to announce the addition of a Snapshot capability in our vector databases, enabling users to capture and store the current database state for backup. Users can easily revert to a previous database state using the stored snapshots, facilitating quick recovery from data loss and supporting testing scenarios.
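
As a rough illustration of how snapshot creation and listing look in code, the sketch below uses the open-source qdrant-client Python package; the endpoint URL, API key, and collection name are placeholders for your own TIR Qdrant deployment, and the exact workflow in TIR may differ:

    # Illustrative sketch using the open-source qdrant-client package.
    # The URL, API key, and collection name are placeholders -- substitute
    # the values from your own TIR Qdrant deployment.
    from qdrant_client import QdrantClient

    client = QdrantClient(url="https://<your-qdrant-endpoint>", api_key="<your-api-key>")

    # Capture the current state of a collection as a snapshot (for backup).
    snapshot = client.create_snapshot(collection_name="my_collection")
    print("Created snapshot:", snapshot.name)

    # List stored snapshots; any of these can later be used to restore the collection.
    for snap in client.list_snapshots(collection_name="my_collection"):
        print(snap.name, snap.creation_time, snap.size)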

  • Node Release Update - 28 Jun, 2024 6:00 PM

We are excited to announce the release of a new and improved feature in Node: users can now add multiple SSH keys to a node regardless of its status.

  • Model Endpoint Release Update - 20 Jun, 2024 5:30 PM

We are excited to announce the latest addition of a Restart action to the model endpoint actions, allowing users to restart an endpoint.

  • Fine Tuning Release Update - 20 Jun, 2024 5:00 PM

We are excited to announce that users can now create fine-tuned models by continuing training from previous model checkpoints (currently available for fine-tuned repositories only). Users can also clone a previous fine-tuning job to create a new one. For more info, see FineTune Models.

  • Model Endpoint Release Update - 14 Jun, 2024 5:30 PM

We are excited to announce the release of a new feature in Model Endpoints that allows users to update the specifications of repositories and endpoints.

  • SSH Keys (Sync SSH Keys) Release Update - 14 June, 2024 04:45 PM

We are thrilled to announce the launch of a new feature that allows users to import SSH keys from MyAccount to the TIR AI Platform. For more info, see SSH Key Management.

  • Node (Notebook Launch Options) Release Update - 13 June, 2024 11:00 AM

We are thrilled to announce the launch of a new feature that allows users to launch notebooks without JupyterLab. Additionally, for container registry images, users can now specify whether the image includes JupyterLab. This enhancement provides greater flexibility and customization for your notebook environments. For more info, see Create Notebooks.

  • Finetune jobs (Training Metrics Feature) Release Update - 7 June, 2024 11:00 AM

We are excited to announce the launch of a new feature: training metrics in Fine-Tuning Jobs. This feature includes the following metrics:

  • train/loss: This metric represents the loss value calculated on the training data during the training process

  • train/grad_norm: This metric indicates the norm of the gradients of the model parameters

  • train/learning_rate: This metric tracks the learning rate used by the optimizer at each training step

  • train/epoch: This metric represents the epoch number during training

These metrics provide valuable insights into the training process, helping you monitor and optimize your models effectively. For more info, see Fine-Tune-models.
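
For intuition about where these values come from, the following minimal PyTorch loop computes the same four quantities; it is a generic illustration only, not TIR's internal training code:

    # Generic PyTorch sketch of where the four reported values come from;
    # this is an illustration only, not TIR's training implementation.
    import torch
    from torch import nn

    model = nn.Linear(10, 1)
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)
    scheduler = torch.optim.lr_scheduler.LinearLR(optimizer, total_iters=100)
    data = [(torch.randn(8, 10), torch.randn(8, 1)) for _ in range(100)]

    for step, (x, y) in enumerate(data):
        loss = nn.functional.mse_loss(model(x), y)                           # train/loss
        optimizer.zero_grad()
        loss.backward()
        grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # train/grad_norm
        optimizer.step()
        scheduler.step()
        if step % 25 == 0:
            print({
                "train/loss": loss.item(),
                "train/grad_norm": grad_norm.item(),
                "train/learning_rate": scheduler.get_last_lr()[0],           # train/learning_rate
                "train/epoch": (step + 1) / len(data),                       # fraction of one pass
            })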

  • Finetune jobs (New Model Additions) Release Update - 6 June, 2024 03:45 PM

We are thrilled to announce the addition of three new models to our Fine-Tuning Jobs: mistralai/Mistral-7B-Instruct-v0.2, meta-llama/Meta-Llama-3-8B-Instruct, and google/gemma-7b-it. These models are now available for fine-tuning, enhancing the versatility and capabilities of your machine learning projects.

  • Model Endpoint (HF Token Integration) Release Update - 5 June, 2024 03:00 PM

We are pleased to announce the addition of an HF Token Integration section in the Model Endpoint creation and updating processes. This enhancement allows users to avoid repeatedly providing an HF token during model creation. Additionally, it supports the storage of multiple integration tokens simultaneously, streamlining and improving the user experience. For more info, see Model Endpoints.

  • Finetune jobs (Added LoRA and BnB quantization options) Release Update - 27 May, 2024 05:00 PM

We are thrilled to unveil our latest update, integrating LoRA configuration and Bits and Bytes (BnB) quantization options for fine-tuning models. For more info, see Fine-tuning Meta-Llama-3-8B.

What’s New:

  • LoRA Configuration: Harness the power of LoRA (Low-Rank Adaptation) to optimize your model’s performance. With LoRA, you can fine-tune your models with enhanced efficiency and precision.

  • BnB Quantization: Faster training times and reduced model sizes! Our Bits and Bytes (BnB) quantization option empowers you to compress your models without compromising on quality.
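
For orientation, the snippet below shows roughly how these two options are expressed with the open-source peft and transformers (bitsandbytes) integrations; the values are placeholders, and the exact parameter names exposed in the TIR fine-tuning form may differ:

    # Illustrative LoRA + 4-bit (BnB) configuration using the open-source
    # peft/transformers integrations; values are placeholders, and the exact
    # options exposed in the TIR fine-tuning form may differ.
    import torch
    from peft import LoraConfig
    from transformers import BitsAndBytesConfig

    lora_config = LoraConfig(
        r=16,                                   # rank of the low-rank update matrices
        lora_alpha=32,                          # scaling factor applied to the update
        lora_dropout=0.05,                      # dropout on the LoRA layers
        target_modules=["q_proj", "v_proj"],    # attention projections to adapt
        task_type="CAUSAL_LM",
    )

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,                      # quantize base weights to 4-bit
        bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
        bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype for matmuls
    )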

  • Finetune jobs (Wandb Integration) Release Update - 21 May, 2024 04:46 PM

We are excited to announce that we have integrated Wandb for model fine-tuning. Wandb, short for Weights & Biases, is a platform that offers tools for machine learning experiment tracking, visualization, and collaboration. It helps you keep track of your machine learning experiments, log hyperparameters, metrics, and output visualizations, making it easier to monitor and share your work with others. For more info, see Fine-tuning Meta-Llama-3-8B.
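
As a standalone illustration of what Wandb tracking involves (TIR's integration handles this wiring inside the fine-tuning job), the core calls are wandb.init and wandb.log; the project name and values below are placeholders:

    # Standalone illustration of Weights & Biases experiment tracking;
    # the project name and values are placeholders, and TIR's managed
    # integration performs equivalent logging inside the fine-tuning job.
    import wandb

    run = wandb.init(
        project="tir-finetune-demo",                  # placeholder project name
        config={"learning_rate": 2e-4, "epochs": 3},  # hyperparameters to record
    )

    for step in range(100):
        loss = 1.0 / (step + 1)                       # stand-in for a real training loss
        wandb.log({"train/loss": loss}, step=step)    # metrics appear in the Wandb dashboard

    run.finish()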

  • Finetune jobs (Feature update and UI changes) Release Update - 17 May, 2024 04:30 PM

We are excited to announce the addition of new features to our fine-tuning jobs, including logs, events, and utilization metrics. We have also updated the fine-tuning GUI for an enhanced user experience. For more info, see Fine-tuning Meta-Llama-3-8B.

Event: In the event section, you can monitor recent pod activities, including scheduling, container startup, and other relevant events.

Metrics: In the metrics section, you can monitor the resource utilization of the pod, including GPU utilization, GPU memory usage, and other relevant metrics.

  • Model Endpoints (vLLM model) Release Update - 3 May, 2024 04:00 PM

We are excited to announce that we have introduced the new vLLM model in Model Endpoints. When you set up Model Endpoints, you have the option to select the vLLM model. The vLLM server is structured to accommodate the OpenAI Chat API, enabling users to engage in dynamic conversations with the model. This chat interface provides a highly interactive method of interaction, facilitating fluid exchanges that can be archived in the chat history. For more info, see Model Endpoints.
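
Because the server follows the OpenAI Chat API, a deployed endpoint can typically be called with the standard openai Python client, as in the minimal sketch below; the base URL, API token, and model name are placeholders to be replaced with the values shown for your endpoint in TIR:

    # Minimal sketch: calling a vLLM-backed endpoint through the
    # OpenAI-compatible Chat API. The base URL, token, and model name are
    # placeholders -- take the real values from your endpoint in TIR.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://<your-endpoint-url>/v1",  # placeholder endpoint URL
        api_key="<your-api-token>",                 # placeholder auth token
    )

    response = client.chat.completions.create(
        model="<deployed-model-name>",              # placeholder model identifier
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarize what a vector database does."},
        ],
    )
    print(response.choices[0].message.content)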

  • GenAI API (llama-3-8b-instruct model) Release Update - 29 Apr, 2024 05:00 PM

We are excited to announce that we have introduced the new llama-3-8b-instruct model in the GenAI API. When you set up the GenAI API, you have the option to select the llama-3-8b-instruct model. This model is a more advanced successor to the ‘Llama-2-13b-chat’ model. The GenAI API is a user-friendly platform designed for easy access, allowing you to select an AI model, adjust settings, and observe real-time results. It’s perfect for individuals interested in exploring machine learning and AI models. For more info, see Playground.

  • Finetune jobs (meta-llama/Meta-Llama-3-8B model) Release Update - 19 Apr, 2024 04:30 PM

We are excited to announce that we have introduced the new ‘meta-llama/Meta-Llama-3-8B’ model in the fine-tuning process. When you set up fine-tuning, you have the option to select the ‘meta-llama/Meta-Llama-3-8B’ model in the job model configuration. For more info, see Fine-tuning Meta-Llama-3-8B.

  • Finetune jobs (google/gemma-7b model) Release Update - 16 Apr, 2024 05:30 PM

We are excited to announce that we have introduced the new ‘google/gemma-7b’ model in the fine-tuning process. When you set up fine-tuning, you have the option to select the ‘google/gemma-7b’ model in the job model configuration. For more info, see Fine-tuning Gemma-7b.

  • GenAI API (Whisper-large-v3 and e5-mistral-7b-instruct (Vector Embeddings) models) Release Update - 12 Apr, 2024 03:00 PM

We are excited to announce that we have introduced the new Whisper-large-v3 and e5-mistral-7b-instruct (Vector Embeddings) models in the GenAI API. When you set up the GenAI API, you have the option to select the whisper-large-v3 or e5-mistral-7b-instruct model. It is a platform designed to be easy to use, where you can choose an AI model, tweak settings, and see the results in real time. It’s ideal for those who want to delve into machine learning and AI models. For more info, see Model Playground.

  • Finetune jobs (mistralai/Mixtral-8x7B model) Release Update - 22 Mar, 2024 03:30 PM

We are excited to announce that we have introduced the new mistralai/Mixtral-8x7B model in the fine-tuning process. When you set up fine-tuning, you have the option to select the mistralai/Mixtral-8x7B model in the job model configuration. For more info, see Fine-tuning Mixtral-8x7B model.

  • Scheduled Run Release Update - 29 Feb, 2024 04:30 PM

We are thrilled to announce that we have implemented the “Scheduled Run” feature in the run pipeline. You can set up a scheduled run for a specific date and time, and the designated pipeline will then automatically run at the scheduled date and time. For more info, see Scheduled Run.

  • Model Playground Release Update - 21 Feb, 2024 06:00 PM

We are happy to share that we have introduced the Model Playground feature to the TIR AI Platform. Right now, three models are available: Llama-2-13b-chat, Stable Diffusion 2.1, and Code Llama 13b. The Model Playground is a service that makes it easy to use AI models through a single API. It is an interactive and easy-to-use platform where you can pick an AI model, adjust settings, and see the outcomes. It’s designed for people who want to experiment with machine learning and AI models. For more info, see Model Playground.

  • Finetune jobs (Mistral 7b model) Release Update - 15 Feb, 2024 05:30 PM

We are excited to announce that we have introduced the new Mistral 7b model in the fine-tuning process. When you set up fine-tuning, you have the option to select the Mistral 7b model in the job model configuration. For more info, see Fine-tuning Mistral-7B.

  • Fine Tuning Release Update - 9 Feb, 2024 07:00 PM

We are thrilled to announce that we have implemented EOS dataset support in the Fine Tuning feature. When you are setting up fine-tuning, you have the option to either attach a custom dataset or create a new one. For more info, see EOS Dataset for Fine-tuning.

  • Pipeline and Run Release Update - 29 Jan, 2024 04:30 PM

In the context of Artificial Intelligence (AI), a pipeline refers to a series of data processing steps or operations performed in sequence to achieve a specific AI task or goal. An AI pipeline typically involves several stages, each with a specific function, and it is designed to process and transform input data into meaningful output. For more info, see AI/ML Pipeline.
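
As a generic, self-contained illustration of the concept (not TIR's pipeline definition format), the scikit-learn example below chains a preprocessing stage and a model stage, so input data flows through both in sequence:

    # Generic illustration of the pipeline concept using scikit-learn;
    # this is not TIR's pipeline definition format.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_iris(return_X_y=True)

    pipeline = Pipeline([
        ("scale", StandardScaler()),                    # stage 1: normalize input features
        ("classify", LogisticRegression(max_iter=200)), # stage 2: fit a model on transformed data
    ])

    pipeline.fit(X, y)                                  # each stage runs in sequence
    print("Training accuracy:", pipeline.score(X, y))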

  • Container Release Update - 23 Jan, 2024 11:30 PM

We are thrilled to announce that we have updated the notebook user interface and renamed Notebook to ‘Container’. For more info, see Containers.

  • Notebook Release Update - 27 Oct, 2023 03:30 PM

We are thrilled to announce that the notebook section now offers a committed plan option for creating notebooks; previously, only an hourly option was available. Users can now choose the plan that best suits their needs. For more info, see Plans.