Pipeline

Introduction

In Artificial Intelligence (AI), a pipeline is a series of data processing steps performed in sequence to achieve a specific AI task or goal. An AI pipeline typically involves several stages, each with a specific function, and is designed to process and transform input data into meaningful output. Each stage plays a crucial role in the overall process, and the effectiveness of the pipeline depends on the quality of the data, the choice of algorithms, and the care taken in designing and optimizing each step. AI pipelines are commonly used across applications such as machine learning, natural language processing, and computer vision.
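To make the stages-in-sequence idea concrete, here is a minimal sketch of a two-stage pipeline using scikit-learn. This is purely an illustration of the pipeline concept and is not part of TIR itself; the dataset and stages are arbitrary example choices.

```python
# Minimal illustration of an AI pipeline: two stages run in sequence,
# each transforming the output of the previous one.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

pipeline = Pipeline([
    ("scale", StandardScaler()),         # stage 1: normalize the raw input data
    ("classify", LogisticRegression()),  # stage 2: fit a model on the transformed data
])

pipeline.fit(X, y)           # runs every stage in order
print(pipeline.score(X, y))  # meaningful output produced from raw input
```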

What is a Pipeline?

  • TIR Pipelines offer a way to write scalable, serverless, and asynchronous training jobs based on Docker containers. The supported formats are Argo and Kubeflow Pipelines templates (see the sketch after this list).

  • You no longer have to worry about the reliability of training jobs, as TIR Pipelines offer best-in-class retry functionality. This allows you to restart a failed job without losing completed work.

  • Additionally, TIR Pipelines support unlimited re-runs, stored results (in EOS buckets), and all resource plans (CPU and GPU).
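Since TIR accepts Kubeflow Pipelines templates, one way to author a pipeline is with the Kubeflow Pipelines (KFP v2) Python SDK and compile it to a template file. The sketch below is a hypothetical example, not TIR-specific code: the component names, images, and parameters are placeholders, and set_retry illustrates the per-step retry behavior mentioned above.

```python
# A minimal Kubeflow Pipelines (KFP v2 SDK) sketch: two containerized steps
# chained in sequence, with retries on the training step. Component bodies
# and names are placeholders for illustration.
from kfp import compiler, dsl


@dsl.component(base_image="python:3.10")
def preprocess(msg: str) -> str:
    # Placeholder preprocessing step.
    return msg.upper()


@dsl.component(base_image="python:3.10")
def train(data: str) -> str:
    # Placeholder training step.
    return f"model trained on {data}"


@dsl.pipeline(name="demo-pipeline")
def demo_pipeline(msg: str = "hello"):
    step1 = preprocess(msg=msg)
    step2 = train(data=step1.output)   # consumes the previous step's output
    step2.set_retry(num_retries=3)     # restart this step on failure without
                                       # redoing steps that already completed


if __name__ == "__main__":
    # Compile to a pipeline template file in the supported format.
    compiler.Compiler().compile(demo_pipeline, "demo_pipeline.yaml")
```

Compiling produces demo_pipeline.yaml, a template in one of the supported formats that can then be used when creating a pipeline, as covered in the guide below.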

Guide to Creating a Pipeline