Quick Start Guide
Prerequisites
- TIR Account: Ensure you have access to a TIR project.
- Pipeline YAML File: A valid Argo Workflow or Kubeflow Pipelines YAML template.
- (Optional) Docker Image: If running custom containers, have your image pushed to the E2E Container Registry.
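A minimal Argo Workflow template is enough to satisfy the YAML prerequisite. The sketch below is illustrative only — the `hello-world` name and the `docker/whalesay` image are placeholders, not TIR defaults; any valid Argo Workflow or Kubeflow Pipelines template works:

```yaml
# Minimal Argo Workflow template (illustrative names and image).
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-   # Argo appends a random suffix to this prefix
spec:
  entrypoint: say-hello        # the template to run first
  templates:
    - name: say-hello
      container:
        image: docker/whalesay          # placeholder; swap in your own image
        command: [cowsay]
        args: ["hello from TIR"]
```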
Create Your First Pipeline
- Navigate to Pipelines:
  - In your TIR project dashboard, click Pipelines > Pipeline in the sidebar.
  - You will be directed to the Manage Pipelines page.
- Start Pipeline Creation:
  - If this is your first pipeline, click GET STARTED to create a default hello-world pipeline.
  - Otherwise, click CREATE PIPELINE to create a custom pipeline.
- Choose Creation Mode:
  - Create a new pipeline: upload a .yaml file, provide a name and description, then click UPLOAD.
  - Create a new pipeline version under an existing pipeline: select the target pipeline, upload a .yaml file, and click UPLOAD.

  Warning: Avoid introducing additional nodes or commands in your YAML that could conflict with existing node definitions.
- Pipeline Created:
  - After successful creation, you will see the pipeline details page with its pipeline versions.
  - From here you can create runs or view existing runs.
Create a Run
- Navigate to Runs:
  - Go to Pipelines > Run in the sidebar, or click the Create Run icon on a specific pipeline.
- Select an Experiment:
  - Choose an existing experiment or create a new one. An experiment groups related pipelines and their run histories.
- Configure Run Parameters:
  - Select the pipeline version to execute.
  - Configure any run parameters defined in your YAML template.
  - Click Next.
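The run parameters shown on this screen come from the `arguments.parameters` section of your workflow template. A hedged sketch, assuming an Argo Workflow template (the `message` parameter and image are illustrative):

```yaml
# A workflow parameter declared here appears as an editable run parameter.
spec:
  entrypoint: say-hello
  arguments:
    parameters:
      - name: message        # surfaced on the Configure Run Parameters step
        value: "hello"       # default value; override it when creating the run
  templates:
    - name: say-hello
      container:
        image: docker/whalesay                       # placeholder image
        command: [cowsay]
        args: ["{{workflow.parameters.message}}"]    # Argo parameter substitution
```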
- Select Resources:
  - Choose a resource plan (CPU or GPU) for the run.
  - Click FINISH to create the run.
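Independently of the resource plan you pick here, an Argo-style template can also declare per-step resource requests using standard Kubernetes resource keys. A sketch under that assumption (the step name and image are placeholders; whether TIR honors per-step requests depends on the selected plan):

```yaml
# Per-step resource requests in a workflow template (illustrative).
templates:
  - name: train-step
    container:
      image: my-registry/train:latest   # placeholder image
      resources:
        limits:
          nvidia.com/gpu: 1             # standard Kubernetes key for one GPU
          memory: 8Gi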
- View Results:
  - The created run appears in the Run section.
  - Click the run name to view execution details and logs.
API Reference
For programmatic access to Pipelines, refer to our comprehensive API documentation:
Create, manage, and execute Pipelines, Runs, and Scheduled Runs using the TIR API. Includes endpoints for pipeline upload, version management, experiment tracking, and run scheduling.