Tutorials

The TIR platform is a full-stack AI development platform. This section walks through typical model development patterns, from fine-tuning to deploying inference endpoints.

  • Fine-tune LLaMA with Multiple GPUs
    • Steps
    • Conclusion
  • Deploy Inference for LLaMA 2
  • Deploy Inference for Codellama
    • Model Endpoint creation for Codellama-7b using prebuilt container
    • Creating Model endpoint with custom model weights
      • Supported Parameters
  • Deploy Inference for Stable Diffusion v2.1
    • A guide on Model Endpoint creation & Image generation
    • Creating Model endpoint with custom model weights
    • Supported parameters for image generation
  • Deploy Inference Endpoint For MPT-7B-CHAT
  • Creating a ready-to-use Model endpoint
  • Creating Model endpoint with custom model weights
    • Inference
    • Supported parameters
  • Custom Containers in TIR
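
The deployment tutorials above share a common pattern: create a model endpoint, then send it JSON inference requests. A minimal sketch of assembling such a request in Python (the endpoint URL, token placeholder, and parameter names here are illustrative assumptions, not TIR's actual API — see the individual tutorials for the supported parameters of each model):

```python
import json

# Hypothetical endpoint URL for a deployed model (assumption, not a real TIR URL).
ENDPOINT_URL = "https://infer.example.com/v1/models/llama2:predict"


def build_payload(prompt, max_tokens=256, temperature=0.7):
    """Assemble a JSON inference request.

    The parameter names ("inputs", "max_new_tokens", "temperature")
    follow a common text-generation convention and are assumptions here;
    consult the "Supported Parameters" section of each tutorial for the
    names a given endpoint actually accepts.
    """
    return {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_tokens,
            "temperature": temperature,
        },
    }


payload = build_payload("Write a haiku about GPUs.")
print(json.dumps(payload, indent=2))

# An actual call would POST this payload with an auth header, e.g.:
# requests.post(ENDPOINT_URL, json=payload,
#               headers={"Authorization": "Bearer <API_TOKEN>"})
```

The same request shape applies whether the endpoint serves a prebuilt container or custom model weights; only the payload parameters change per model.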
© Copyright 2023, E2E Networks limited. Created using Sphinx 4.5.0.
