FINETUNING SDK
Welcome to the E2E Networks Finetuning SDK notebook! This guide will take you through the entire workflow of fine-tuning a model using the E2E Networks SDK.
Whether you are working with text or image models, this notebook will guide you through every step, from creating a finetuning job to launching an inference endpoint.
For detailed information about the available methods and how to use them, call the help() function of the FinetuningClient class.
Overview
The Finetuning SDK offers a user-friendly interface to:
Create Finetuning Jobs: Start a new finetuning process with your selected model, using either predefined or custom training arguments.
Manage Finetuning Jobs: Delete, stop, or retry finetuning jobs as necessary.
Monitor Finetuning Progress: Get detailed logs and status updates on your ongoing finetuning tasks.
Create Inference Endpoints: Deploy an inference endpoint based on the model trained during your finetuning job.
Explore Supported Models and Plans: View available models and plans for finetuning on the E2E Networks platform.
Step 1: Setup the SDK and Initialize a FinetuningClient
Initialize the SDK with your credentials, then set up a FinetuningClient.
from e2enetworks.cloud import tir
from e2enetworks.cloud.tir import FinetuningClient
# Define your credentials and project details
TIR_API_KEY = 'your_tir_api_key'
TIR_ACCESS_TOKEN = 'your_tir_access_token'
TIR_PROJECT_ID = 'your_tir_project_id'
TIR_TEAM_ID = 'your_tir_team_id'
# Initialize the SDK
tir.init(
    api_key=TIR_API_KEY,
    access_token=TIR_ACCESS_TOKEN,
    project=TIR_PROJECT_ID,
    team=TIR_TEAM_ID
)
# Initialize the FinetuningClient
finetuning_client = FinetuningClient()
# Optionally, display the available methods and their usage
finetuning_client.help()
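Hardcoding credentials in a notebook makes them easy to leak. A common alternative, not specific to this SDK, is to read them from environment variables with a placeholder fallback. The variable names below (E2E_TIR_API_KEY and so on) are illustrative choices, not names the SDK itself looks for:

```python
import os

# Read credentials from environment variables instead of hardcoding them.
# The environment variable names here are illustrative assumptions,
# not names defined by the E2E Networks SDK.
TIR_API_KEY = os.environ.get("E2E_TIR_API_KEY", "your_tir_api_key")
TIR_ACCESS_TOKEN = os.environ.get("E2E_TIR_ACCESS_TOKEN", "your_tir_access_token")
TIR_PROJECT_ID = os.environ.get("E2E_TIR_PROJECT_ID", "your_tir_project_id")
TIR_TEAM_ID = os.environ.get("E2E_TIR_TEAM_ID", "your_tir_team_id")
```

The placeholder defaults keep the notebook runnable; export the real values in your shell before launching it.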
Step 2: Enter the Finetuning Details
Adjust the finetuning parameters to your requirements. The helper functions listed at the end of this notebook let you explore the available options.
# Define your finetuning parameters
FINETUNING_NAME = "sample-finetuning"
HF_ID = 123456 # Replace with your TIR HuggingFace integration ID
# Finetuning Type should be in ["Instruction-Finetuning", "Text-Classification", "Summary-Generator", "Mask-Modelling", "Question-Answering"]
FINETUNING_TYPE = "Instruction-Finetuning"
# Use the functions provided at the end to get available options
BASE_MODEL_NAME = "base_model_name"
PLAN_NAME = "plan_name"
Step 3: Configure dataset details
There are two types of datasets that can be used to fine-tune a model:
- Custom dataset: set the DATASET_TYPE variable to "eos-bucket" to use a custom dataset.
- Huggingface dataset: set the DATASET_TYPE variable to "huggingface" to use a Huggingface dataset.
Custom Dataset
Custom datasets can be used to fine-tune text models by providing a prompt configuration based on the columns of the dataset.
DATASET_TYPE = "eos-bucket"
DATASET = "dataset_path" # Enter the path of the dataset in the eos-bucket e.g., tir-dataset-123/sample-dataset.jsonl
PROMPT_CONFIGURATION = """
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: [replace with Instruction Column Name]
### Response: [replace with Response Column Name]
"""
Huggingface dataset
Datasets from the Huggingface Hub can be used directly for fine-tuning.
DATASET_TYPE = "huggingface"
DATASET = "mlabonne/guanaco-llama2-1k" # Replace with your desired dataset
Step 4: Creating a Finetuning Job
To create a new finetuning job, call the create_finetuning method. Here is an example of how to set up and create a new finetuning:
# Define training arguments
SAMPLE_TRAINING_ARGS_DICT = {
    "validation_split_ratio": 0.1,
    "target_dataset_field": 'text',
    "gradient_accumulation_steps": 1,
    "context_length": 512,
    "learning_rate": 0.0000141,
    "epochs": 1,
    "batch_size": 4,
    "task": FINETUNING_TYPE,
    "prompt_configuration": PROMPT_CONFIGURATION
}
# Create finetuning
is_success, data = finetuning_client.create_finetuning(
    name=FINETUNING_NAME,
    model_name=BASE_MODEL_NAME,
    dataset=DATASET,
    dataset_type=DATASET_TYPE,  # "eos-bucket" for a custom dataset, "huggingface" for a hub dataset
    plan_name=PLAN_NAME,
    huggingface_integration_id=HF_ID,
    **SAMPLE_TRAINING_ARGS_DICT
)
# Check success
if is_success:
    print("Your finetuning was created successfully.")
else:
    print("Failed to create finetuning.")
Step 5: Monitoring and Logging
Retrieve detailed information and logs for your finetuning jobs.
# To list all the existing fine-tuning jobs
finetuning_client.show_all_finetunings()
# To get detailed information of a specific fine-tuning job
finetuning_client.show_finetuning_details(FINETUNING_NAME)
# To get the logs of a fine-tuning run
finetuning_client.show_finetuning_logs(FINETUNING_NAME)
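If you want to block until a job finishes rather than re-running the status call by hand, a generic polling helper can wrap whatever status lookup you use. The helper below is an SDK-agnostic sketch: get_status is any zero-argument callable you supply (for example, your own wrapper that extracts the status string from the finetuning details), and the terminal state names are assumptions, not values defined by the SDK.

```python
import time

def wait_for_completion(get_status, poll_interval=30, timeout=3600,
                        done_states=("Succeeded", "Failed", "Stopped")):
    """Poll get_status() until it returns a terminal state or timeout expires.

    get_status: zero-argument callable returning a status string.
    The terminal state names are illustrative assumptions.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status()
        if status in done_states:
            return status
        time.sleep(poll_interval)
    raise TimeoutError("finetuning job did not finish before the timeout")
```

For example, wait_for_completion(lambda: my_status_lookup(FINETUNING_NAME), poll_interval=60), where my_status_lookup is a hypothetical wrapper you write around the SDK's status call.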
Step 6: Managing Finetuning Jobs
Learn how to stop, delete, or retry finetuning jobs.
# To stop a fine-tuning job
finetuning_client.stop_finetuning(FINETUNING_NAME)
# To retry a fine-tuning job
finetuning_client.retry_finetuning(FINETUNING_NAME)
# To delete a fine-tuning job
finetuning_client.delete_finetuning(FINETUNING_NAME)
Step 7: Create an inference endpoint from your finetuned model
After a fine-tuning job has succeeded, you can launch an inference endpoint using the create_finetuning_inference method.
# Finetuning inference creation params
INFERENCE_NAME = "your_inference_name"
is_success, response = finetuning_client.create_finetuning_inference(
    inference_name=INFERENCE_NAME,
    finetuning_id=FINETUNING_NAME,
    huggingface_integration_id=HF_ID,
    plan_name=PLAN_NAME
)
# Check success
if is_success:
    print("Your inference was created successfully.")
else:
    print("Failed to create inference.")
Important functions for various use cases
Additional functions that might be of use are listed below; use them as needed.
# View the available GPU plans
finetuning_client.show_plan_names()
# View supported models for finetuning
finetuning_client.show_supported_models()
# View supported training arguments
finetuning_client.show_text_model_training_inputs()
finetuning_client.show_image_model_training_inputs()