
Step by Step Guide to Fine-Tuning BLOOM

Introduction

BLOOM is a powerful tool that can be used for a variety of tasks, including:

  • Text generation: BLOOM can generate text in any of the languages it was trained on, including creative text formats such as poems, code, scripts, musical pieces, emails, and letters.
  • Translation: BLOOM can translate text from one language to another with high accuracy.
  • Code generation: BLOOM can generate code in various programming languages, including Python, Java, C++, and JavaScript.
  • Question answering: BLOOM can answer questions in a comprehensive and informative way, even if they are open-ended, challenging, or strange.

BLOOM is still under development but has the potential to revolutionize the way we interact with computers. It can be used to create innovative applications in various fields, including education, healthcare, and business.

One of the key benefits of BLOOM is that it is open-source and open-access. This means that anyone can use BLOOM to develop new applications or explore the capabilities of LLMs. This democratizes access to LLM technology, enabling more people to benefit from its capabilities.

What Is Fine-Tuning?

Fine-tuning is a technique in machine learning where a pre-trained model is adapted to a new task by training it on a small amount of data specific to the new task. This differs from training a model from scratch, which requires a large amount of data and can be time-consuming.

Fine-tuning is a powerful technique for training LLMs on new tasks. Since BLOOM is a pre-trained LLM, it can be fine-tuned to perform various tasks, such as generating text, translating languages, and writing creative content.

Why Fine-Tune BLOOM?

There are several reasons to fine-tune BLOOM:

  • Improve performance: Fine-tune BLOOM for specific tasks, such as generating more creative texts or translating languages more accurately.
  • Adapt to new domains: Fine-tune BLOOM to generate text in a specific industry or translate languages from a particular region.
  • Develop custom LLMs: Tailor BLOOM to meet specific needs, such as generating company-specific text or translating relevant languages for research.

Benefits of Fine-Tuning BLOOM

Fine-tuning BLOOM offers numerous benefits, including:

  • Improved performance on specific tasks.
  • Adaptation to new domains.
  • Development of custom LLMs.
  • Reduced training time and cost.

Fine-tuning BLOOM can be a relatively quick and easy way to enhance its performance for specific tasks or adapt it to new domains, providing a significant advantage over training a model from scratch.

Requirements

Python Libraries

To fine-tune BLOOM, you need the following:

  • A notebook backed by a GPU (the more memory the better; full fine-tuning of the 1.7B-parameter model used here typically needs well over 16 GB of GPU memory).
  • Python programming language.
  • Transformers library.
  • BLOOM model and tokenizer.
  • A training dataset (a collection of text examples relevant to the task).

Steps to Fine-Tune BLOOM

  1. Install the required Python libraries.
  2. Download the BLOOM model and tokenizer.
  3. Load your training data.
  4. Prepare your training data.
  5. Define your training arguments.
  6. Train the model.
  7. Evaluate the model.
  8. Save the fine-tuned model.

Considerations

  • Size of the training dataset: A larger training dataset typically yields better performance.
  • Quality of your training dataset: The dataset should be high-quality and representative of the task.
  • Hyperparameters: Tune hyperparameters like training epochs and learning rate for optimal performance.

Launch Your GPU-Backed Notebook

Head over to E2E Cloud and sign in or register. Once logged in, use the menu in the top-left corner to navigate to TIR - AI Platform, then click on "Create a Notebook."

Create Notebook on TIR

Make sure to select a GPU notebook. Free credits are available, which should easily suffice for this tutorial.

Once the notebook has been launched, follow the next steps.

Packages and Libraries

You will need to install the following libraries:

# Install the required libraries
!pip install transformers
!pip install accelerate -U
!pip install datasets

# Imports used throughout this tutorial
import random
import torch
import transformers
from transformers import BloomForCausalLM
from transformers import BloomTokenizerFast
from transformers import TrainingArguments
from transformers import Trainer
from transformers import DataCollatorForLanguageModeling
from datasets import load_dataset

BLOOM Model and Tokenizer

To fine-tune BLOOM, you will need to load your training data. The training data should consist of text examples relevant to the specific task you want to fine-tune BLOOM for.

For example:

  • If you aim to fine-tune BLOOM to generate more creative texts, consider using a training dataset of poems, code, scripts, musical pieces, emails, and letters.
  • If your goal is to enhance language translation accuracy, a training dataset of parallel text in multiple languages would be suitable.

Once you have collected your training data, it needs to be converted into a format usable by the BLOOM model. The BLOOM model expects the training data to be in a tokenized format. You can use the BLOOM tokenizer to tokenize the training data, which will split the text into individual tokens—basic units that the BLOOM model can understand.

Here’s an example of how to tokenize the training data using the BLOOM tokenizer:

from transformers import BloomTokenizerFast

tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom-1b7")

# Load the training data: one example per line
training_data = []

with open("training_data.txt", "r") as f:
    for line in f:
        training_data.append(line.strip())

# Tokenize the training data; padding lets lines of different lengths share one batch
tokenized_training_data = tokenizer(training_data, return_tensors="pt", padding=True)
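To sanity-check the tokenization, you can decode the first entry back into text and inspect its tokens (this assumes training_data.txt has at least one line):

# Inspect the first tokenized example
first_ids = tokenized_training_data["input_ids"][0]
print(first_ids)
print(tokenizer.convert_ids_to_tokens(first_ids.tolist()))
print(tokenizer.decode(first_ids, skip_special_tokens=True))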

Training Data

To prepare the training data for training, you need to arrange the tokenized examples into pairs of input and target sequences. The input sequence is the text the model is given, and the target sequence is the text it should learn to generate; a short sketch after the examples below shows how such pairs are packed for a causal language model.

Example

If you are fine-tuning BLOOM to generate text, your training dataset might look like this:

  • Input sequence: I am a cat.
    Target sequence: Meow.

  • Input sequence: I love to play.
    Target sequence: Fun!
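With a causal language model like BLOOM, input and target are typically packed into one sequence, and the labels are simply the input ids, which the model shifts internally when computing the loss. A minimal sketch of building such pairs (the separator and the pairs list are illustrative choices, not a fixed convention):

# Illustrative (input, target) pairs packed into single training strings
pairs = [("I am a cat.", "Meow."), ("I love to play.", "Fun!")]
texts = [inp + " " + tgt for inp, tgt in pairs]

encodings = tokenizer(texts, return_tensors="pt", padding=True)
# For causal LM, labels are the input ids; BLOOM shifts them internally
encodings["labels"] = encodings["input_ids"].clone()
# Ignore padding positions in the loss
encodings["labels"][encodings["attention_mask"] == 0] = -100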

Once you have created a training dataset, you need to split it into training and validation sets. The training set should comprise about 80% of the total dataset, while the validation set should make up about 20%.

Splitting the Dataset

To split the dataset, you can use the following Python code:

import random

random.seed(42)  # fix the seed so the split is reproducible

# Split the tokenized examples into training and validation sets (roughly 80/20)
train_dataset = []
val_dataset = []

for input_ids, attention_mask in zip(tokenized_training_data["input_ids"],
                                     tokenized_training_data["attention_mask"]):
    if random.random() < 0.8:
        train_dataset.append((input_ids, attention_mask))
    else:
        val_dataset.append((input_ids, attention_mask))

Example

Conceptually, each (input_ids, attention_mask) pair drawn from tokenized_training_data looks like an entry of this list:

tokenized_training_data = [
    ([1, 2, 3], [1, 1, 1]),
    ([4, 5, 6], [1, 1, 1]),
    ([7, 8, 9], [1, 1, 1]),
    ([10, 11, 12], [1, 1, 1]),
]
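The threshold-based split above is simple, but it is neither exactly 80/20 nor deterministic without seeding. If you prefer, the datasets library (installed earlier) provides a seeded, exact split; here is a sketch that splits the raw lines before tokenization ("text" is just an illustrative column name):

from datasets import Dataset

# Wrap the raw lines in a Dataset and split 80/20 with a fixed seed
dataset = Dataset.from_dict({"text": training_data})
split = dataset.train_test_split(test_size=0.2, seed=42)
train_dataset = split["train"]
val_dataset = split["test"]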

Training the BLOOM Model

To train the BLOOM model, you can use the Trainer class from the Transformers library. The Trainer class provides a number of features that make it easy to train and evaluate LLMs, such as:

  • Automatic gradient computation
  • Distributed training
  • Early stopping (see the sketch after this list)
  • Evaluation metrics
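For instance, early stopping is provided by the EarlyStoppingCallback, which requires the Trainer to evaluate periodically and track a metric. A sketch with illustrative argument values, using the model and datasets constructed in the code below:

from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="output",
    evaluation_strategy="epoch",      # evaluate at the end of every epoch
    save_strategy="epoch",            # must match the evaluation strategy
    load_best_model_at_end=True,      # required by EarlyStoppingCallback
    metric_for_best_model="eval_loss",
    greater_is_better=False,          # lower loss is better
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)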

To train the BLOOM model using the Trainer class, you can use the following Python code:

!pip install transformers
!pip install accelerate -U
!pip install datasets

import torch
import transformers
from transformers import BloomForCausalLM
from transformers import BloomTokenizerFast
from transformers import TrainingArguments
from transformers import Trainer
from transformers import DataCollatorForLanguageModeling
from datasets import load_dataset

# Load the BLOOM model and tokenizer

model = BloomForCausalLM.from_pretrained("bigscience/bloom-1b7")
tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom-1b7")

# Create the training dataset. As a stand-in corpus, we reuse the sentence
# pairs from GLUE MRPC and join them into plain text for causal-LM training.
raw_datasets = load_dataset("glue", "mrpc")


def tokenize_function(example):
    texts = [s1 + " " + s2 for s1, s2 in zip(example["sentence1"], example["sentence2"])]
    return tokenizer(texts, truncation=True, max_length=256)


tokenized_datasets = raw_datasets.map(
    tokenize_function,
    batched=True,
    remove_columns=raw_datasets["train"].column_names,
)
# For causal language modeling, the collator copies input_ids into labels
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
train_dataset = tokenized_datasets["train"]

# Create the validation dataset
eval_dataset = tokenized_datasets["validation"]

# Define the training arguments
training_args = TrainingArguments(
    output_dir="output",
    num_train_epochs=10,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    learning_rate=2e-5,
)

# Create the trainer
trainer = Trainer(
    model,
    training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    data_collator=data_collator,
    tokenizer=tokenizer,
)

# Train the model
trainer.train()

Evaluation

To evaluate the fine-tuned BLOOM model, you can use the following Python code:

# Evaluate the model on the validation dataset
metrics = trainer.evaluate()
print(metrics)
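For a causal language model, the evaluation loss is a cross-entropy, and its exponential is the perplexity, a common measure of language-model quality (lower is better):

import math

# Perplexity is the exponential of the evaluation cross-entropy loss
print("perplexity:", math.exp(metrics["eval_loss"]))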

The evaluation loss (and the perplexity derived from it) gives you an indication of how well the fine-tuned BLOOM model will perform on new data. If it is satisfactory, you can use the fine-tuned model to generate text, translate languages, and write creative content. Here are some tips for fine-tuning BLOOM:

  • Use a large and representative training dataset. The larger and more representative the training dataset is, the better the fine-tuned model will perform.
  • Use a suitable learning rate. The learning rate controls how quickly the model learns. A too-high learning rate can make training unstable or cause the model to forget its pre-trained knowledge, while a too-low learning rate makes training slow.
  • Use early stopping. Early stopping prevents the model from overfitting the training data by stopping training when the validation loss stops decreasing.
  • Experiment with different hyperparameters. The hyperparameters of the training process can have a significant impact on the performance of the fine-tuned model. Experiment with different hyperparameters to find the best combination for your task.

Saving and Loading

Once you have trained and evaluated the fine-tuned BLOOM model, you can save it to disk for later use. To do this, use the save_pretrained() method of the model (the Trainer's save_model() method does the same thing):

# Save the fine-tuned model and tokenizer to a directory
model.save_pretrained("directory")
tokenizer.save_pretrained("directory")

Once you have saved the fine-tuned model, you can load it back into memory using the from_pretrained() method:

# Load the fine-tuned BLOOM model and tokenizer
model = BloomForCausalLM.from_pretrained("directory")
tokenizer = BloomTokenizerFast.from_pretrained("directory")

You can then use the fine-tuned model to generate text, translate languages, and write creative content.

Example

Let us test BLOOM with various prompts. The required packages are first installed, and necessary libraries are imported:

!pip install transformers
from transformers import BloomForCausalLM
from transformers import BloomTokenizerFast
import torch

The BLOOM model is then downloaded to local storage. Here, the 1b7 version of BLOOM is used; the download is about 3.44 GB.

tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom-1b7", local_files_only=False)
model = BloomForCausalLM.from_pretrained("bigscience/bloom-1b7", local_files_only=False)
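Generation is much faster on a GPU. If one is available, you can optionally move the model onto it; the generation code below reads model.device, so it works either way:

# Move the model to the GPU if one is available (optional)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)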

Once the pretrained model has downloaded, we can move on to giving it different prompts. The generation template below is reused in each of the examples that follow:

prompt = ""
result_length = ""
inputs = tokenizer(prompt, return_tensors="pt")

print(tokenizer.decode(model.generate(inputs["input_ids"],
max_length=result_length,
num_beams=2,
no_repeat_ngram_size=2,
early_stopping=True
)[0]))
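The beam search used above is deterministic; for more varied text you can sample instead. A sketch with commonly used sampling parameters (the values are illustrative, not tuned):

outputs = model.generate(
    inputs["input_ids"],
    max_length=result_length,
    do_sample=True,   # sample from the model's distribution instead of beam search
    top_p=0.92,       # nucleus sampling: keep the smallest token set with 92% probability mass
    temperature=0.8,  # below 1 sharpens the distribution; above 1 flattens it
)
print(tokenizer.decode(outputs[0]))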

Example 1:

Let us consider generating text about E2E Cloud. BLOOM has no prior awareness of the company, so it must produce plausible text purely from what it learned during training. The requested maximum length is 50 tokens.

prompt = "E2E cloud is a cloud service provider. It is"
result_length = 50
inputs = tokenizer(prompt, return_tensors="pt")

Output:

E2E cloud is a cloud service provider. It is the only cloud provider in the world that provides a complete cloud solution for the entire enterprise. The company is based in Singapore and has offices in Hong Kong, Singapore, and the United States.

From the output, it can be seen that the model has given wrong information, but it is somewhat believable to someone who does not know about the company. The generated text attributes characteristics and locations to the company based on what the model has learned during its training, although these details might not be accurate because the model is not aware of the specific company. This highlights both the flexibility and the limitations of such large-scale language models.

Example 2:

Let us consider generating text on some generic content, in this case an animal. The prompt given is “Jaguars are wild species, they”, with a requested maximum length of 50 tokens.

    prompt = "Jaguars are wild species, they"
result_length = 50
inputs = tokenizer(prompt, return_tensors="pt")

Output:

Jaguars are a wild species, they are not easy to domesticate, and they have a tendency to become aggressive. They are also very territorial, so it is important to keep them in a small space.

BLOOM demonstrates its ability to produce plausible and contextually relevant information about jaguars, a wild animal species. This suggests that BLOOM is well-equipped to handle generalized queries, synthesizing its vast training data into coherent output. Overall, this example showcases the model's strength in producing fluent, on-topic text when presented with a generic subject.

Example 3:

Let us consider generating a longer text from a question. This time, the prompt is a technical question: “What is deep learning?”, with a requested maximum length of 200 tokens.

    prompt = "What is deep learning?"
result_length = 200
inputs = tokenizer(prompt, return_tensors="pt")

Output:

What is deep learning? Deep learning is a machine learning technique that uses a large amount of data to learn a model that can be used to solve a specific problem. Deep neural networks (DNNs) are a type of machine-learning algorithm that learns from data. They are used in a wide range of applications, including computer vision, speech recognition, and natural language processing. The DNN is trained using a set of labeled data, which is called the training set. Once trained, the model is able to generalize well to unseen data. Deep learning has been used for a variety of tasks, such as image classification, text categorization, image captioning, object detection, etc. In this paper, we focus on the task of image segmentation. Image segmentation is the process of identifying and classifying the objects in an image. There are two main types of segmentation algorithms: region-based and object-based. Region-based algorithms segment the image into regions based on certain criteria, while object

BLOOM takes on a technical question about deep learning and provides a comprehensive answer within the 200-token limit, offering both a definition and contextual background. Note that toward the end the generation drifts into paper-style text about image segmentation, a reminder that long generations can wander off-topic. Still, this example shows BLOOM producing detailed, informative, and mostly relevant content in response to a technical query.

Conclusion

Here are some example use cases for fine-tuned BLOOM:

  • Generating creative text: Fine-tuned BLOOM can be used to generate creative text, such as poems, code, scripts, musical pieces, emails, and letters.
  • Translating languages: Fine-tuned BLOOM can be used to translate languages more accurately and fluently.
  • Answering questions: Fine-tuned BLOOM can be used to answer questions in a more comprehensive and informative way.
  • Summarizing text: Fine-tuned BLOOM can be used to summarize text more concisely and accurately.

These are just a few of the many possible use cases for fine-tuned BLOOM. Fine-tuning is a powerful way to adapt the model to a specific task and unlock its full potential; by following the steps and tips in this guide, you can fine-tune BLOOM to achieve good performance on tasks such as generating creative text, translating languages, answering questions, and summarizing text.

Running BLOOM at scale requires substantial memory and compute. E2E Cloud offers various cloud-based GPUs at a nominal cost; if you need to run the BLOOM LLM, consider using E2E Cloud. NVIDIA L4 and A100 GPUs are both well suited to natural language processing workloads; compare them to find which better matches your requirements.