Create Instances with Your Own Container Image
The platform is container-native, giving you the full flexibility and performance of containers for AI/ML workloads — without any infrastructure overhead.
You can launch instances using your own custom container image, allowing you to tailor the environment to your exact needs by adding specific packages, libraries, and configurations.
Packages installed directly in a running instance are not fully persistent — only the `/data` and `/home/jovyan` directories are preserved on restart. Using a custom image guarantees your environment is consistent and reproducible every time your instance starts.
Overview
Setting up a custom container involves three stages:
| Step | Description |
|---|---|
| Build | Create a Docker image with your required packages and environment |
| Push | Upload the image to the E2E Container Registry |
| Launch | Start an instance using your custom image |
Step 1: Build a Container Image
We recommend extending our pre-configured base images, as they are already optimized for platform compatibility and GPU support.
Requirements
Before building, ensure your image meets the following:
- Platform: Image must be built for `linux/amd64`
- Registry: Image must be available in a container registry
- Jupyter Lab: Container must expose port `8888` running Jupyter Lab
To target the correct platform, specify it in your Dockerfile:
```dockerfile
FROM --platform=linux/amd64 <base-image>
```
Or pass it as a build flag:
```shell
docker build --platform=linux/amd64 ...
```
Option A: Extend a Base Image via Dockerfile (Recommended)
This is the most reliable and reproducible approach. Pick a base image that matches your framework, add your dependencies, and build.
Ubuntu (GPU)

```dockerfile
FROM --platform=linux/amd64 aimle2e/ubuntu-cuda-jupyter:22.04-12.2.2

# Install your required packages
# RUN pip install -r requirements.txt
```

```shell
docker build --platform linux/amd64 -t my-repo/tir-cuda:12.2.2-ubuntu22.04-01 .
```

PyTorch (GPU)

```dockerfile
FROM --platform=linux/amd64 aimle2e/nvidia-pytorch:23.06-2.1.0

# Install your required packages
# RUN pip install -r requirements.txt
```

```shell
docker build --platform linux/amd64 -t my-repo/tir-pytorch:23.06-py3-04 .
```

TensorFlow (GPU)

```dockerfile
FROM --platform=linux/amd64 aimle2e/nvidia-tensorflow:23.06-2.12.0

# Install your required packages
# RUN pip install -r requirements.txt
```

```shell
docker build --platform linux/amd64 -t my-repo/tir-tensorflow:23.06-tf2-py3-02 .
```

Transformers (GPU)

```dockerfile
FROM --platform=linux/amd64 aimle2e/nvidia-pytorch-transformers:23.06-2.1.0-4.31.0

# Install your required packages
# RUN pip install -r requirements.txt
```

```shell
docker build --platform linux/amd64 -t my-repo/tir-transformers:23.06-py3 .
```
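As a concrete sketch of Option A, the Dockerfile below extends the PyTorch base image with two example packages. The package names (`scikit-learn`, `wandb`) are placeholders; substitute your own requirements. It assumes the base image keeps its Jupyter Lab entrypoint on port 8888, as the requirements above expect:

```dockerfile
FROM --platform=linux/amd64 aimle2e/nvidia-pytorch:23.06-2.1.0

# Example dependencies (placeholders) -- replace with your own,
# or COPY in a requirements.txt and install from it
RUN pip install --no-cache-dir scikit-learn wandb

# Assuming the base image already serves Jupyter Lab on port 8888,
# no CMD or EXPOSE changes are needed when extending it.
```

Build and tag it exactly as in the tabs above, e.g. `docker build --platform linux/amd64 -t my-repo/tir-pytorch:23.06-py3-04 .`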
Option B: Patch a Running Container Interactively
Prefer installing packages manually? You can run a base container, install what you need inside it, and save it as a new image.
- Start the base image:

  ```shell
  docker run --platform=linux/amd64 -d <base-image>
  ```

- Enter the running container:

  ```shell
  docker exec -it <container_id> /bin/bash
  ```

  Install your required packages inside the container, then type `exit` to leave.

- Save your changes as a new image:

  ```shell
  docker commit <container_id> <your-image-name:tag>
  ```

- Push the image to your registry, then continue to Step 2.
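For reference, the Option B flow can be collected into a small helper script. The version below is a dry run that only prints the commands rather than executing them; the base image tag is taken from the tabs above, while the package and image names are illustrative placeholders:

```shell
# Dry-run sketch of the Option B flow: print each command instead of
# executing it. BASE is one of the tags above; PKGS and NEW_IMAGE are
# placeholders -- substitute your own.
BASE="aimle2e/nvidia-pytorch:23.06-2.1.0"
PKGS="scikit-learn wandb"
NEW_IMAGE="tir-my-trainer:v1"

echo "docker run --platform=linux/amd64 -d $BASE"
echo "docker exec <container_id> pip install $PKGS"
echo "docker commit <container_id> $NEW_IMAGE"
```

To run it for real against a local Docker daemon, capture the container ID from the first command (e.g., `CID=$(docker run --platform=linux/amd64 -d "$BASE")`) and substitute it for `<container_id>` in the rest.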
Option C: Use the Image Builder Utility
Already have a custom image that isn't platform-compatible? Use the Image Builder Utility to automatically patch it for instance use.
```shell
git clone https://github.com/tire2e/notebook-image-builder.git
cd notebook-image-builder/
./generate_image.sh -b my-trainer:latest -i tir-my-trainer -t v1
```
To auto-push to the E2E Container Registry, add the -P flag:
```shell
./generate_image.sh -b my-trainer:latest -i tir-my-trainer -t v1 -P
```
Use `-u <username>` if you are not already logged in to the registry.
If you used -P to auto-push via the Image Builder, skip Step 2 — your image is already in the registry.
Which Option Is Right for Me?
| Scenario | Recommended Option |
|---|---|
| Starting fresh, want a clean and reproducible setup | Option A – Dockerfile |
| Prefer installing packages manually and saving the result | Option B – Patch a running container |
| Have an existing custom image that needs compatibility fixes | Option C – Image Builder Utility |
Step 2: Push the Container Image
2.1 Set Up Registry Integration
- Log in to the AI platform and select your Project.
- Navigate to Integrations → E2E Container Registry.
- Click Create E2E Registry.
- Provide a Namespace (e.g., `my-workspace`) and a Username Prefix.
- Once created, your registry is ready to receive images.
2.2 Log In to the Registry Locally
```shell
docker login registry.e2enetworks.net
```
Enter your platform credentials when prompted.
2.3 Tag and Push Your Image
First, tag your image with the full registry path:
```shell
docker tag my-custom-image:v1 registry.e2enetworks.net/<your-namespace>/my-custom-image:v1
```
Then push it to the registry:
```shell
docker push registry.e2enetworks.net/<your-namespace>/my-custom-image:v1
```
Replace `<your-namespace>` with the namespace you created in Step 2.1.
Your namespace-specific tag and push commands are also available directly in the dashboard under Commands.
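The tag-and-push commands can also be parameterized in a small shell script, which helps avoid typos in the registry path. The namespace, image name, and tag below are illustrative placeholders; the script assembles the path and prints the two commands to run:

```shell
# Assemble the full registry path from its parts, then print the
# tag and push commands. All values here are placeholders -- use
# the namespace you created in Step 2.1 and your own image name.
REGISTRY="registry.e2enetworks.net"
NAMESPACE="my-workspace"
IMAGE="my-custom-image"
TAG="v1"

TARGET="$REGISTRY/$NAMESPACE/$IMAGE:$TAG"
echo "docker tag $IMAGE:$TAG $TARGET"
echo "docker push $TARGET"
```

Swapping the `echo`s for direct invocations turns this into a one-shot publish step once `docker login` has succeeded.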
Step 3: Launch an Instance with Your Custom Image
- Go to the platform and click Create Instance.
- Under image selection, choose Custom Images.
- Set Image Type to Private.
- Select your Registry Namespace from the dropdown.
- Choose your pushed image and tag.
- Fill in the remaining instance details (hardware, storage, etc.) and click Launch.
- Once the instance is running, open Jupyter Lab or connect via SSH to verify your packages are available.
Troubleshooting
| Issue | Likely Cause | Fix |
|---|---|---|
| Build fails with platform error | Wrong architecture | Ensure `--platform=linux/amd64` is set |
| Push fails with auth error | Not logged in to registry | Run `docker login registry.e2enetworks.net` |
| Packages missing after instance restart | Installed in live session, not in image | Rebuild your image with those packages |
| Port error on launch | Port 8888 not exposed | Ensure your image runs Jupyter Lab on port 8888 |