Kubernetes
With the E2E MyAccount portal, you can launch Kubernetes master and worker nodes and have a working cluster in just a few minutes.
Getting Started
Login to MyAccount
Go to MyAccount and log in with the credentials you set up when creating and activating your E2E Networks account.
Navigate to the Kubernetes Service
In the left-hand navigation panel of the MyAccount dashboard, locate the Compute section. Click on Kubernetes listed under it to open the Kubernetes service dashboard.
Create a Kubernetes Cluster
In the Kubernetes service dashboard, click the Create Kubernetes button in the top-right area of the page. This opens the cluster configuration page where you can select the desired settings and enter the required details.
Kubernetes Configuration and Settings
After clicking Create Kubernetes, configure the required master node settings and network options. Click Next when done to proceed to the worker node pool setup.
- Kubernetes Version — Select the version for your cluster from the available list. Each version is labeled with its release number and support status. Choose a version that aligns with your application's requirements or compatibility needs.
- VPC Selection — Select the VPC (Virtual Private Cloud) for your cluster. You can choose between a Default VPC or a Custom VPC.
  - Selecting a VPC is mandatory.
  - If you select a Custom VPC, you can optionally choose a Subnet.
  - All master and worker nodes receive their IP addresses from the selected VPC's IP pool.
- Encryption — To encrypt the Kubernetes cluster, select the Enable Encryption checkbox. Entering a passphrase is optional.
- Security Group — A security group is preselected by default. To use a different one, choose it from the available list.
To establish successful communication with the Kubernetes API Server, the security group must allow port 6443, along with other essential Kubernetes ports. Port 6443 is mandatory — it is used by the Kubernetes API Server to handle all control plane and client communications. If this port is not open, cluster components and external clients will be unable to connect to the API Server, resulting in cluster creation or operational failures.
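Before creating the cluster, you can confirm from your workstation that nothing between you and the cluster endpoint blocks port 6443. The sketch below is a generic TCP reachability check, not an E2E tool; substitute your actual API Server address for the placeholder host.

```shell
#!/usr/bin/env bash
# Sketch: check whether the Kubernetes API Server port (6443) is reachable.
# Replace the placeholder host with your cluster endpoint from MyAccount.
check_port() {
  host="$1"; port="$2"
  # /dev/tcp is a bash feature; the command fails if the connection fails
  timeout 3 bash -c "cat < /dev/null > /dev/tcp/${host}/${port}" 2>/dev/null
}

if check_port "127.0.0.1" 6443; then   # placeholder host for illustration
  echo "port 6443 reachable"
else
  echo "port 6443 blocked or closed -- check the security group rules"
fi
```

If the check fails, verify that the security group attached to the cluster allows inbound TCP 6443.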
Worker Node Pool
What is a Worker Node Pool?
A Worker Node Pool is a collection of nodes with the same configuration (CPU, memory, disk) used to run workloads in your Kubernetes cluster. Each node pool can be scaled individually, allowing you to optimize resource allocation for different workloads (e.g., compute-intensive apps vs. memory-heavy apps).
You can create multiple pools to manage diverse workloads efficiently. This also enables granular scaling and maintenance flexibility without impacting the entire cluster.
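Once pools exist, workloads are typically steered to a specific pool with a node label and a `nodeSelector`. The label key `pool` and value `high-mem` below are assumptions for illustration; check the labels actually present on your nodes with `kubectl get nodes --show-labels`.

```shell
# Sketch: pin a memory-heavy workload to one pool via a (hypothetical) node label.
cat << EOF > memory-heavy-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: memory-heavy-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: memory-heavy-app
  template:
    metadata:
      labels:
        app: memory-heavy-app
    spec:
      nodeSelector:
        pool: high-mem        # schedules pods only onto nodes carrying this label
      containers:
      - name: app
        image: nginx:stable
EOF
# kubectl apply -f memory-heavy-deploy.yaml   # requires a running cluster
```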
Add a Worker Node Pool
On the worker node pool page, click Add Node Pool to begin configuring a new pool.
Static Pool
Choose the worker pool configuration according to your requirements. In the Worker Count field, set the desired number of worker nodes for the static pool.
AutoScale Pool
To configure an AutoScale Pool:
- Enable autoscaling by toggling the Autoscaling option.
- Set the desired (minimum) and maximum number of worker nodes for the pool.
- Define the scaling policy — this determines when and how the pool scales up or down based on resource utilization or a custom attribute.
Scaling Policies
Elastic Policy
The Elastic Policy scales your node pool automatically based on resource utilization. You can choose between two policy types: Default or Custom.
Default
The default policy scales based on one of two metrics:
- CPU — This policy scales the number of resources based on CPU utilization. When CPU utilization reaches a certain threshold, the number of resources will increase according to your set policy. When CPU utilization decreases, the number of resources will decrease.
- Memory — This policy scales the number of resources based on MEMORY utilization. When MEMORY utilization reaches a certain threshold, the number of resources will increase according to your set policy. When MEMORY utilization decreases, the number of resources will decrease.
Custom
Defines a custom attribute to drive scaling decisions. Once configured, you receive a CURL command to update the custom parameter value. This command can be used in scripts, hooks, or cron jobs. When the custom parameter reaches the defined threshold, nodes are added; when it drops below the threshold, nodes are removed.
The default value of the custom parameter is set to 0.
Manage Custom Policy
The custom policy feature lets you define your own scaling attribute. The autoscaling service uses this attribute to make scaling decisions. Once the Kubernetes cluster is launched, you receive a curl command to update the attribute based on your use case.
- Policy Parameter Name — This field is where you enter the name of the custom attribute you want to monitor. Use descriptive names related to the aspect of your service being monitored, such as NETTX for network traffic or DISKWRIOPS for disk write operations.
- Node Utilization Section — Specify values that trigger a scale-up (increase in cardinality) or scale-down (decrease in cardinality) operation.
- Scaling Period Policy — Define the watch period, duration of each period, and cooldown period.
How to Get a CURL Command
- After the cluster is created and running, navigate to the Node pool.
- Click Get Curl to generate the CURL command associated with your custom policy.
- Copy the generated command. It includes placeholders for your API Key and Bearer token — replace these before using the command.
After completing the configuration, click Create Cluster. Once the cluster is up and running, a curl command will be generated. Use this command in Postman by providing your API Key and token, then modify the custom parameter value in the request body.
Updating the custom parameter via CLI
Paste the CURL command into your terminal. Replace the placeholder values with your actual API Key and Bearer token, then set the custom parameter value as required. Run the command to update the scaling parameter.
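The generated command can also be wrapped in a script for use from cron jobs or hooks. The endpoint URL, header names, and JSON field below are placeholders, not the real E2E API; copy the exact command generated for your node pool and substitute your values.

```shell
# Sketch: wrap the generated CURL command for scripted updates.
# ENDPOINT, API_KEY, TOKEN and the "value" field are placeholders.
ENDPOINT="https://api.example.com/kubernetes/custom-parameter"  # placeholder URL
API_KEY="YOUR_API_KEY"
TOKEN="YOUR_BEARER_TOKEN"
NEW_VALUE="42"   # when this crosses your configured threshold, nodes are added

CMD="curl -s -X PUT \"${ENDPOINT}?apikey=${API_KEY}\" \
  -H \"Authorization: Bearer ${TOKEN}\" \
  -H \"Content-Type: application/json\" \
  -d '{\"value\": ${NEW_VALUE}}'"
echo "$CMD"
# Schedule it, e.g. every minute:  * * * * * /usr/local/bin/update-scale-param.sh
```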
Scheduled Policy
A scheduled autoscaling policy lets you adjust the capacity of your resources on a predetermined schedule. For example, you might increase the number of instances in your service during peak traffic hours and decrease it during off-peak hours.
- Recurrence — Refers to the ability to schedule scaling actions to occur on a recurring basis. This can be useful for applications that experience predictable traffic patterns, such as a website that receives more traffic on weekends or a web application that receives more traffic during peak business hours.
- Upscale Recurrence — Specify the number of nodes to add at a scheduled time using cron settings. This value must not exceed the configured maximum node count.
- Downscale Recurrence — Specify the number of nodes to scale down to at a scheduled time using cron settings. Ensure that the value is not lower than the minimum node count configured for the pool.
To apply a scheduled policy, select Schedule Policy in place of Elastic Policy, set the upscale and downscale recurrence values, and click Create Scale.
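Recurrence times use standard cron syntax (minute, hour, day-of-month, month, day-of-week). The schedules below are illustrative values for a weekday peak-hours pattern, not defaults.

```shell
# Sketch: example cron expressions for upscale/downscale recurrence.
# Field order: minute hour day-of-month month day-of-week
UPSCALE_CRON="0 9 * * 1-5"    # weekdays at 09:00 -- add nodes before peak hours
DOWNSCALE_CRON="0 21 * * 1-5" # weekdays at 21:00 -- remove nodes after peak hours
echo "upscale:   $UPSCALE_CRON"
echo "downscale: $DOWNSCALE_CRON"
```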
Elastic and Scheduled Policy
To use both policies together, select the Elastic and Scheduled Policy option, configure the parameters for each, and proceed with creating the cluster.
Deploy Kubernetes
After filling in all the details, click Create Cluster. It will take a few minutes to provision the cluster. Once complete, you will be redirected to the Kubernetes Service page where you can monitor the status and details of your cluster.
Kubernetes Service
Cluster Details
The cluster details section displays all essential information about your Kubernetes cluster, including its name and version. You can review this at any time from the cluster overview page.
Download the Kubeconfig File
- In the cluster details section, locate and click the Download Kubeconfig option to save the kubeconfig.yaml file to your local machine.
- Ensure kubectl is installed on your system. To install it, follow the guide.
- Run the following command to start the proxy: kubectl --kubeconfig="download_file_name" proxy
- Open the Kubernetes dashboard in your browser: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
- On the dashboard login page, select the Token option to authenticate.
- In the cluster details section, locate the Kubeconfig Token field and copy the token.
- Paste the token into the dashboard's token input field and click Sign In.
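The steps above can be sketched as a short session. The kubectl call needs a live cluster, so it is shown commented out; `kubeconfig.yaml` is the file downloaded from MyAccount.

```shell
# Sketch of the dashboard access flow.
export KUBECONFIG="$PWD/kubeconfig.yaml"

# 1. Start a local proxy to the API Server (leave it running):
# kubectl --kubeconfig="$KUBECONFIG" proxy &

# 2. Open the dashboard through the proxy:
DASHBOARD_URL="http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/"
echo "$DASHBOARD_URL"

# 3. Authenticate with the Kubeconfig Token copied from the cluster details page.
```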
SSL Certificate Renewal
When the SSL certificate of a cluster is about to expire, the system provides both manual and automatic renewal options.
When the renewal date of the SSL certificate is within 5 days, the following message appears in the cluster details:
"The SSL certificate of this cluster is about to expire. Click here to renew."
Manual Renewal
Click the link displayed in the cluster details section to immediately renew the SSL certificate.
Automatic Renewal
If no manual action is taken, the system will automatically renew the SSL certificate after 5 days, ensuring uninterrupted security.
Node Pool
The Node Pool tab provides information about all worker nodes in the cluster. From this tab, you can add, edit, resize, power on/off, reboot, and delete worker nodes.
Add a Node Pool
- Navigate to the Node Pool tab within your cluster.
- Click Add Node Pool to open the node pool creation form.
- Configure the node pool settings as required.
- Click Add Node Pool to confirm.
After successfully adding a new node pool, it appears at the top of the Node Pools list. The Add Node Pool button is located at the bottom of the list for better usability.
Edit a Node Pool
- In the Node Pool tab, locate the node pool you want to edit.
- Click the Edit option associated with that pool.
- Update the configuration as needed and save your changes.
Resize a Node Pool
- In the Node Pool tab, locate the node pool you want to resize.
- Click the Resize button associated with that pool.
- Adjust the node count or configuration as needed.
- Confirm the resize action to apply the changes.
Delete a Node Pool
- In the Node Pool tab, locate the node pool you want to remove.
- Click the Delete option associated with that pool.
- Confirm the deletion when prompted.
Power Off a Worker Node
- In the Node Pool tab, locate the worker node you want to power off.
- Click the actions menu (three dots) associated with that node.
- Select Power Off from the list of available actions.
Power On a Worker Node
- Locate the powered-off worker node in the Node Pool tab.
- Click the actions menu associated with that node.
- Select Start to bring the node back online.
Only the Start action is available for a powered-off node. Reboot and Delete are unavailable while the node is powered off.
Reboot a Worker Node
- Locate the running worker node in the Node Pool tab.
- Click the actions menu associated with that node.
- Select Reboot from the menu to initiate the reboot process.
Persistent Volume (PVC)
The Persistent Volume section displays a list of all persistent volumes associated with your Kubernetes cluster. Each entry shows key details such as the volume name, size, and status.
To check existing PVCs, run:
kubectl get pvc
Create a Persistent Volume
- Navigate to the Persistent Volume section within your cluster.
- Click Add Persistent Volume in the top-right area of the page.
- Fill in the required details on the form.
- Click Create to provision the volume.
You can review the Audit Logs section to view the timestamp and status of the creation operation.
Delete a Persistent Volume
- In the Persistent Volume list, locate the volume you want to delete.
- Click the Delete option associated with that volume.
- Confirm the deletion when prompted.
Once a persistent volume is deleted, all data stored in it is permanently lost and cannot be recovered.
You can review the Audit Logs section to view details about the deletion.
Dynamic Persistent Volume (PVC)
Sizes displayed in the UI and on the bill are in GB; however, Kubernetes allocates storage in GiB at creation time. Decreasing the size of an existing volume is not supported — the Kubernetes API does not allow it.
Dynamic volume provisioning allows storage volumes to be created on-demand.
To create a dynamic PersistentVolumeClaim, at least one PersistentVolume must first be created in your Kubernetes cluster from the MyAccount Kubernetes PersistentVolume section.
Create a Storage Class
To enable dynamic provisioning, a cluster administrator must pre-create one or more StorageClass objects. These objects define which provisioner to use and what parameters to pass during dynamic provisioning.
The following manifest creates a storage class named csi-rbd-sc that provisions standard disk-like persistent disks:
cat <<EOF > csi-rbd-sc.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: 4335747c-13e4-11ed-a27e-b49691c56840
  pool: k8spv
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
EOF
kubectl apply -f csi-rbd-sc.yaml
Create a PersistentVolumeClaim
To request dynamically provisioned storage, include a storage class in your PersistentVolumeClaim. The following example uses the csi-rbd-sc storage class:
cat << EOF > pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 4Gi
  storageClassName: csi-rbd-sc
EOF
kubectl apply -f pvc.yaml
Once applied, the PersistentVolumeClaim appears in the Dynamic Persistent Volume list with its current binding status. You can verify it is bound by running kubectl get pvc.
This claim results in a persistent disk being automatically provisioned through the csi-rbd-sc storage class. When the claim is deleted, the volume is destroyed.
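To use the provisioned storage, reference the claim from a pod. The pod, container, and mount names below are illustrative; only the `claimName` must match the PVC created above.

```shell
# Sketch: consume the dynamically provisioned claim from a pod.
cat << EOF > pvc-consumer-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-consumer
spec:
  containers:
  - name: app
    image: nginx:stable
    volumeMounts:
    - name: data
      mountPath: /var/lib/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: rbd-pvc      # must match the PVC created earlier
EOF
# kubectl apply -f pvc-consumer-pod.yaml   # requires a running cluster
```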
Networking
Public IPv4 Address
A public IP address is an IPv4 address reachable from the Internet. You can use public addresses for communication between your Kubernetes cluster and the Internet. The default assigned public IP remains reserved for your account until you release it. Public IPs for Kubernetes are allocated through Services of type LoadBalancer.
Private IPv4 Address
A private IPv4 address is not reachable over the Internet. You can use private IPv4 addresses for communication between instances in the same VPC. When you launch an E2E Kubernetes cluster, a private IPv4 address is automatically allocated to it.
LoadBalancer
E2E Kubernetes includes native integration with the bare-metal load balancer MetalLB.
MetalLB integrates with the Kubernetes cluster and provides a network load-balancer implementation through two key features: address allocation and external announcement.
With MetalLB, services can be exposed using a load-balanced IP address (External/Service IP) that can float across Kubernetes nodes if a node fails. As long as network traffic is routed to one of the Kubernetes nodes associated with this load-balanced IP, the service remains accessible from outside the cluster.
Configure a Load Balancer IP Pool
- Navigate to the LoadBalancer section within your Kubernetes cluster.
- Open the LB IP Pool tab.
- Click Reserve a new IP to allocate a new Service IP address.
- Save the configuration. MetalLB will use this IP when assigning addresses to Kubernetes services of type LoadBalancer.
With Service IP, users can reserve IP addresses that are automatically configured as MetalLB address pools within the Kubernetes cluster. MetalLB assigns and releases individual IP addresses dynamically as services are created or removed, but only from the configured IP pools.
To reserve a Service IP, navigate to the Service IP section and follow the on-screen instructions to associate a reserved IP with your load balancer pool.
A Service of type LoadBalancer that receives an external IP from the configured pool looks like this:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  clusterIP: 10.0.171.239
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: 192.0.2.127
Security Group
Security groups act as virtual firewalls that control inbound and outbound traffic for Kubernetes clusters. They define rules specifying which traffic is allowed to reach the cluster and which traffic the cluster can send out. Properly managing security groups ensures secure and controlled communication between nodes, pods, and external services.
The Security Groups tab in the cluster view displays all security groups currently attached to the cluster, along with their associated rules.
To ensure kubectl commands work successfully, TCP port 6443 must be allowed in the security group associated with your Kubernetes control plane. This port is used by the Kubernetes API Server — if it is blocked, kubectl will be unable to connect to the cluster.
Attaching a Security Group
- Navigate to the Kubernetes section and select your cluster.
- Go to the Security Groups tab.
- Click Attach Security Group.
- Select security groups from the available list.
- Click Attach to confirm.
Detaching a Security Group
- Open the Security Groups tab of your cluster.
- Locate the security group you want to detach.
- Click Detach next to it.
- Confirm the action in the confirmation dialog.
- At least one security group must remain attached to the Kubernetes cluster at all times. The system prevents removal of all security groups.
- If you encounter traffic issues caused by conflicts between OS firewall services and Security Group rules, you can use the Allow All Traffic option temporarily. This enables unrestricted inbound and outbound traffic until you identify and resolve the conflict.
Monitoring
Monitoring is a free service that provides insights into resource usage across your infrastructure. Navigate to the Monitoring tab within your cluster to view metrics for the selected worker node:
- CPU Performance — CPU utilization over time.
- Disk Read/Write Operations — Rate of disk reads and writes.
- Network Traffic Statistics — Inbound and outbound network traffic.
Select the worker node you want to inspect from the node selector before viewing metrics.
Alerts
Set Up a Monitoring Alert for a Worker Node
- Go to the Alerts tab within your Kubernetes cluster.
- Select the master or worker node for which you want to create an alert.
- Click Create Alert.
- Select the Trigger Type — this defines the metric to monitor (e.g., CPU usage, memory usage).
- Select the Trigger Operator (e.g., greater than, less than) to define the comparison logic.
- Enter the Threshold Value (%) — the alert is triggered when this value is crossed.
- Select the Severity level (e.g., Critical, Warning, Info).
- Select the User Group to receive the alert notification. For more information, Click Here.
- To create a new user group from this form, click the + button and follow the same steps described in the Node Monitoring section. To learn more, Click Here.
- Click Create to save and activate the alert.
Once created, the alert appears in the Alerts section with its trigger condition, severity, and assigned user group. When the condition is met, all users in the specified user group receive a notification.
Activity Timeline
Displays a chronological sequence of all actions performed on the Kubernetes cluster. Each entry includes a timestamp and a status indicator (Success or Failure), allowing you to trace the full operational history of the cluster at a glance.
Actions
Add Node Pool
To add a node pool to an existing cluster:
- Click Add Node Pool in the cluster actions menu.
- Select the node pool type:
- Default — Uses standard scaling parameters. Configure the instance type, node count, and other basic settings.
- Custom — Allows you to define custom scaling attributes, policy parameters, and node pool behavior tailored to your workload.
- Fill in the required configuration details.
- Click Create to add the node pool.
Delete Kubernetes Cluster
Deleting a Kubernetes cluster permanently removes all associated data and resources, including nodes, volumes, and configurations. This action cannot be undone.
- Click Delete in the cluster actions menu.
- Confirm the deletion in the dialog to proceed.
Container Registry Secrets
What is a Secret?
A Secret is a Kubernetes object that stores a small amount of sensitive data — such as a password, token, or key — separately from your application code or container image. This prevents confidential data from being embedded directly in pod specifications or images.
Create a Docker Registry Secret
Run the following command to create a Docker registry secret:
kubectl create secret docker-registry name-secrets \
--docker-username=username \
--docker-password=pass1234 \
--docker-server=registry.e2enetworks.net
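Under the hood, `kubectl create secret docker-registry` stores a base64-encoded `.dockerconfigjson` document. The sketch below builds one locally to show the structure (the credentials are the dummy values from the command above, not real ones); the JSON layout is the standard Docker config format.

```shell
# Sketch: what a docker-registry secret stores under .data[".dockerconfigjson"].
AUTH=$(printf 'username:pass1234' | base64)
DOCKERCFG=$(printf '{"auths":{"registry.e2enetworks.net":{"username":"username","password":"pass1234","auth":"%s"}}}' "$AUTH")
printf '%s' "$DOCKERCFG" | base64 | tr -d '\n' > dockerconfigjson.b64

# Decoding it back shows the registry credentials the kubelet will use:
base64 -d dockerconfigjson.b64
echo
```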
Create a Pod Using the Registry Secret
cat > private-reg-pod-example.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: node-hello
spec:
  containers:
  - name: node-hello-container
    image: registry.e2enetworks.net/vipin-repo/node-hello@sha256:bd333665069e66b11dbb76444ac114a1e0a65ace459684a5616c0429aa4bf519
  imagePullSecrets:
  - name: name-secrets
EOF