Kubernetes

With the E2E MyAccount portal, you can quickly launch the Kubernetes master and worker nodes and have a working Kubernetes cluster in just a few minutes.

Getting Started

How to Launch Kubernetes Service from MyAccount Portal

Log in to MyAccount

Please go to ‘MyAccount’ and log in with the credentials you set up when creating and activating your E2E Networks ‘MyAccount’.

After logging into the E2E Networks ‘MyAccount’, click on the following options:

  • On the left side of the MyAccount dashboard, click on the “Kubernetes” sub-menu available under the Compute section.

Kubernetes Menu

Create Kubernetes Service

In the top right section of the Kubernetes service dashboard, click on the Create Kubernetes icon, which will take you to the cluster page where you will select the configuration and enter the details of your cluster.

Create Kubernetes Service

Kubernetes Configuration and Settings

After clicking on Create Kubernetes, click on Add Plan and select the required configuration and settings for your Kubernetes cluster, as described below.

Kubernetes Configuration

Add Worker Node Pool

You can select either a Static Pool or an AutoScale Pool.

Static Pool

Static Pool

Add Another Node Pool

If you want to add another node pool, just click on the "Add Another Node Pool" button.

Add Another Node Pool

AutoScale Pool

In the AutoScale Pool, you have to select the worker scaling policy.

AutoScale Pool

Elastic Policy

The Elastic Policy allows you to choose between two scaling policies: Default or Custom. If you choose Default, scaling will be based on CPU or Memory utilization. If you choose Custom, you can specify a custom attribute to determine scaling.

Elastic Policy

Default: With the Default policy, you can choose between two policy parameters.

  • CPU: This policy scales the number of resources based on CPU utilization. When CPU utilization reaches a certain threshold, the number of resources will increase according to your set policy. When CPU utilization decreases, the number of resources will decrease.

  • MEMORY: This policy scales the number of resources based on MEMORY utilization. When MEMORY utilization reaches a certain threshold, the number of resources will increase according to your set policy. When MEMORY utilization decreases, the number of resources will decrease.

Select Policy Parameter Default

Custom: Once an auto-scaling configuration is created with custom parameters, you'll receive a curl command for updating the custom parameter value. This command can be used within scripts, hooks, cron jobs, and similar automation. When the value of this custom parameter reaches a specific threshold, the number of resources will increase. Conversely, when the value decreases, the number of resources will decrease.

Note: The default value of the custom parameter is set to 0.

Custom Policy

Manage Custom Policy

The custom policy feature in the Kubernetes Autoscale Pool enables you to define your custom attribute. The auto-scaling service utilizes this attribute to make scaling decisions. After setting this custom attribute, you can scale or downscale your node pool nodes. Once the Kubernetes cluster is launched, you will receive a curl command to update this custom attribute according to your use case.

Manage Custom Policy

Policy Parameter Name: This field is where you enter the name of the custom attribute you want to monitor. Use descriptive names related to the aspect of your service being monitored, such as “NETTX” for Network traffic or “DISKWRIOPS” for disk write operations.

Node Utilization Section: Specify the values that will trigger a scale-up (increase in cardinality) or scale-down (decrease in cardinality) operation based on your preferences.

Scaling Period Policy: Define the watch period, duration of each period, and cooldown period.

How to Get a curl Command

You can obtain a curl command by clicking on 'Get Curl'.

Get Curl

Note

After filling in all the fields, click the 'Create Cluster' button. When the cluster is up and running, you'll get a curl command. Use this command in Postman, entering your API Key and token. Then, in Postman's body section, modify the value of the custom parameter.

Curl Command Example

How to change the custom parameter value with curl in the CLI:

When you receive a curl command, paste it into the terminal, substitute your API Key and Bearer token, and set the custom parameter to the value you prefer.
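As a rough sketch, the update call looks like the following. The endpoint URL, header names, and JSON field below are placeholders, not the real API route; copy the actual command from the 'Get Curl' option in MyAccount.

```shell
#!/bin/sh
# Placeholder values -- substitute those from your 'Get Curl' output.
API_KEY="your_api_key"
BEARER_TOKEN="your_bearer_token"
NEW_VALUE=80   # the new value for the custom policy parameter

# The URL and payload shape are illustrative only; the real ones come
# from the curl command MyAccount gives you after cluster creation.
CMD="curl -X PUT 'https://api.example.com/kubernetes/custom-parameter?apikey=${API_KEY}' \
  -H 'Authorization: Bearer ${BEARER_TOKEN}' \
  -H 'Content-Type: application/json' \
  -d '{\"value\": ${NEW_VALUE}}'"

# Print the assembled command; paste it into a terminal (or pipe to sh) to run it.
echo "$CMD"
```

Running the printed command updates the custom parameter; when its value crosses the threshold you configured, the pool scales accordingly.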

CLI Example 1
CLI Example 2

Scheduled Policy

The Auto Scale Pool schedule policy lets you automatically adjust the capacity of your resources on a predetermined schedule. For example, you may use a scheduled autoscaling policy to increase the number of instances in your service during peak traffic hours and then decrease the number of instances during off-peak hours.

  • Recurrence: Refers to the ability to schedule scaling actions to occur on a recurring basis. This can be useful for applications that experience predictable traffic patterns, such as a website that receives more traffic on weekends or a web application that receives more traffic during peak business hours.

  • Upscale and Downscale Recurrence in auto scale pool refers to the process of increasing and decreasing the number of resources in Kubernetes, respectively. This can be done on a recurring basis, such as every day, week, or month.

  • Upscale Recurrence: Specify the cardinality of nodes at a specific time by adjusting the field in the cron settings. Ensure that the value does not exceed the maximum number of nodes you set earlier.

  • Downscale Recurrence: Specify the cardinality of nodes at a specific time by adjusting the field in the cron settings. Ensure that the value is not lower than the minimum number of nodes you set earlier.

To use a scheduled policy, select Schedule Policy in place of Elastic Policy, set the upscale and downscale recurrence, and click the Create Scale button.
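For illustration, assuming the recurrence fields accept standard five-field cron syntax (a hypothetical schedule; confirm the exact format in the portal):

```
# Upscale recurrence: weekdays at 09:00 -- raise the pool to its peak cardinality
0 9 * * 1-5

# Downscale recurrence: weekdays at 21:00 -- drop back to the off-peak cardinality
0 21 * * 1-5
```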

Scheduled Policy

Elastic and Scheduled Policy

If the user desires to create a Kubernetes service using both options, they can choose the Elastic and Scheduled Policy option, configure the parameters, and proceed with creating the cluster service.

Elastic and Scheduled Policy

Add Another Node Pool

To add another node pool, click on the "Add Another Node Pool" button.

Add Another Node Pool

Network

Use VPC: From the List Type, select the VPC to attach to your cluster.

Use VPC

Deploy Kubernetes

After filling in all the details successfully, click on the Create Cluster button. It will take a few minutes to set up the scale group, and you will be taken to the ‘Kubernetes Service’ page.

Kubernetes Service Page

Kubernetes Service

Cluster Details

You will be able to view all the basic details of your Kubernetes cluster, including its name and version.

Cluster Details

How To Download Kubeconfig.yaml File

  1. After downloading the Kube config, please make sure kubectl is installed on your system.
  2. To install kubectl, follow this doc.
  3. Run kubectl --kubeconfig="download_file_name" proxy.

Kubeconfig Setup
Kubeconfig Setup 2

  4. Open the URL below in your browser:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Kubernetes Dashboard

  5. Copy the kubeconfig token from the option below:

Kubeconfig Token

  6. Paste the kubeconfig token to access the Kubernetes dashboard.

Kubernetes Dashboard Access
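If your downloaded kubeconfig embeds a bearer token (as a dashboard-ready kubeconfig typically does), you can extract it on the command line instead of opening the file. This is a sketch: the file name and the `token:` field layout are assumptions, and the sample file below only exists so the snippet runs standalone.

```shell
#!/bin/sh
# 'kubeconfig.yaml' is an assumed name -- use your downloaded file instead.
KUBECONFIG_FILE="kubeconfig.yaml"

# Minimal sample file so the snippet is self-contained; skip this step
# and point KUBECONFIG_FILE at your real download.
cat > "$KUBECONFIG_FILE" <<'EOF'
users:
- name: admin
  user:
    token: abc123exampletoken
EOF

# Grab the first 'token:' value in the file.
TOKEN=$(awk '/token:/ {print $2; exit}' "$KUBECONFIG_FILE")
echo "$TOKEN"
```

Paste the printed token into the dashboard login screen. If your kubeconfig uses client certificates instead of a token, this extraction does not apply.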

Node Pool

The Node Pool Details tab provides information about the Worker nodes. Users can also add, edit, resize, power on/off, reboot, and delete the worker nodes.

Node Pool Details

To Add Node Pool

To add a node pool, click on the Add node icon.

Add Node Pool Icon

After clicking on the icon, you can add a node pool by clicking the "Add node pool" button.

Add Node Pool Button

To Edit Node Pool

Edit Node Pool
Edit Worker Node Pool

To Resize Node Pool

Users can also resize the worker node pool. Click on the resize button.

Resize Node Pool Step 1
Resize Node Pool Step 2

To Delete Node Pool

Delete Node Pool

To Power Off a Worker Node

When a user clicks the three dots icon associated with a worker node, they will find a list of available actions. Simply select the 'Power Off' option from this list to power off the node.

Power Off Worker Node

To Power On a Worker Node

To bring the powered-off node back online, locate the three dots icon associated with it. You'll notice that only the "start" action is available, while the "reboot" and "delete" actions are unavailable. Simply select "start" to bring the node to a running state.

Power On Worker Node

To Reboot a Worker Node

To reboot a running node, locate the three dots icon associated with the node. Then, select the "Reboot" option from the menu that appears. This will initiate the reboot process for the node.

Reboot Worker Node

Persistent Volume (PVC)

Persistent Volume

To check PVC, run the command shown below.

Check PVC Command
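The screenshot presumably shows the standard kubectl query; a typical invocation (this assumes kubectl is pointed at your downloaded kubeconfig, and needs a live cluster to run):

```shell
# List PersistentVolumeClaims in the current namespace; add -A for all namespaces.
kubectl --kubeconfig="kubeconfig.yaml" get pvc
```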

Create Persistent Volume

In the top right section of the Manage Persistent Volume page, click the “Add Persistent Volume” button. This takes you to the Add Persistent Volume page, where you click the Create button.

Create Persistent Volume Step 1

You can check the audit logs for details about the creation of the persistent volume.

Persistent Volume Creation Log

Delete Persistent Volume

To delete your persistent volume, click on the delete option. Please note that once you have deleted your persistent volume, you will not be able to recover your data.

Delete Persistent Volume

You can check the audit logs for details about the deletion of the persistent volume.

Persistent Volume Deletion Log

Dynamic Persistent Volume (PVC)

Note: The sizes shown in the UI and on the bill are calculated in GB; however, at creation time, the size used by K8s is in GiB. Decreasing the size of an existing volume is not possible; the Kubernetes API does not allow it.

Dynamic volume provisioning allows storage volumes to be created on-demand.

Note: To create a dynamic PersistentVolumeClaim, at least one PersistentVolume should be created in your Kubernetes cluster from the MyAccount Kubernetes PersistentVolume section.

Create a Storage Class

To enable dynamic provisioning, a cluster administrator needs to pre-create one or more StorageClass objects for users. StorageClass objects define which provisioner should be used and what parameters should be passed to that provisioner when dynamic provisioning is invoked.

The following manifest creates a storage class "csi-rbd-sc" which provisions standard disk-like persistent disks.

cat <<EOF > csi-rbd-sc.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: 4335747c-13e4-11ed-a27e-b49691c56840
  pool: k8spv
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
EOF
kubectl apply -f csi-rbd-sc.yaml

Create a Persistent Volume Claim

Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. To select the "csi-rbd-sc" storage class, for example, a user would create the following PersistentVolumeClaim:

cat << EOF > pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 4Gi
  storageClassName: csi-rbd-sc
EOF
kubectl apply -f pvc.yaml

Dynamic Persistent Volume

This claim results in a matching persistent disk being automatically provisioned. When the claim is deleted, the volume is destroyed.

Public IPv4 Address

A public IP address is an IPv4 address that’s reachable from the Internet. You can use public addresses for communication between your Kubernetes and the Internet. You can reserve the default assigned public IP for your account until you release it. If you want to allocate public IP addresses for your Kubernetes, this can be done using a load-balancer type.

Private IPv4 Address

A private IPv4 address is an IP address that’s not reachable over the Internet. You can use private IPv4 addresses for communication between instances in the same VPC. When you launch an E2E Kubernetes, we allocate a private IPv4 address for your Kubernetes.

LoadBalancer

We have a native integration of the bare-metal Load Balancer MetalLB in our Kubernetes Appliance.

MetalLB hooks into our Kubernetes cluster and provides a network load-balancer implementation. It has two features that work together to provide this service: address allocation and external announcement.

With MetalLB, you can expose the service on a load-balanced IP (External/Service IP), which will float across the Kubernetes nodes in case some node fails. This differs from a simple External IP assignment.

As long as you ensure that the network traffic is routed to one of the Kubernetes nodes on this load-balanced IP, your service should be accessible from the outside.

We provide pools of IP addresses that MetalLB will use for allocation. Users have the option to configure Service IP.

Load Balancer IP Pool Step 1
Load Balancer IP Pool Step 2
Load Balancer IP Pool Step 3

With Service IP, customers can reserve IPs that will be used to configure pools of IP addresses automatically in the Kubernetes cluster. MetalLB will handle assigning and unassigning individual addresses as services come and go, but it will only ever hand out IPs that are part of its configured pools.
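To request an address from the MetalLB pool, you expose a workload through a Service of type LoadBalancer. A minimal sketch follows; the service name, selector, and ports are illustrative, not values from your cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service        # illustrative name
spec:
  type: LoadBalancer       # MetalLB assigns an IP from its configured pool
  selector:
    app: web               # must match your Deployment's pod labels
  ports:
    - port: 80             # port exposed on the load-balanced IP
      targetPort: 8080     # port your container listens on
  # Optionally pin a specific address from the configured pool:
  # loadBalancerIP: 203.0.113.10
```

Once applied, `kubectl get svc web-service` shows the external IP MetalLB allocated, which floats to a healthy node if the current one fails.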

Service IP Configuration

Monitoring Graphs

Monitoring of server health is a free service that provides insights into resource usage across your infrastructure. There are several different display metrics to help you track the operational health of your infrastructure. Select the worker node for which you want to check the metrics.

  • Click on the ‘Monitoring’ tab to check the CPU Performance, Disk Read/Write operations, and Network Traffic Statistics.

Monitoring Graph 1
Monitoring Graph 2
Monitoring Graph 3

Alerts

Setup Monitoring Alert for Worker Node

Go to the 'Alerts' tab and click on the Create Alert button. If you want to configure email notifications, click on the Configure Email button.

Create Alert
Configure Email Alert
Configure Email Alert 2

Actions

You can perform the following actions on the respective cluster.

Add Node Pool

You can upgrade your Kubernetes cluster's current plan to a higher plan by clicking on the Add Node Pool button.

Add Node Pool Action 1
Add Node Pool Action 2

Default
Default Node Pool
Default Node Pool 2

Custom
Custom Node Pool

Delete Kubernetes

Click on the Delete button to delete your Kubernetes cluster. Terminating a cluster deletes all the data on it.

Delete Kubernetes Action 1
Delete Kubernetes Action 2

Create a Secret for Container Registry

Secrets

A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in a container image. Using a Secret means that you don't need to include confidential data in your application code.

Create Secrets

To create a Docker registry secret, use the following command:

kubectl create secret docker-registry name-secrets \
--docker-username=username \
--docker-password=pass1234 \
--docker-server=registry.e2enetworks.net

Create a Pod that uses container registry Secret

cat > private-reg-pod-example.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: node-hello
spec:
  containers:
    - name: node-hello-container
      image: registry.e2enetworks.net/vipin-repo/node-hello@sha256:bd333665069e66b11dbb76444ac114a1e0a65ace459684a5616c0429aa4bf519
  imagePullSecrets:
    - name: name-secrets
EOF