Set up an EKS Cluster and use the CSI Driver with a SecretProviderClass to retrieve credentials from AWS Secrets Manager


EKS Cluster | CSI Driver | SecretProviderClass | AWS Secrets Manager

Set up an EKS Cluster and manage credentials at runtime using the CSI driver with a SecretProviderClass and Secrets Manager


This guide assumes you have already installed and configured the AWS CLI, eksctl, kubectl, and Helm.


CSI Basic Information:

  1. CSI (Container Storage Interface) is a widely used standard for exposing storage systems to container orchestrators. It was created by Google, Mesosphere, and Docker. 
  2. It has two plugins: a centralized Controller Plugin that runs on the master node, and a decentralized, headless Node Plugin that runs on every worker node. 
  3. CSI's communication protocol is gRPC. 
  4. The container orchestrator talks to both the Controller Plugin (master) and the Node Plugin (worker nodes) over gRPC.
  5. CSI drivers are vendor specific and deployed out of tree, i.e. they are not compiled into the Kubernetes/OpenShift binaries. To use a CSI driver, a StorageClass needs to be created first. 
  6. The CSI driver is then set as the provisioner for that StorageClass.
  7. CSI drivers provide three main gRPC services: Identity, Controller, and Node.
  • Identity: reports plugin information, capabilities, and health.
  • Controller: handles cluster-wide volume operations (create, delete, attach).
  • Node: handles operations local to each machine (mount, unmount).
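
The three services above are defined as gRPC services in the CSI specification's csi.proto. An abbreviated sketch (service and RPC names per the spec; only a few RPCs shown, and request/response message bodies omitted):

```protobuf
// Abbreviated sketch of the CSI gRPC services (see csi.proto in the CSI spec).
service Identity {
  rpc GetPluginInfo(GetPluginInfoRequest) returns (GetPluginInfoResponse) {}
  rpc GetPluginCapabilities(GetPluginCapabilitiesRequest) returns (GetPluginCapabilitiesResponse) {}
  rpc Probe(ProbeRequest) returns (ProbeResponse) {}
}

service Controller {
  rpc CreateVolume(CreateVolumeRequest) returns (CreateVolumeResponse) {}
  rpc DeleteVolume(DeleteVolumeRequest) returns (DeleteVolumeResponse) {}
  rpc ControllerPublishVolume(ControllerPublishVolumeRequest) returns (ControllerPublishVolumeResponse) {}
}

service Node {
  rpc NodeStageVolume(NodeStageVolumeRequest) returns (NodeStageVolumeResponse) {}
  rpc NodePublishVolume(NodePublishVolumeRequest) returns (NodePublishVolumeResponse) {}
}
```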
      


The diagram above shows how CSI operates, using remote procedure calls over gRPC.

Now we will create a basic EKS cluster using eksctl, which in turn runs a CloudFormation stack to create the AWS resources.

Here is a basic ClusterConfig YAML file (my-cluster.yaml) to create the EKS cluster named "basic-cluster".
------------------------------------------------------------------------------------------------------------------

---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: basic-cluster
  region: ap-southeast-1
  version: "1.22"
managedNodeGroups:
  - name: mana-ng-pub
    instanceType: t3.medium
    minSize: 1
    desiredCapacity: 1
    maxSize: 1
    availabilityZones: ["ap-southeast-1a"]
    volumeSize: 20
    ssh: # Using existing EC2 Key pair
      allow: true
      publicKeyName: eks-demo-sg # had this key already, can replace this with yours
    tags:
      nodegroup-role: worker

------------------------------------------------------------------------------------------------------------------
Below is the "SecretProviderClass" YAML file (demo-spc.yaml).
Note: this file is shown only to give a basic idea of the resource; you can skip applying it, since a ready-made example is applied in a later step.
------------------------------------------------------------------------------------------------------------------

---
# apiVersion: secrets-store.csi.x-k8s.io/v1alpha1 # deprecated
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secret-application
spec:
  provider: aws # for aws can use this else for any other Clouds can change this value
  secretObjects:
    - secretName: MySecret #application-api-key  # the k8s secret name
      type: Opaque
      data:
        - objectName: MySecret #secret-api-key  # reference the corresponding parameter
          key: api_key
  parameters:
    objects: |
      - objectName: "MySecret" #(secret-api-key, # the AWS secret)
        objectType: "secretsmanager"

------------------------------------------------------------------------------------------------------------------
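
To show how the synced Kubernetes Secret would be consumed, here is a hypothetical pod sketch (pod name, image, and service account are placeholder assumptions; the SecretProviderClass and secret names match the file above). Note the Kubernetes Secret defined under secretObjects is only created once a pod actually mounts the CSI volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  serviceAccountName: demo-sa   # placeholder; must be allowed to read the AWS secret
  containers:
    - name: app
      image: nginx
      env:
        - name: API_KEY
          valueFrom:
            secretKeyRef:
              name: MySecret    # the k8s Secret synced via secretObjects
              key: api_key
      volumeMounts:
        - name: secrets-store
          mountPath: /mnt/secrets-store
          readOnly: true
  volumes:
    - name: secrets-store
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: aws-secret-application
```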

1, Create the EKS Cluster using the below command and verify once done -
➜  AWS_ACCOUNT_DVO_EMAIL $ eksctl create cluster -f my-cluster.yaml

2, Get EKS Cluster Information -
➜  AWS_ACCOUNT_DVO_EMAIL $ eksctl get cluster
2022-05-11 23:59:36 [ℹ]  eksctl version 0.96.0
2022-05-11 23:59:36 [ℹ]  using region ap-southeast-1
NAME    REGION    EKSCTL CREATED
basic-cluster ap-southeast-1  True

3, Get EKS Cluster SVC running -
➜  AWS_ACCOUNT_DVO_EMAIL $ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   47m

4, "Install the ASCP (AWS Secrets and Configuration Provider)" :
The ASCP is available on GitHub in the secrets-store-csi-driver-provider-aws repository.
The repo also contains example YAML files for creating and mounting a secret.
You first install the Kubernetes Secrets Store CSI Driver, and then you install the ASCP.

a. To install the Secrets Store CSI Driver, run the following commands.
For full installation instructions, see Installation in the Secrets Store CSI Driver Book.
➜  AWS_ACCOUNT_DVO_EMAIL $ helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
➜  AWS_ACCOUNT_DVO_EMAIL $ helm install -n kube-system csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver

To verify that Secrets Store CSI Driver has started, run:
➜  AWS_ACCOUNT_DVO_EMAIL $ kubectl --namespace=kube-system get pods -l "app=secrets-store-csi-driver"
NAME                                               READY   STATUS    RESTARTS   AGE
csi-secrets-store-secrets-store-csi-driver-k24z6   3/3     Running   0          31s

b. To install the ASCP, use the YAML file in the GitHub repo's deployment directory.
➜  AWS_ACCOUNT_DVO_EMAIL $ kubectl apply -f https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/deployment/aws-provider-installer.yaml
serviceaccount/csi-secrets-store-provider-aws created
clusterrole.rbac.authorization.k8s.io/csi-secrets-store-provider-aws-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-secrets-store-provider-aws-cluster-rolebinding created
daemonset.apps/csi-secrets-store-provider-aws created

Note: The above command will create the following and the respective code for them is shown below -
- ServiceAccount
- ClusterRole
- ClusterRoleBinding
- DaemonSet
------------------------------------------------------------------------------------------------------------------
# https://kubernetes.io/docs/reference/access-authn-authz/rbac
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-secrets-store-provider-aws
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: csi-secrets-store-provider-aws-cluster-role
rules:
- apiGroups: [""]
  resources: ["serviceaccounts/token"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["serviceaccounts"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: csi-secrets-store-provider-aws-cluster-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: csi-secrets-store-provider-aws-cluster-role
subjects:
- kind: ServiceAccount
  name: csi-secrets-store-provider-aws
  namespace: kube-system

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: kube-system
  name: csi-secrets-store-provider-aws
  labels:
    app: csi-secrets-store-provider-aws
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: csi-secrets-store-provider-aws
  template:
    metadata:
      labels:
        app: csi-secrets-store-provider-aws
    spec:
      serviceAccountName: csi-secrets-store-provider-aws
      hostNetwork: true
      containers:
        - name: provider-aws-installer
          image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-6-gee95299-2022.04.14.21.07
          imagePullPolicy: Always
          args:
              - --provider-volume=/etc/kubernetes/secrets-store-csi-providers
          resources:
            requests:
              cpu: 50m
              memory: 100Mi
            limits:
              cpu: 50m
              memory: 100Mi
          volumeMounts:
            - mountPath: "/etc/kubernetes/secrets-store-csi-providers"
              name: providervol
            - name: mountpoint-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: HostToContainer
      volumes:
        - name: providervol
          hostPath:
            path: "/etc/kubernetes/secrets-store-csi-providers"
        - name: mountpoint-dir
          hostPath:
            path: /var/lib/kubelet/pods
            type: DirectoryOrCreate
      nodeSelector:
        kubernetes.io/os: linux
------------------------------------------------------------------------------------------------------------------

5, To create and mount a secret :
a, Set the AWS Region & name of your cluster as shell variables so you can use them in the bash commands that follow. For <REGION>, enter the AWS Region where your Amazon EKS cluster runs. For <CLUSTERNAME>, enter the name of your cluster.
➜  AWS_ACCOUNT_DVO_EMAIL $ export REGION=ap-southeast-1 && export CLUSTERNAME="basic-cluster"

b, Create a test secret:
➜  AWS_ACCOUNT_DVO_EMAIL $ aws --region "$REGION" secretsmanager  create-secret --name MySecret --secret-string '{"username":"foo", "password":"bar"}'
{
    "ARN": "arn:aws:secretsmanager:ap-southeast-1:1233444455566:secret:MySecret-rbkra8",
    "Name": "MySecret",
    "VersionId": "8e47ef05-4f60-499a-b842-90903c7082fd"
}

c, Create a resource policy for the pod that limits its access to the secret you created in the previous step.
For <SECRETARN>, use the ARN of the secret. Save the policy ARN in a shell variable.
➜  AWS_ACCOUNT_DVO_EMAIL $ POLICY_ARN=$(aws --region "$REGION" --query Policy.Arn --output text iam create-policy --policy-name nginx-deployment-policy --policy-document '{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": ["secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret"], "Resource": ["arn:aws:secretsmanager:ap-southeast-1:1233444455566:secret:MySecret-rbkra8"]}] }')

d, Create IAM OIDC Provider for Cluster if you don't already have one -
➜  AWS_ACCOUNT_DVO_EMAIL $ eksctl utils associate-iam-oidc-provider --region="$REGION" --cluster="$CLUSTERNAME" --approve # Only run this once
2022-05-12 00:19:43 [ℹ]  eksctl version 0.96.0
2022-05-12 00:19:43 [ℹ]  using region ap-southeast-1
2022-05-12 00:19:44 [ℹ]  will create IAM Open ID Connect provider for cluster "basic-cluster" in "ap-southeast-1"
2022-05-12 00:19:45 [✔]  created IAM Open ID Connect provider for cluster "basic-cluster" in "ap-southeast-1"

e, Create the service account the pod uses, and associate it with the resource policy created in step c.
For this tutorial, the service account is named nginx-deployment-sa -
➜  AWS_ACCOUNT_DVO_EMAIL $ eksctl create iamserviceaccount --name nginx-deployment-sa --region="$REGION" --cluster "$CLUSTERNAME" --attach-policy-arn "$POLICY_ARN" --approve --override-existing-serviceaccounts

f, Create the SecretProviderClass to specify which secret to mount in the pod.
The following command uses ExampleSecretProviderClass.yaml in the ASCP GitHub repo's examples directory to mount the secret you created in step b -
➜  AWS_ACCOUNT_DVO_EMAIL $ kubectl apply -f https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/examples/ExampleSecretProviderClass.yaml
Warning: secrets-store.csi.x-k8s.io/v1alpha1 is deprecated. 
Use "secrets-store.csi.x-k8s.io/v1" instead.
secretproviderclass.secrets-store.csi.x-k8s.io/nginx-deployment-aws-secrets created

------------------------------------------------------------------------------------------------------------------
---
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: nginx-deployment-aws-secrets
spec:
  provider: aws
  parameters:
    objects: |
        - objectName: "MySecret" # can name as per your secret in AWS Secrets Manager
          objectType: "secretsmanager" # use "ssmparameter" if the object is an SSM Parameter Store parameter
------------------------------------------------------------------------------------------------------------------

g, Deploy your pod. The following command uses ExampleDeployment.yaml in the ASCP GitHub repo examples directory to mount the secret in /mnt/secrets-store in the pod -
➜  AWS_ACCOUNT_DVO_EMAIL $ kubectl apply -f https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/examples/ExampleDeployment.yaml
service/nginx-deployment created
deployment.apps/nginx-deployment created

h, To verify the secret has been mounted properly, use the following command and confirm that your secret value appears -
➜  AWS_ACCOUNT_DVO_EMAIL $ kubectl exec $(kubectl get pods | awk '/nginx-deployment/{print $1}' | head -1) -- cat /mnt/secrets-store/MySecret; echo
OR
➜  AWS_ACCOUNT_DVO_EMAIL $ kubectl exec nginx-deployment-76f76f6b68-5wgjn -- cat /mnt/secrets-store/MySecret; echo
{"username":"foo", "password":"bar"}

i, To Get inside the POD to check the secrets -
➜  AWS_ACCOUNT_DVO_EMAIL $ kubectl exec -it nginx-deployment-76f76f6b68-5wgjn -- sh
and then run inside the POD -
# cat /mnt/secrets-store/MySecret
{"username":"foo", "password":"bar"}
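
Since the mounted file is plain JSON, an application or entrypoint script can parse individual fields out of it. A small sketch, simulated locally against a copy of the file (in the pod the path would be /mnt/secrets-store/MySecret):

```shell
# Simulate the CSI-mounted secret file locally.
mkdir -p /tmp/secrets-store
printf '%s' '{"username":"foo", "password":"bar"}' > /tmp/secrets-store/MySecret

SECRET_FILE=/tmp/secrets-store/MySecret

# Extract individual fields with python3 (jq would work equally well if installed).
DB_USER=$(python3 -c 'import json,sys; print(json.load(open(sys.argv[1]))["username"])' "$SECRET_FILE")
DB_PASS=$(python3 -c 'import json,sys; print(json.load(open(sys.argv[1]))["password"])' "$SECRET_FILE")

echo "user=$DB_USER pass=$DB_PASS"   # prints: user=foo pass=bar
```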

6, Once everything is tested and verified, you can delete the EKS cluster and the other resources -
➜  AWS_ACCOUNT_DVO_EMAIL $ eksctl delete cluster --name $CLUSTERNAME

---------
To get a shell access to the container running inside the application pod, all you have to do is:
➜  AWS_ACCOUNT_DVO_EMAIL $ kubectl exec -it --namespace <your namespace> <your pod name> -- sh
---------


Benefits of using the CSI Driver:

1. Secrets are mounted directly into pods via the chain CSI ➜ SecretProviderClass ➜ AWS Secrets Manager.
2. No extra application configuration is needed to pass the secret values in.
3. Credentials, API keys, etc. can be retrieved at runtime from the volumes mounted into Deployments/Pods, or via environment variables.
4. Multiple secrets, not just one, can be listed in a SecretProviderClass and consumed by the application.
5. And many more.


Few basic things about ECS and Kubernetes

Basic information about AWS ECS and Kubernetes and their components.

AWS ECS Basic Information:
AWS ECS is Amazon's Docker-compatible container orchestration solution. It allows us to run containerised applications on EC2 instances and scale both of them. The below architecture shows the high-level information about ECS.




As shown above, ECS Clusters consist of Tasks which run in Docker containers, and container instances, among many other components. 

Here are some AWS services commonly used with ECS:

Elastic Load Balancer: This component can route traffic to containers. Three kinds of load balancing are available: application, network, and classic.

Elastic Block Store: This service provides persistent block storage for ECS tasks (workloads running in containers).

CloudWatch: This service collects metrics from ECS. Based on CloudWatch metrics, ECS services can be scaled up or down.

Virtual Private Cloud: An ECS cluster runs within a VPC. A VPC can have one or more subnets.

CloudTrail: This service can log ECS API calls. Details captured include type of request made to Amazon ECS, source IP address, user details, etc.

Elastic File System (EFS): EFS file systems can be mounted as volumes across the instances running under an ECS cluster, for example to collect logs in one place. Note: an Amazon EFS file system can only have mount targets in one VPC at a time.

ECS, which is provided by Amazon as a service, is composed of multiple built-in components which enable us to create clusters, tasks, and services:

State Engine: A container environment can consist of many EC2 container instances and containers. With hundreds or thousands of containers, it is necessary to keep track of the availability of instances to serve new requests based on CPU, memory, load balancing, and other characteristics. The state engine is designed to keep track of available hosts, running containers, and other functions of a cluster manager.

Schedulers: These components use information from the state engine to place containers in the optimal EC2 container instances. The batch job scheduler is used for tasks that run for a short period of time. The service scheduler is used for long running apps. It can automatically schedule new tasks to an ELB.

Cluster: This is a logical placement boundary for a set of EC2 container instances within an AWS region. A cluster can span multiple availability zones (AZs), and can be scaled up/down dynamically. An organization may run 2 clusters: 1 each for production and test.

Tasks: A task is a unit of work. Task definitions, written in JSON, specify containers that should be co-located (on an EC2 container instance). Though tasks usually consist of a single container, they can also contain multiple containers.
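
For illustration, a minimal (hypothetical) task definition in JSON; a real one would be registered with `aws ecs register-task-definition`:

```json
{
  "family": "web-app",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "cpu": 128,
      "memory": 256,
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "hostPort": 0 }
      ]
    }
  ]
}
```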

Services: This component specifies how many tasks should be running across a given cluster. You can interact with services using their API, and use the service scheduler for task placement.
Note that ECS only manages ECS container workloads – resulting in vendor lock-in. There’s no support to run containers on infrastructure outside of EC2, including physical infrastructure or other clouds such as Google Cloud Platform and Microsoft Azure. The advantage, of course, is the ability to work with all the other AWS services like Elastic Load Balancers, CloudTrail, CloudWatch etc.

Kubernetes Basic Information:
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
Kubernetes was introduced in 2014 as an open source version of the internal Google orchestrator Borg. 2017 saw an increase in Kubernetes adoption by enterprises, and by 2018 it had become widely adopted across diverse businesses, from software developers to airline companies. 
One of the reasons why Kubernetes gained popularity so fast is its open source architecture and an incredible number of manuals, articles and support provided by its loyal community.
There are a number of components associated with a Kubernetes cluster. 
The master node places container workloads in user pods on worker nodes or itself. 
The other components include the following:

Etcd: This component stores configuration data which can be accessed by the Kubernetes master’s API Server by simple HTTP or JSON API.

API Server: This component is the management hub for the Kubernetes master node. It facilitates communication between the various components, thereby maintaining cluster health.

Controller Manager: This component ensures that the cluster's desired state matches the current state by scaling workloads up and down.

Scheduler: This component places the workload on the appropriate node.

Kubelet: This component receives pod specifications from the API Server and manages pods running in the host.

Worker Node(s): A node is a machine, physical or virtual. A node is a worker machine and this is where containers are hosted; nodes were also known as minions in the past. A node can run multiple pods, depending on the instance configuration.

Cluster: A cluster is a set of nodes grouped together. This way even if one node fails you have your application still accessible from the other nodes. Moreover having multiple nodes helps in sharing load as well.

Master Node: The master is another node with Kubernetes installed in it, and is configured as a Master. The master watches over the nodes in the cluster and is responsible for the actual orchestration of containers on the worker nodes.

Container Runtime Engine: The container runtime is the underlying software that is used to run containers. In our case it happens to be Docker.

Kubelet: Kubelet is the agent that runs on each node in the cluster. The agent is responsible for making sure that the containers are running on the nodes as expected.

Kube-Proxy: An additional component on the Node is the kube-proxy. It takes care of networking within Kubernetes.

The below architecture shows the high-level information about Kubernetes.



Containers within a pod can interact with each other in various ways:
Network: Containers can access any listening ports on containers within the same pod, even if those ports are not exposed outside the pod.
Shared Storage Volumes: Containers in the same pod can be given the same mounted storage volumes, which allows them to interact with the same files.
SharedProcessNamespace: Process namespace sharing can be enabled by setting shareProcessNamespace: true in the pod spec. This allows containers within the pod to interact with, and signal, one another's processes.
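
A minimal pod sketch enabling process namespace sharing (pod name and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-ns-demo
spec:
  shareProcessNamespace: true   # containers in this pod see each other's processes
  containers:
    - name: app
      image: nginx
    - name: sidecar
      image: busybox
      command: ["sleep", "3600"]
```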

The following list provides some other common terms associated with Kubernetes:
Pods: Kubernetes deploys and schedules containers in groups called pods. Containers in a pod run on the same node and share resources such as filesystems, kernel namespaces, and an IP address.

Deployments: These building blocks can be used to create and manage a group of pods. Deployments can be used with a service tier for scaling horizontally or ensuring availability.

Services: An abstraction layer which provides network access to a dynamic, logical set of pods. These are endpoints that can be addressed by name and can be connected to pods using label selectors. The service will automatically round-robin requests between pods. Kubernetes will set up a DNS server for the cluster that watches for new services and allows them to be addressed by name. Services are the "external face" of your container workloads.

Labels: These are key-value pairs attached to objects. They can be used to search and update multiple objects as a single set. Ex: Labels are properties attached to each item. So you add properties to each item for their class, kind and color.

Selectors: Labels and selectors are a standard method to group things together. Ex: Let's say you have a set of different species. A user wants to filter them based on different criteria, using labels for colour, type etc., and then retrieve the filtered items; this is where selectors come into the picture.

Important Terminologies in Kubernetes:
Ingress: Ingress is actually NOT a type of service. Instead, it sits in front of multiple services and acts as a smart router or entry point into your cluster.

ReplicaSets: It is one of the Kubernetes controllers used to make sure that we have a specified number of pod replicas running (A controller in Kubernetes is what takes care of tasks to make sure the desired state of the cluster matches the observed state).

Secrets: Secrets are used to store sensitive information, like passwords or keys. They are similar to ConfigMaps, except that the values are stored base64-encoded (and are not encrypted by default).

StatefulSets (different ways to deploy your application): A StatefulSet is also a controller, but unlike Deployments it doesn't create a ReplicaSet; instead it creates the pods itself with a unique naming convention. E.g. if you create a StatefulSet named counter, it will create a pod named counter-0, and for multiple replicas the names increment: counter-0, counter-1, counter-2, etc. Every replica of a StatefulSet has its own state, and each pod creates its own PVC (Persistent Volume Claim). So a StatefulSet with 3 replicas will create 3 pods, each with its own volume: 3 PVCs in total.

Services: A service creates an abstraction layer on top of a set of replica pods. You access the service rather than accessing the pods directly, so as pods come and go you get uninterrupted, dynamic access to whatever replicas are up at that time.
Service Types:
ClusterIP: Service is exposed within the cluster using its own IP address and it can be located via the cluster DNS using the service name.
NodePort: Service is exposed externally on a listening port on each node in the cluster.
LoadBalancer: Service is exposed via a load balancer created on a cloud platform. The cluster must be set up to work with a cloud provider in order to use this option.
ExternalName: Service does not proxy traffic to pods, but simply provides a DNS lookup for an external address. This allows components within the cluster to look up external resources in the same way they look up internal ones: through services.
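
As a sketch of the first two types, a single Service manifest (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort        # ClusterIP is the default; NodePort additionally exposes the service on every node
  selector:
    app: web            # label selector picking the backing pods
  ports:
    - port: 80          # cluster-internal service port
      targetPort: 8080  # port the container actually listens on
      nodePort: 30080   # external listening port on each node (must be in 30000-32767)
```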

ConfigMaps: ConfigMaps are used to pass configuration data in the form of key-value pairs in Kubernetes (they store configuration data in plain text). When a pod is created, the ConfigMap can be injected into it, so the key-value pairs are available as environment variables to the application hosted inside the container in the pod.
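
A sketch of that injection (names and values hypothetical): a ConfigMap plus a pod that imports its keys as environment variables via envFrom:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  DB_HOST: "db.example.local"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx
      envFrom:
        - configMapRef:
            name: app-config   # every key becomes an env var in the container
```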

Deployments (different ways to deploy your application): A Deployment is the easiest and most used resource for deploying your application. It is a Kubernetes controller that matches the current state of your cluster to the desired state declared in the Deployment manifest. E.g. if you create a Deployment with 1 replica, it will check that the desired state of the ReplicaSet is 1 and the current state is 0, so it will create a ReplicaSet, which will in turn create the pod. If you create a Deployment named counter, it will create a ReplicaSet named counter-<replicaSet-hash>, which will in turn create a pod named counter-<replicaSet-hash>-<pod-hash>. Deployments are usually used for stateless applications; however, you can save the state of a Deployment by attaching a Persistent Volume to it, making it stateful.

Containers: In our case it's a docker images which will run as a container.

Persistent Volumes: A Persistent Volume is a Cluster wide pool of storage volumes configured, to be used by users deploying applications on the cluster. The users can now select storage from this pool using Persistent Volume Claims. Kubernetes persistent volumes remain available outside of the pod lifecycle → this means that the volume will remain even after the pod is deleted.

Volumes: These provide storage to containers that is outside the container and can therefore exist beyond the life of the container. Containers in a pod can share volumes allowing them each to interact with the same files.

Volume Types: Kubernetes supports several types of standard storage solutions such as NFS, glusterFS, Flocker, FibreChannel, CephFS, ScaleIO or public cloud solutions like AWS EBS, Azure Disk or File or Google’s Persistent Disk.

Security Context: Use this feature to run a container or pod with least privileges. A security context set at the container level overrides the pod-level settings.

Taints & Tolerations: Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints. Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.
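
A sketch, with a hypothetical taint key/value. The taint would be applied with `kubectl taint nodes node1 dedicated=gpu:NoSchedule`; only pods carrying a matching toleration may then land on node1:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job
spec:
  tolerations:
    - key: "dedicated"        # matches the taint applied to the node
      operator: "Equal"
      value: "gpu"
      effect: "NoSchedule"
  containers:
    - name: main
      image: busybox
      command: ["sleep", "3600"]
```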

Namespaces: Used to isolate different sets of services/resources. They are a way to divide cluster resources between multiple users.

CronJobs: Execute jobs on a schedule - sets of batch jobs which need to perform some task(s) and then stop.

Federation: Kubernetes Federation gives you the ability to manage deployments and services across all the Kubernetes clusters located in different regions.

Load Balancing: A LoadBalancer service is the standard way to expose a service to the internet; it forwards the traffic to the concerned service.

DaemonSets (different ways to deploy your application): A DaemonSet is a controller that ensures that the pod runs on all the nodes of the cluster. If a node is added/removed from a cluster, DaemonSet automatically adds/deletes the pod. Ex: Monitoring Exporters, Logs Collection Daemon etc.

Jobs: Creates one or more pods to do work and ensures that they successfully finish.

NodeAffinity: To get pods to be scheduled to specific nodes Kubernetes provides nodeSelectors and nodeAffinity. Since node affinity identifies the nodes on which to place pods via labels, we first need to add a label to the required node.

Pod-Affinity and Anti-Affinity: Pod affinity and anti-affinity allows placing pods to nodes as a function of the labels of other pods. These Kubernetes features are useful in scenarios like: an application that consists of multiple services, some of which may require that they be co-located on the same node for performance reasons; replicas of critical services shouldn't be placed onto the same node to avoid loss in the event of node failure.

Egress: Egress rules provide a whitelist for traffic coming out of the pod. Egress rules specify a list of destinations under to, as well as one or more ports.

Network Policy(Security): A Network policy is another object in the kubernetes namespace. Just like PODs, ReplicaSets or Services. You apply a network policy on selected pods. We can link a network policy to one or more pods. One can define rules within the network policy.
Solutions that Support Network Policies:
• Calico
• Romana
• Weave, Cilium and Kube-router
Solutions that Do Not Support Network Policies: Flannel

Readiness Probes: Determines whether the container is ready to serve requests → Requests will not be forwarded to the container until the readiness probe succeeds.

Liveness Probes: Determines whether the container is running properly → When a liveness probe fails, the container will be shut down or restarted, depending on its RestartPolicy.
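
A sketch of both probes on a single container (the /healthz path is a hypothetical app endpoint; stock nginx does not serve it):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
    - name: web
      image: nginx
      readinessProbe:           # gate traffic until the app answers
        httpGet:
          path: /healthz
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 5
      livenessProbe:            # restart the container if this keeps failing
        httpGet:
          path: /healthz
          port: 80
        initialDelaySeconds: 15
        periodSeconds: 10
```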

Service Accounts: ServiceAccounts are used for allowing pods to interact with the Kubernetes API and for controlling what those pods have access to do using the API. 

Resource Request: The amount of resources a container needs to run → Kubernetes uses these values to determine whether or not a worker node has enough resources available to run a pod.

Resource Limit: The maximum resource usage of a container → The container run time will try to prevent the container from exceeding this amount of resource usage.

Multi Container PODs: Multi-container pods are pods that have more than one container. 

Init Container: Use them, for example, to check that a database server is running before starting the usual container for the service.

Rolling Update: Gradually rolling out a change to a set of replica pods to avoid downtime.

Rollback Update: Reverting to a previous state after an update if any issues arise.


Common design patterns for multi-container pods:
Ambassador: An HAProxy ambassador container receives network traffic and forwards it to the main container. Ex: An ambassador container listens on a custom port and forwards the traffic to the main container's hard-coded port. A concrete example would be a ConfigMap storing the HAProxy config: HAProxy listens on port 80 and forwards the traffic to the main container, which is hard-coded to listen on a given port number.

Sidecar: A sidecar container enhances the main container in some way, adding functionality to it. Ex: A sidecar periodically syncs files in a webserver container's file system from a Git repository.

Adapter: An adapter container transforms the output of the main container. Ex: An Adapter container reads log output from the main container and transforms it.

cPanel Script to Migrate Data In One Shot

What Does This Script Do :

This "cPanel Migration Script" prompts for a directory in which to store the cPanel backup files when you initiate it. It then backs up the accounts listed in the file "/root/rohit/24.txt", which contains the usernames (cPanel accounts) that need to be moved from the source server to the target server.



Once the backup is done, the script uses the "sshpass" command to rsync the data from the source server to the target server. With "sshpass" we provide the destination server's password as an argument to the command, so that the accounts are moved without an interactive password prompt.
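
The flow described above might be sketched as follows. This is a hypothetical dry run: the paths, target host, and file names are illustrative stand-ins, and the real cPanel backup and sshpass/rsync commands appear only in comments:

```shell
#!/bin/bash
# Dry-run sketch of the cPanel migration flow (illustrative paths and host).
ACCOUNT_LIST=/tmp/accounts.txt          # stand-in for /root/rohit/24.txt
BACKUP_DIR=/tmp/cpanel-backups          # stand-in for the prompted backup directory
TARGET="root@target.example.com:/home"  # stand-in for the destination server

# Demo data standing in for the real account list.
printf 'alice\nbob\n' > "$ACCOUNT_LIST"
mkdir -p "$BACKUP_DIR"

while read -r user; do
  # Real script would run: /scripts/pkgacct "$user" "$BACKUP_DIR"
  echo "would back up account: $user -> $BACKUP_DIR"
  # Real script would run: sshpass -p "$TARGET_PASS" rsync -avz "$BACKUP_DIR/cpmove-$user.tar.gz" "$TARGET"
  echo "would rsync: cpmove-$user.tar.gz -> $TARGET"
done < "$ACCOUNT_LIST"
```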

Note : sshpass can be downloaded from the link mentioned below -

http://www.nextstep4it.com/categories/unix-command/sshpass/

More information regarding the working of the script is mentioned below -
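The flow described above can be sketched as a short bash script. This is a hypothetical reconstruction, not the original script: the account list path comes from the text, but the backup directory, destination host, and password are placeholder assumptions, and DRY_RUN defaults to only echoing the commands so you can review them before running on a real cPanel server.

```shell
#!/bin/bash
# Hypothetical sketch of the cPanel migration flow; not the original script.
ACCOUNT_LIST="${ACCOUNT_LIST:-/root/rohit/24.txt}"  # one cPanel username per line
BACKUP_DIR="${BACKUP_DIR:-/backup}"                 # placeholder backup directory
DEST_HOST="${DEST_HOST:-target.example.com}"        # placeholder target server
DEST_PASS="${DEST_PASS:-changeme}"                  # destination root password for sshpass
DRY_RUN="${DRY_RUN:-1}"                             # 1 = only echo the commands

run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

migrate_account() {
  local user="$1"
  # /scripts/pkgacct is cPanel's stock backup tool; it writes
  # cpmove-<user>.tar.gz into the chosen directory.
  run /scripts/pkgacct "$user" "$BACKUP_DIR"
  # sshpass supplies the destination password so rsync runs unattended.
  run sshpass -p "$DEST_PASS" rsync -avz \
    "$BACKUP_DIR/cpmove-$user.tar.gz" "root@$DEST_HOST:/home/"
}

# Back up and transfer every account named in the list file.
if [ -f "$ACCOUNT_LIST" ]; then
  while IFS= read -r user; do
    [ -n "$user" ] && migrate_account "$user"
  done < "$ACCOUNT_LIST"
fi
```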

Defacing Sites via HTML Injections (XSS)





What Is HTML Injection:

"HTML Injection" is known as a virtual defacement technique and is closely related to "XSS", Cross-Site Scripting. It is a very common vulnerability found when testing most domains. This kind of vulnerability allows an "Attacker" to inject code into the affected application in order to deface the "Website" or to infect a particular page of that "Website".

HTML injection, like Cross-Site Scripting, is a security vulnerability present in many sites that allows an attacker to inject HTML code into the web pages that are viewed by other users.

XSS attacks are essentially code-injection attacks against the various interpreters in the browser. These attacks can be carried out using HTML, JavaScript, VBScript, ActiveX, Flash and other client-side languages. Well-crafted malicious code can even help the "Attacker" gain access to the entire system.
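The mechanism above can be demonstrated in a few lines of Python (the template and input are hypothetical): user input placed into a page unescaped becomes live markup, while escaping it renders the same input as inert text.

```python
import html

# Minimal sketch of why HTML injection works: a page template that
# interpolates user input without escaping lets that input become markup.
template = "<p>Search results for: {}</p>"

user_input = '<script>alert("defaced")</script>'  # hypothetical attacker input

unsafe_page = template.format(user_input)             # input interpreted as HTML
safe_page = template.format(html.escape(user_input))  # input rendered as text

print("<script>" in unsafe_page)  # True: a browser would execute this tag
print("<script>" in safe_page)    # False: escaped to &lt;script&gt;
```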

The most common tags that are used for HTML injections are mentioned below -

1. script ==> This tag contains the code referred in the body of the document.

2. meta ==> This tag defines the document's meta information, such as keywords, expiry date, author, etc.

3. html ==> This tag starts an HTML Document

4. body ==> This tag contains the body of the HTML document, which includes the entire content that will appear in the Web Browser.

5. embed ==> This tag is used to embed multimedia in HTML.

6. frame ==> This tag is used to place content into a frame.

7. frameset ==> This tag is a container for frames and replaces the body tag.

8. img ==> This tag provides a facility to put an image into a web page.

9. style ==> This tag is used to specify the style Information for the document.

10. link ==> This tag is used to link the document to an external resource, such as a style sheet.

These are the basic tags used to perform HTML injection on a site that is vulnerable to Cross-Site Scripting or HTML injection.

How To Perform HTML Injections : 1, Here we will use a Google dork to find vulnerable sites; we will use the dork mentioned below :

GOOGLE DORK : inurl:"search.php?q="

Refer the Image below -


2, Now that we have searched the Internet and found sites that are vulnerable to XSS, that is HTML injection, we will use the simple scripts and the HTML tags detailed above.

Hacking DNN Based Web Sites





Hacking DNN (Dot Net Nuke) CMS-based websites is based on a security loophole in the CMS. To use that exploit, we will go through a few points that illustrate how to hack a live site based on the Dot Net Nuke CMS.

Vulnerability : This is a known vulnerability in Dot Net Nuke (DNN) CMS. It allows a user to upload a file/shell remotely to hack a site running on the Dot Net Nuke CMS. The link for more information regarding this vulnerability is mentioned below -
                                http://www.exploit-db.com/exploits/12700/

Getting Started : Here we will use the Google Dork to trace the sites that are using DNN (Dot Net Nuke) CMS and are vulnerable to Remote File Upload.

How To Do It : Here, I am mentioning a few points on how to search for the existing vulnerability in DNN.



Let's Begin with the Tutorial...

Step 1. Go To http://www.google.com

Step 2. Now Enter this Dork - inurl:/tabid/36/language/en-US/Default.aspx

Now you will see the following as mentioned in the below Image -


The above dork is used for tracing sites running on the Dot Net Nuke CMS and will give us the URLs that are vulnerable and can be manipulated further to upload files remotely.

PWNing A System via (MSF) Metasploit Framework






Lab Requirements : Both operating systems running under my virtual machine.
1, Back Track 5 R3 Machine
2, Windows XP Machine

Vulnerability : This is a known vulnerability in Windows XP and Server 2003. MS08-067 involves the netapi module reachable through the Windows SMB protocol and may be used for arbitrary code execution. The links for more information regarding this vulnerability are -

http://blogs.technet.com/b/srd/archive/2008/10/23/more-detail-about-ms08-067.aspx
http://www.symantec.com/security_response/attacksignatures/detail.jsp?asid=21702
http://www.rapid7.com/db/modules/exploit/windows/smb/ms08_067_netapi


Effect Of MS08-067 NetAPI Vulnerability :
This module exploits a parsing flaw in the path canonicalization code of NetAPI32.dll through the Server Service. This module is capable of bypassing NX on some operating systems and service packs. The correct target must be used to prevent the Server Service (along with a dozen others in the same process) from crashing. Windows XP targets seem to handle multiple successful exploitation events, but 2003 targets will often crash or hang on subsequent attempts. This is just the first version of this module, full support for NX bypass on 2003, along with other platforms, is still in development.

Here, I have mentioned below the basic steps to perform the MSF via Modules, Payloads and Exploits -



1, Workspaces (Information stored here in the form of a database) :
Workspaces are logical sections of the Metasploit database that let us logically divide the discovered hosts. Each discovered host is stored separately in a workspace.

2, Scanning For Hosts and Services :
This means we will discover the services and the open ports on the target host.

3, Loading a Module with "use" :
Here we load the module for the specific vulnerability we will use for pwning a system.

4, Specify a Payload with "set" :
Now in this we will set a Payload against a Victim Machine to gain access over it.

5, Identify Targets with "RHOST" and "LHOST" :
Here we will use our source IP as LHOST, which is the local host; the victim's IP will be RHOST, which means the remote host.

6, Launching the Exploit :
Once everything is set and done, we are ready to exploit the victim machine.

Introduction :
When I say "Penetration Testing Tool", the first thing that comes to your mind is the world's largest Ruby project, with over 700,000 lines of code: 'Metasploit'. No wonder it has become the de-facto standard for penetration testing and vulnerability development, with more than one million unique downloads per year and the world's largest public database of quality-assured exploits.

The Metasploit Framework is a program and sub-project developed by Metasploit LLC. It was initially created in 2003 in the Perl programming language, but was later completely re-written in the Ruby programming language. With the most recent release, Metasploit has taken exploit testing and simulation to a completely new level, muscling out its high-priced commercial counterparts by increasing the speed and quality of exploit development in the shortest possible time.

In this article, I will walk you through a detailed step-by-step sequence of commands, along with graphical illustrations, to perform effective penetration testing using the Metasploit framework.

Working with Metasploit :
Metasploit is simple to use and is designed with ease of use in mind to aid penetration testers. The Metasploit Framework follows these common steps while exploiting any target system:

Select and configure the exploit to be targeted. This is the code that will be directed at a system with the intention of taking advantage of a defect in the software. Validate whether the chosen system is susceptible to the chosen exploit.

Select and configure a payload that will be used. This payload represents the code that will be run on a system after a loop-hole has been found in the system and an entry point is set.
Select and configure the encoding schema to be used to make sure that the payload can evade Intrusion Detection Systems with ease.

Execute the Exploit :
Metasploit framework has three work environments, the msfconsole, the msfcli interface and the msfweb interface. However, the primary and the most preferred work area is the 'msfconsole'. It is an efficient command-line interface that has its own command set and environment system.

Before executing your exploit, it is useful to understand what some Metasploit commands do. Below are some of the commands that you will use most. Graphical explanation of their outputs would be given as and when we use them while exploiting some boxes in later part of the article.


MSF Commands and Usage :
1, search <keyword>: Typing in the command 'search' along with the keyword lists out the various possible exploits that have that keyword pattern.

2, show exploits: Typing in the command 'show exploits' lists out the currently available exploits. There are remote exploits for various platforms and applications including Windows, Linux, IIS, Apache, and so on, which help to test the flexibility and understand the working of Metasploit.

3, show payloads: With the same 'show' command, we can also list the payloads available. We can use a 'show payloads' to list the payloads.

4, show options: Typing in the command 'show options' will show you options that you have set and possibly ones that you might have forgotten to set. Each exploit and payload comes with its own options that you can set.

5, info <type> <name>: If you want specific information on an exploit or payload, you are able to use the 'info' command. Let's say we want to get complete info of the payload 'winbind'. We can use 'info payload winbind'.

6, use <exploit_name>: This command tells Metasploit to use the exploit with the specified name.

7, set RHOST <hostname_or_ip>: This command will instruct Metasploit to target the specified remote host.

8, set RPORT <host_port>: This command sets the port that Metasploit will connect to on the remote host.

9, set PAYLOAD <generic/shell_bind_tcp>: This command sets the payload that is used to a generic payload that will give you a shell when a service is exploited.

10, set LPORT <local_port>: This command sets the port number that the payload will open on the server when an exploit is exploited. It is important that this port be one that can be opened on the server (i.e. it is not in use by another service and not reserved for administrative use), so set it to a random 4-digit number greater than 1024, and you should be fine. You'll have to change the number each time you successfully exploit a service as well.

11, exploit: Actually exploits the service. Another version of exploit, rexploit, reloads your exploit code and then executes the exploit. This allows you to try minor changes to your exploit code without restarting the console.

12, help: The 'help' command will give you basic information of all the commands that are not listed out here.

Now that you are ready with all the basic commands you need to launch your exploit, let's get into action with a live target system using Metasploit.

Step 1, On Backtrack 5 machine follow the steps mentioned via GUI Interface -
Application > BackTrack > Exploitation Tools > Network Exploit Tools > Metasploit Framework > msfconsole.



Hacking via Cloning Site Using Kali Linux




SET Attack Method :
SET stands for Social-Engineer Toolkit, primarily written by David Kennedy. The Social-Engineer Toolkit (SET) is specifically designed to perform advanced attacks against the human element. SET was designed to be released with the http://www.social-engineer.org launch and quickly became a standard tool in a penetration tester's arsenal. The attacks built into the toolkit are designed to be targeted and focused attacks against a person or organization, as used during a penetration test.

This hacking method works best combined with DNS spoofing or a man-in-the-middle attack. Here in this tutorial I'm only writing a step-by-step how-to for the basic attack, but you can modify the rest with your own imagination.


In this tutorial we will see how this attack method can own a computer in just a few steps.
1, Click on Applications >> Kali Linux >> Exploitation Tools >> Social Engineering Toolkit >> then Select  "se-toolkit".


Hacking via BackTrack using SET Attack Method




1. Click on Applications > BackTrack > Exploitation Tools > Social Engineering Tools > Social Engineering Toolkit, then select set.