Regulating Web Content Access using Web Content Filtering

Introduction to Web Content Filtering

If you are looking for a way to regulate the web content accessed by users in your organization, even when they are outside your organization's secured network perimeter, try out Web Content Filtering, a component of the Web Protection capability of Microsoft Defender for Endpoint.

Web Content Filtering can help you:

  1. Prevent users from accessing websites belonging to a specific category, even when they are not inside your organization's secured perimeter network.
  2. Give security teams the flexibility to filter web content for a specific group of users.

Web Content Filtering licensing requirements

Your subscription must include one of the following:

  1. Windows 10 Enterprise E5
  2. Microsoft 365 E5
  3. Microsoft 365 E5 Security
  4. Microsoft 365 E3 + Microsoft 365 E5 Security add-on
  5. Microsoft Defender for Endpoint standalone license

Demonstration

Kubeapps : Application Dashboard for Kubernetes environment

Introduction to Kubeapps

Kubeapps is a web-based UI that provides a complete application delivery environment, empowering users to launch, review, and share applications. Kubeapps helps organizations run their own application dashboard, allowing them to deploy Kubernetes-ready applications into their cluster with a single click.

Kubeapps allows you to:

  • Browse and deploy Helm charts from chart repositories
  • Inspect, upgrade and delete Helm-based applications installed in the cluster
  • Add custom and private chart repositories
  • Provide secure authentication and authorization based on Kubernetes Role-Based Access Control (RBAC)

Assumptions and prerequisites

  1. A Kubernetes cluster, v1.8.4 or later.
  2. Helm
  3. A locally installed copy of kubectl.

Why is Helm needed?

Just like the APT, YUM, or Homebrew package managers, Helm is a package manager that streamlines installing and managing applications in a Kubernetes (K8s) environment. It uses a packaging format called charts, where a chart describes a set of Kubernetes resources. A single chart can be used to deploy a simple single-tier application or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.

Before proceeding with Kubeapps, let's install Helm:

  1. Download your desired version.
  2. Unpack it (tar -zxvf helm-v3.x.x-linux-amd64.tar.gz).
  3. Find the helm binary in the unpacked directory and move it to its desired destination (mv linux-amd64/helm /usr/local/bin/helm).
  4. Run the command helm version to verify the installation (the same steps are consolidated into commands below).
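For reference, here are the same steps as commands you can run on a Linux amd64 host. This is a minimal sketch; the version number v3.x.x is a placeholder you should replace with the release you actually downloaded.

wget https://get.helm.sh/helm-v3.x.x-linux-amd64.tar.gz
tar -zxvf helm-v3.x.x-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
helm version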

Installing Kubeapps

Now that we have our prerequisites ready, let's go ahead and install Kubeapps. You can either leverage Lens: The Kubernetes IDE, or you can install directly from the command line. In both cases, Helm is used to install Kubeapps; Lens simply provides a nice interface for deploying applications using Helm charts.
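If you prefer the command-line route, a minimal sketch looks like this, assuming you use the Bitnami chart repository (which hosts the Kubeapps chart) and want the release in its own kubeapps namespace:

helm repo add bitnami https://charts.bitnami.com/bitnami
kubectl create namespace kubeapps
helm install kubeapps --namespace kubeapps bitnami/kubeapps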

In case you don't have Lens installed, please follow my other post, "Manage, Monitor & Troubleshoot your Kubernetes Cluster using Lens, the Kubernetes IDE."

In this blog, I will be using Lens to deploy Kubeapps.

Step 1: Open the Lens application, click on Charts, and search for Kubeapps.

Step 2: Click on Kubeapps and click on Install.

Step 3: As we are using Helm 3.x, make sure you change the parameter useHelm3: false to useHelm3: true in the values shown. After changing it, click on Install.

It will take some time for all the pods to be deployed.

Step 4: Once successfully deployed, the released app will be visible under Apps –> Releases.

Step 5: Select the released application and follow the instructions to set up port forwarding so you can access the Kubeapps web interface from your local computer (a kubectl sketch is shown below).
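If you prefer to set up the port forward yourself with kubectl, the following sketch assumes the release is named kubeapps and was installed in the kubeapps namespace; adjust the names to match your release:

kubectl port-forward -n kubeapps svc/kubeapps 8080:80

You can then open http://127.0.0.1:8080 in your browser.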

 

Creating Kubernetes API Token

For trying out Kubeapps, access to the Dashboard requires a Kubernetes API token to authenticate with the Kubernetes API server as shown below, but for any real installation of Kubeapps you should instead configure an OAuth2/OIDC provider.

kubectl create serviceaccount kubeapps-operator
kubectl create clusterrolebinding kubeapps-operator --clusterrole=cluster-admin --serviceaccount=default:kubeapps-operator

kubectl get secret $(kubectl get serviceaccount kubeapps-operator -o jsonpath='{range .secrets[*]}{.name}{"\n"}{end}' | grep kubeapps-operator-token) -o jsonpath='{.data.token}' -o go-template='{{.data.token | base64decode}}' && echo
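Note: on newer Kubernetes versions (1.24 and later) a token secret is no longer created automatically for a service account, so the command above may return nothing. In that case you can request a short-lived token directly; this is a sketch using the built-in token subcommand:

kubectl create token kubeapps-operator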

Copy the token and provide it as input in the KUBERNETES API TOKEN window.

Once authenticated, you can access the Kubeapps console. Now you can leverage Kubeapps to deploy applications to your Kubernetes cluster.

Now, under Catalog, you can see the list of available apps you can deploy using Kubeapps.

If you want to add any specific Helm repository to the list, you can do so using APP REPOSITORIES under CONFIGURATION.

 

Deploying WordPress using Kubeapps

Now that we have Kubeapps configured, let's deploy our first app using Kubeapps. Search for the application under Catalog. If you wish to have your application deployed under a specific namespace, you can choose it from the list. You can also create a new namespace from the Kubeapps console and deploy your application there.

Let's create a new namespace.

Click on the WordPress application and hit Deploy.

Make the necessary changes as per your organization's requirements.

Navigate to the bottom and click on Submit.

Wait until both pods' status changes to RUNNING.

As we don't have a load balancer configured for the pods, we can access the website directly using a worker node's IP address along with the WordPress service's node port.
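A quick way to look up that node port and a node address is to list the service and the nodes; this sketch assumes the WordPress release was deployed into a namespace named wordpress:

kubectl get svc -n wordpress
kubectl get nodes -o wide

The PORT(S) column of the WordPress service shows the node port mapped to port 80, and the node list gives you an IP address to browse to.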

Conclusion

This concludes the deployment of Kubeapps in a Kubernetes cluster, along with the deployment of a WordPress application using Kubeapps. Kubeapps provides a web-based UI with a complete application delivery environment that empowers users to launch, review, and share applications. I hope this was informative for you; please do share it if you found it worth sharing.


Manage, Monitor & Troubleshoot your Kubernetes cluster with Lens: The Kubernetes IDE

These days Kubernetes is everywhere, and there are many Kubernetes administration tools to choose from, both command-line and graphical. While exploring the options to manage and monitor my Kubernetes cluster, I came across two very nice tools: k9s (a command-line Kubernetes cluster monitoring tool) and Lens (a GUI-based tool). As most admins love to work with a colourful GUI-based solution, I thought of covering Lens, a Kubernetes IDE, first, and will be covering k9s in my next blog.

Introduction to Lens

Lens, which bills itself as “the Kubernetes IDE,” is a useful, attractive, open source user interface (UI) for working with Kubernetes clusters. Lens is a standalone application available for macOS, Windows, and Linux operating systems. Lens can connect to your Kubernetes cluster using a kubeconfig file and provides a deep level of visibility and real-time statistics for your cluster. Lens can also connect to (or install) a Prometheus stack and use it to provide metrics about the cluster, including node information and health.

You can access and work with multiple Kubernetes clusters from a single unified IDE. The Kubernetes clusters can be local or external (public cloud hosted, Rancher, or OpenShift). You can add your Kubernetes cluster by simply importing the kubeconfig with the cluster details. You can even access your Kubernetes cluster using the built-in kubectl, which enforces the Kubernetes RBAC.

Lens Features

  1. Monitor your Kubernetes Cluster
  2. Collects Metrics such as CPU, Memory, Network and Disk Utilization
  3. Scale up or Scale Down Kubernetes Cluster
  4. Single Pane of Glass for multiple Kubernetes clusters
  5. Inbuilt Kubectl tool
  6. Integration with Helm repositories.

Installing Lens

You can download Lens for Linux, macOS, or Windows from either its GitHub page or its website. In this post, I will be installing Lens on my Ubuntu Server.

Step 1: Install Lens on Ubuntu using the following command.

sudo snap install kontena-lens --classic

Step 2: Launch the Lens application.

Step 3: Fetch the kubeconfig file details of the cluster you want to add.

kubectl config view --minify --raw

Step 4: Click on the plus sign to add the cluster you want to monitor. Click on Custom, paste the kubeconfig file data, and click on Add Cluster.

Once added, you will see the details of the cluster, nodes, pods, and other components of the Kubernetes cluster.

In case you don't see the resource utilization metrics, you need to enable them from the cluster settings. Right-click on the cluster and click on Settings.

Click on Install under Features –> Metrics

Reviewing Kubernetes Cluster Events

 

Lens provides integration with different Helm chart repositories; you can install any app from the list of available Helm charts. Lens also provides an option to integrate additional Helm repositories.

Deploying Harbor Repository using Lens.

As we can see, there are only two applications deployed under Releases.

Click on Charts and Search for the application you want to deploy.

Click on Install.

Click on Install in bottom right corner.

Click on Helm Release

You can see the third application added to the list.

Under Pods, you can see the deployment progress of the pods.

Once the application is completely deployed, click on the application under Releases to view the steps to log in to the application.

Scaling up a Deployed Application

Using Lens, you can scale an application up or down anytime you want with just a few clicks.
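The same operation is also available from the built-in terminal if you prefer the CLI; this is a sketch assuming a Deployment named my-app in the default namespace:

kubectl scale deployment my-app --replicas=3
kubectl get deployment my-app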

Inbuilt Kubectl Command

Lens provides a terminal along with the kubectl command-line tool to manage the Kubernetes cluster.

This concludes the installation of Lens, the Kubernetes IDE. In this blog, we covered how you can leverage Lens to monitor, manage, and troubleshoot a Kubernetes cluster. I hope this was informative for you. Please do share it if you found it worth sharing.


Deploying Kubernetes Workload Cluster using Tanzu Kubernetes Grid

Introduction

In my previous blog, I covered the deployment process of a Tanzu management cluster using Tanzu Kubernetes Grid. In this blog I will demonstrate how you can use Tanzu Kubernetes Grid to deploy and manage Tanzu Kubernetes workload clusters in a vSphere environment. The Tanzu Kubernetes Grid CLI provides commands and options to perform lifecycle management operations such as the creation, deletion, and scaling of a Kubernetes workload cluster.

Prerequisite:

Before you can create Tanzu Kubernetes clusters, you must have a Tanzu Kubernetes Grid management cluster deployed, up and running, and in a healthy state.

Procedure

To deploy Tanzu Kubernetes clusters, you run the tkg create cluster command, specifying different options to deploy Tanzu Kubernetes clusters with different configurations. When you deploy a Tanzu Kubernetes cluster using the Tanzu Kubernetes Grid CLI, Calico networking is automatically enabled in the cluster by default.
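As a sketch, a basic invocation looks like the following; the cluster name, plan, and worker count are placeholder values you would adjust for your environment:

tkg create cluster my-workload-cluster --plan dev --worker-machine-count 3
tkg get clusters
tkg get credentials my-workload-cluster

The last command retrieves the kubeconfig for the new workload cluster so you can start using kubectl against it.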

Demonstration


I hope this was informative for you; please do share it if you found it worth sharing. Happy learning 🙂

Deploying VMware Tanzu Kubernetes Grid Management Cluster

Introduction to VMware Tanzu Kubernetes Grid

What is VMware Tanzu Kubernetes Grid?

VMware Tanzu Kubernetes Grid provides a consistent, upstream-compatible implementation of Kubernetes that is tested, signed, and supported by VMware. VMware Tanzu Kubernetes Grid allows organizations to run Kubernetes with consistency and make it available to developers as a utility. It gives organizations a consistent, upstream-compatible, regional Kubernetes substrate across on-premises software-defined data centers and public cloud environments. VMware TKG provides services such as networking, authentication, ingress control, and logging that a production Kubernetes environment requires.

VMware TKG Components

  1. Bootstrap Environment: The bootstrap environment is any physical or virtual server on which you download and run the Tanzu Kubernetes Grid CLI. This is where the initial bootstrapping of a management cluster occurs, before it is pushed to the platform where it will run.
  2. TKG Management Cluster: The management cluster is the first element that needs to be deployed when you create a Tanzu Kubernetes Grid instance. The management cluster is a Kubernetes cluster that performs the role of the primary management and operational center for the Tanzu Kubernetes Grid instance. You can leverage either the TKG CLI or the web interface to deploy a TKG management cluster.
  3. Tanzu Kubernetes Clusters: These are the clusters that are deployed from the management cluster by using the Tanzu Kubernetes Grid CLI. You can have multiple clusters running different versions of Kubernetes, depending on the needs of the applications they run. By default, Tanzu Kubernetes clusters implement Calico for pod-to-pod networking.

Deploying TKG Management Cluster

Prerequisite

  • Internet connectivity on the bootstrap server.
  • Docker and kubectl installed on the bootstrap server.
  • Download, unpack, and install the Tanzu Kubernetes Grid CLI:
  • On Linux use gunzip tkg-linux-amd64-v1.1.0-vmware.1.gz
  • mv ./tkg-linux-amd64-v1.1.0-vmware.1 /usr/local/bin/tkg
  • chmod +x /usr/local/bin/tkg
  • Verify by executing tkg version

  • Generate Public & Private Key on Bootstrap server

ssh-keygen -t rsa -b 4096 -C "email@example.com"
ssh-add ~/.ssh/id_rsa

  • SSH to TKG Bootstrap VM.
  • Run the following command to start the UI wizard: tkg init --ui

  • Open another terminal session on your workstation and run the following command to use SSH port forwarding so you can connect to the TKG UI from your local workstation: ssh root@192.168.1.199 -L 8080:127.0.0.1:8080 -N. Then open an Internet browser and browse to http://127.0.0.1:8080

Deploying Tanzu Kubernetes Grid Management Cluster

Conclusion:

This concludes the process of deploying a #VMware Tanzu #Kubernetes Grid management cluster in a vSphere environment. I hope this was informative for you. Please do like and share if you found it worth sharing. Happy Learning 🙂

 

Deployment considerations for a Stateful application in a Kubernetes Cluster

Introduction

After its first release in June 2014, in a span of approximately six years, Kubernetes, a CNCF project, has become the standard for container orchestration, with almost all major technology giants such as AWS, Azure, GCP, IBM, Red Hat, and VMware (Project Pacific) supporting it. It would not be wrong to say Kubernetes is the fastest growing project in the history of open source software.

Stateless vs Stateful

Initially, Kubernetes was primarily considered a platform to run stateless applications, where the application is not required to hold any data: the server processes requests based only on information relayed with each request and doesn't rely on information from earlier requests. Stateful services such as databases and analytics, on the other hand, where the server processes requests based on the information relayed with each request plus information stored from earlier requests, would run either in virtual machines or as managed services from a cloud provider.

In this article, I will focus on the key points you need to keep in mind before deploying a stateful application. As we are now clear, a stateful application requires information to be stored. In a Kubernetes cluster, there are multiple approaches to store the data:

  1. Using Shared storage for the Kubernetes cluster
  2. Using Kubernetes StatefulSets

Let’s discuss the two approaches….

Stateful application using Shared filesystem 

By design, Docker containers are ephemeral in nature and require persistent disk storage, i.e. persistent volumes, to store data. A persistent volume can be created either manually or dynamically. With a manually created persistent volume, or static provisioning, the volume is created before application provisioning, whereas with dynamic provisioning the cluster can automatically provision storage in response to the persistent volume claims it receives and then bind the resulting persistent volume to the requesting pod. In Kubernetes, dynamic provisioning is done using a StorageClass.
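As a minimal sketch of dynamic provisioning, the PersistentVolumeClaim below simply references a StorageClass by name; the class name standard and the 10Gi size are placeholder values, and the class itself must already exist in your cluster (many clusters ship with a default one):

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi
EOF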

You can create a persistent volume either by

  1. Directly creating a persistent volume on the shared file system. These days most shared file system providers, e.g. Samba, NFS, iSCSI, Amazon EFS, Azure Files, Google Cloud Filestore, provide volume drivers or a CSI (Container Storage Interface) driver to enable cluster admins to provision persistent volumes directly on the shared storage.
  2. Mounting the shared storage on the Kubernetes nodes and creating the persistent volume on the mounted volume. Once mounted directly on the Kubernetes nodes, the persistent volume can be pointed to the host directory through hostPath or a Local PV (see the sketch below).
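For the second approach, here is a minimal hostPath PersistentVolume sketch; the path /mnt/shared/data is an assumed mount point where the shared storage is already mounted on the node, and the capacity is a placeholder:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/shared/data
EOF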

Stateful applications using Kubernetes Statefulset controller

In the case of a shared file system, the durability and persistence of data are provided by the underlying storage, as the workload is completely decoupled from it. This provides the flexibility to get the pods scheduled on any node of the Kubernetes cluster. However, because the workload is completely decoupled from the underlying storage, this approach is not the right fit for applications such as NoSQL and relational databases that require high I/O throughput.

 

For stateful applications requiring high I/O throughput, Kubernetes StatefulSets are the recommended method. Leveraging StatefulSets along with Persistent Volume Claims, you can have applications that scale up automatically, with a unique Persistent Volume Claim associated with each replica pod. StatefulSets are suitable for deploying Kafka, MySQL, Redis, ZooKeeper, and other applications needing unique, persistent identities and stable hostnames.

There are three major components underlying a StatefulSet (combined in the sketch after the list):

  1. A headless Service: a Service with a service IP that, instead of load-balancing, returns the IPs of the associated pods. This allows direct interaction with the pods instead of going through a proxy.
  2. The StatefulSet: pods belonging to a StatefulSet are guaranteed to have stable, unique identifiers, follow a naming convention, and support ordered, graceful deployment and scaling.
  3. Persistent Volume Claims: pods participating in a StatefulSet are required to have persistent volume claims following a similar naming convention. If a pod gets terminated and is restarted on another node, the Kubernetes controller will ensure that the new pod is associated with its corresponding existing Persistent Volume Claim.
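To make the three pieces concrete, here is a minimal, hypothetical sketch that combines a headless Service, a StatefulSet, and a per-replica volume claim template; the PostgreSQL image, names, password, and sizes are placeholder values, not a production configuration:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None          # headless Service: DNS returns the pod IPs directly
  selector:
    app: db
  ports:
    - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db          # ties the StatefulSet to the headless Service
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:13
          env:
            - name: POSTGRES_PASSWORD
              value: changeme    # placeholder only, use a Secret in practice
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:          # one PVC per replica: data-db-0, data-db-1, ...
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
EOF

Each pod gets a stable name (db-0, db-1, db-2), a stable DNS entry via the headless Service, and its own claim, which is exactly the behaviour described in the three components above.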

 

Conclusion

In this article we discussed two approaches for deploying a stateful application in a Kubernetes cluster. Deploying a stateful application using a shared filesystem is the best fit for applications that don't require high I/O throughput. On the other hand, deploying a stateful application using Kubernetes StatefulSets is the right fit for applications requiring high I/O throughput. You can choose from a wide set of storage options such as GlusterFS, Samba, NFS, Amazon EFS, Azure Files, and Google Cloud Filestore.

I hope this will be informative for you. Please do share if you find worth sharing this.

Serverless or Containers: what to choose?

Introduction

These days most public cloud providers are offering #Serverless services, also known as Function as a Service (FaaS). Serverless providers try to deliver more value to the business by minimising the time and resources an organization spends on the underlying infrastructure. A serverless provider gives users the flexibility to write and deploy code instead of worrying about the deployment, scalability, and manageability of the underlying infrastructure required to run that code, and the user is charged based on their computation.

History

Serverless architecture is not new. In 2008 Google offered Google App Engine (GAE, or simply App Engine) as a PaaS platform for developing and hosting web applications in Google-managed data centres. Google App Engine primarily supports Go, PHP, Java, Python, Node.js, .NET, and Ruby applications. However, AWS accelerated mainstream use of serverless in 2014 with the introduction of AWS Lambda, the first serverless computing offering by a public cloud vendor. Today, serverless offerings are available from the major cloud providers, such as Google Cloud Functions, Microsoft Azure Functions, and IBM Cloud Functions, and there is a great number of serverless providers supporting various technologies at different price points. Additionally, software such as OpenFaaS, Kubeless, Apache OpenWhisk, Knative, and many more are available for on-premises deployment for companies that don't want to go to the cloud but want some additional flexibility.

How Serverless is different from containers ?

The next question which may come to mind is: how is serverless computing different from containers? When both architectures are independent of the underlying infrastructure and enable developers to build applications with more flexibility and less overhead in comparison to traditional servers or virtual machines, what makes them different?

Let’s understand containers first

A container contains an application and all the dependencies required to run it properly. You can run almost any kind of application in a container, no matter where it is hosted.

In simple terms, we can say containers are a way to partition a machine or server into multiple environments. Each environment runs only one application in its own partition and doesn't interact with applications running in other partitions. Every container shares the machine's kernel with other containers but runs as if it were the only system on the machine.

Key differences

  1. Allocating server space: although both technologies need server space to run the application, in serverless computing it is up to the serverless vendor to provision space as needed by the application. On the other hand, each container lives on one machine at a time and uses the operating system of that machine.
  2. Backend infrastructure scalability: in a serverless architecture the backend automatically scales up or down as needed, whereas with containers the developer is required to forecast, in advance, the number of containers the infrastructure team needs to provision.
  3. Backend infrastructure cost: as a serverless architecture can scale up and down as needed, developers are only charged for the server capacity their application has actually used. Containers, on the other hand, are constantly running, and cloud providers will charge for the server space even if no one is using the application at the time.
  4. Backend infrastructure maintenance: in a serverless architecture there is no backend to manage, and the serverless provider takes care of the management and software updates for the servers that run the code. On the other hand, even if containers are hosted in the cloud, the cloud provider does not update or manage them; management of containers is the developer's responsibility.
  5. Deployment time: the initial deployment of containers takes longer to set up, as it is required to configure system settings, libraries, and other dependencies. After that configuration, deployment takes only a few seconds. Serverless functions, on the other hand, do not come bundled with system dependencies and take milliseconds to deploy.

With all these benefits, such as shorter deployment time, low maintenance, no upfront cost, and scalability, there may still be scenarios where a serverless architecture is not the right fit. A few of them are:

  1. It is difficult to test a serverless application because the backend environment is hard to replicate locally. On the other hand, containers remain the same whether deployed in the cloud or on-premises, making it simple to test a container-based application before production deployment.
  2. In a serverless architecture, there may be some latency involved in executing tasks because servers sit cold until pinged by an application. Such an architecture may not be an ideal solution for applications where speed is the primary requirement, such as e-commerce and search sites.
  3. Migration of a code base to another cloud service provider can be a big challenge because of the lack of interoperability and the inability of implementations to communicate in multi-cloud deployments. You might have to make major changes to your code, which can take a lot of time and money. Moving to serverless may not be the right choice if vendor lock-in is a primary concern.
  4. As serverless functions have time limits before they get terminated, serverless computing might not be the best choice for long-running apps such as online games and apps that keep performing analysis on large datasets.
  5. In a serverless architecture, you don't have much control over the server. You select the amount of memory your function should get, but the CSP assigns the disk storage and decides the rest of the specifications for you. This can be a hindrance if you need something like a GPU to process large image or video files.

When should you use each?

Both serverless and containers serve specific use cases. A serverless architecture is best for use cases where development speed, cost optimization, manageability, and scalability are the primary requirements. Serverless is best suited for apps that need not always be running but should be ready to perform tasks.

On the other hand, Containers are best for complex, long-running applications where you need a high level of control over your environment and you have the resources to set up and maintain the application. Containers are also very useful if you are planning to migrate monolithic legacy applications to the cloud. You can split up these apps into containerized microservices and orchestrate them using a tool like Kubernetes or Docker Swarm.

Integrating vRealize Log Insight with Operations Manager

When vRealize Operations Manager is integrated with vRealize Log Insight, you can search and filter log events. From the Interactive Analytics tab on the Log Insight page, you can create queries to extract events based on timestamp, text, source, and fields in log events. vRealize Log Insight presents charts of the query results.

To access the Log Insight page from vRealize Operations Manager, you must either:

  • Configure the vRealize Log Insight adapter from the vRealize Operations Manager interface, or
  • Configure vRealize Operations Manager in vRealize Log Insight.

Procedure

  1. In the menu, select Administration, and then from the left pane, select Management > Integrations.
  2. From the Integrations page, click VMware vRealize Log Insight.
  3. In the VMware vRealize Log Insight page complete the following steps:
    • Enter the IP address or FQDN of the vRealize Log Insight instance you have installed and want to integrate with in the Log Insight server text box.
    • Select the collector group from the Collectors/Groups drop-down menu.
    • Click Test Connection to verify that the connection is successful.
    • Click Save.

Demonstration