DevSecOps – Integrating Security in the Development Pipeline

DevSecOps: Is your organization ready for it?

Lately, we’ve been hearing a lot of discussion around DevOps and DevSecOps. Although these are considered very good practices, I believe many organizations aren’t applying them properly, or aren’t prepared for them just yet. These practices demand significant cultural changes that most organizations aren’t ready for, or open to. Simply adopting tools and automation is not enough to achieve the cultural change needed. Many researchers and authors believe that DevOps already includes security at its core, and they may be right, since software security is closely related to software quality. Unfortunately, that’s not yet the case in most companies adopting DevOps.

Many organizations are applying DevOps without thinking about security. That’s why, I believe, a new term was created, first called Rugged DevOps, SecDevOps or DevOpsSec, and lately DevSecOps (other terms you may find for it are Agile Security, Agile SDLC or DevOps Security). The goal of DevSecOps, in my opinion, is to raise awareness of the increasing need for security in this shifting culture, using the agile and automation mindset. DevSecOps is an evolution of DevOps that ensures security teams stop being the bottleneck for digital transformation.

DevSecOps is changing the Application Security industry by integrating security verifications and checks into the development process, so that vulnerabilities are found and fixed before launching into production. Here is Gartner’s definition of DevSecOps:

“DevSecOps can be depicted graphically as the rapid and agile iteration from development (the left side of Figure 1) into operations (the right side of Figure 1), with continuous monitoring and analytics at the core. Our goal as information security architects must be to automatically incorporate security controls without manual configuration throughout this cycle in a way that is as transparent as possible to DevOps teams and doesn’t impede DevOps agility but fulfills our legal and regulatory compliance requirements as well as manages risk” Gartner – DevSecOps: How to Seamlessly Integrate Security Into DevOps

Figure 1 – DevSecOps – Source: Gartner (September 2016) 

Currently, security testing is generally performed at the end of the application development process, which increases the risk of deployment delays and project costs, especially if vulnerabilities that require recoding or redesign are found. Companies can overcome this problem by using the “Shift Security Left” (SSL) approach, which, according to Accenture and AWS, “introduces security at the inception of the development lifecycle and automates testing within the DevOps workflow.” Please don’t confuse that approach with the SSL/TLS encryption protocols. Let’s discuss some of these practices and how you can apply them in your software development scenario.

Outdated and vulnerable libraries (SCA)

One of the biggest security problems organizations face is knowing exactly what’s inside their applications and how to patch them properly and securely. We can all remember the damage caused by WannaCry in 2017, which exploited a vulnerability that had been fixed three months before the attack was launched. In the same year, Equifax was breached, in another big attack that made news all over the world, because of an outdated and vulnerable version of the Apache Struts library, which allowed the execution of remote commands on their systems.

Nowadays it is rare, if not unheard of, for a developer to write code entirely from scratch, that is, without using any frameworks or libraries. This has become a big problem for applications, since most of the code being used comes from third parties and is rarely verified or tested for security issues. Recent studies have shown that more than 90% of applications are made up of open source components, and that 70% of those are outdated or have a publicly disclosed vulnerability. The OWASP Top 10 2017 addresses this issue in its A9 – Using Components with Known Vulnerabilities:

“Components, such as libraries, frameworks, and other software modules, run with the same privileges as the application. If a vulnerable component is exploited, such an attack can facilitate serious data loss or server takeover. Applications and APIs using components with known vulnerabilities may undermine application defences and enable various attacks and impacts.” – OWASP Top 10 2017

A great example of this issue is WordPress, a popular CMS that has a strong and well-tested core, at least nowadays. Its themes and plugins, however, are very problematic and unreliable, since they are developed by third-party developers who don’t always follow the same security procedures and verifications in their development process as the core WordPress team.

There are many tools out there that check for outdated software on servers and can even update it for you. But that’s not an easy task, especially because of incompatibility issues with legacy applications and the lack of proper regression testing to make sure everything still works once the updates are made. It becomes even harder when the outdated library is embedded in a custom or in-house application, where the tools mentioned earlier won’t work. So what can be done is to verify these issues before the application is even built, generally during the development phase or during pre-build tasks.

There are many tools available that will perform those checks for you across many languages and frameworks; some of them can even be integrated into your developers’ IDE, so issues are caught and fixed before any new code is submitted. OWASP has a great free tool available for Java and .NET libraries called OWASP Dependency Check, which also has plugins for CI tools like Jenkins. Other commercial tools that can help you in this process are Snyk, WhiteSource, Synopsys Black Duck, Veracode SCA, Conviso AppSec Flow and Sonatype Nexus IQ Server.
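As a rough sketch, a Dependency Check scan can be wired into a pre-build or CI step like the command below. The project name, paths and CVSS threshold are placeholders, not recommendations:

```shell
# Illustrative SCA pre-build step (project name and paths are examples).
# Scans the project's dependencies and fails the build when any finding
# has a CVSS score of 7 or higher.
dependency-check.sh \
  --project "my-app" \
  --scan ./ \
  --format HTML \
  --out ./reports \
  --failOnCVSS 7
```

Failing the build on high-severity findings is what turns the scan from a report into an actual gate in the pipeline.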

Automated security scans (DAST)

As mentioned before, security scans are mostly performed at the end of the application development process, usually after the application is complete and working properly in at least a Dev or QA environment. Any issues found at that point require either a system update of some kind or recoding to fix the application. It has been proven on many occasions that fixing bugs earlier in the development lifecycle is much cheaper and faster than after the application is in production. So why not run security scans as soon as possible and find the problems at an earlier stage?

With virtualization, containers, orchestrators and serverless, it has become quick and easy to create and destroy environments for testing or proof-of-concept (PoC) purposes. Security teams can use those techniques to spin up temporary servers, test new updates as soon as possible and make sure their applications still work. You can also integrate security testing into the build process, so that every time a build is generated and the QA testing is done, security testing is also performed, giving developers fast feedback so they can fix issues quickly. Another great tool from OWASP is OWASP ZAP, a web proxy which, like Dependency Check, has a Jenkins plugin to integrate security scans into the build process. Other tools that can help you in this process are w3af, Arachni, BurpSuite Enterprise, Acunetix, Netsparker, WebInspect, AppScan, Conviso AppSec Flow and Veracode. For a full list, please check the OWASP Vulnerability Scanning Tools list.
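As a sketch of such a build-time DAST step, ZAP’s baseline scan (a quick, mostly passive scan) can be run against a temporary test environment from the pipeline; the target URL below is a placeholder:

```shell
# Illustrative DAST step: run a ZAP baseline scan against a staging
# environment and write an HTML report. The URL is a placeholder.
docker run --rm -t owasp/zap2docker-stable zap-baseline.py \
  -t https://staging.example.com \
  -r zap-report.html
```

The baseline scan is intentionally fast and non-intrusive, which makes it suitable to run on every build without slowing the pipeline down too much.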

Security code reviews (SAST)

Code reviews and refactoring have been in place for a long time, but mostly focused on code quality and performance. Security code reviews focus on vulnerabilities and security issues, regardless of how well the code has been written. Although there are many tools available, there are also many different languages to support and plenty of false positives to deal with. In the DevSecOps mindset, code reviews should be done at each commit, preferably automated, since the objective is to commit small pieces of code many times a day or week, so that if something breaks it is easier to debug and fix. In that case, you can integrate your security code review tool with your SCM and create alerts, or even triggers that scan your source code every time there is a commit.

Performing those checks frequently will significantly reduce the number of vulnerabilities in your software that would require code changes after deployment. It will also give your developers fast feedback about the mistakes they are making and how to avoid them. Some tools to help in this process are Checkmarx, Fortify, HuskyCI, Horusec, AppScan, SonarQube, Conviso AppSec Flow, and many others. Please check out this detailed list of Source Code Analysis tools by OWASP for more options.
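As an illustrative example using SonarQube, the scanner CLI can be triggered by the SCM or CI on each commit; the server URL, project key and token below are placeholders:

```shell
# Illustrative per-commit SAST step with SonarQube's scanner CLI.
# Server URL, project key and token are placeholders.
sonar-scanner \
  -Dsonar.projectKey=my-app \
  -Dsonar.sources=. \
  -Dsonar.host.url=https://sonarqube.example.com \
  -Dsonar.login=$SONAR_TOKEN
```

Combined with a quality gate on the server side, a step like this can block a merge when new security issues are introduced.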


Developers are not security experts; they rarely hear or learn about security during their education, so don’t expect them to understand and fix all the security issues at once. This process takes time and requires training. The security team has to understand how the development flow works and try to include security in a way that is transparent and frictionless, without adding new barriers. Try to place security verifications and checks inside already well-known tools such as IDEs, CI/CD, SCM or ALM tools, using add-ons or plugins.

Applying those three practices effectively will have a huge impact on the overall security of your applications. Make sure you automate those tasks as much as possible, so that your security team can focus on manual testing and other issues, and developers can get fast feedback about the security of their applications and act on it. Remember what matters to the business and try to balance security with new features.

Kubernetes Security 101: Best Practices to Secure your Cluster

Article originally posted on Dec 17th, 2020 at:

Container adoption has surged in recent years, with the “2019 Cloud Native Computing Foundation survey” reporting that 84% of respondents use some type of containerization in production. The same survey also found that 78% of respondents use Kubernetes in production, making it the market-leading container orchestration solution, widely adopted across global tech companies and startups.

With clear benefits and rising adoption, it is critical that the security of Kubernetes is well-understood by any Developer or DevOps person implementing this service in their environment. To help those engineers distill all the information available on this topic, here are the key steps for securing your clusters. 

What Needs to Be Secured? 

Any part of a cluster could be abused if accessed by an attacker. Let’s look at some specific steps to ensure your deployments are secure, focusing on the control plane and worker nodes along with the sub-parts within each. 

The Control Plane 

If you run your clusters using managed services such as AKS, EKS or GKE, the cloud provider handles control plane security. This is based on the Shared Responsibility Model for the Cloud.

The Master Node, aka the Control Plane, is the brain that keeps the complex system of nodes running: it acts as the main node(s) of your cluster and manages the worker nodes.

Figure 1 – A diagram of a Kubernetes Architecture and its main components 

Attackers and bots are constantly scanning the internet for exposed Kube API Servers. It is critical that the kube-apiserver is not left publicly exposed; otherwise, it may be an easy target for those bots. Although the default setting on unmanaged clusters is that the API server is not exposed, that’s not the case for managed Kubernetes services such as EKS (Elastic Kubernetes Service). Exposed API servers are still the main entry point for attackers to compromise a K8s cluster. The basic recommendation for securing API servers is to only allow engineers to access the API via the internal network or corporate VPNs, and even then, to limit access to specific users and machines.

Using RBAC 

RBAC (Role Based Access Control) authorization is the next step in creating a more secure cluster now that access to the API server is restricted. RBAC allows you to configure who has access to what in a cluster. It also allows you to restrict users from accessing the kube-system namespace, which houses all the control plane pods (see Figure 1 above). 

When using RBAC authorization, there are four kinds of API objects that you can use: 

  • Role. Contains rules that represent a set of permissions within a namespace. 
  • RoleBinding. Grants the permissions of a Role to one or more users. 
  • ClusterRole. Contains rules that represent a set of permissions, but it is not tied to a namespace, and it will be applied on the cluster level. 
  • ClusterRoleBinding. Grants permissions for a ClusterRole to a set of users. 

The permissions for a Role or ClusterRole are usually formed by combining a verb with a noun, where the noun represents an object or resource. Some examples include:

  • Get Services 
  • List Pods 
  • Watch Secrets 
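As a concrete sketch, the manifest below defines a namespaced Role that can only read pods, and a RoleBinding granting it to a hypothetical user “jane”:

```yaml
# A namespaced Role granting read-only access to pods, plus a RoleBinding
# that grants it to a (hypothetical) user "jane".
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Note that “jane” can list and watch pods only in the default namespace; she gets no access at all to kube-system.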

Figure 2 – How users are related to Roles via the RoleBindings (same thing for ClusterRoles and ClusterRoleBindings) 

What is etcd? 

etcd is the main data store for your cluster, which means all the cluster objects are saved there. Leaving etcd exposed can leak critical data. Unfortunately, etcd misconfiguration remains rampant: we’ve seen more than 2,300 exposed etcd services on Shodan in December alone.

Figure 3 – Total number of etcd instances currently exposed to the internet. Shodan query used: etcd port:”2379″ (December 1st, 2020) 

The same security principles that apply to any data storage system should be applied to etcd: encryption should be implemented both in transit and at rest. The latest default installation, using kubeadm, sets up the proper keys and certificates with TLS encryption for etcd. However, if an attacker somehow bypasses the API server and can manipulate objects directly in etcd, it is the same as having full access to the entire cluster: he or she could change any of the cluster’s configuration, and the controller manager would reflect those changes in the actual cluster itself.
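For encryption at rest, the kube-apiserver can be given an EncryptionConfiguration (via its --encryption-provider-config flag) so that Secrets are encrypted before being written to etcd. A minimal sketch, with a placeholder key:

```yaml
# Sketch of an EncryptionConfiguration for the kube-apiserver so that
# Secrets are encrypted at rest in etcd. The key value is a placeholder,
# not a real key.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <base64-encoded-32-byte-key>
  - identity: {}
```

The identity provider at the end keeps previously stored, unencrypted Secrets readable while new writes get encrypted with aescbc.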

Network Policies 

Network policies can help address the issue of open pod communication. By default, the cluster network allows all pods on the same cluster to communicate with each other, including pods from different namespaces. A network policy specifies how groups of pods can communicate with each other and with other network endpoints. NetworkPolicy API resources use labels to select pods and define rules that specify what type of traffic is allowed for the selected pods. These policies can help you restrict access between pods or namespaces. All the access can be configured via labels in YAML files, allowing you to block pods from accessing other pods on the kube-system namespace, for example. 
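A common starting point is a default-deny policy for a namespace. As a sketch (the namespace name is an example), the empty podSelector below selects every pod in the namespace and, since no ingress rules are defined, blocks all incoming traffic:

```yaml
# A default-deny ingress policy: selects all pods in the namespace
# (empty podSelector) and allows no incoming traffic at all.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```

From there, you add narrower policies that explicitly allow only the pod-to-pod traffic your applications actually need.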

The Worker Nodes 

If the control plane is the brain, worker nodes are the muscle of a cluster: they run and control all the pods and containers. While worker nodes are not strictly required, it is not recommended to run all your pods on the same node as the control plane, so you should have at least one worker node.

The main components of a worker node are the kubelet, the kube-proxy and the container runtime. The kubelet is the agent that runs on each node of your cluster to make sure all containers are running in a pod; it is responsible for managing the container runtime, executing containers when necessary and collecting execution information. The kube-proxy manages network communication, allowing different containers to communicate with each other, in addition to handling external requests. Even the Master Node has a kubelet and a kube-proxy, although architecture diagrams usually don’t show them. The container runtime is the component that actually creates and executes the containers themselves.

The default container runtime for a Kubernetes cluster, up until v1.19.5, was Docker, but that changed on December 8th, 2020, when v1.20.0 was released and it was announced that Dockershim, the CRI (Container Runtime Interface) shim for Docker, was being deprecated, causing major panic for everyone. Although it is not the goal of this post, you do not need to panic!

Really, don’t go crazy about it! Not a lot will change, but if you want to know more and be prepared, please read this post by the Kubernetes community: Don’t Panic: Kubernetes and Docker. Other CRI-compatible runtimes that you could use with your Kubernetes cluster are containerd and CRI-O, both projects under the CNCF. I highly recommend that you read the post above (after this one, of course!) if you want more details about these changes and what will happen in the future.

There are three main steps to ensure the minimum level of security for the pods themselves. 

1. Limit resources to ensure all pods can perform as needed. If one pod starts consuming all the computing resources available, it could cause a Denial of Service (DoS) on the node. ResourceQuotas are the solution, allowing you to cap the total resource requests and limits in a namespace.  
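As a sketch, a ResourceQuota like the following (the namespace and values are examples) caps the total CPU and memory that all pods in a namespace can request or be limited to:

```yaml
# A ResourceQuota capping total CPU and memory requests/limits in a
# namespace, so no set of workloads can exhaust the node's resources.
# Namespace name and values are examples.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: my-namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```

Once a quota on CPU or memory exists in a namespace, every new pod there must declare its own requests and limits, or it will be rejected.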

2. Create and apply a Security Context to define privilege and access control permissions for a pod or container. A few to always include are: 

  • allowPrivilegeEscalation – controls whether a process can gain more privileges than its parent process. This should be set to false. 
  • readOnlyRootFilesystem – defines whether the container has a read-only root filesystem. The default setting is false, but we recommend that you set it to true. 
  • runAsNonRoot – indicates that the container must run as a non-root user and should be set to true. With this set, if the container tries to run as the root user (UID 0), the kubelet will validate it and refuse to start the container. 

3. Use a Linux kernel security feature like Seccomp, AppArmor or SELinux. These can be set up via the Security Context as well. 
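Putting steps 2 and 3 together, a hardened pod spec looks roughly like this (the pod name and image are examples; the seccomp profile shown is the runtime default):

```yaml
# A pod applying a restrictive security context, including the runtime's
# default seccomp profile. Pod name and image are examples.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-pod
spec:
  containers:
  - name: app
    image: nginx:1.19
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
```

If the image in question runs as root by default, runAsNonRoot will cause the kubelet to refuse to start it, which is exactly the point.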

Audit Logs 

Audit logs are an important part of a cluster, as they can record all the requests made to the Kube API Server. Audit logs are disabled by default since they increase memory consumption; however, we highly recommend you enable them before putting your cluster into production. Audit logs will help you detect security issues and will help your developers with debugging and troubleshooting.  

Enabling logs and having a proper policy can increase the likelihood of identifying a misconfiguration before a breach occurs, IF (yes, big if) a person or system is tasked with analyzing them for suspicious activity. There is no point in having logs if no one, and no system, is looking at and analyzing them. 
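Auditing is enabled by pointing the kube-apiserver at a policy file (via --audit-policy-file) and a log destination. A minimal sketch of such a policy, logging only request metadata so that request and response bodies (which may contain Secrets) stay out of the logs:

```yaml
# Minimal audit policy sketch: record who did what, when and from where
# (request metadata) for every request, without logging request bodies.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
```

Real policies usually add more granular rules, for example logging full request bodies for sensitive write operations while dropping noisy read-only requests.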

Don’t Forget the Basics! 

All the above configurations and tweaks are important for maintaining a secure cluster. But amidst all this, don’t forget some basic rules for day-to-day work with Kubernetes: 

  1. Update your environment version as early and as often as possible, for example with kubeadm upgrade on self-managed clusters or your cloud provider’s upgrade tooling. Note: please apply updates in a test environment before applying them in production. 
  2. Don’t use the admin user for your daily work; the admin user should only be used by CI/CD tools. 
  3. If you can, use managed services such as AKS, EKS and GKE. They usually have better defaults for your security posture and the costs for maintaining the control plane can be extremely low. But also, be aware of their security defaults! 
  4. Check out the CIS Kubernetes Benchmark document for more security best practices. They also have specific benchmarks for EKS, GKE and OKE (from Oracle Cloud).  

For more information, recommendations, and to learn more about Kubernetes and Kubernetes Security, please visit this Awesome Kubernetes Security list on GitHub that I created, which contains blogs, books, articles, presentations, videos, training, and tools about attacking and defending your clusters.