Software Composition Analysis 101: Knowing what is inside your apps

The term Software Composition Analysis (SCA) is relatively new to the security world, but similar approaches have been used since the early 2000s to describe security verifications of open-source components; SCA is an evolution of those. It is the process of identifying and listing all the components and versions present in the code, then checking each one for outdated and/or vulnerable libraries that may pose security risks to the application. These tools can also check for legal issues arising from the use of open-source software under different licensing terms and conditions.

But how do these SCA tools work, and how can they help identify and remediate issues in the open-source libraries used in your codebase? Well, first, to be able to generate a graph of the components that make up a piece of software, and of any issues related to them, we need to rely on at least three pieces of information:

  • Application Manifest – a file that describes how the software should be built and run, lists the dependencies it requires, and declares required permissions and version compatibility.
  • Vulnerability Data Sources – databases of vulnerability information, which can be private or public; the most common public one is the National Vulnerability Database (NVD).
  • Dependency Metadata – the metadata related to the dependencies in your code, such as version, packaging, license, etc.
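As a concrete sketch of the first and third items, here is a hypothetical manifest for a Node.js project (the package name and versions are made up) and a one-liner that extracts the direct dependencies an SCA tool would match against a vulnerability database:

```shell
# Create a minimal, hypothetical application manifest (package.json).
cat > package.json <<'EOF'
{
  "name": "demo-app",
  "version": "1.0.0",
  "dependencies": {
    "lodash": "4.17.20",
    "express": "4.17.1"
  }
}
EOF

# List the declared direct dependencies and their pinned versions --
# the starting point an SCA tool matches against a vulnerability database.
python3 -c 'import json; [print(n, v) for n, v in json.load(open("package.json"))["dependencies"].items()]'
```

An SCA tool does the same extraction, then resolves the transitive dependencies behind each entry and looks all of them up in sources like the NVD.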

With the information above, you can better understand how your application is built and which open-source software it contains. You can also identify which libraries are outdated or have known vulnerabilities. OK, that’s great! But there are some other, rather complex issues you need to be aware of when applying this technique or using an SCA tool to identify problems in third-party software.

First, there are the direct dependencies, which are the ones you call directly in your code. Those are easier to list, to identify versions for, and to fix. But those libraries can in turn depend on other libraries, and so on. These are called indirect or transitive dependencies, and if they are outdated or vulnerable, it is hard to fix them directly unless you are the owner or maintainer of that code.

Second, it is common for security researchers to make a vulnerability public by creating a CVE and providing details on how the attack works and how to fix it. But in the open-source world, not every vulnerability gets a CVE. Why? Mainly because the CVE process is slow and centralized. Usually, there is no benefit for a developer in reporting a security bug unless they need someone else to fix it for them.

And last but not least, the effort to remediate the issues found in third-party software is much higher than the effort to identify them. Remediation requires running a series of unit and regression tests to ensure everything still works as intended, and those tests, most of the time, either don’t exist or aren’t automated.

In a nutshell, SCA tools and techniques are here to stay, and their usage is increasing in many organizations where security is a priority. AppSec teams can’t keep up with all the new vulnerabilities being published daily. Make sure to look for solutions that can adequately adapt to your own way of building software: they need to cover the programming languages used in your organization and identify indirect dependencies, not rely only on public CVEs to find issues.

DevSecOps – Integrating Security in the Development Pipeline

DevSecOps: Is your organization ready for it?

Lately, we’ve been hearing a lot of discussion around the use of DevOps and DevSecOps. Although these are considered very good practices, I believe many organizations aren’t doing them properly, or aren’t prepared for them just yet. These practices demand significant cultural changes that most organizations are neither prepared for nor welcoming of. Just using tools and automation is not enough to achieve the cultural change needed. Many researchers and authors believe that DevOps already includes security at its core, and they may be right, since software security is closely related to software quality. Unfortunately, that’s not yet the case in most companies adopting DevOps.

Many organizations are applying DevOps without thinking about security. That’s why, I believe, a new term was created: first Rugged DevOps, then SecDevOps and DevOpsSec, and lately DevSecOps (other terms you may find out there are Agile Security, Agile SDLC and DevOps Security). The goal of DevSecOps, in my opinion, is to raise awareness of the increasing need for security in this shifting culture, using the agile and automation mindset. DevSecOps is an evolution of DevOps meant to ensure that security teams stop being the bottleneck for digital transformation.

DevSecOps is changing the Application Security industry by integrating security verifications and checks into the development process, so that vulnerabilities are found and fixed before launching into production. Here is Gartner’s definition of DevSecOps:

“DevSecOps can be depicted graphically as the rapid and agile iteration from development (the left side of Figure 1) into operations (the right side of Figure 1), with continuous monitoring and analytics at the core. Our goal as information security architects must be to automatically incorporate security controls without manual configuration throughout this cycle in a way that is as transparent as possible to DevOps teams and doesn’t impede DevOps agility but fulfills our legal and regulatory compliance requirements as well as manages risk” Gartner – DevSecOps: How to Seamlessly Integrate Security Into DevOps

Figure 1 – DevSecOps – Source: Gartner (September 2016) 

Currently, security testing is generally performed at the end of the application development process, which increases the risk of deployment delays and project costs, especially if vulnerabilities that require recoding or redesign are found. Companies can overcome this problem by using the “Shift Security Left” (SSL) approach, which, according to Accenture and AWS, “introduces security at the inception of the development lifecycle and automates testing within the DevOps workflow.” Please don’t confuse that approach with the SSL/TLS encryption protocols. Let’s discuss some of these practices and how you can apply them in your software development scenario.

Outdated and vulnerable libraries (SCA)

One of the biggest security problems organizations face is knowing what exactly is inside their applications and how to properly and securely patch them. We can all remember the damage caused by WannaCry in 2017, which exploited a vulnerability that had been fixed three months before the attack was launched. In the same year, Equifax was also breached, in another big attack that made news all over the world, because of an outdated and vulnerable version of the Apache Struts library, which allowed the execution of remote commands on their systems.

Nowadays it is very rare, if not unheard of, for a developer to write code entirely from scratch, meaning without using any frameworks or libraries. That has also become a big problem for applications, since most of the code being used comes from third parties and is rarely verified or tested for security issues. Recent studies have shown that more than 90% of applications are made up of open source, and that 70% of those components are outdated or have a publicly available vulnerability. The OWASP Top 10 2017 addresses this issue in A9 – Using Components with Known Vulnerabilities:

“Components, such as libraries, frameworks, and other software modules, run with the same privileges as the application. If a vulnerable component is exploited, such an attack can facilitate serious data loss or server takeover. Applications and APIs using components with known vulnerabilities may undermine application defences and enable various attacks and impacts.” – OWASP Top 10 2017

A great example of this issue is WordPress, a famous CMS that has a strong and well-tested core, at least nowadays. Its themes and plugins, however, are problematic and unreliable, since they are built by third-party developers who don’t always follow the same security procedures and verifications in their development process as the core WordPress team.

There are many tools out there that check for outdated software on servers and can even update it for you. But that’s not an easy task, especially because of incompatibility issues with legacy applications and the lack of proper regression testing to make sure everything still works once the updates are applied. It becomes even harder when the outdated library is embedded in a custom or in-house application, where the tools mentioned earlier won’t work. So what can be done is to verify these issues before the application is even built, generally during the development phase or during pre-build tasks.

There are many tools available that will perform those tasks for you across many different languages and frameworks; some can even be integrated into your developers’ IDEs to check and fix these issues before any new code is submitted. OWASP has a great free tool for Java and .NET libraries called OWASP Dependency-Check, which also has plugins for CI tools like Jenkins. Commercial tools that can help you in this process include Snyk, WhiteSource, Synopsys Black Duck, Veracode SCA, Conviso AppSec Flow and Sonatype Nexus IQ Server.
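As a sketch, here is how a Dependency-Check scan might be wired into a build step. The project name and output directory are placeholders, and the step is skipped with a note when the CLI is not installed:

```shell
# Fail the build when any dependency carries a CVE scored CVSS 7.0 or higher.
if command -v dependency-check.sh >/dev/null 2>&1; then
  dependency-check.sh \
    --project "demo-app" \
    --scan . \
    --format HTML \
    --failOnCVSS 7 \
    --out ./dependency-check-report \
    || echo "Dependency-Check found issues at or above the CVSS threshold"
else
  echo "dependency-check CLI not installed; skipping SCA step"
fi
```

The same idea applies to the Jenkins plugin mentioned above: the scan runs on every build, and a finding above your chosen severity breaks the build instead of waiting for a pre-release review.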

Automated security scans (DAST)

As we mentioned before, security scans are mostly performed at the end of the application development process, usually after the application is finished and working properly in at least a Dev or QA environment. Any issues found at that point require a system update of some kind, or recoding, to fix. It has been proven on many occasions that fixing bugs earlier in the development lifecycle is much cheaper and faster than fixing them after the application is in production. So why not run our security scans as soon as possible and find the problems at an earlier stage?

With virtualization, containers, orchestrators and serverless, it has become quick and easy to create and destroy environments for testing or proof-of-concept (PoC) purposes. Security teams can use those techniques to spin up temporary servers, test them with the new updates as soon as possible, and make sure their applications still work. You can also integrate security testing into the build process, so that every time a build is generated and all the QA testing is done, some security testing is performed as well, giving developers fast feedback so they can fix issues. Another great tool from OWASP is OWASP ZAP, a web proxy which, like Dependency-Check, has a Jenkins plugin to integrate security scans into the build process. Other tools that can help you in this process are w3af, Arachni, BurpSuite Enterprise, Acunetix, Netsparker, WebInspect, AppScan, Conviso AppSec Flow and Veracode. For a full list, please check the OWASP Vulnerability Scanning Tools list.
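As a sketch, a ZAP baseline scan can be run straight from the official container image against a test environment. The target URL is a placeholder, and the step is skipped when Docker is unavailable:

```shell
# Passively scan a staging deployment and write an HTML report.
# zap-baseline.py exits non-zero when it finds warnings, which lets CI fail the build.
TARGET="https://staging.example.com"   # placeholder: point this at your QA/Dev environment
if command -v docker >/dev/null 2>&1; then
  docker run --rm -v "$(pwd):/zap/wrk:rw" -t ghcr.io/zaproxy/zaproxy:stable \
    zap-baseline.py -t "$TARGET" -r zap-report.html \
    || echo "ZAP baseline scan reported findings (non-zero exit)"
else
  echo "docker not installed; skipping DAST step"
fi
```

Because the baseline scan is passive (it spiders and observes, but does not attack), it is usually safe enough to run on every build; the full active scan is better reserved for disposable environments.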

Security code reviews (SAST)

Code reviews and refactoring have been in place for a long time, but mostly focused on code quality and performance. Security code reviews focus on vulnerabilities and security issues, regardless of how the code has been written. Although many tools are already available, there are also many different languages, and many false positives, to deal with. In the DevSecOps mindset, code reviews should happen at each commit, preferably automated, since the objective is to commit small pieces of code many times a day or week, so that if something breaks it is easier to debug and fix. You can integrate your security code review tool with your SCM and create alerts, or even triggers that scan your source code every time there is a commit.
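One lightweight way to wire this up is a client-side pre-commit hook. This is only a sketch: Semgrep stands in for whatever SAST tool you actually use, and the hook path assumes a standard git checkout:

```shell
# Install a pre-commit hook that scans staged files before each commit.
mkdir -p .git/hooks
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# Collect the files staged for this commit (added, copied or modified).
files=$(git diff --cached --name-only --diff-filter=ACM)
[ -z "$files" ] && exit 0
# "semgrep" is one example; swap in your own SAST tool here.
if command -v semgrep >/dev/null 2>&1; then
  semgrep scan --error $files || { echo "SAST findings: commit blocked"; exit 1; }
fi
EOF
chmod +x .git/hooks/pre-commit
```

Client-side hooks are easy to bypass, so treat this as fast feedback for developers and enforce the same scan again in CI on every push.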

Performing those checks frequently will significantly reduce the number of vulnerabilities that survive into deployment and would require code changes to fix. It will also give your developers fast feedback about the mistakes they are making and how to avoid them. Some tools that help in this process are Checkmarx, Fortify, HuskyCI, Horusec, AppScan, SonarQube, Conviso AppSec Flow, and many others. Please check this detailed list of Source Code Analysis tools by OWASP for more options.


Developers are not security experts; they rarely hear or learn about security during their education, so don’t expect them to understand and fix all the security issues at once. This process takes time and requires training. The security team has to understand how the development flow works and try to include security in a way that is transparent and frictionless, without adding new barriers. Try to place security verifications and checks inside already well-known tools such as IDEs and CI/CD, SCM or ALM tools, using add-ons or plugins.

Applying those three practices effectively will have a huge impact on the overall security of your applications. Automate those tasks as much as possible, so that your security team can focus on manual testing and other issues, and your developers get fast feedback about the security of their applications and can act on it. Remember what matters to the business, and try to balance security with new features.

Analyzing Web Malware – 2016 Edition

This article was originally written in September 2016. Read with caution! =)

In this article, we will be looking at a modified version of the WSO FilesMan backdoor, which is a PHP webshell designed to control the whole compromised system. WSO stands for “Web Shell by Orb” and has the ability to masquerade as an error page containing a hidden login form. Here is just a piece of the code that was named prbnts.php and placed inside the /wp-includes/js/jquery/ui/ folder, which usually only holds JavaScript files (.js):


As you can see, the payload is base64-encoded and deflate-compressed, and the script unpacks it at runtime with PHP’s core base64_decode and gzinflate functions before evaluating it. When we decode and decompress it, we get something in a more human-readable format:

$color = "#df5";
$default_action = 'FilesMan';
$default_use_ajax = true;
$default_charset = 'Windows-1251';

if(!empty($_SERVER['HTTP_USER_AGENT'])) {
  $userAgents = array("nouseragenthere");
  if(preg_match('/' . implode('|', $userAgents) . '/i', $_SERVER['HTTP_USER_AGENT'])) {
    header('HTTP/1.0 404 Not Found');
    exit;
  }
}
This code gives us some clues that the file is malicious and shouldn’t be there. Even if you aren’t tech-savvy, just searching for the first four lines online will already show you hints that this file is a backdoor, with the first findings dating back to the end of 2010. More specifically, the malicious file is a PHP web shell, or just PHP shell: a shell wrapped in a PHP script that uses built-in PHP functions to execute commands on the system. With it, an attacker can do almost anything on the server where it is located: upload and download files; install, run or delete programs; and sometimes even create or delete users, depending on the web server user’s permissions. It is similar to having an SSH (Secure SHell) connection to the server.
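If you want to reproduce the decoding step safely, you can unwrap an eval(gzinflate(base64_decode('…'))) payload without ever executing it. This sketch builds its own harmless sample payload (the real malicious string is not reproduced here) and then decodes it with Python instead of PHP:

```shell
# Build a sample payload the same way the malware author would:
# raw-deflate the code, then base64-encode it (PHP: base64_encode(gzdeflate($code))).
payload=$(python3 - <<'EOF'
import base64, zlib
code = b"echo 'this is the hidden code';"
c = zlib.compressobj(9, zlib.DEFLATED, -15)   # wbits=-15 is raw deflate, like gzdeflate()
print(base64.b64encode(c.compress(code) + c.flush()).decode())
EOF
)

# Decode it back WITHOUT executing anything -- the safe equivalent of
# base64_decode + gzinflate, minus the dangerous eval:
python3 -c "import base64, zlib; print(zlib.decompress(base64.b64decode('$payload'), -15).decode())"
```

This is the standard first move when analyzing obfuscated web malware: replace every eval with a print so you can read the next layer instead of running it.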

Can you imagine the damage it can do? If you have your web shell on a server you literally “own” that server. Got it? Now you know why people say someone got owned when they were hacked. =)

Here are some of the functions used to execute system commands with PHP, according to the official documentation:

  • system() – Execute an external program and display the output
  • exec() – Execute an external program
  • shell_exec() – Execute command via shell and return the complete output as a string
  • passthru() – Execute an external program and display raw output
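These function names also make a quick triage sweep possible. The sketch below creates a deliberately tiny suspicious file (a classic one-line web shell, for demonstration) and then greps a web root for calls to the command-execution functions listed above:

```shell
# Set up a toy web root containing a classic one-line PHP web shell.
mkdir -p webroot
printf '%s\n' "<?php system(\$_GET['cmd']); ?>" > webroot/prbnts.php

# Sweep every .php file for calls to command-execution functions.
# A match is not proof of malware, but each one deserves a manual look.
grep -rnE "system\(|shell_exec\(|passthru\(|\bexec\(" webroot --include='*.php'
```

Legitimate code uses these functions too, so expect false positives; the point is to shrink thousands of files down to a short list worth reading by hand.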

Well, and why would someone want to put a web shell on my server and control it? That is a great question! An attacker might need your server for many different reasons.

First and foremost, a web shell is a backdoor: it gives attackers a persistent way to access your server without having to exploit a vulnerability over and over again (unless you find out about it and remove the file). Another reason would be to use your server as a zombie computer in a botnet, forcing it to execute attacks along with other infected machines. Some attackers also use infected machines to hide from the police, pivoting their connections through these servers and making it harder for law enforcement to investigate and detect the source of the attacks.

This brings us to the questions: Why was my website hacked? How did it get infected? Well, just like viruses, web malware has many different variations. Malware developers might change a thing or two to avoid detection by pattern-based tools. In this particular case, the victim was using WordPress version 4.4.2, which was already seven months old at the time of the attack and could very well have been one of the causes of the infection. But let’s dig deeper before jumping to conclusions.

When looking for the source of a hack on WordPress websites, we usually look first at the themes and plugins folders, since those are the most common avenues for exploiting a vulnerability and placing a web shell or some other kind of malware. The WordPress development team has done a great job of hardening the WordPress core (which includes the wp-admin and wp-includes folders, and the root files), but themes and plugins are mostly third-party code developed by people or organizations that you don’t know and can’t trust. That’s why it is important to validate them before you start using them on your site! Application Security tools that do Software Composition Analysis (SCA) may be able to help with that task.

The contents of /wp-content/themes

As we can see, the themes folder has only a few default WordPress themes that come with the software installation. The first thing I’d recommend here is removing anything from the website that you are not using, including those themes. Why, you may ask? Well, in the event that someone finds a vulnerability in a plugin or theme that is present on your website but not in use, they might still be able to exploit it and compromise your site! Removing unused themes and plugins reduces the attack surface a malicious user has on your website.

The contents of /wp-content/plugins

Now, the plugins folder has something that caught our attention: the Jetpack plugin. This plugin is widely used on many WordPress websites, and it is well maintained and updated very often by its development team. The problem is that Jetpack has multiple public vulnerabilities, as we can see at WPScan:

Search results for “jetpack” on WPScan

Can you guess what version of the Jetpack plugin was on the website? Congratulations if you guessed version 2.5.2! It was released in September 2013, three years before this analysis was originally made! While it is hard to detect the specific flaw the attacker exploited without going through all the logs (if you have access to them), I’m confident this outdated version of the plugin was probably one of the causes of the compromise.

What could you do if your site is infected? Well, if you’d like to check if your website might have one of these malicious files you can do this:

  1. Sign in to your server using SSH (Secure Shell)
  2. Go to your WordPress folder
  3. Run this command: grep -r "eval(gzinflate(base64_decode" .
  4. If any of the results have a long encoded string after this code, then it is best to investigate it further.

If you are already infected, but don’t have access to your server via SSH or don’t know how to do that, and only have FTP access to it, you can do this:

  1. Download the same version of WordPress you use on your site via WordPress.org
  2. Remove the wp-admin and wp-includes folders from your site (either using SSH or SFTP/S)
  3. Extract the zip file and upload only the new wp-admin and wp-includes folders to your website (Warning: do not just overwrite the folders, since any extra malicious files already inside them would not be removed! Remove the old folders before uploading the new ones.)
  4. If that didn’t fix your site yet at least now you have narrowed the problem down to the wp-content folder and can start looking for suspicious files there. Check your plugins and themes folder. Remove anything that you are not currently using on your site. Make sure your plugins are up to date and don’t have any publicly known vulnerabilities.
  5. Most FTP clients have search mechanisms that you can use to search for strings such as “base64_decode”. Start with that!

Well, we hope this helps! If you have any other tips and tricks, feel free to send them my way and I’ll update this article! There are some reference links below if you would like more information on protecting against web shells and on how to properly harden your WordPress site.

Stay safe! #staysafe #wearamask


Tips on Hardening WordPress

Kubernetes Security 101: Best Practices to Secure your Cluster

Article originally posted on Dec 17th, 2020 at:

Container adoption has surged in recent years, with the 2019 Cloud Native Computing Foundation survey reporting that 84% of respondents use some type of containerization in production. The same survey also found that 78% of respondents use Kubernetes in production, making it the market-leading container orchestration solution, widely adopted across global tech companies and startups.

With clear benefits and rising adoption, it is critical that Kubernetes security is well understood by any developer or DevOps engineer implementing this service in their environment. To help those engineers distill all the information available on this topic, here are the key steps for securing your clusters.

What Needs to Be Secured? 

Any part of a cluster could be abused if accessed by an attacker. Let’s look at some specific steps to ensure your deployments are secure, focusing on the control plane and worker nodes along with the sub-parts within each. 

The Control Plane 

If you run your clusters using managed services such as AKS, EKS or GKE, the cloud provider handles control plane security, based on the Shared Responsibility Model for the cloud.

The master node, aka the control plane, serves as the brain that keeps the complex system of nodes running: it is the main node (or set of nodes) of your cluster and manages the worker nodes.

Figure 1 – A diagram of a Kubernetes Architecture and its main components 

Attackers and bots are constantly searching the internet for exposed Kube API servers. It is critical that the kube-apiserver is not left publicly exposed; otherwise, it becomes an easy target for those bots. Although the default setting on unmanaged clusters is that the API server is not exposed, that’s not the case for managed Kubernetes services such as EKS (Elastic Kubernetes Service). Exposed API servers are still the main entry point for attackers to compromise a K8s cluster. The basic recommendation for securing API servers is to only allow engineers to access the API via the internal network or corporate VPNs, and even then, to limit the access to specific users and machines.

Using RBAC 

RBAC (Role Based Access Control) authorization is the next step in creating a more secure cluster now that access to the API server is restricted. RBAC allows you to configure who has access to what in a cluster. It also allows you to restrict users from accessing the kube-system namespace, which houses all the control plane pods (see Figure 1 above). 

When using RBAC authorization, there are four kinds of API objects that you can use: 

  • Role. Contains rules that represent a set of permissions within a namespace. 
  • RoleBinding. Grants the permissions of a Role to one or more users. 
  • ClusterRole. Contains rules that represent a set of permissions, but it is not tied to a namespace, and it will be applied on the cluster level. 
  • ClusterRoleBinding. Grants permissions for a ClusterRole to a set of users. 

The permissions for a Role or a ClusterRole are usually formed by combining a verb with a noun that represents a resource. Some examples include:

  • Get Services 
  • List Pods 
  • Watch Secrets 

Figure 2 – How users are related to Roles via the RoleBindings (same thing for ClusterRoles and ClusterRoleBindings) 
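To make these objects concrete, here is a minimal sketch of a Role limited to reading Pods in a single namespace, bound to a hypothetical user named alice (the namespace, object names and user are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev          # Roles are namespaced; this one only applies to "dev"
  name: pod-reader
rules:
- apiGroups: [""]         # "" is the core API group, where Pods live
  resources: ["pods"]
  verbs: ["get", "list"]  # read-only: no create/update/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods-alice
subjects:
- kind: User
  name: alice             # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

RBAC is deny-by-default: anything not explicitly allowed by a matching rule is refused, which is exactly why small, narrow Roles like this one are preferable to reusing cluster-admin.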

What is etcd? 

etcd is the main data store for your cluster, which means all the cluster objects are saved there. Leaving etcd exposed can potentially leak critical data. Unfortunately, etcd misconfiguration remains rampant: we’ve seen more than 2,300 exposed etcd services on Shodan in December alone.

Figure 3 – Total number of etcd instances currently exposed to the internet. Shodan query used: etcd port:”2379″ (December 1st, 2020) 

The same security principles that apply to any data storage system should be applied to etcd: encryption should be implemented both in transit and at rest. The latest default installation, using kubeadm, sets up the proper keys and certificates with TLS encryption for etcd. However, if attackers somehow bypass the API server and can manipulate objects directly in etcd, it is the same as having full access to the entire cluster: they can change any of the cluster’s configurations, and the controller manager will reflect those changes in the actual cluster itself.

Network Policies 

Network policies can help address the issue of open pod communication. By default, the cluster network allows all pods on the same cluster to communicate with each other, including pods from different namespaces. A network policy specifies how groups of pods can communicate with each other and with other network endpoints. NetworkPolicy API resources use labels to select pods and define rules that specify what type of traffic is allowed for the selected pods. These policies can help you restrict access between pods or namespaces. All the access can be configured via labels in YAML files, allowing you to block pods from accessing other pods on the kube-system namespace, for example. 
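A common starting point is a default-deny policy for ingress traffic, which you then punch explicit holes in per application. A minimal sketch (the namespace is a placeholder):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: dev            # placeholder namespace
spec:
  podSelector: {}           # an empty selector matches every pod in the namespace
  policyTypes:
  - Ingress                 # no ingress rules are listed, so all inbound traffic is denied
```

Keep in mind that NetworkPolicy objects only take effect if the cluster's CNI plugin (Calico and Cilium are common examples) actually enforces them; on a CNI without policy support they are silently ignored.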

The Worker Nodes 

If the control plane is the brain, worker nodes are the muscle of a cluster. They run and control all the pods and containers in your cluster. While worker nodes are not strictly required, running all your pods on the same node as the control plane is not recommended, so you should have at least one.

The main components of a worker node are the kubelet, the kube-proxy and the container runtime. The kubelet is the agent that runs on each node of your cluster to make sure all containers are running in a pod; it is responsible for managing the container runtime, executing containers when necessary and collecting execution information. The kube-proxy manages network communication, allowing different containers to communicate with each other, and is also responsible for external requests. Even the master node has a kubelet and a kube-proxy, although they usually don’t show up on architecture diagrams. The container runtime is the component that actually creates and executes the containers themselves. The default container runtime for a Kubernetes cluster, up until v1.19.5, was Docker, but that changed on December 8th, 2020, when v1.20.0 was released and it was announced that Dockershim, the CRI (Container Runtime Interface) shim for Docker, was being deprecated, causing major panic for everyone. Although the details are not the goal of this post, you do not need to panic!

Really, don’t go crazy about it! Not a lot will change, but if you want to know more and be prepared, please read this post by the Kubernetes community: Don’t Panic: Kubernetes and Docker. Other CRI-compatible runtimes that you could use with your Kubernetes cluster are containerd and CRI-O, both projects under the CNCF. I highly recommend that you read the post above (after this one, of course!) if you want more details about these changes and what will happen in the future.

There are three main steps to ensure the minimum level of security for the pods themselves. 

1. Limit resources to ensure all pods can perform as needed. If one pod starts consuming all the computing resources available, it could cause a Denial of Service (DoS) on the node. ResourceQuotas are the solution, allowing you to cap the aggregate resources that can be requested and consumed within a namespace.

2. Create and apply a Security Context to define privilege and access control permissions for a pod or container. A few to always include are: 

  • allowPrivilegeEscalation – controls whether a process can gain more privileges than its parent process. This should be set to false. 
  • readOnlyRootFilesystem – defines whether the container has a read-only root filesystem. The default is false, but we recommend setting it to true. 
  • runAsNonRoot – indicates that the container must run as a non-root user and should be set to true. With this set, if the container ever tries to run as the root user (UID 0), the kubelet will validate it and refuse to start the container. 

3. Use a Linux kernel security feature like Seccomp, AppArmor or SELinux. These can be set up via the Security Context as well. 
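The three steps above can be sketched in two manifests: a ResourceQuota for step 1, and a Pod whose securityContext covers steps 2 and 3. Namespace, object names and the image are placeholders:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "4"        # total CPU all pods in "dev" may request
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"               # cap on the number of pods in the namespace
---
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
  namespace: dev
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # placeholder image
    resources:
      requests: { cpu: 250m, memory: 256Mi }
      limits:   { cpu: 500m, memory: 512Mi }
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault   # step 3: opt in to the runtime's default seccomp profile
```

Note that once a ResourceQuota is active in a namespace, pods without explicit requests and limits are rejected, which conveniently forces every workload to declare its footprint.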

Audit Logs 

Audit logs are an important part of a cluster, as they can record all the requests made to the Kube API Server. Audit logs are disabled by default since they increase memory consumption; however, we highly recommend you enable them before putting your cluster in production. Audit logs will help you detect any security issues and will help your developers with debugging and troubleshooting.  

Enabling logs and having a proper policy can increase the likelihood of identifying a misconfiguration before a breach occurs, IF (yes, big if) a person or system is tasked with analyzing them for suspicious activity. There is no point in having logs if no one, and no system, is looking at and analyzing them.
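Audit logging is enabled by pointing the kube-apiserver at a policy file via the --audit-policy-file and --audit-log-path flags. A minimal sketch of such a policy, which records only metadata for Secrets (so their contents never land in the logs), request bodies for all writes, and metadata for everything else, might look like:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Never log secret payloads: record only metadata for Secret objects.
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
# Record request bodies for writes to everything else.
- level: Request
  verbs: ["create", "update", "patch", "delete"]
# Fall back to metadata-only for all remaining requests.
- level: Metadata
```

Rules are evaluated top to bottom and the first match wins, which is why the Secrets rule comes first.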

Don’t Forget the Basics! 

All the above configurations and tweaks are important for maintaining a secure cluster. But amidst all this, don’t forget some basic rules for day-to-day work with Kubernetes: 

  1. Keep your cluster version up to date, upgrading as early and as often as possible (on self-managed clusters, for example, via kubeadm upgrade; managed services have their own upgrade flows). Obs.: Please apply this in a test environment before applying it in production. 
  2. Don’t use the admin user for your daily work; the admin should only be used by CI/CD tools. 
  3. If you can, use managed services such as AKS, EKS and GKE. They usually have better defaults for your security posture and the costs for maintaining the control plane can be extremely low. But also, be aware of their security defaults! 
  4. Check out the CIS Kubernetes Benchmark document for more security best practices. They also have specific benchmarks for EKS, GKE and OKE (from Oracle Cloud).  

For more information, recommendations, and to learn more about Kubernetes and Kubernetes Security, please visit this Awesome Kubernetes Security list on GitHub that I created, which contains blogs, books, articles, presentations, videos, training, and tools about attacking and defending your clusters.