The practice of improving and ensuring the security of software is generally referred to as (the field of) application security, or "AppSec" for short. In a traditional waterfall software development lifecycle (SDLC), AppSec was often an afterthought: a penetration tester might be hired to come in just before release to perform last-minute security testing, or security testing might be skipped entirely.
Slowly, many development shops started adding more AppSec activities, such as secure code reviews, secure coding guidance and standards, and security tools for developers, along with many other great ideas that improved the overall security of the end product. Some companies even went so far as to create their own teams dedicated to application security. However, there is currently no agreed-upon standard for what constitutes a complete AppSec program, nor a definition of when someone can say that "the job is done" or that they have done "enough" in regard to the security of software. The line varies greatly from team to team, business to government, and country to country, which makes it a difficult thing to measure.
Our industry is now moving toward acceptance of DevOps as the new norm: people, processes, and products with the aim of better resilience, faster fixes, and extremely quick time-to-market. Although the most visible sign of DevOps tends to be an automated continuous integration and/or continuous delivery pipeline (CI/CD), there is actually quite a bit more to creating a DevOps shop and culture.
First, a quick review of The Three Ways of DevOps, as stated in The DevOps Handbook:
- Prioritizing the efficiency and speed of the entire software creation system, not just your part(s). This means that if you work very hard to make your area of responsibility more efficient, but there are other areas delaying the system as a whole, your work has not added value.
- Working to shorten and amplify feedback loops so that flaws (design issues) and bugs (code issues) are fixed as early as possible, when it's faster, cheaper, and easier to do so.
- Continuous learning: Allocating time for the improvement of your daily work.
DevOps + Application Security = DevSecOps
As the industry has changed with the move towards DevOps, AppSec has had to change as well, weaving itself through The Three Ways to ensure that the highest quality software is produced. With dev and ops now working hard to increase the efficiency of the entire system (the first way), security has had to learn to sprint alongside them, adding security checks to the pipeline and breaking their activities into smaller and faster pieces. With the new requirement for faster feedback (the second way), the security team can no longer come in at the end of the SDLC. They need to be present from the start. Lastly, DevOps requires a culture of constant learning (the third way), experimentation, and risk-taking, which means making every moment a teaching moment for the security team.
Adding security to The Three Ways — performing AppSec within a DevOps environment — is often referred to as DevSecOps. With this in mind, this article will take a look at some tactics to address the first and second way, using tools to change our DevOps pipeline into a DevSecOps pipeline.
DevSecOps in Practice: Five Ways to Build Your DevSecOps Pipeline
Tactic #1: Weaponize Your Unit Tests
The first tactic is weaponizing your unit tests; you already have them, so why not reuse them? A regular unit test is generally a "positive test," meaning that it verifies that your code does what it should do. Let's say you're reading records from a table in a database. You would likely want to ensure that you can search for one, many, or zero records. Those are generally positive use cases — making sure it does what you hope it will do.
Let's look at negative use cases. What sorts of attacks are possible in this situation? Will your application handle attacks (and failures) gracefully? Some standard payloads (malicious requests) that could be added would include a single quote, two single quotes, a double quote, and appended calls to commands (create, delete, update, union). If the application issues an unhandled error or reacts in a different way than expected, it fails the test and breaks the build.
For example, if one single quote creates a different reaction in your application than two single quotes, that means the input is reaching your database unsanitized, and under no circumstances should you allow that code out the door.
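As a sketch of the tactic above (assuming Python with sqlite3; the `search_users` function and table schema are illustrative, not from the original article), a weaponized unit test fires classic injection payloads at the same function your positive tests already exercise and breaks the build on any unhandled error or unexpected match:

```python
import sqlite3

# Hypothetical function under test. It uses a parameterized query, so
# attack payloads are treated as plain data, never as SQL.
def search_users(conn, name):
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (name,))
    return cur.fetchall()

def test_search_survives_injection_payloads():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice')")

    # Negative tests: standard payloads should return zero rows and raise
    # no unhandled errors. An exception or a surprise match fails the test.
    payloads = ["'", "''", '"', "alice' OR '1'='1", "'; DROP TABLE users;--"]
    for payload in payloads:
        assert search_users(conn, payload) == []

    # Positive control: the legitimate value still works as before.
    assert search_users(conn, "alice") == [(1, "alice")]
```

Because this reuses the fixtures and call sites of your existing positive tests, the cost of adding the negative cases is low, and the test runs in the same pipeline stage as the rest of your unit suite.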
Tactic #2: Verify the Security of Your Third-Party Components
The second tactic is verifying the security of your third-party components: libraries, application dependencies, or any other code your application calls that was written by someone outside your development team. Third-party components are reported to make up over 50% of the code in modern applications, and roughly 26% of those components contain known vulnerabilities. When you add dependencies to your project, you accept the risk of every vulnerability they may include.
This issue, using components with known vulnerabilities, has remained on the OWASP Top Ten for many years. Luckily for us, MITRE maintains the Common Vulnerabilities and Exposures (CVE) list, and the US National Institute of Standards and Technology (NIST) maintains the National Vulnerability Database (NVD), both of which catalog publicly known (i.e. publicly disclosed) vulnerabilities and can be searched quickly and efficiently in any pipeline.
There are several paid and free tools, of varying quality and usability, that perform this function. If possible, use two tools in case one errs or misses something: each tool uses different methods to verify components, and there are more databases than just these two that can be searched. No matter what type of application you write, you should verify your third-party components for vulnerabilities; this check is too fast and easy to skip. No pipeline should omit this step; it's a huge win.
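To make the shape of such a pipeline gate concrete, here is a minimal sketch in Python. A real tool (e.g. one backed by the CVE/NVD feeds) would pull live advisory data; the `KNOWN_VULNERABLE` table and the CVE identifier below are hypothetical stand-ins:

```python
# Hypothetical advisory data: (package, pinned version) -> advisory ID.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"): "CVE-XXXX-YYYY (hypothetical)",
}

def parse_requirements(text):
    """Parse 'name==version' pins from a requirements-style file."""
    pins = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            name, version = line.split("==", 1)
            pins.append((name.lower(), version))
    return pins

def audit(requirements_text):
    """Return all findings; an empty list means the gate passes."""
    return [
        f"{name}=={version}: {KNOWN_VULNERABLE[(name, version)]}"
        for name, version in parse_requirements(requirements_text)
        if (name, version) in KNOWN_VULNERABLE
    ]

# A non-empty result from audit() would fail the pipeline step.
findings = audit("examplelib==1.2.0\nsafelib==2.0.1\n")
```

The design point is that the check is a fast lookup against already-published data, which is why it fits in every pipeline regardless of how long the rest of the build takes.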
Tactic #3: Audit the State of Your System(s) and Settings
The third tactic is verifying the state of your server's or container's patches and configuration; your encryption status (key length, algorithms, expiration, health, forcing HTTPS, and other TLS settings); and your security headers (browser/client-side hardening). Although your sysadmins likely believe they have applied the right patches and settings, this step audits and verifies that the policy matches your application's reality. Some tools can do all three of these in one pass, and some specialize in only one or two. No application should be released onto a platform that has security misconfigurations, missing patches, or poor encryption, nor should any user be subjected to a website that doesn't use the security features available in the browser they use to access it.
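The security-header part of this audit is simple enough to sketch directly. The required-header list below is a common browser-hardening baseline, not an exhaustive or authoritative policy, and the response headers shown are illustrative:

```python
# A common baseline of browser-hardening response headers.
REQUIRED_HEADERS = [
    "Strict-Transport-Security",  # force HTTPS on future visits
    "Content-Security-Policy",    # restrict script/style/frame sources
    "X-Content-Type-Options",     # disable MIME-type sniffing
    "X-Frame-Options",            # clickjacking protection
]

def missing_security_headers(response_headers):
    """Return required headers absent from a response (case-insensitive)."""
    present = {name.lower() for name in response_headers}
    return [h for h in REQUIRED_HEADERS if h.lower() not in present]

# Example response: missing CSP and X-Frame-Options, so the audit fails.
headers = {
    "Strict-Transport-Security": "max-age=31536000",
    "x-content-type-options": "nosniff",
}
gaps = missing_security_headers(headers)
```

In a real pipeline step, the headers dict would come from an HTTP request against the deployed environment, and a non-empty `gaps` list would break the build.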
Tactic #4: Add Dynamic Application Security Testing (DAST) to Your Pipeline
The fourth tactic is adding dynamic application security testing (DAST) to your pipeline: launching scripted attacks and fuzzing (automatically injecting malformed data to find bugs) against your application while it runs on a web server, as a step in your pipeline. Unlike the three previous suggestions, DAST is not a quick process, so one or both of the following options should be employed: run only a baseline scan, and/or run the full scan in a parallel security pipeline that does not publish to prod, runs only after hours, and has an unlimited amount of time to finish.
The baseline scan would be limited in nature, only performing passive analysis (missing headers and other obvious issues) and a small subset of dynamic tests, looking for only the worst offenders. The parallel security pipeline is one that circles back or has a dead end, never breaks the build, and is run only with the purpose of performing long and in-depth security tests, delivering the results to the AppSec team for further investigation. Developers would ignore the results of the parallel security pipeline, as many would be false positives or require further investigation.
Both the baseline scan and the parallel security pipeline should be tuned by the application security team to minimize false positives.
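The gating logic that separates the two modes can be sketched as follows. The severity labels and the "high severity only" threshold for the baseline scan are illustrative assumptions, not a prescription:

```python
# Baseline scans break the build only on the worst offenders; the parallel
# security pipeline is a dead end that reports to AppSec and never blocks.
def should_break_build(findings, mode):
    """findings: list of (severity, description); mode: 'baseline' or 'parallel'."""
    if mode == "parallel":
        return False  # dead-end pipeline: deliver results, never block
    worst_offenders = [desc for sev, desc in findings if sev == "high"]
    return len(worst_offenders) > 0

findings = [
    ("high", "reflected XSS on /search"),
    ("low", "verbose server banner"),
]
```

Splitting the decision this way keeps the developer-facing pipeline fast and low-noise while the long, in-depth scan still runs to completion somewhere.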
Tactic #5: Add Static Application Security Testing (SAST) to Your Pipeline
The last tactic is adding static application security testing (SAST) of your code, also known as static code analysis, to your pipeline. SAST tools are generally not only extremely slow (running for hours or even days) and often quite expensive, but they also usually have a false positive rate of over 90%, which may make this suggestion a surprising one. However, if you search for only one type of vulnerability per code sprint (e.g. XSS or injection) and carefully tune the tool, you can potentially wipe out an entire bug class from your application(s). Sharing a lesson or reference material with developers before the sprint begins leads to faster fixes and fewer of them, and running this activity early in the pipeline (or even giving developers access to the tool before their code hits the pipeline) is another way to ensure good results. SAST could (and should) also be run to completion in the parallel security pipeline.
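The "one bug class per sprint" tuning can be sketched as a simple post-processing filter over raw SAST output. The finding format, category names, and rule IDs below are illustrative assumptions about what a scanner emits:

```python
# Filter raw SAST findings down to this sprint's single vulnerability
# category, minus any rules the AppSec team has tuned out as noise.
def findings_for_sprint(findings, target_category, suppressed_rules=()):
    """Keep only findings in the target category, excluding suppressed rules."""
    return [
        f for f in findings
        if f["category"] == target_category and f["rule"] not in suppressed_rules
    ]

raw = [
    {"rule": "XSS-01", "category": "xss", "file": "views.py"},
    {"rule": "SQLI-03", "category": "injection", "file": "db.py"},
    {"rule": "XSS-07", "category": "xss", "file": "forms.py"},
]
# This sprint targets XSS; XSS-07 has been triaged as a false positive.
sprint_report = findings_for_sprint(raw, "xss", suppressed_rules=("XSS-07",))
```

Developers then see one focused, pre-triaged report per sprint instead of the full 90%-noise dump, while the complete scan still runs in the parallel security pipeline.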
To summarize, this article suggests five tactics for adding security to your DevOps pipeline:
- Weaponizing your unit tests
- Verifying that your project dependencies are not known to be vulnerable
- Verifying your server's/container's patches and config, encryption, and browser hardening settings
- Dynamic application security testing (DAST)
- Static application security testing (SAST)
Adding these processes to your pipeline will result in significantly more secure applications.
And in the spirit of continuous learning (the third way of DevOps), as part of the OWASP DevSlop Project, we host a weekly live-stream on Mixer and Twitch where we investigate these tactics and more, and I delve into all areas of AppSec and Cloud Native Security on my personal blog.
This article is featured in the new DZone Guide to DevOps: Implementing Cultural Change.
Images: DevSecOpDays and George Becker