“Oops, I insecurely coded again!”

The call is coming from inside the house

It’s no secret that companies need to be vigilant about application security. What often surprises security teams, however, is where application vulnerabilities come from. Zero-day exploits are a principal focus of vulnerability mitigation strategies, but they are just the tip of the iceberg in most application security postures; companies also need to address the vulnerabilities created by their own developers. A 2021 study revealed that 78% of vulnerabilities were related to flaws in application code. According to OWASP, over 2,500 CVEs have been linked to insecure design, an issue so prevalent that OWASP created a new category for it in the 2021 OWASP Top 10.

No organization is immune to writing insecure code; the issue affects even the most established institutions. The IRS recently disclosed that it had accidentally exposed taxpayer data on its website due to a human coding error. This raises an important question: why do so many organizations struggle to get developers to adhere to secure coding standards? In this post, we’ll review why insecure code is so prevalent and what companies can do about it.

Lack of education in writing secure code

Developers are trained to write code, but are rarely taught how to write secure code. A recent study by Forrester Research revealed that none of the top 50 undergraduate computer science programs in the United States, as ranked by the US News and World Report, required a secure coding or secure application design class in order to graduate. Forty-six of these programs offered at least one course on security, but only one required a security course to graduate with an undergraduate degree in computer science.

Over the last decade, there has been a shift in how developers build a code-writing skill set and enter the field of IT. Instead of obtaining an undergraduate degree in computer science, more people have been attending coding bootcamps. These bootcamps promise to teach students everything they need to know to get their first programming job for a fraction of the cost and time required for an undergraduate education. Rather than spending four years earning a college degree, students can fast-track their careers by attending one of these programs, most of which last eight to twelve weeks. While top technology companies may not heavily recruit from these programs, coding bootcamp graduates have high job placement rates: a 2021 study conducted by the Council on Integrity in Results Reporting (CIRR) found that around 71% of coding bootcamp graduates found jobs within 180 days of completing the course.

Like computer science undergraduate programs, coding bootcamps also overlook secure coding standards in their curricula. The primary areas of study are the fundamentals of object-oriented programming, unit testing, database design, and building web applications. Having attended a coding bootcamp myself, I can attest that security was not a focus of the program. Beyond salting a password before storing it in a database and learning about SQL injection attacks, the instructors and course materials gave me no guidance on how to develop an application securely. Without a focus on security in either college or bootcamp, this new wave of developers lacks the understanding needed to write secure code, leaving a huge skills gap for organizations to address.
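For what it’s worth, those two lessons fit in a few lines of code. Below is a minimal sketch using only Python’s standard library; the function names and the PBKDF2 iteration count are illustrative choices, not anything a particular curriculum prescribes.

```python
import hashlib
import secrets
import sqlite3

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a unique random salt using PBKDF2."""
    salt = secrets.token_bytes(16)  # unique salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest  # store both; never store the plaintext password

def find_user(conn: sqlite3.Connection, username: str):
    # Parameterized query: user input is bound as data rather than
    # concatenated into the SQL string, which is what defeats SQL injection.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?",
        (username,),
    ).fetchone()
```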

Security is not a priority

If new software developers aren’t learning how to write secure code in their education programs, they’ll need to learn on the job. Unfortunately, security is rarely a priority for software developers. A recent survey conducted by Secure Code Warrior revealed that 86% of developers do not view security as a priority when writing code, and that 67% admitted to routinely leaving known vulnerabilities and exploits in their code, citing deadlines, prioritizing functionality over security, or not knowing how to fix the security problems. When developers don’t get the training needed to create secure applications, security becomes an afterthought.

Fixing vulnerabilities is not a priority for many organizations either. Developers are often put under strict deadlines and forced to make compromises to get the job done. Because the business is primarily focused on shipping new features to appease customers, enforcing secure coding standards falls by the wayside. Application vulnerabilities discovered in production are usually added to a development team’s backlog, and unless a vulnerability is critical, it’s unlikely to be addressed any time soon.

Code analysis tools are lacking

Since most new developers aren’t educated on secure coding standards, companies must rely on other measures to catch security flaws before code is deployed to production. Static Application Security Testing (SAST) is a commonly used tool for this. Unfortunately, SAST tools require a lot of manual effort, are time-consuming, lack automation, and generate noisy results. Even industry-leading SAST tools average around a 5% false positive rate, and across scans that surface hundreds of findings, that means developers spend a large portion of their time investigating false positives instead of remediating true positives.
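To see why pattern-based static analysis is inherently noisy, consider this deliberately naive sketch. Real SAST engines are far more sophisticated, but they face the same underlying problem illustrated here: a textual rule cannot tell whether the data flowing into a query is actually attacker-controlled.

```python
import re

# A deliberately naive rule: flag any execute() call that appears to build
# SQL via string interpolation or concatenation.
DYNAMIC_SQL = re.compile(r"execute\(.*(%s|\+|\.format\()")

def scan(source_lines: list[str]) -> list[tuple[int, str]]:
    """Return (line number, line) for every line the rule matches."""
    return [
        (n, line.strip())
        for n, line in enumerate(source_lines, start=1)
        if DYNAMIC_SQL.search(line)
    ]

code = [
    # True positive: user_id is attacker-controlled input.
    'cursor.execute("SELECT * FROM users WHERE id = %s" % user_id)',
    # False positive: AUDIT_TABLE is a trusted constant, but the rule
    # has no way to know that, so it gets flagged anyway.
    'cursor.execute("SELECT * FROM " + AUDIT_TABLE)',
]
for n, line in scan(code):
    print(f"line {n}: possible SQL injection: {line}")
```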

SAST tools can also lack guardrails. If something is flagged as a potential bug or vulnerability and a developer marks it as a false positive, does the organization have any process to verify that it really is one? Speaking from personal experience, not always. As a former developer, I was required to run a SAST scan on an application to which I had made a few minor changes. The scan reported hundreds of bugs and vulnerabilities, ranging from minor to critical, that needed to be fixed. Not knowing how to fix most of them, I reached out to a senior developer to find out how to proceed. The senior developer told me to close each bug out and comment that it was a false positive; since this was an internal application, these bugs were not that important. With my deadline quickly approaching, I went against my better judgment and did as I was instructed. To my surprise, no one followed up, and my code was pushed to production.

Empowering developers to secure code

As more organizations adopt a “shift-left” mentality, more trust is being placed in developers. With great power comes great responsibility: developers are now expected to incorporate secure coding practices into their projects. But how can they be expected to write secure code when they lack the resources and support to do so?

Organizations should empower their developers with the training and support needed to create secure code. One of the most impactful ways to do this is to adopt a Secure Software Development Lifecycle (SSDLC). The goal of an SSDLC is to inject security into every step of the development lifecycle, turning security from an afterthought into a core part of the development process. To implement and adopt an SSDLC successfully, organizations should create secure coding guidelines, invest in security awareness and secure coding training, define clear requirements, and make security a priority throughout every project.
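What “security at every step” looks like varies by organization, but one common building block is a gate in the build pipeline that refuses to ship code with serious findings. Here is a minimal sketch; the JSON report format and severity labels are assumptions for illustration, not any particular scanner’s output.

```python
import json
import sys

# Hypothetical policy: block the build on any HIGH or CRITICAL finding.
BLOCKING_SEVERITIES = {"HIGH", "CRITICAL"}

def gate(report_path: str) -> int:
    """Read a scanner's JSON report; return non-zero if serious findings exist."""
    with open(report_path) as report:
        # Assumed format: a list of {"severity": ..., "rule": ..., "file": ...}
        findings = json.load(report)
    blocking = [f for f in findings if f["severity"] in BLOCKING_SEVERITIES]
    for finding in blocking:
        print(f'{finding["severity"]}: {finding["rule"]} in {finding["file"]}')
    return 1 if blocking else 0  # a non-zero exit code fails the CI stage

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```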

Incorporating security throughout an entire project minimizes vulnerabilities and costs in the long run. A study conducted by the Consortium for Information & Software Quality (CISQ) revealed just how costly poor software quality is: it estimated that finding and remediating bugs would cost the US approximately $607 billion in 2020. Writing better-quality code shrinks these costs. Another study found that fixing a bug after it has been deployed to production is 100x more expensive than fixing it earlier; a defect that costs $100 to correct during design could cost on the order of $10,000 to remediate in production. Catching a defect in the design, implementation, or testing phase is far cheaper. Implementing an SSDLC teaches developers to write quality, secure code.

In addition to implementing some form of SSDLC, organizations should look for tools that protect applications and secure them by default. Imperva’s Runtime Application Self-Protection (RASP) was created specifically to fill the gaps an SSDLC leaves behind: it secures applications by default. RASP solutions integrate into web applications, web services, and microservices, becoming just another part of the application’s core functionality. They inspect the data flowing into and out of the application, detecting and neutralizing threats to vulnerable code in real time, with no perceivable performance impact.
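To make the runtime-interception idea concrete, here is a toy sketch of the concept. This is not Imperva’s implementation; a real RASP engine draws on far richer context (call stacks, data flow, the application’s own semantics) than the single crude pattern used here.

```python
import functools
import re

# Toy signature for one class of attack; purely illustrative.
SUSPICIOUS = re.compile(r"('|--|;|\bUNION\b|\bOR\b\s+1=1)", re.IGNORECASE)

class AttackBlocked(Exception):
    """Raised instead of letting a malicious-looking request reach the handler."""

def inspect_inputs(handler):
    """Wrap a request handler; block calls whose string inputs look malicious."""
    @functools.wraps(handler)
    def wrapper(*args, **kwargs):
        for value in list(args) + list(kwargs.values()):
            if isinstance(value, str) and SUSPICIOUS.search(value):
                raise AttackBlocked(f"rejected suspicious input: {value!r}")
        return handler(*args, **kwargs)
    return wrapper

@inspect_inputs
def get_user(username: str) -> str:
    return f"looked up {username}"

print(get_user("alice"))    # normal traffic passes through untouched
# get_user("' OR 1=1 --")   # would raise AttackBlocked before the lookup runs
```

Learn more.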