A Modern Approach to Address New SEC Cybersecurity Rules

By Chris Koehnecke

Updated February 28, 2024.


The December 2020 SolarWinds hack, attributed to the threat actor tracked as "UNC2452", which used a backdoor known as "Sunburst" to gain access to victims' systems through SolarWinds software, was one of the largest and most publicized breaches in recent years. Virtually anyone in the tech space remembers this breach as an ominous reminder of the importance of supply chain security.

It turns out that it was such a big deal that it became the backbone of the recent SEC Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure Rule, which had direct implications for SolarWinds' CISO, who was held accountable for the breach. The attack, first discovered when malware infected FireEye and later found to have impacted Microsoft, Intel, Cisco, and Deloitte as well, was the impetus for the SEC to codify what had, to date, been merely good security practices left to the CISO's discretion.

In this post, we'll take a quick look at what the SEC Cybersecurity Rule now dictates, and why this is a good opportunity for everyone involved in security engineering to reassess their AppSec program to ensure it's providing real security coverage, not just ticking regulatory compliance boxes.

Out with the Old AppSec, in with the New Cloud Native Security

The SEC Rule focuses on two crucial elements that modern AppSec programs need to deliver:

  1. An efficient program, backed by proper due diligence, that provides relevant coverage for your product stack.

  2. A robust (and timely) incident response plan in the event of a hack or breach.

While we've all been running DAST scans since the 90s, where the SEC Rule shakes things up is in the intentionality and actual operationalization of our security processes. While AppSec is nothing new, the security industry as a whole has matured and evolved tremendously since the advent of cloud and infrastructure as code, and more recently cloud native, serverless, containers, and more.

If, amid all this change, you're still doing things pretty much the same way, then, in the words attributed to Albert Einstein, that just might be the definition of insanity. If you are not continuously reassessing and leveling up your risk and vulnerability management programs, it's highly likely that you are not addressing novel risks and issues.

The SEC Rule comes in to remind us that, ultimately, there are consequences for poor security hygiene, or for merely checkbox-compliance security. It's a great opportunity to revisit our security programs and ensure they are providing real-world security coverage. So what does this mean in practice?

Digging Deeper - Securing the Full Stack

While this doesn't mean we no longer need compliance or DAST, it does mean taking a holistic look at your security program, making sure you're doing the best you can in each area, and building sufficient confidence in the security of each layer.

A modern product stack needs scanning tools at every layer: the code, the runtime (DAST!), APIs, infrastructure and cloud, containers, secrets management, code hosting (Git repository security), and more. Even if you are scanning all of these layers, that unfortunately doesn't mean you'll never have a security issue again. It does mean you'll learn about security issues across your stack more rapidly, and be able to work toward addressing them.

The known challenge, though, is that the more scanners you employ, the more vulnerabilities and alerts they produce, and their output quickly becomes overwhelming, particularly when security isn't the developer's domain expertise. As a result, most engineering organizations simply ignore the endless lists of issues their SAST, SCA, and DAST scanners surface, which is barely better than having no scanners at all.

Which leads to the other half of the SEC Rule: remediation and incident management.

Context and Priority in Security

Practical security ultimately comes down to outcomes. If you are scanning every last part of your stack but all of the findings remain in the backlog forever, you aren't really managing or improving your security hygiene. And if you don't have a remediation program, or an incident response plan for when a breach happens, you are not closing the security loop.

For security to be manageable and continuously improving, engineering teams need to leverage tooling that helps them make decisions based on the priority and context of vulnerabilities, essentially a real risk score. Not all vulnerabilities are exploitable in your stack, so even findings rated "high" or "critical" severity may not truly be high priority to remediate. Modern security tooling makes it possible to identify the real risks and kill chains relevant to your particular stack, and to prioritize remediation accordingly.
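As a rough illustration of what "severity is not priority" means, here is a minimal sketch of context-adjusted risk scoring. The findings, the scaling factors, and the field names are all assumptions for illustration, not any particular vendor's scoring model:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    cvss: float            # base severity score, 0.0-10.0
    reachable: bool        # is the vulnerable code path actually invoked?
    internet_facing: bool  # is the affected service exposed publicly?

def risk_score(f: Finding) -> float:
    """Context-adjusted risk: base severity scaled down by exploitability factors."""
    score = f.cvss
    if not f.reachable:
        score *= 0.2   # unreachable code is far less likely to be exploitable
    if not f.internet_facing:
        score *= 0.5   # internal-only services shrink the attack surface
    return round(score, 2)

# Hypothetical findings: a "critical" CVE buried in an unreachable internal
# batch job ends up lower priority than a "high" in a public login API.
findings = [
    Finding("rce-in-batch-job", 10.0, reachable=False, internet_facing=False),
    Finding("sqli-in-login-api", 7.5, reachable=True, internet_facing=True),
]
for f in sorted(findings, key=risk_score, reverse=True):
    print(f.name, risk_score(f))
```

Real prioritization engines fold in many more signals (known exploitation in the wild, data sensitivity, compensating controls), but the shape is the same: severity is one input, not the verdict.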

This frees your engineering teams from constantly chasing Sisyphean lists of vulnerabilities that pose no real risk to your systems. Patches, updates, and even added security controls can be integrated into CI/CD pipelines, operationalizing your security process into actual developer workflows, from code to production, based on real risk and context.

This also includes the ability to detect and respond quickly when a zero-day vulnerability is discovered in your stack. With the right cloud native security tooling in place (not point-in-time security done once a year, or some vague compliance checklist, but an actual security suite tailored to your stack), you can detect, prioritize, and remediate issues in near real time, or as quickly as it takes to fix and patch. With SaaS-based cloud native systems, it is also possible to push newly patched versions to clients and users on an SLA that matches today's engineering velocity. This means that if a security fix is available and the end user chooses not to upgrade, the security decision has been pushed into their risk management program.

It is our responsibility as software creators to keep our end users secure. In SolarWinds' case, had the company been aware of the vulnerability and pushed out a fix, the onus of patching and upgrading would have shifted to end users such as FireEye, and the outcome for SolarWinds might ultimately have been less severe.

Third-Parties and the Supply Chain

All of that is great when it comes to our own code and stacks, but what about the parts of our stack that are out of our control? While we can continuously scan our own stack and code, and roll out version updates to mitigate security risks, how can we ensure that the third-party software packaged into our stack receives the same attention and the same security SLAs?

There are vendors today that commit to security SLAs as well as uptime SLAs, and CISOs and engineering managers should prioritize these commitments during due diligence when selecting tooling, particularly for mission-critical packages in your stack. We should select and work with vendors we trust to mitigate security issues as rapidly as possible, and this includes open source packages: do sufficient research into a project's security hygiene and posture before integrating it into your stack, including how quickly maintainers triage, assess the severity of, and remediate security issues found in their packages.
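One concrete, if simplistic, due-diligence signal is how long a project's past advisories stayed unpatched. The sketch below computes a median time-to-patch from a hypothetical advisory history; the dates are invented for illustration, and in practice you would pull them from the project's security advisories or a vulnerability database:

```python
from datetime import date
from statistics import median

# Hypothetical advisory history for a candidate open source dependency:
# (date the vulnerability was disclosed, date a patched release shipped)
advisories = [
    (date(2023, 1, 10), date(2023, 1, 14)),
    (date(2023, 6, 2), date(2023, 6, 30)),
    (date(2024, 2, 20), date(2024, 2, 23)),
]

# Days each vulnerability remained unpatched after disclosure
days_to_patch = [(fixed - disclosed).days for disclosed, fixed in advisories]
print("median days to patch:", median(days_to_patch))
```

A consistently short time-to-patch suggests responsive maintainers; a long tail of months-old unpatched advisories is exactly the kind of risk signal worth catching before, rather than after, adoption.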

It is also our responsibility as product managers and producers to have an exit plan for third-party software that does not fulfill reasonable security remediation requirements. While this is easier said than done, as vendor lock-in is a real issue, it is still a security risk that needs to be factored in.

These decisions are, of course, subject to the criticality of the third-party software and your own organizational risk appetite. But when critical issues surface in third-party software, there should be a migration path away from it, so that your organization is not exposed to unnecessary risk in software that is out of your control, for which your CISO may ultimately be held accountable.

Keep Calm and Carry On: Putting the SEC Rule in Context

In summary, if you are leveraging modern security tooling that is tailored to your specific stack and aligned with best practices for covering the critical parts of the product stack (code, runtime, APIs, infrastructure, secrets, et al.), you are most of the way toward mitigating personal and organizational risk and exposure with regard to the SEC Rule.

The other, and possibly more important, half is ensuring that you actually operationalize this security data. This means understanding which vulnerabilities are truly high priority and having reasonable SLAs in place for remediating them. It doesn't stop there, though.
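What "reasonable SLAs" might look like in code is simple to sketch. The SLA windows below are common industry conventions chosen for illustration; the SEC Rule does not prescribe specific remediation timelines:

```python
from datetime import date

# Illustrative remediation SLAs by priority (assumed values, not mandated):
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90}

def overdue(priority: str, opened: date, today: date) -> bool:
    """True if a finding has exceeded its remediation SLA window."""
    return (today - opened).days > SLA_DAYS[priority]

# Hypothetical backlog check: flag findings that have blown their SLA.
today = date(2024, 2, 28)
backlog = [
    ("critical", date(2024, 2, 1)),   # open 27 days against a 7-day SLA
    ("high", date(2024, 2, 10)),      # open 18 days against a 30-day SLA
]
for priority, opened in backlog:
    status = "OVERDUE" if overdue(priority, opened, today) else "on track"
    print(priority, status)
```

Running a check like this in CI, or against your ticketing system, turns "we have SLAs" from a policy document into something the organization can actually measure and report on.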

It's also important to have a robust incident management program in place, so that even if a zero-day vulnerability is discovered, you have a proper plan for managing and remediating the risk as quickly as possible. Finally, let's not forget our supply chain.

While remediation and incident management for third-party software and packages are out of our control, we can vet those tools and packages for sound security practices before adopting them. We then need to ensure that they uphold their security SLAs, and if they don't, we should always have an escape plan away from risky, unpatched software.

All of this together will enable you to abide by the new SEC Cybersecurity Rule, and to avoid exposing yourself as a CISO, or your company, to unnecessary security risk or federal fines.