Why does software development need zero trust?

Disclaimer: The following article is a blog post I wrote for Nexor, based on things I’ve been working on over the past few years.

The last few weeks have continued to demonstrate the impact of a compromised software supply chain – whether that compromise is realised by malicious threat actors or by a lack of software quality control.


CrowdStrike, one of the leading endpoint detection and response platform vendors, released a software update into the wild that effectively disabled Windows computers across the globe. In short, a poorly tested piece of code made its way into the wild and caused a global catastrophe, stopping over 24,000 firms from functioning effectively for roughly 72 hours in most cases, with repercussions felt for weeks afterwards. Not bad for a simple pointer error.


Guy Carpenter’s report estimates that the insured loss from the CrowdStrike incident is worth between $300 million and $1 billion. What does that mean for society? Over 7,000 flights were cancelled or delayed over the following week. Thousands of holidaymakers suddenly found themselves without a holiday. Employees suddenly found themselves delaying meetings, missing key milestones and unable to meet deadlines for stakeholders. Hospitals and other healthcare providers were forced to cancel surgeries – even for patients who critically needed those treatments to survive. Critical National Infrastructure operators had to deal with the prospect of being unable to control critical systems. Banks and other financial firms were unable to process transactions. This is the sort of impact governments expect from sophisticated threat actors embedding zero-day exploits into software – just like when the Ukrainian power grid was attacked nearly ten years ago. These events don’t just have an impact on business; they affect society and the global economy too.


The reality is that this impact was unintentional. CrowdStrike released software that, by mistake, had a catastrophic global impact. This has once again shown what can happen when poorly validated software is released upstream in the supply chain – it is a fast way to wreak havoc on organisations downstream. It also demonstrates how the software supply chain can have a significant impact on the security of a nation.


What industries leveraging information technology need to remember is that sophisticated threat actors are using the same approaches to embed zero-day exploits in the software we consume. These threat actors want to exfiltrate sensitive information for intelligence and gain the information advantage. They want to inject disruptive and destabilising misinformation to compromise your decision-making capability. They want access to systems that can cause damage to their adversaries should they manipulate them (think of Stuxnet). The reality is that any firm involved in supplying software and services to the nation can become a target for sophisticated threat actors.


The CrowdStrike outage came just weeks before other ‘DevSecOops’ headlines hit the news – this time the work of sophisticated threat actors.
A Chinese state-sponsored hacking group – StormBamboo – compromised an undisclosed internet service provider to poison the automatic updates shipped to its customers’ equipment. While it’s not clear whether the ISP was compromised through its own software supply chain or via an adversary-in-the-middle attack, the result was a software supply chain attack on the ISP’s customers, designed to open a backdoor for spying on them.


In another ‘DevSecOops’ escapade, a firm employed by a UK prime defence contractor to create a new staff intranet for submarine engineers managed to outsource the work to developers in Siberia and Minsk. The possible security implication is that Russia and its allies may have been able to plant zero-day exploits in the software, to be used in future to reveal the security-cleared personnel who maintain British submarines and potentially to locate those submarines. This puts not only the personnel maintaining the submarines at risk – as they may become targets for espionage themselves – but the security of our nation too. While the Ministry of Defence has supply chain rules in place, backed by the threat of fines and sanctions to motivate industry players to adhere to them, the MOD must now face the challenge of a potential security breach because those rules were not followed in the first place.


These attack methods are not new. We saw a sophisticated threat actor target the SolarWinds development environment in 2020 so they could embed their malware into an application that thousands of businesses rely on globally.


The question I’m asking is this: why is the IT industry still not vetting, in complete transparency, how the software, libraries and updates shipped to us are built? We’re still blindly trusting software that comes from a vendor, as these recent events have demonstrated. Why are we still burying our heads in the sand when we know the risks of using poor-quality or vulnerable software?


At its core, a Zero-Trust way of thinking is like saying “prove it” before granting access to a system. We ask users of IT systems to prove who they are and where they are before we grant them access to any of our data. Usually, if they aren’t where they should be or can’t prove their identity, we deny them access to that data.


How do we apply this principle to software? Software suppliers need to provide us with evidence of the quality and reliability of their software, so a risk management decision can be made on whether to consume it, use it and inherently trust it. One way of producing this evidence may be to test the software ourselves in a ‘sandbox’ or ‘pre-production’ test environment. Another may be to start collecting software provenance information, using software bills of materials (SBOMs) to build a picture of the known vulnerabilities an item of software may inherit or incorporate. Frameworks like SLSA (slsa.dev) allow developers to embed this attestation information in the software through the development environment, so consumers can understand whether the toolchain used to build a software package may have been compromised and malicious code potentially injected into the resulting package.
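To make that concrete, here is a minimal sketch of what a consumer-side provenance check might look like. It assumes a SLSA v0.2-style in-toto provenance statement has already been extracted and its signature verified (for example with the slsa-verifier tool); the trusted builder and expected source values are illustrative assumptions, not values taken from any real project.

```python
import json

# Illustrative policy values – assumptions for this sketch, not real project settings.
TRUSTED_BUILDERS = {"https://github.com/slsa-framework/slsa-github-generator"}
EXPECTED_SOURCE = "git+https://github.com/example-org/example-package"


def check_provenance(statement_path: str) -> list:
    """Check a SLSA v0.2-style in-toto provenance statement against a simple policy.

    Returns a list of policy violations; an empty list means 'no objections'.
    """
    with open(statement_path) as f:
        statement = json.load(f)

    # 1. Is this actually a SLSA v0.2 provenance statement?
    if statement.get("predicateType") != "https://slsa.dev/provenance/v0.2":
        return ["not a SLSA v0.2 provenance statement"]

    predicate = statement.get("predicate", {})
    violations = []

    # 2. Was the package built by a builder we trust?
    builder_id = predicate.get("builder", {}).get("id", "")
    if builder_id not in TRUSTED_BUILDERS:
        violations.append(f"untrusted builder: {builder_id or 'unknown'}")

    # 3. Was it built from the source repository we expect?
    source_uri = predicate.get("invocation", {}).get("configSource", {}).get("uri", "")
    if not source_uri.startswith(EXPECTED_SOURCE):
        violations.append(f"unexpected source: {source_uri or 'unknown'}")

    return violations


if __name__ == "__main__":
    problems = check_provenance("provenance.intoto.json")  # hypothetical file name
    if problems:
        print("REJECT:", "; ".join(problems))
    else:
        print("ACCEPT: provenance satisfies the policy")
```

In practice you would also verify the signature on the attestation envelope before trusting its contents, and feed the accompanying SBOM into a vulnerability scanner as a separate check.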


Nexor led a research project, in conjunction with the UK Government and partners, which demonstrated how a development environment can be made to adopt a Zero-Trust approach by implementing a policy decision point that analyses the embedded records of a software development project against a given security and development policy, across several development tools and at each stage of the project. The research showed that making a security decision on whether to import a software package, based on the history of how it was produced, could significantly reduce the likelihood of importing vulnerable software.
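As an illustration only – this is not the research project’s actual implementation – a policy decision point of this kind can be thought of as a function from stage-by-stage evidence to an allow/deny decision with reasons. The record fields and policy rules below are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class StageRecord:
    """A toy record of what the toolchain attested about one stage of a project.

    Field names are hypothetical; a real system would consume signed attestations.
    """
    stage: str            # e.g. "commit", "build", "test"
    tool: str             # which development tool produced the record
    checks_passed: bool   # did the stage's own checks succeed?
    evidence: dict        # raw attestation data for this stage


# A simple security and development policy: which stages must be present,
# plus stage-specific rules expressed as predicates over the evidence.
POLICY = {
    "required_stages": ["commit", "build", "test"],
    "rules": {
        "commit": lambda e: e.get("reviewed", False),
        "build":  lambda e: e.get("builder") == "trusted-ci",
        "test":   lambda e: e.get("coverage", 0) >= 80,
    },
}


def policy_decision(records):
    """Act as a policy decision point: allow release only if every required stage
    is present, passed its own checks, and satisfies the policy rule for that stage."""
    reasons = []
    by_stage = {r.stage: r for r in records}

    for stage in POLICY["required_stages"]:
        record = by_stage.get(stage)
        if record is None:
            reasons.append(f"missing evidence for stage '{stage}'")
            continue
        if not record.checks_passed:
            reasons.append(f"stage '{stage}' reported failing checks")
        rule = POLICY["rules"].get(stage)
        if rule and not rule(record.evidence):
            reasons.append(f"stage '{stage}' evidence does not satisfy the policy")

    return len(reasons) == 0, reasons


if __name__ == "__main__":
    allowed, why_not = policy_decision([
        StageRecord("commit", "git", True, {"reviewed": True}),
        StageRecord("build", "ci", True, {"builder": "trusted-ci"}),
        StageRecord("test", "test-runner", True, {"coverage": 65}),
    ])
    print("ALLOW" if allowed else "DENY: " + "; ".join(why_not))
```

A real system would evaluate signed attestations rather than self-reported records, but the shape of the decision – evidence in, allow or deny plus reasons out – stays the same.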


In hindsight, the learning from this research provides some valuable insight into how we could reduce the likelihood of damaging software supply chain security events in the future. If a zero-trust development environment could inspect the provenance of the CrowdStrike software, analysing the embedded attestations would have shown that tests had not been fully performed on the software. The development environment would enforce the policies and not permit the software to be bundled for release beyond the test environments. Similarly, if the software did manage to make its way into the wilds of production usage with provenance records to that effect embedded in it, a software consumer would not allow it to be imported, deployed or used.


While you may argue this can be done in a (human) process, human processes are fallible and not always carefully followed. A zero-trust, provenance-analysing development environment builds the process into the technology and automates it – making it far more reliable than any developer-intensive human process. It continuously enforces the security policy to ensure compliance during each interaction with the development environment. It is like using robots to build machines – much more reliable and efficient than ‘manual construction’. When software automates so much in the modern world, why should the processes of quality software production be so human-intensive? As a cybersecurity professional, I strongly believe you are more likely to achieve policy and process compliance when people find it easy and simple to comply.

Back to the point of zero-trust and software. You should:
• Prove that you’ve vetted your software updates before you import and execute them. Make sure they haven’t been tampered with and test them in a reference environment first (a minimal integrity-check sketch follows this list).
• Prove that the code you are intending to release has complied with your security and development policies – and let the consumers in your supply chain know, as openly and transparently as possible.
• Embed the provenance information of where your toolchain and libraries came from in your software package, to enable consumers to manage the risks and vulnerabilities of that software.
• Provide data on how that software has been built and tested, so that consumers can do their own checks on its quality and reliability – according to their own policies.
• Use modern software development techniques and tools – such as embracing DevSecOps.
• Do not blindly trust your software suppliers. Implement a delivery and quality-checking procedure for software, just as you would a “Goods-In” process for material goods, and pay close attention to the vendor’s own supply chain.
• Reject any software that does not meet the quality criteria and send it back to the supplier.
• Implement Zero-Trust enabled Development Environments.
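On the first point, the simplest form of tamper check is verifying that what you received matches what the supplier says they shipped. The sketch below assumes the supplier publishes a SHA-256 digest through a separate, trusted channel; the file name and digest placeholder are hypothetical, and a signature check against the vendor’s signing key would be stronger still.

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()


def goods_in_check(update: Path, published_digest: str) -> bool:
    """Reject the update unless its digest matches the one the supplier
    published through a separate, trusted channel."""
    actual = sha256_of(update)
    if actual != published_digest.lower():
        print(f"REJECT {update.name}: digest mismatch ({actual})")
        return False
    print(f"ACCEPT {update.name}: digest matches; proceed to sandbox testing")
    return True


if __name__ == "__main__":
    # Hypothetical file name and placeholder digest, for illustration only.
    goods_in_check(Path("vendor-update-1.2.3.pkg"),
                   "<sha-256 digest published by the supplier>")
```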

