Very recently, 3CX, a VoIP and PBX software developer, was compromised. According to their website, over 600,000 businesses globally use 3CX solutions to power voice communications. For context, customers include the NHS, Air France, and PwC.
Policy For Suspected Attacks
My jaw hit the floor and bounced back, causing my teeth to shatter, when the approach for verifying a suspected supply chain attack – after being notified by SentinelOne and CrowdStrike, both incredibly reputable vendors – was to simply upload a binary to VirusTotal.
SentinelOne and CrowdStrike sell EDR and XDR products, and were thus able to capture what the malware-ridden update was trying to do on their customers' systems and networks. VirusTotal, by contrast, primarily performs static analysis against known malware signatures. It is entirely feasible that malware written by a foreign nation-state threat actor would not be in the VirusTotal database. These threat actors are well-funded, and would typically not use commercial off-the-shelf malware against their targets when they could write their own to bypass static signature checks.
My view on this is quite dim as you can imagine. Illuminating a football stadium with a matchstick comes to mind.
If you get a report from credible sources that you're shipping malware-ridden updates, you need to cease your release process immediately. You need to stop distributing your releases and updates. You need to perform a comprehensive investigation. You need to inform your supply chain and distribution mechanism immediately. You need to perform an appropriate analysis of the situation to work out what's really going on. If your policy doesn't permit that, then you will happily continue distributing tampered artefacts for as long as the internet can ridicule you for it, causing further damage to your customers.
Why Code Signing isn’t what it’s cracked up to be in the age of the supply-chain attack
Code signing is meant to prove that the software binaries being shipped originate from a particular developer or organisation, and thus establish their legitimacy. However, if malicious code is injected during the build process, you end up with signed binaries containing code that never actually originated from the signing organisation.
An example of how code signing may be used can be seen in the Apple App Store. Apple operate an App Store root certificate authority that signs developer certificates. This means any certificate signed with Apple's keys will be trusted for distribution on the App Store, which is all great and groovy in principle.
Now, tamper with the build environment. In the App Store's view, every line of code in that binary, whether or not it actually came from the developer, is now signed as originating from the developer.
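The flaw is easy to demonstrate: a signature covers whatever bytes are put in front of it, so anything injected *before* the signing step is vouched for just the same. The sketch below is a deliberate simplification – real code signing uses asymmetric keys and certificate chains, not an HMAC, and the key name is hypothetical – but the property it illustrates is the same.

```python
import hashlib
import hmac

# Hypothetical signing key. Real code signing (e.g. an Apple developer
# certificate) uses asymmetric cryptography, but an HMAC demonstrates
# the same property: the signature covers whatever bytes are signed.
SIGNING_KEY = b"hypothetical-developer-private-key"

def sign(binary: bytes) -> bytes:
    """Sign the final build artefact -- whatever it happens to contain."""
    return hmac.new(SIGNING_KEY, binary, hashlib.sha256).digest()

def verify(binary: bytes, signature: bytes) -> bool:
    """Check that the signature matches the binary."""
    return hmac.compare_digest(sign(binary), signature)

# What the developer intended to ship.
clean_build = b"legitimate application code"

# Build environment compromised: payload injected *before* signing.
tampered_build = clean_build + b"<payload injected during build>"

# The signing step happily signs the tampered artefact...
signature = sign(tampered_build)

# ...and verification passes. The signature proves who signed the bytes,
# not that every byte originated from the developer.
print(verify(tampered_build, signature))  # True
```

Verification only ever answers "was this exact artefact signed by this key?" – it says nothing about what happened upstream of the signing step.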
Relating this back to the 3CX attack, the malicious binary was still being distributed via the Apple App Store and the built-in update mechanism for quite a while after it was identified as malicious. One may argue that distribution could have been halted through other mechanisms, but to me it highlights a huge flaw in keeping only the compiled binary in scope of signing: it cannot provide assurance that the binary really did fully originate from a developer, not when supply-chain attacks are becoming increasingly common.
One might argue that certificate revocation exists for a reason, yet this didn’t seem to happen in this instance. One could argue that’s a policy thing on either Apple’s or 3CX’s end. One also might argue that a lot of industry doesn’t quite understand or misuses PKI.
That’s not to say I think signing binaries is completely useless; it still closes an attack vector that less capable, opportunistic threat actors may attempt to exploit. However, it is not infallible and, in its current form, cannot provide higher levels of assurance by itself. My challenge here is that certificate revocation is merely reactive, and quite frankly a means of limited damage control in these sorts of attacks. The fact that Apple was effectively distributing malware for quite some time after the binary was known to be compromised highlights that code signing alone just does not solve the problem.
Building software securely
Having spent a few years having a crack at this, I can say it’s really not a simple walk in the park, and in fact quite difficult if you need to provide a high level of assurance. Commercial vendors of bolt-on repository scanners and code analysis tools will be quick to point out the security posture benefits, but realistically, industry needs foundational principles in order to follow best practices from the outset.
In my environments, there are a few principles I’ve personally developed over the years that I try to religiously stick to:
- Immutability
- Repeatability
- Accountability
- Traceability
Apply these thoughts almost as a matrix to each artefact in your build environment, and suddenly you can start asking yourself common-sense questions to work out whether something is a bad idea or not. Or if you are getting paid, plumb it into your risk assessment and threat model.
The other way to view those principles is to also keep a zero-trust perspective on them.
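As one concrete way those principles can be made mechanical rather than aspirational, consider pinning a digest for every build artefact at build time and refusing to trust anything that no longer matches. The sketch below is my own illustration, not an excerpt from any particular build system; the file names and helper functions are hypothetical.

```python
import hashlib
import json
import tempfile
from pathlib import Path

def artefact_digest(path: Path) -> str:
    """Traceability: a content hash uniquely identifies each artefact."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_manifest(artefacts: list, manifest: Path) -> None:
    """Immutability: pin every artefact's digest at build time."""
    pinned = {p.name: artefact_digest(p) for p in artefacts}
    manifest.write_text(json.dumps(pinned, indent=2))

def verify_manifest(artefacts: list, manifest: Path) -> bool:
    """Zero trust: re-derive each digest and compare against the pins.
    Any post-build modification -- accountability -- becomes detectable."""
    pinned = json.loads(manifest.read_text())
    return all(artefact_digest(p) == pinned.get(p.name) for p in artefacts)

# Usage sketch in a throwaway directory.
build_dir = Path(tempfile.mkdtemp())
artefact = build_dir / "app.bin"          # hypothetical build output
artefact.write_bytes(b"built binary")
manifest = build_dir / "manifest.json"

record_manifest([artefact], manifest)
print(verify_manifest([artefact], manifest))   # True: artefact untouched

artefact.write_bytes(b"built binary + tampering")  # post-build modification
print(verify_manifest([artefact], manifest))   # False: pin no longer matches
```

Repeatability is the remaining leg: if the same sources and toolchain reproducibly yield the same digest, an independent rebuild can verify the pin too, which is exactly the zero-trust posture – never assume an artefact is what it claims to be, always re-establish it.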
Closing thoughts
With supply chain attacks becoming increasingly prominent over the last three years, my fear is that industry will keep attempting to ‘bolt on’ security to the way software is built. Security works best when a ground-up approach is taken, and when policy and procedures focus on mitigating these sorts of attacks rather than brushing them under the rug.
Build environments should become less risk tolerant, and that really comes from expanding the business context: understanding who your customers are, how and where your product is being used, and what your customers’ risk tolerance is. It is entirely plausible that if you have high-profile customers – like 3CX did – you could be the target of the next capable threat actor.
And perhaps one for another day… Zero-trust is not a bolt on product. It is not an IDAM or a policy engine. It is a superbly powerful principle that helps you understand why you would trust a given artefact, and what processes you would go through to start to establish that trust. Now think about that when you think about your build pipelines in conjunction with my principles!