3CX Supply Chain Attack: Lessons Learned

Software Supply Chain Attacks Analysis

3CX is a well-known company providing VoIP and Unified Communications products. They claim to have over 600,000 installations and 12M daily users. Undoubtedly a tempting target for bad actors.

At the end of March 2023, 3CX suffered a sophisticated software supply chain attack that managed to inject malware into its desktop software. The malware aims at stealing information, but it also drops a backdoor.

In this post we will examine the attack, following the sequence of events from different points of view. In the process we might learn how the attack could have been blocked or mitigated.

How the Attack Was Done

The bad actors first compromised X_TRADER, a software package from Trading Technologies, a company headquartered in Chicago, IL. The VEILEDSIGNAL backdoor was injected into the software, and the installer was digitally signed with a valid code signing certificate. The attackers probably broke into the build pipeline by 2021 and modified the software before it was packaged and signed. This was the first hop in the chained supply chain attack. By March 2022, this campaign had been reported in the context of Operation DreamJob (aka BLINDINGCAN) and AppleJeus, operations targeting US-based organizations in the news media, IT, cryptocurrency, and fintech industries.

This software was retired in 2020 (!), but it was still available for download as of 2022. A 3CX employee installed it on a personal computer, and the bad guys stole the employee’s corporate credentials from the compromised machine. Interestingly enough, VEILEDSIGNAL used a URL under the Trading Technologies website for command and control, a tactic often used to avoid detection.

Two days after the initial compromise, the bad actors used the corporate VPN to access 3CX’s internal systems. To move laterally, they used FRP, a public reverse proxy tool that can serve good and bad purposes alike. They knew what to do with the gained foothold: they compromised both the Windows and macOS build systems, each environment with different tools. For example, they used the SigFlip tool to inject RC4-encrypted shellcode into the signature appendix of d3dcompiler_47.dll.
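Malware authors like RC4 precisely because the whole decryption stage fits in a few lines of code. A minimal sketch of the cipher (the key and payload below are made-up placeholders, not the ones used in the attack):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """RC4: key scheduling (KSA), then keystream generation (PRGA)
    XORed with the data. Encryption and decryption are identical."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in data:                         # PRGA + XOR
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

key = b"example-key"             # hypothetical key
payload = b"placeholder bytes"   # stand-in for the shellcode
assert rc4(key, rc4(key, payload)) == payload  # same function both ways
```

Being a stream cipher, the hidden payload stays high-entropy noise until the loader applies the key at run time, which is why static scanners often miss it.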

Software updates for the 3CX Desktop App for Windows and macOS were infected. This was the second hop in this (first?) multistep supply chain attack. The Windows update, also properly signed with a 3CX code signing certificate, dropped two malicious files, ffmpeg.dll and d3dcompiler_47.dll (with equivalent .dylib libraries for macOS), which were loaded and executed by the benign 3CXDesktopApp executable via the DLL side-loading technique. The shellcode encrypted in the second DLL loads another DLL, which downloads from the IconStorages GitHub repository a .ico file containing the encrypted Command and Control (C2) server address.
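The icons hosted on the IconStorages repository were valid images with the encrypted C2 data appended after the icon payload. One cheap detection heuristic is to parse the icon directory and flag any bytes past the end of the last image. A sketch (the layout constants come from the ICO file format; the synthetic icon is illustrative):

```python
import struct

def ico_trailing_bytes(data: bytes) -> int:
    """Return the number of bytes past the end of the last icon image.
    A well-formed .ico ends where its last image ends; extra trailing
    bytes may hide a smuggled payload."""
    reserved, ico_type, count = struct.unpack_from("<HHH", data, 0)
    if reserved != 0 or ico_type != 1:
        raise ValueError("not an .ico file")
    end = 6 + 16 * count  # ICONDIR header + directory entries
    for n in range(count):
        # ICONDIRENTRY: dwBytesInRes at +8, dwImageOffset at +12
        size, offset = struct.unpack_from("<II", data, 6 + 16 * n + 8)
        end = max(end, offset + size)
    return len(data) - end

# Minimal synthetic one-image icon, with 4 extra bytes appended:
img = b"\x00" * 10
ico = (struct.pack("<HHH", 0, 1, 1)
       + struct.pack("<BBBBHHII", 0, 0, 0, 0, 1, 32, len(img), 22)
       + img)
print(ico_trailing_bytes(ico + b"C2!!"))  # -> 4
```

A nonzero result is not proof of malice (some tools pad files), but it is a cheap triage signal for icons fetched from untrusted repositories.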

There were over 20 domains provisioned for this, which tells us how carefully the attack was planned.

Connections from victims’ PCs to the C2 domains started on March 06, 2023, and ended on March 29.

Please note that (1) no source code in a code repository was modified; instead, the malware payload was injected during the build, (2) a legit ffmpeg DLL was replaced with a malicious one and loaded via DLL side-loading, and (3) a popular VoIP app was turned into a weapon with multistage malware.

How the Incident Was Handled

It seems that on March 29th, 3CX received reports from third parties of a malicious actor “exploiting a vulnerability in their product”. Anti-malware platforms (partially) did their job.

On March 30th, 2023, the 3CX CISO, Pierre Jourdan, published a security alert informing customers and partners that an update of their Electron app (desktop client) had shipped with malware. The post lists the affected versions (for Windows and Mac), a short statement about the initial containment measures taken, and a quick recommendation to work with the web client instead of the tainted desktop client.

IMO the initial communication, albeit a bit terse, was to the point. It is not easy for any vendor to recognize that there is a serious security issue potentially affecting many customers. The company recognized the problem, and immediately pointed to a potential culprit:

“Worth mentioning – this appears to have been a targeted attack from an Advanced Persistent Threat, perhaps even state sponsored, that ran a complex supply chain attack and picked who would be downloading the next stages of their malware.”

The company offered an alternative to keep the product working: use the Progressive Web App (PWA) instead of the desktop app, which is based on the Electron framework.

“We strongly suggest that you use our PWA app instead. The PWA app is completely web based and does 95% of what the electron app does. The advantage is that it does not require any installation or updating, and Chrome web security is applied automatically.

The reason we have two apps is that when we started the Electron App, the PWA technology was not available yet. Now it’s mature and working really well.”

Well, the web app could also be infected: a PWA by definition may run code in service workers (JavaScript) with (restricted) access to local resources. The post claimed that the PWA was not infected at that time. We can understand why, in many of the 277+ comments on the post, some asked whether the PWA alternative or the PBX server software itself was compromised.

The same day, 3CX appointed Mandiant to investigate, and published further instructions: basically, install the updated Windows/Mac Desktop App on on-premises installations, uninstall the Electron app for the affected versions, and move to the PWA alternative. That is not necessarily enough, as the attackers may retain access to affected systems, persist the malware, or even move laterally. Full removal of malware is not that easy. We can understand that, given the urgency, the best way to deal with an incident is not always found… unless the scenario was previously imagined and proper remediation measures identified.

The following post, now by the company’s CEO, tries to reassure customers with the announcement of a security-only update for the DesktopApp. Some basic protection techniques, like password hashing, were implemented (“For better than never is late; never to succeed would be too long a period.”, Chaucer dixit in 1386.) Specific, implemented protections are better than good intentions and projected plans… But how were passwords stored previously?

Anyway, Google invalidated the existing 3CX code signing certificate and antivirus vendors also blocked any software signed with that certificate. The MSI installers for the DesktopApp update needed to be regenerated with a new code signing certificate, which took some hours. This could be expected!

3CX preventively deployed a new build server. This is reasonable: until the analysis was complete, it was unknown how the malware had been injected.

On April 1st, the CEO posted a Security Incident Update. The post tells what the company was doing (conducting a full investigation and validating the entire source code of the client-side software). Security cannot wait, even on Saturdays.

When the incident was analyzed and the attacker’s infrastructure was known, the GitHub repo and domains for C2 servers used by the APT were removed, effectively taking down the malware.

On April 11th, the CISO posted the initial results of the incident analysis. The North Korean-linked APT UNC4736 was cited for the first time.

On April 20th, Mandiant published its initial analysis of the incident, and YARA and Snort detection rules were released.

The same day, 3CX’s CEO published a new post with the suggestive name “Actions, not words – Our 7 Step Security Action Plan!”. The subtitle is quite interesting: “Securing for the Future of 3CX after first-of-a-kind, cascading software-in-software supply chain attack.” The steps of the drafted security charter are:

  • Hardening multiple layers of network security.

  • Revamping builds security.

  • Ongoing product security review.

  • Enhancing product security features.

  • Performing ongoing penetration testing.

  • Refining crisis management and alert handling plan.

  • Establishing a new department for network operations and security.

Analysts will keep an eye on this praiseworthy program, indeed…

Aftermath: How the Industry Reacted

Only a few weeks have passed since the incident was disclosed, but the industry is already reacting.

CVE-2023-29059 was assigned. As of today it has a 7.8 (HIGH) CVSSv3 score. The 3CX products have only 16 CVEs listed, which looks good compared with vendors of similar size. But this incident is not the usual vulnerability: it is an active targeted attack by an Advanced Persistent Threat (APT), as the CISO himself recognized.

The CWE-506 “Embedded Malicious Code” weakness that categorizes this incident does not help us much in understanding how to prevent such attacks. Unfortunately, the bad guys have many avenues to deliver malware, with supply chain attacks as the new crown jewel in their arsenal. CWE-506 should be improved to reflect current scenarios, including those based on breaching the build system.

The APT, named UNC4736 by Mandiant, uses tactics and techniques similar to those of the widely known Lazarus Group (APT38), backed by the North Korean state and the perpetrators of the 2017 WannaCry ransomware campaign. Such ties between bad actors in Stalinist states with regular working hours are difficult to establish, but researchers from CrowdStrike and ESET found that the tactics, techniques, and procedures (TTPs) used resemble those commonly employed by Lazarus. Understanding how they operate is essential for detection, prevention, and eradication.

Knowing their goals, shrouded in mystery, is even harder.

This attack is important enough that CISA, the US Cybersecurity and Infrastructure Security Agency, issued an advisory with the vendor posts and links to multiple analyses of the attack. Reading them is worthwhile for cybersecurity practitioners.

The incident reminds us of the SolarWinds attack. Given the huge impact, many vendors rushed to analyze the infected VoIP desktop product installers. But that is only the last step. Further details are needed to understand what failed in the security of the build systems of the two affected organizations, and how the bad actors managed to get the malware inserted during the build. Otherwise, incidents like this will repeat again and again.

And now, lessons learned!

Playing a team sport and then listening to a teammate explain every minute flaw in your play afterward shatters even the most tempered spirit. But here in Spain, everybody is a national soccer coach! So let me act as Pep Guardiola, José Mourinho, Alex Ferguson, and Arrigo Sacchi all together.

Act as if your build system is under heavy artillery fire. Perhaps you cannot prevent an employee from installing unauthorized software on work PCs, although engineering teams probably need stricter policies. But be vigilant about the build & deploy pipeline. If you deliver software to customers, the security of how you build the software should be as important as the security of the software product itself.
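One concrete control in that spirit is to fail the build whenever an input does not match a digest pinned under code review. A minimal sketch, with hypothetical names and a made-up pinned list:

```python
import hashlib

def verify_input(name: str, content: bytes, pinned: dict[str, str]) -> bool:
    """Fail closed: a build input is accepted only if its SHA-256
    matches the digest pinned in a reviewed, version-controlled list."""
    digest = hashlib.sha256(content).hexdigest()
    return pinned.get(name) == digest

# Hypothetical pinned list; in a real pipeline it lives in the repo
# and any change to it goes through code review.
good = b"legitimate library bytes"
pinned = {"ffmpeg.dll": hashlib.sha256(good).hexdigest()}

print(verify_input("ffmpeg.dll", good, pinned))                 # True
print(verify_input("ffmpeg.dll", b"tampered library", pinned))  # False
```

Pinning would not have stopped a payload injected after this check inside the build host itself, but it raises the bar: swapping a dependency then also requires tampering with the reviewed pin list.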

Incident preparedness. It is difficult, perhaps sterile, to anticipate the most likely scenarios for security incidents. Should I learn to use a defibrillator? No, unless I live with a propensity to cardiac arrest. But for heaven’s sake! Be prepared to “rebuild your build” from scratch, with revocation + renewal of keys, access tokens, public key pairs, and certificates.

Build systems must have certain characteristics: the current trend towards builds that are ephemeral, isolated, parameterless, hermetic, and even reproducible could be the framework to prevent attacks like the one on 3CX, when complemented with additional requirements on the other systems in the build & deploy pipeline. See Google SLSA’s build requirements for reference.
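A key ingredient of SLSA’s higher levels is provenance: a signed statement binding the artifact digest to the builder and the exact source revision. A deliberately simplified, unsigned sketch of such a record (field names loosely follow the in-toto/SLSA style; all values are made up):

```python
import hashlib
import json

def provenance(artifact: bytes, builder_id: str,
               source_uri: str, commit: str) -> str:
    """Bind an artifact digest to who built it and from which source
    revision, so consumers can check what produced what, from where."""
    record = {
        "subject": {"sha256": hashlib.sha256(artifact).hexdigest()},
        "builder": {"id": builder_id},   # e.g. the hardened CI runner
        "source": {"uri": source_uri, "commit": commit},
    }
    return json.dumps(record, sort_keys=True)

doc = provenance(b"built-binary", "ci://hermetic-builder",
                 "git+https://example.com/app.git", "abc123")
assert json.loads(doc)["subject"]["sha256"] == \
       hashlib.sha256(b"built-binary").hexdigest()
```

In a real deployment the record is signed by the build service, so a payload injected after the build (or by a rogue builder) would fail verification against the published provenance.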

Proper communication is key. Transparency is mandatory. Minimizing the issue, or hiding it under the rug is the worst thing any organization can do. Security incidents often hit harder on reputation than on other assets, and bad communication after the breach could be catastrophic.

Turn problems into opportunities. A good incident can trigger improvements in security controls that never found their way into the tight product plan. Even essential things like proper password hashing, and security configuration / hardening documentation could be scheduled for the short term. But for supply chain attacks, hardening the build systems, and adding attestations and integrity checks on every step of the build & deploy pipelines are essential. It should not be that easy for an attacker to implant such changes in the build system.
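A minimal form of per-step attestation is a keyed digest over each step’s output that the next step verifies before consuming it. A sketch using a shared HMAC key (in practice each step would sign with an asymmetric key held in an HSM or the CI’s signing service; all names here are illustrative):

```python
import hashlib
import hmac

# Hypothetical shared secret for the sketch only.
KEY = b"pipeline-signing-key"

def attest(step: str, output: bytes) -> tuple[str, str]:
    """Emit (digest, tag) for a step's output."""
    digest = hashlib.sha256(output).hexdigest()
    tag = hmac.new(KEY, f"{step}:{digest}".encode(), hashlib.sha256).hexdigest()
    return digest, tag

def check(step: str, received: bytes, tag: str) -> bool:
    """Next step: recompute the digest of what was received and
    verify the tag before proceeding."""
    digest = hashlib.sha256(received).hexdigest()
    expected = hmac.new(KEY, f"{step}:{digest}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

artifact = b"compiled objects"
digest, tag = attest("compile", artifact)
print(check("compile", artifact, tag))            # True
print(check("compile", b"swapped payload", tag))  # False
```

With such a chain, silently replacing an intermediate artifact (as was done with ffmpeg.dll) breaks verification at the very next step instead of surfacing months later in the field.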

Be prepared. Otherwise, your organization may face an existential threat.

 

Unifying Risk Management from Code to Cloud with Xygeni ASPM Security