Currently, integrity checking is the standard methodology for firmware security validation and threat detection. This article details the scenarios where firmware integrity checking is necessary but insufficient from the threat analysis and incident response perspective.
The typical update cycle relies on digital-signature and integrity validation of the firmware image, which frequently fails when implemented insecurely (see “Betraying the BIOS: Where the Guardians of the BIOS are Failing”). This validation authenticates the source of an update capsule; however, it does not assess what is actually inside the package. Integrity checks in firmware monitoring solutions are the accepted practice today and typically focus on the update package itself and on particular components within it.
When the integrity of a firmware image, or of a module inside it, is broken, the root cause of the problem is not evident to incident responders.
The whitelisting/blacklisting approach is still the industry best practice for firmware monitoring solutions. These lists provide a necessary baseline for detecting anomalies at the firmware image and module level; the Intel Chipsec blacklist is a good example. The approach has limitations, however. Blacklisting is not a proactive way of detecting new types of malicious activity, because it is based on knowledge of previously discovered threats. And both blacklisting and whitelisting a single component hash from a firmware image create, by design, visibility gaps into that component's functionality.
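To make that limitation concrete, here is a minimal sketch of hash-based blacklist matching (the module bytes and threat label are invented for illustration): a hash lookup only matches bytes it has seen before, so any variant of the same threat slips through.

```python
import hashlib


def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


# Hypothetical blacklist keyed by module hash; the entry is purely illustrative.
known_bad = b"\xde\xad-malicious-dxe-driver-bytes"
BLACKLIST = {sha256(known_bad): "example malicious DXE driver"}


def check_module(module_bytes: bytes):
    """Return a threat label if the module hash is blacklisted, else None."""
    return BLACKLIST.get(sha256(module_bytes))


# An exact byte-for-byte match is flagged; a recompiled variant of the
# same threat hashes differently and is missed entirely.
assert check_module(known_bad) is not None
assert check_module(b"recompiled variant of the same threat") is None
```

The same logic applies in reverse to whitelists: any benign rebuild of a known-good module falls off the list and generates noise instead of signal.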
Integrity checks and blacklisting/whitelisting approaches are still useful; however, they are simply not enough to empower Incident Response and Security Operations Center teams.
The supply chain path for both firmware and hardware is very complex. This year in particular has generated multiple discussions around software supply chain security, indicating that the industry acknowledges the need for more comprehensive solutions. The primary requirements are to analyze both vendor firmware updates and firmware snapshots extracted directly from physical devices.
In the hardware world, it is difficult for a single vendor to control every piece of firmware across a hardware platform. Hardware complexity has increased with the diversity of supported features and components. A good example of this complexity is the supply chain risk diagram in Figure 1, created by NIST, which shows clearly how many different parties can be involved in the production and manufacturing of a single device.
Figure 1: Supply Chain Assurance | NCCoE (nist.gov)
Each block of this diagram is a different component supplier or third-party company providing firmware. The firmware and hardware supply chain is complicated: a hardware component and its corresponding firmware are typically not developed by a single vendor.
All these supplier blocks create multiple points of failure where a threat actor can introduce a malicious firmware implant or alter the platform security configuration. The most critical point of failure (highlighted by the red dot) is when the device reaches the end customer without passing any additional security provisioning before being deployed into the company infrastructure.
We blindly trust everything that is signed, as long as it comes from a trusted source.
Any type of integrity verification is static by design, because it is based on a snapshot of verifiable artifacts provided by the hardware or firmware vendors. These artifacts are used to verify the trust in the original source of a firmware update or the authenticity of the extracted firmware. This begs the question: who watches the watchers? Can the source of the provided artifacts be compromised? The short answer is probably yes, but it is more complicated than it looks. The whole integrity approach has one significant limitation: it does not provide visibility into where exactly the failure happened.
Another reason integrity checks and hashes are blind is their limited ability to detect known vulnerabilities. A simple module recompilation changes the hash and creates a blind spot for exactly the same vulnerable code. Distributing blacklists of known-vulnerable component hashes is useful only for widely distributed libraries shipped as pre-compiled binaries. This is especially important for system firmware, which NIST classifies as critical software.
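A toy illustration of that blind spot (the build contents are invented): two builds of identical vulnerable code that differ only in an embedded build timestamp produce different hashes, so a blacklist entry for one build never matches the other, even though both carry the same flaw.

```python
import hashlib


def build_image(vulnerable_code: bytes, build_timestamp: str) -> bytes:
    # Toy "build" step: the code is byte-identical across builds;
    # only the embedded timestamp metadata differs.
    return vulnerable_code + build_timestamp.encode()


# The same vulnerable logic is present in both builds.
code = b"memcpy(dst, src, attacker_controlled_len);"
build_a = build_image(code, "2021-01-01")
build_b = build_image(code, "2021-06-01")

hash_a = hashlib.sha256(build_a).hexdigest()
hash_b = hashlib.sha256(build_b).hexdigest()

# A blacklist entry for hash_a will never flag build_b.
assert hash_a != hash_b
```

Real toolchains embed timestamps, paths, and compiler versions in exactly this way, which is why hash-based vulnerability matching breaks down for per-vendor firmware builds.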
A Software Bill of Materials (SBOM) for firmware components becomes critically important for better visibility and transparency in the firmware supply chain. The recent presentation “DHS CISA Strategy to Fix Vulnerabilities Below the OS Among Worst Offenders” highlights the importance of code analysis capabilities to verify vendor code claims.
Figure 2: Roots of Trust and Attestation | Trammell Hudson (media.hardwear.io/roots-of-trust-and-attestation)
The firmware SBOM should be provided by the vendor; however, its consistency must be verified at the binary level, against the actual implementation, to make sure the claims and the shipped code are aligned.
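Such a check could be sketched as a cross-reference between the vendor's SBOM claims and the modules actually extracted from a firmware image. This is a hypothetical sketch: the SBOM schema (`name`/`sha256` fields) and module names are invented, and a real verifier would also parse the image format and compare code-level properties, not just hashes.

```python
import hashlib


def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def verify_sbom(sbom, extracted_modules):
    """Cross-check SBOM claims against modules extracted from the image.

    sbom: list of {"name": ..., "sha256": ...} entries (hypothetical schema).
    extracted_modules: mapping of module name -> raw bytes from the image.
    Returns a list of human-readable discrepancies.
    """
    findings = []
    declared = {entry["name"] for entry in sbom}
    for entry in sbom:
        module = extracted_modules.get(entry["name"])
        if module is None:
            findings.append(entry["name"] + ": declared in SBOM but missing from image")
        elif sha256(module) != entry["sha256"]:
            findings.append(entry["name"] + ": binary does not match SBOM hash")
    for name in extracted_modules:
        if name not in declared:
            findings.append(name + ": present in image but undeclared in SBOM")
    return findings
```

A clean image yields an empty findings list; an undeclared implant or a silently modified module shows up as a discrepancy that an incident responder can act on.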
The most important takeaway from this blog post is that we need tools to verify the SBOM at the binary level; responding to alerts without code-level visibility is not enough.
At the upcoming Black Hat USA 2021 conference in Las Vegas, the hardware/embedded track echoes the topics discussed in this post. One talk in particular, “Safeguarding UEFI Ecosystem: Firmware Supply Chain is Hard(coded),” promises to highlight the problems in the supply chain and the complexity of the ecosystem in general. Join the conversation: Alex Matrosov will be one of the presenters detailing the problems and solutions in protecting firmware supply chains.