Secure hardware: a high level introduction
4 min read


Recently there has been a lot of buzz around the topic of using secure hardware to govern and control IT systems, from AI to weapons systems. But how feasible is this approach, really? The topic is complex, and most online resources on secure hardware are either very specific and technical or too high level to be conclusive. I hope the following answers and sources help you as much as they helped me!

What are some different ways in which secure hardware can increase security?

Well, ultimately that depends on your goal and threat model. Let's assume for now that our goal is to limit the use of hardware which can train very large ML models. One can try the following high level methods:

  • Usage limitations (limiting use in clusters, limiting access to sensitive data, limiting the device to only run approved code)
  • A timed off switch, e.g. by counting clock cycles and self-destructing when the power is cut
  • Usage verification, to ensure only authorised parties have access
  • Location verification, to ensure only parties in certain regions have access
  • Secure boot, so the device only runs approved, signed code
  • Remote attestation, which requires the hardware to hold, and never leak, a private key (see the sketch after this list)
  • Attestation of device identity
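
To make the last two points more concrete, here is a minimal, hedged sketch of a challenge-response attestation flow. It assumes the device holds a non-exportable Ed25519 private key and that the verifier already trusts the matching public key; the key handling below is purely illustrative and not any specific vendor's API.

```python
# Minimal challenge-response attestation sketch (illustrative only).
# Assumes the device holds a non-exportable Ed25519 private key and the
# verifier already trusts the matching public key.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In real hardware this key never leaves the chip; generating it here is
# only for the sake of a runnable example.
device_key = Ed25519PrivateKey.generate()
device_pubkey = device_key.public_key()   # shared with the verifier in advance

# 1. Verifier sends a fresh random challenge (nonce) to prevent replay.
nonce = os.urandom(32)

# 2. Device signs the nonce with its embedded private key.
signature = device_key.sign(nonce)

# 3. Verifier checks the signature against the known device public key.
try:
    device_pubkey.verify(signature, nonce)
    print("Device identity attested")
except InvalidSignature:
    print("Attestation failed")
```

Real attestation protocols additionally sign over measurements of the software that is running, but the core trick is the same: prove possession of a key that (hopefully) cannot be extracted from the hardware.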

Why is making perfectly secure hardware so difficult?

Some of the above properties are easier to achieve than others. The fundamental problem with giving someone your hardware, however, is that they have physical access: attackers can take the device apart and inspect it at their leisure in order to find holes in the defenses. And there are countless examples of ingenious and scary breaches.

This is in stark contrast to cryptographic software defenses, which are designed to remain unbreakable against adversaries with any practically achievable amount of computing resources.

Which types of secure hardware components exist?

When you dive deep into the technical details, "secure hardware" turns out to be an umbrella term for a whole constellation of devices. In practice, however, software and hardware developers often make the following distinction:

  • Security modules. These may include their own dedicated processor and are responsible for handling private keys and performing other security-related functions. For an example, here is the specification of the widely used Trusted Platform Module (TPM).
  • Trusted Execution Environments (TEEs). These are isolated environments created within a processor that protect the code and data running inside them from being accessed or modified by other parts of the system.

"The key difference between security modules and TEEs is that TEEs create a protected environment on the main processor cores, whereas a security module is a separate lower-performance processor specialized for security related tasks"

Another key technical distinction is which type of key material the component uses. Some functionality can be unlocked by hardcoding a manufacturer's public key, while other functionality requires the device to store, and never leak, a private key. The latter allows for more powerful control capabilities, but is also harder to secure.
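
The public-key case is the simpler one: a secure-boot check, for example, only needs the manufacturer's public key baked into the chip, and nothing secret has to be protected on the device. A minimal sketch, with the image format and key handling invented for illustration:

```python
# Minimal secure-boot-style check: the device only needs the manufacturer's
# PUBLIC key (burned into ROM/fuses); nothing secret has to live on the chip.
# Illustrative sketch only; image format and key handling are made up.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Manufacturer side (happens once, in the factory / release pipeline).
signing_key = Ed25519PrivateKey.generate()
firmware_image = b"approved firmware build 1.2.3"
firmware_signature = signing_key.sign(firmware_image)

# Device side: only the public key is baked into the hardware.
baked_in_pubkey = signing_key.public_key()

def boot(image: bytes, signature: bytes) -> None:
    try:
        baked_in_pubkey.verify(signature, image)
    except InvalidSignature:
        raise RuntimeError("Refusing to boot: image not signed by manufacturer")
    print("Booting verified firmware")

boot(firmware_image, firmware_signature)
```

The private-key case (remote attestation, sketched earlier) is the mirror image: the secret has to live on the device itself, which is exactly what makes it harder to secure.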

Many organizations, for example Cubist, complain that the attack surface of TEEs is too big. Interestingly, recent cryptographic innovations may make it possible to provide TEE-like functionality with significantly less complexity. It is an interesting open problem what minimum functionality we can get away with to implement an extensive compute-monitoring plan, as outlined in Catching Chinchilla and Tools for Verifying Neural Models' Training Data.

What is the simplest technique we can use to secure hardware?

Even though it is really tough to prevent attacks, we can at least try to detect them, especially if the inspector has physical access. Some cool ideas for detecting attacks or security breaches involve tamper-evident seals and enclosures, and the strongest of these leads us to the next question.

What is the strongest technique we can use to secure hardware?

Perhaps the strongest and most impressive technique to have been developed recently is that of Physical Unclonable Functions (PUFs). In a nutshell: they allow a private key to be derived directly from a physical enclosure. If the chip is wrapped in an enclosure with fine-grained wiring whose physical properties directly influence the resulting key, then even an extremely delicate, unpermissioned entry attempt can be detected, wipe the existing key material, and render the secure hardware useless.

Or in other words: this is the Da Vinci Code in the modern world.
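
A rough sketch of the idea, with all the physics replaced by made-up numbers: the key is derived from fine-grained measurements of the enclosure, so any measurable change to the enclosure yields a different (and therefore useless) key.

```python
# Toy sketch of an enclosure-derived key (PUF-style). All measurement
# values are invented; real designs use dedicated sensing circuits plus
# error correction ("fuzzy extractors") to tolerate benign noise.
import hashlib

def derive_key(enclosure_measurements: list[float], precision: int = 2) -> bytes:
    # Quantize each measurement so small, benign fluctuations map to the
    # same value, then hash everything into a key.
    quantized = [round(m, precision) for m in enclosure_measurements]
    material = ",".join(f"{q:.2f}" for q in quantized).encode()
    return hashlib.sha256(material).digest()

# Pristine enclosure: the wiring mesh yields these (made-up) readings.
factory_readings = [1.04, 0.97, 1.12, 0.88]
key = derive_key(factory_readings)

# An attacker drilling even a tiny hole changes the readings...
tampered_readings = [1.04, 0.97, 1.31, 0.88]
tampered_key = derive_key(tampered_readings)

# ...so the original key simply no longer exists on the device.
print("Key survives tampering?", key == tampered_key)  # -> False
```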

Can we actually verify the location of hardware?

This recent paper has an in-depth discussion of how to verify whether chips are used within certain regions. The idea is as follows: if we place an auditing device at location A, then we can ensure that hardware B, which we want to control, is at most X kilometers away. How? By measuring the latency of communication between A and B. However, this comes with a couple of key assumptions:

  • B's private key is protected and does not leak to any attacker.
  • A and B communicate over a network which transmits signals at close to the speed of light.
  • A and B communicate over a network path which is a near-perfect straight line to the auditing device.

As soon as major imperfections are introduced into the network communication, verification over larger distances becomes difficult, because device A cannot tell whether the slow traffic is due to a faulty or indirect network connection, or because device B is in a different country than expected.
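
The underlying arithmetic is just a speed-of-light bound: a signed response that comes back after a round-trip time of RTT seconds can originate at most c·RTT/2 away. A hedged sketch follows; the response is assumed to be a signature over a fresh nonce (as in the attestation sketch earlier), so B cannot precompute it.

```python
# Speed-of-light distance bound from a measured round trip (sketch).
# Assumes the response is a signature over a fresh nonce, so B cannot
# precompute it; signing/verification itself is omitted here.
import os
import time

SPEED_OF_LIGHT_KM_PER_S = 299_792.458

def max_distance_km(rtt_seconds: float) -> float:
    # The signal had to travel to B and back, so B is at most half the
    # round-trip distance away, even on a perfectly straight path.
    return SPEED_OF_LIGHT_KM_PER_S * rtt_seconds / 2

def measure_rtt(send_challenge_and_wait) -> float:
    nonce = os.urandom(32)
    start = time.perf_counter()
    send_challenge_and_wait(nonce)   # B must sign and return the nonce
    return time.perf_counter() - start

# Example: a 20 ms round trip bounds B to within roughly 3000 km of A.
print(f"Upper bound: {max_distance_km(0.020):.0f} km")
```

Note that slower-than-light fiber and indirect routing only make the true distance smaller than this bound; the problem described above is the reverse one, that A cannot distinguish "far away" from "nearby but on a slow connection".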

What are different ways in which we can invest in and improve the security of secure hardware?

Based on the above sources and information, one cannot emphasize enough that no hardware-based solution is perfectly secure. However, the security guarantees they offer are not completely broken either. Even for products used by millions of people, such as Apple's iPhone, it can take many years for successful attacks to be published.

If we want secure hardware to increase in usage, we can:

  • Throw money at researchers and hope they invent new cryptographic techniques. So far, the last decades have seen some awesome innovations, such as Physical Unclonable Functions.
  • Invent and optimize manufacturing processes which ensure that breaching one device is only of limited utility for breaching another device.
  • Invest in the analysis and even formal verification of existing secure hardware solutions.

Moreover, I would also advise limiting their use to relatively small, highly concrete use cases, in order to grow our capabilities and our understanding of their opportunities and limitations.

Further reading

Papers

Truly a fantastic read: Obermaier and Immler, “The Past, Present, and Future of Physical Security Enclosures.”

https://www.cnas.org/publications/reports/secure-governable-chips

https://www.rand.org/pubs/working_papers/WRA3056-1.html

Rants

https://collective.flashbots.net/t/debunking-tee-fud-a-brief-defense-of-the-use-of-tees-in-crypto/2931

https://gist.github.com/osy/45e612345376a65c56d0678834535166

Wiki

https://en.wikipedia.org/wiki/Trusted_execution_environment

https://en.wikipedia.org/wiki/Hardware_security_module