By Paulo Garcia
This article was originally published by The Conversation in 2018.
A few weeks ago, Bloomberg reported that China was spying on American tech firms, including Apple and Amazon, by installing secret microchips on server boards during the production process. Like the Greek horse used to sneak soldiers into Troy, these hardware trojans are designed to appear harmless while secretly performing malicious operations.
The named tech firms have denied this report; at the moment, we have no way of knowing who is right. If true, this is potentially the greatest malicious hardware security breach we’ve seen. If not true, well… there are still enough hardware security vulnerabilities to go around.
Earlier this year, the Spectre/Meltdown bug was disclosed. This security vulnerability affects virtually every processor, from those powering consumer computers to company servers, and it allows malicious code to access potentially confidential information. This was a fault in the hardware design: software patches (updates intended to correct the fault) were made available soon after, officially with negligible performance impact (in practice, it’s not really negligible).
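The core idea can be sketched in a toy Python model. This is purely illustrative, not a working exploit: real Spectre attacks abuse speculative execution and cache timing in actual hardware, while here the "cache" is just a set and all names and values are invented.

```python
# Toy model of a Spectre-style leak (illustration only; real Spectre
# exploits speculative execution and cache timing in real hardware).

SECRET = b"top-secret"
public_data = b"harmless"

cache = set()  # byte values "warmed" into our pretend cache

def speculative_read(index):
    """Models a CPU racing past a bounds check speculatively.

    Architecturally the out-of-bounds result is discarded, but the
    cache line it touched stays warm -- that's the side channel.
    """
    memory = public_data + SECRET   # the secret sits just past the bounds
    value = memory[index]           # speculative out-of-bounds read
    cache.add(value)                # side effect survives the rollback
    return None                     # the result itself is "squashed"

def attacker_probe():
    """Recover the secret by checking which byte values are cache-warm."""
    return bytes(b for b in range(256) if b in cache)

# The "victim" is tricked into speculatively reading past its own data:
for i in range(len(public_data), len(public_data) + len(SECRET)):
    speculative_read(i)

leaked = attacker_probe()
print(sorted(leaked) == sorted(set(SECRET)))  # attacker recovered the secret bytes
```

The point of the sketch is that the discarded read still leaves a trace an attacker can measure; patching that in software, after the hardware has shipped, is what costs performance.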
But this doesn’t affect you directly, apart from a slightly slower computer… or does it?
Microprocessors are everywhere
The average person interacts with scores of microprocessors every day. This does not include the servers and internet routers that process your email and social media: think closer to home. You likely have a smartphone and a personal computer or tablet.
Perhaps an Amazon Echo or another smart speaker? An electronic doorbell or intercom? Your car alone, if it’s less than ten years old, has dozens of processors responsible for everything from controlling the radio to acting on the brakes. A Spectre/Meltdown-like bug in your car’s brakes is a frightening thought.
These bugs occur because hardware design is hard. As part of my research, I’ve had to design and implement processors. Making them work is challenging enough, but ensuring they are secure? Exponentially harder.
Some might remember that in 1994, Intel had to recall a line of buggy processors, costing them millions of dollars. This was a case where the best chip designers in the world produced a flawed chip. Not a security vulnerability, just an incorrect result on some operations.
This is much easier to detect and correct than a security vulnerability, which is often incredibly nuanced; those interested in reading more about the Spectre/Meltdown exploit will see it’s a very, very sophisticated attack. Last year, a cybersecurity researcher found several undocumented instructions on an Intel i7 processor. Instructions are the atomic operations a processor can perform: for example, adding two numbers, or moving data from one place to another. Every program you run likely executes thousands or millions of instructions. The discovered instructions are not disclosed in the official manual, and for some, their exact behaviour remains unclear.
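To make the idea of an undocumented instruction concrete, here is a toy "processor" sketched in Python. The opcodes, register model and behaviour are entirely invented for illustration; they have nothing to do with Intel's real instruction set.

```python
# A toy "processor" with a two-instruction documented ISA -- and one
# opcode that works but never appears in the manual. Everything here
# (opcodes, behaviour) is invented for illustration.

DOCUMENTED = {
    0x01: "ADD  reg[a] += reg[b]",
    0x02: "MOV  reg[a]  = reg[b]",
}

def execute(opcode, regs, a, b):
    if opcode == 0x01:        # ADD: in the manual
        regs[a] += regs[b]
    elif opcode == 0x02:      # MOV: in the manual
        regs[a] = regs[b]
    elif opcode == 0x0F:      # undocumented: silently swaps two registers
        regs[a], regs[b] = regs[b], regs[a]
    else:
        raise ValueError("illegal instruction")
    return regs

regs = execute(0x01, [2, 3], 0, 1)   # documented ADD -> [5, 3]
regs = execute(0x0F, regs, 0, 1)     # executes fine, yet 0x0F is not in the manual
print(regs, 0x0F in DOCUMENTED)      # [3, 5] False
```

A researcher probing real silicon faces the same gap the last line shows: the chip accepts and executes an opcode that the documentation claims does not exist.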
The processor you own and use can do things the vendor doesn’t tell you about. But is this a documentation issue? A genuine design flaw? An intellectual property secret? We don’t know, but it is likely another security vulnerability waiting to be exploited.
The vulnerabilities of hardware
Why is hardware so fundamentally unsafe? For one, security is an aspect that is often overlooked in an engineering education, across the spectrum from hardware to software. There are so many tools, concepts and paradigms that students must learn that there is little time to include security considerations in the curriculum; graduates are expected to learn on the job.
The side effect is that across many industries, security is considered the cherry on the cake rather than a fundamental ingredient. This is, fortunately, beginning to change: cybersecurity programs are popping up across universities, and we are getting better at training security-conscious engineers.
A second reason is complexity. Companies that actually fabricate chips don’t necessarily design them from scratch, as the building blocks are bought from third parties. For example, until recently, Apple bought designs for the graphics processor on iPhones from Imagination Technologies. (They’ve since moved to in-house designs). Ideally, specifications perfectly match the design. In reality, undocumented or erroneously documented features across different building blocks may interact in subtle ways to produce security loopholes that attackers might exploit.
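A hypothetical sketch of such a loophole: two blocks, each "correct" against its own documentation, compose into a system where neither one actually validates addresses. The components, names and memory layout below are invented for illustration.

```python
# Hypothetical cross-component loophole: each block matches its own
# (mismatched) documentation, yet composing them lets a caller read
# memory it shouldn't. All names and values here are invented.

SECRET = [42, 99]
RAM = [1, 2, 3, 4] + SECRET   # the secret sits just past the public region

def bus_interface(addr):
    """Third-party block. Its datasheet says: 'callers validate addresses.'"""
    return RAM[addr]           # so it performs no bounds check itself

def peripheral_read(addr):
    """In-house block. Its spec (wrongly) says the bus validates addresses."""
    return bus_interface(addr)  # so it doesn't check either

print(peripheral_read(1))   # 2  -- intended, in-bounds use
print(peripheral_read(4))   # 42 -- the secret leaks; neither block checked
```

Each engineer did their job against the documentation they were given; the vulnerability lives in the seam between the two documents.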
Unlike in software, these weak points have long-lasting effects and are not easily corrected. Many researchers are working to solve these problems: from techniques for verifying that designs match specifications, to automated tools that analyze interactions across components and validate behaviour.
A third reason is economies of scale. From the business perspective, there are only two games in town: performance and power consumption. The fastest processor and the longest battery life win the market. From the engineering perspective, most optimizations are harmful to security.
This has long been a problem in safety-critical real-time systems (think autonomous cars, aeroplanes and so on), where how long something takes to execute is critical. Current processors are designed to execute as quickly as possible most of the time, but will occasionally take lengthy periods; predicting how long something will take is incredibly challenging. We do know how to design predictable processors, but virtually none are commercially available. There’s little money to be made.
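A toy cache model shows why the same operation can take wildly different amounts of time. The "cycle" costs and cache size below are invented; real caches are far more complex, but the asymmetry between hits and misses is the real phenomenon.

```python
# Toy cache model: the same memory access costs 1 "cycle" on a cache
# hit and 100 on a miss. Costs and capacity are invented numbers.

HIT, MISS = 1, 100

class ToyCache:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.lines = []          # least-recently-used line at the front

    def access(self, addr):
        if addr in self.lines:   # hit: cheap and fast
            self.lines.remove(addr)
            self.lines.append(addr)
            return HIT
        if len(self.lines) == self.capacity:
            self.lines.pop(0)    # evict the least-recently-used line
        self.lines.append(addr)
        return MISS

cache = ToyCache()
warm = [cache.access(a) for a in [0, 1, 0, 1]]   # mostly hits after warm-up
cold = [cache.access(a) for a in [5, 6, 7, 8]]   # all misses
print(sum(warm), sum(cold))   # same number of accesses, wildly different cost
```

Four accesses cost 202 toy cycles in one case and 400 in the other; a designer who must guarantee the brakes respond within a deadline has to budget for the worst case, which defeats the very optimizations that win benchmarks.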
Changing focus on cybersecurity
In the long term, the same won’t hold true for security. As the age of the Internet of Things dawns on us, and the number of processors in households, vehicles and infrastructure continues to increase, companies will undoubtedly move towards security-conscious hardware.
Better-trained engineers, better tools and more motivation for security — when a stamp of security quality means you sell more than your competitors — will push for good cybersecurity at all levels.
Until then? Maybe foreign countries have interfered with your hardware, maybe not; regardless, don’t trust it. That pesky update notification that keeps popping up? Update. Buying a new device? Check the manufacturer’s security record. Complex advice on choosing good passwords? Listen. We’re trying to protect you.
This article is republished from The Conversation under a Creative Commons license. Carleton University is a member of this digital journalism platform, launched in June 2017 to boost the visibility of Canada’s academic faculty and researchers.
Thursday, November 1, 2018 in The Conversation