As I discussed previously, the auto industry and the Department of Transportation (DoT), via the National Highway Traffic Safety Administration (NHTSA), plan to mandate that every new car include a technology called “Dedicated Short Range Communication” (DSRC), a device that talks to every other car with a DSRC unit (something called “vehicle-2-vehicle” or “v2v” communication). The auto industry fully supports this mandate, which is surprising (since industries rarely like mandates) until you (a) read this report by Michael Calabrese showing how the auto industry hopes to monetize this with new services and by harvesting your personal information (while piously claiming the mantle of saving lives); and (b) realize the mandate helps DoT and the auto industry avoid sharing the spectrum with potential unlicensed uses (which actually do contribute to saving lives, but I will save that for later).
As it happens, in addition to being a full time spectrum nut, I spend a fair amount of time these days on privacy, with just a touch of cybersecurity. So I started to dig into the privacy and cybersecurity implications of mandating DSRC on every car. My conclusion, as I discuss below, is that the DSRC mandate as it now stands is a disaster for both cybersecurity in cars and for privacy.
Yes, NHTSA addresses both privacy and cybersecurity in its 2014 Research Report on DSRC, in terms of evaluating potential risks, and solicited comment on these issues in its “Advance Notice of Proposed Rulemaking” (ANPRM). It is in no small part from reading these documents that I conclude that either:
(a) NHTSA does not know what it is talking about; or,
(b) NHTSA does not actually care about privacy and cybersecurity; or,
(c) NHTSA is much more interested in helping the auto industry spectrum squat and doesn’t care if doing so actually makes people less safe; or,
(d) Some combination of all of the above.
As for the auto industry and its commitment to privacy and cybersecurity, I will simply refer to this report from Senator Markey issued in February 2015 (and utterly unrelated to DSRC), which found that (a) the auto industry remained extremely vulnerable to cyberattacks and infiltration by hackers; (b) it had no organized capability to deal with this threat; and (c) it routinely collected all kinds of information from cars without following basic notice obligations, providing meaningful opt out, or adequately protecting the information collected. (You can read this article summing up the report rather nicely.) For those who think the auto industry has no doubt improved in the last year, I refer you to this PSA from the FBI issued in March 2016 on the vulnerabilities of cars to hacking.
I note that these remain problems regardless of whether the FCC permits sharing in the band, although it does call into question why anyone would mandate DSRC rather than rely on the much more secure and privacy friendly technologies already on the market — like car radar and LIDAR systems. But if the auto industry and NHTSA insist on making us less safe by mandating DSRC, the FCC is going to need to impose some serious service rules on the spectrum to protect cybersecurity and privacy the way they did with location data for mobile 911.
And, just to make things even more exciting, as explained in last week’s letter from the auto industry, GM is rushing out a pre-standard DSRC unit in its 2017 model cars. Because which is more important? Creating facts on the ground to help the auto industry squat on the spectrum, or making sure that DSRC units installed in cars are actually secure? Based on past history of the auto industry in the cybersecurity space, this is not a hard decision. For GM, at least, spectrum squatting rules, cybersecurity drools.
On the plus side, if you ever wanted to live through a cool science fiction scenario where all the cars on the highway get turned into homicidal killing machines by some mad hacker baddy, the NHTSA mandate for DSRC makes that a much more likely reality. In fact, it’s kinda like this Doctor Who episode. And let’s face it, who wouldn’t want to drive in a car controlled by Sontarans? So, trade offs.
I explain all this in detail below . . . .
Let’s start with the increased cybersecurity vulnerability, and why the DSRC mandate makes this increased vulnerability particularly dangerous. Then we’ll move on to privacy.
How DSRC Makes Our Cars Ripe For Conversion Into Zombie Killing Machines.
As even NHTSA recognized in their 2014 report, when you open a new way to input data into the car, you create a new cybersecurity risk. Nor is NHTSA the only one to make this point. One analyst sharply critical of DSRC, a fellow named Roger Lanctot over at Strategy Analytics, similarly observed that — despite the cybersecurity recommendations in the 2014 NHTSA Report — “the massive security vulnerability created by the implementation of V2V is beyond the capacity of car companies today to cope.” (Lanctot’s blog post makes a lot of other good points about why NHTSA mandating DSRC for every car would be a major, major mistake. But I’ll save these for another time.)
NHTSA’s 2014 Report reviewed the existing V2V security system and noted that it relied primarily on public key infrastructure (PKI). NHTSA’s 2014 Report recommends overlaying this with a set of additional management functions and certificate authorities (which ensure that the source of the communication and/or destination are authorized) which would control different functions in the DSRC system, all managed by a unified security credentials management system (SCMS). It also requires encrypting all the wireless traffic, so that communications between the cars, or between cars and stationary units, can’t be intercepted and read by unauthorized third parties.
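To make the “trusted sender” model concrete, here is a toy sketch of how a security credentials management system authorizes devices. To be clear, this is purely illustrative: the real SCMS contemplated by NHTSA uses asymmetric (ECDSA) certificates and a far more elaborate hierarchy of certificate authorities, while this sketch substitutes symmetric HMAC keys and made-up device names just to show the enroll/verify/revoke idea.

```python
import hmac
import hashlib
import os

# Toy stand-in for the Security Credentials Management System (SCMS)
# described in NHTSA's 2014 Report. A real SCMS issues asymmetric
# (ECDSA) certificates; this sketch uses symmetric HMAC keys purely
# to illustrate the "only enrolled devices are trusted" model.
class ToySCMS:
    def __init__(self):
        self._keys = {}  # device_id -> secret credential

    def enroll(self, device_id):
        """Issue a credential to a device (e.g. a car's DSRC unit)."""
        key = os.urandom(32)
        self._keys[device_id] = key
        return key

    def revoke(self, device_id):
        """Drop a compromised device from the trust system."""
        self._keys.pop(device_id, None)

    def verify(self, device_id, payload, tag):
        """Check the message came from an enrolled, unrevoked device."""
        key = self._keys.get(device_id)
        if key is None:
            return False
        expected = hmac.new(key, payload, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

def sign(key, payload):
    """What the sending car does before broadcasting a message."""
    return hmac.new(key, payload, hashlib.sha256).digest()

scms = ToySCMS()
key = scms.enroll("car-123")
msg = b"hard braking ahead"
tag = sign(key, msg)
assert scms.verify("car-123", msg, tag)      # authorized sender accepted
assert not scms.verify("car-999", msg, tag)  # unknown device rejected
```

Note the design choice this model bakes in: trust attaches to the *device credential*, not to anything about the message itself, which is exactly the property that matters for the discussion below.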
All of this is, of course, a high level description with details and implementation to be worked out in the ANPRM that accompanied the 2014 Report.
The Security v. Usability & Utility Trade Off.
We now get into the trade offs in security v. usability & utility. The most secure system is totally disconnected from anything else, which avoids adding the complexity of cooperation between entities outside the control of the system operator and reduces the points of entry into the system. But a system that only talks to itself is limited in many ways: it can’t get information easily, it can’t transmit information easily, it lacks utility for broader purposes, and it significantly reduces the number of people capable of using it.
Additionally, the smaller the number of possible inputs the system will accept, the harder it is for someone to use the interface with the outside world as a way to hack the system. The system just isn’t designed to accept code that could impact the operation of the system through the public facing interface.
As an example, consider your bank’s ATM network. You have restricted access, because you have to go to the bank machine. The bank machine does not communicate with the public Internet, only with the ATM network and the member banks through the ATM network. You cannot easily push malware or a virus through the ATM interface that is exposed to the public.
But even this level of security has problems that must be watched. The more banks and public locations (like kiosks in malls) that have ATMs, the more nodes in the network are available for bad actors to try to penetrate. The more banks or ATM networks join the common network, the greater the coordination needed for the network and the ATMs and the banks to perform their functions securely. Finally, there is always the danger of the human element. The more people who have exposure to the inside of the network, either as employees of one of the network operators, or because they have physical access to an ATM for maintenance, the more people who can act as “moles” and deliberately insert malware directly into the system as a trusted source.
Of course, multilayer security addresses that as well, but it does not come cheap and it needs to be embedded as a value in the institution. It also needs embedding in the network from the beginning. ATM networks came out of very security conscious institutions (banks). Security was embedded in the network from the very beginning.
But there is also a trade off for the lack of utility. An ATM doesn’t lend itself to lots of other services that banks like to offer. Not only would people shopping for home loans on the ATM tie up the line for the machine, but it would require significantly expanding both the number of inputs into the system and the number of other systems with which the network would need to interconnect. Expand the utility and you expand the risk.
A further constraint on strong security is whether it is usable by people and whether the security itself interferes with the ability of the system to function effectively. Think of all those systems you encounter that want multiple passwords with some combination of numbers and letters, that purge your password every 60 days, and that will not let you repeat a password — meaning you have to come up with a new complex password every two months. This drives people crazy and makes it very hard to use the system. I have heard horror stories from my wife, who works in medical informatics, of nurses and doctors in hospitals putting the new password to the drug dispensing machines on a post-it note next to the machine because the staff were having so much trouble keeping track of the constantly changing password. The functions meant to enhance security so undermined the usability of the system that people actually hacked their own security so they could get what they needed out of the system.
Additionally, processing multiple certificates or screening inputs can take time. As the DSRC folks constantly remind us, DSRC is supposed to respond with lightning swiftness to avoid potentially fatal accidents. NHTSA’s 2014 Report notes the problem of processing multiple certificates in terms of time. Additionally, the more traffic you need to send over the network to establish identity and security, and the more frequently you need to reestablish the links, the greater the energy drain (in this case, from your car’s battery) and the less capacity you have to transmit the actual content you are trying to protect.
Finally, there is the problem of the environment in which the secured function is operating. Let’s go back to banks as an example. Banks, like every other business, wanted to expand to online services. The easier you make it for customers to use your services at home 24/7 through their home broadband connection, or anywhere using their mobile device, the happier they are. No more going to the bank or finding an ATM, now I can deposit a check using my cell phone! For banks, providing that convenience is a huge competitive advantage. Additionally, if I am thinking of refinancing my house or applying for a credit card, I’m much more likely to use my bank if I can easily access the bank from the comfort of my own home when I happen to have the urge.
But now the bank has new headaches on security. It is no longer riding on a network designed for security by paranoid security freaks who control access and every function of the network. It is riding on the public Internet — a general purpose network with no one in charge. Worse, the banking application sits on the customer laptop or mobile device. Who the heck knows where that browser has been or what viruses or malware that device has picked up. How do I, the bank, know if the machine contacting me is authorized? Or that the user showing up electronically as “John Smith” with all the right passwords isn’t an identity thief with a stolen set of data? Finally, to take advantage of the flexibility of the network in the way the customer wants, my online banking system needs to take a lot more types of inputs than my ATM.
Again, banks are not stupid and work to solve the problem. But as the problem becomes more complicated, it takes more complicated solutions and greater cooperation among the network participants. At every new level of complexity, it becomes easier to accidentally leave open holes, or create glitches between different systems trying to work together, that a skilled hacker can exploit.
The Auto Industry Has Proven To Be Really, Really Bad At Cybersecurity.
As noted, banks are very good at being paranoid security freaks, and even they have significant problems and incur huge expenses trying to keep their systems secure. As demonstrated by several recent hack attacks on cars, and as summed up in the Markey Report last year, the auto industry absolutely sucks at this. The auto industry has zero history of the kind of security expertise, security culture or coordination necessary to make this happen effectively. NHTSA’s 2014 theoretical concept sure does look pretty, but no one outside the auto industry thinks the auto industry is in any way shape or form prepared to pull it off.
Until now, according to this fairly good piece on the risks of car hacking to date from Washington Post, what has saved us from a mass attack of viruses and malware in our cars is that you can only hack cars one at a time. “They haven’t been able to weaponize it. They haven’t been able to package it yet so that it’s easily exploitable,” said John Ellis, a former global technologist for Ford. “You can do it on a one-car basis. You can’t yet do it on a 100,000-car basis.”
Fortunately for the hackers, but unfortunately for us, DSRC changes all that and makes it easy for hackers to infect thousands of cars, even millions of cars, simultaneously.
How DSRC Makes It Easy For Hackers To Infect Lots and Lots of Cars Through One Attack.
DSRC, to do its job, must communicate from car to car. NHTSA’s proposed security system relies on making sure that the device or entity sending the transmission — whether a software update, location data, engine data, whatever — is authorized to send the information. It does not look at the information contents, and screening those inputs would add a new layer of complexity, cost and potential for screw ups to the system.
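The gap described above is easy to see in a sketch: the receiver checks *who* sent a message, never *what* it contains. The names, message formats, and keying scheme here are all made up for illustration (real DSRC security uses certificate-based signatures, not shared HMAC keys), but the structural point carries over.

```python
import hmac
import hashlib
import os

# A single credentialed device, e.g. a properly certified DSRC unit.
# (Device name and key scheme are invented for this sketch.)
TRUSTED_KEYS = {"dsrc-unit-7": os.urandom(32)}

def accept(sender, payload, tag):
    """Receiver-side check, modeled on a sender-authorization design:
    verify the credential, then trust the payload unconditionally."""
    key = TRUSTED_KEYS.get(sender)
    if key is None:
        return False
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    # Note what is missing: no inspection of the payload itself.
    return hmac.compare_digest(expected, tag)

key = TRUSTED_KEYS["dsrc-unit-7"]
benign = b"update: map data v42"
malicious = b"update: payload that re-broadcasts itself over DSRC"
for payload in (benign, malicious):
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    assert accept("dsrc-unit-7", payload, tag)  # both pass equally
```

Both messages clear the check, because the check was never designed to distinguish them — screening contents would add exactly the layer of complexity and delay the system is trying to avoid.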
Furthermore, for DSRC to do what it is supposed to do, it must interact with car systems. This is true even if DSRC is simply in “warning” mode and does not do things like autobrake or turn the steering wheel to avoid an accident. For example, NHTSA’s 2014 Report recommends that, to provide other cars with alerts that the driver ahead or across the intersection or whatever intends to make a left turn without signalling, the DSRC system should monitor things like angle and pressure on the steering wheel to make an advance prediction that the car will turn left, even if it is not signalling a left turn. Even if the car is signalling, the signalling system needs to communicate that information to the DSRC system (in case the driver of the other car does not see the signal). That requires an input/output into the car’s operating system.
Similarly, for DSRC to convey the information to the driver, even in warning mode, it must interact with the car’s dashboard display system. If DSRC is to replicate the autobraking and other safety features already incorporated with car radar and LIDAR systems, it will need to issue override commands directly to the engine and braking and steering systems.
And, of course, most importantly, DSRC talks to every single car it passes with another DSRC unit. That’s the entire point of DSRC, to enable communication between cars and pass on information.
An Intel report from last year found 14 different ways a hacker can get into the operating system of the automobile. The systems don’t have to be obviously related to the engine or other critical operating system (last year’s famous remote hack of a 2014 Jeep Cherokee came in through the satellite radio receiver). Once an Internet worm or other malware infects any DSRC connected vehicle, it has the capacity to infect all connected vehicles. For you Internet old timers, this should remind you of the famed Morris Worm back in 1988, when the Internet was beginning and everyone trusted each other.
Finally, a “DSRC worm” can always enter through the inside by someone with the right certificate. One of the most successful attacks on multiple cars to date originated from a hacker in a service garage who decided to install infected hardware in cars that came in for an upgrade. That attack impacted about 200 cars simultaneously. Fortunately, it was limited to the prank of turning on headlights, car alarms, and blaring radios in the middle of the night.
Imagine the same kind of rogue technician infecting a car not only with malware to give the hacker access, but with a DSRC worm that would cause the malware to spread from car to car. The NHTSA proposed security system does not address this, even as a concept, because it assumes that if a signal originates with a properly certified device, it is safe. Again, as noted above, DSRC systems must assume transmissions from other DSRC systems are reliable when properly authenticated in order to function with the speed and reliability needed for effective life and safety uses. It must communicate with other car systems in order to perform the functions NHTSA says make DSRC actually worth doing.
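A back-of-the-envelope simulation shows why car-to-car trust changes the math of an attack. The fleet size, contact rate, and seeding scenario below are all invented for illustration; the only assumption carried over from the discussion above is that any credentialed v2v message is trusted, so each infected car can infect every car it talks to.

```python
import random

# Toy simulation of worm spread through v2v contact, assuming (as in
# the sender-authorization model) that every properly credentialed
# message is trusted. Numbers are made up for illustration.
def simulate_worm(fleet_size=10_000, contacts_per_day=10, days=8, seed=1):
    random.seed(seed)
    infected = {0}  # patient zero: e.g. a unit compromised in a garage
    history = []
    for _ in range(days):
        newly_contacted = set()
        for _car in infected:
            for _ in range(contacts_per_day):
                # each infected car passes a trusted (but malicious)
                # message to a random car it encounters on the road
                newly_contacted.add(random.randrange(fleet_size))
        infected |= newly_contacted
        history.append(len(infected))
    return history

counts = simulate_worm()
print(counts)  # one entry per day; growth is roughly exponential
               # until the fleet saturates
```

Contrast this with the one-car-at-a-time hacks described in the Washington Post piece: without a trusted car-to-car channel, the first day’s count is also the last day’s count.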
Furthermore, because NHTSA wants to encourage the auto industry to get creative with safety applications, and leave space for the auto industry to monetize the spectrum by streaming entertainment or harvesting your personal data, NHTSA is not limiting the kind of inputs or nature of traffic that can be transmitted over DSRC. To allow automakers to monetize this with advertisements or entertainment services, NHTSA must allow DSRC to hook into any system and pass along any type of data to any other car or other authorized devices. While NHTSA provides for a minimum of functions, it is deliberately trying to make this a more open and generic platform. Which, if you will recall our previous security v. utility tradeoff, opens the system to many more security risks than it would have if DSRC had a limited set of inputs it would accept and functions it could perform. For the system to remain flexible and evolve — especially as a profit center for the auto industry — DSRC must permit the additional security risk of penetration, and subsequent spread to all connected cars using DSRC.
But wait! It gets worse! The desire of the auto industry and NHTSA to squat on the 5.9 GHz spectrum is leading to NHTSA accelerating the roll out, and apparently O.K.ing a “Pre-Security Standards” release of DSRC by GM this fall.
GM Is Releasing A “Pre-Security Standards” DSRC Unit, Locking In Existing Security Holes For Future Iterations of DSRC.
In order to create “facts on the ground,” GM is rushing out its first-ever DSRC-connected car this fall, before NHTSA publishes its rules or finishes developing its cybersecurity standards. Remember, in the 2014 ANPRM, NHTSA only started the process, and said it would take solicitations for the relevant certificate authorities that would ensure standard security across the entire auto industry only after it released the final rule.
But spectrum squatting matters more than cybersecurity, apparently, to both the auto industry and NHTSA. Unlike the FCC, which actually required the industry/public safety team working on the new mobile 911 geolocation standards to come back with proof that the system was both secure and privacy friendly before allowing the wireless industry to take it live (see FCC Order here), NHTSA is apparently allowing GM to deploy units without waiting for any of the supporting certification authorities or security standards in place.
One of the major reasons the internet is so incredibly insecure, and why creating real security is so very hard, is because the original Internet release was totally open. That decision embedded openness as an essential feature of the system, which works wonderfully for facilitating the exchange of information. This amazing utility and ease of use made the Internet enormously popular and allowed us to go from virtually no users in the early 1990s to today’s world where the vast majority of critical functions and daily communications touch the Internet in one way or another. At the same time, it made it possible for remote hackers to do things like steal our identities and run up thousands of dollars of fake credit charges, steal money out of our banks, disable major ecommerce systems, or potentially shut down our electric grid with hack attacks.
As this article in the Washington Post notes, one of the major security challenges for cars in particular is the long replacement cycle. People hold on to their cars for 10-15 years, sometimes even longer. Cars also have an incredibly long design lead time. As a result, according to the article: “If a hacker-proof car was somehow designed today, it couldn’t reach dealerships until sometime in 2018, experts say, and it would remain hacker-proof only for as long as its automaker kept providing regular updates for the underlying software — an expensive chore that manufacturers of connected devices often neglect. Replacing all of the vulnerable cars on the road would take decades more.”
But GM and NHTSA are rushing to release the first DSRC model car, the car that can talk to other cars, and whose crappy, undeveloped pre-standard security will set the pattern for future DSRC releases. This despite the fact that GM’s new release models will remain vulnerable to the other hacks identified by car security experts. And because future DSRC iterations will need to be backwards compatible with all existing DSRC deployed, even if future cars have better security, the versions rushed out by GM today and approved by NHTSA for the purpose of squatting on the 5.9 GHz band will remain a massive security hole in the system for the foreseeable future.
Conclusion — DSRC Makes Us Much Less Safe Than Existing Technologies.
The existing radar and LIDAR systems accomplish everything DSRC is supposed to do in terms of collision avoidance — although they do not allow the auto industry to bombard you with new products and harvest your personal information. But for the same reason, they are much more secure and do not allow hackers to insert DSRC worms into cars. Why? Go back to our security v. utility trade off. Car radars and LIDAR systems have a very limited number of inputs they accept into the system. They get a reflection off the object ahead and respond to that very limited input. They don’t even need complex data like the make of the car or the species of deer that just jumped in front of your car. They just get a reflection back of “big thing in front of you, STOP!” Because we limit the nature and type of inputs that can access the system, the system becomes easier to secure — just like an ATM is easier to secure than a cell phone.
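The entire external interface of a ranging sensor can be sketched in a few lines, which is exactly the point. The braking threshold and range bounds below are made-up numbers, not anything from a real system; what matters is that the only input the control logic accepts is a single bounded measurement, so there is no message format to parse and no remote party to authenticate.

```python
# Sketch of why a ranging sensor (radar/LIDAR-style) is easy to
# secure: the whole external "protocol" is one number. Threshold
# and bounds are invented for illustration.
BRAKE_DISTANCE_M = 15.0

def collision_response(distance_m: float) -> str:
    """Decide what to do given a single distance reading, in meters."""
    if not (0.0 <= distance_m < 1000.0):
        return "ignore"  # reject physically implausible readings
    if distance_m < BRAKE_DISTANCE_M:
        return "brake"   # "big thing in front of you, STOP!"
    return "cruise"

assert collision_response(3.2) == "brake"
assert collision_response(120.0) == "cruise"
assert collision_response(-5.0) == "ignore"  # garbage input is inert
```

Compare this to a DSRC receiver, which by design must accept signed messages of many types from any credentialed stranger on the road: the attack surface difference is the whole argument in miniature.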
More importantly, neither car radar nor LIDAR actually communicate with other cars. Even if my car is infected, it cannot spread the malware through the car radar. My car is still vulnerable, but I have not taken on the same vulnerability to worms as DSRC introduces.
Which leads us back to the risk assessment. What, exactly, does DSRC do for us on life and safety that these other, more secure systems do not? And it can’t just be some minor incremental change. It has to be enough of an improvement to actually justify the risk of installing a system that could spread malware to millions of cars, and to justify the cost to the auto industry (and, ultimately, consumers who pay for this) of constantly improving and maintaining that security.
Whatever our ultimate risk analysis, one thing seems certain. It makes absolutely no sense for GM and NHTSA to rush to market a highly vulnerable first generation DSRC system, attached to a highly vulnerable car, for the sole purpose of creating “facts on the ground” to justify spectrum squatting on 5.9 GHz.
In Part 2, I will explain how DSRC makes us worse off for privacy, despite NHTSA’s repeated assurances that it has consumers’ privacy interests at heart.
In Part 3, I will explain how the FCC can and should address both the cybersecurity and the privacy issues, regardless of whether it authorizes sharing in the band.
Stay tuned . . .