Questions for self-driving cars: Who controls the code? Zero glitches? Can they fend off an invisible hack? Is a human driver required behind the wheel?
(quote)
Car Hacking: What Every Connected Driver Needs to Know - Many new cars are equipped with wireless technology that can make a driver's time on the road more stress-free and entertaining, but that technology also has a dark side. Two hackers took control of a connected Jeep Cherokee from their living room while a Wired reporter, who had agreed to be their test case, drove the SUV down the highway at 70 mph, according to the article.
Charlie Miller and Chris Valasek, the two hacking experts behind the stunt, accessed the SUV's Internet-connected computer system and rewrote its firmware to plant malicious code, allowing them to commandeer everything from the air conditioning and music to the Jeep's steering, brakes and transmission, according to Wired.
The Guardian - The problem with self-driving cars: who controls the code? Every locked device can be easily jailbroken.
Should autonomous vehicles be programmed to choose who they kill when they crash? And who gets access to the code that determines those decisions? The Trolley Problem is an ethical brainteaser that’s been entertaining philosophers since it was posed by Philippa Foot in 1967: a runaway train will slaughter five innocents tied to its track unless you pull a lever to switch it to a siding on which one man, also innocent and unawares, is standing. Pull the lever, you save the five, but kill the one: what is the ethical course of action?
Now it’s found a fresh life in the debate over autonomous vehicles. The new variant goes like this: your self-driving car realizes that it can either divert itself in a way that will kill you and save, say, a busload of children; or it can plow on and save you, but the kids all die. What should it be programmed to do?
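To see what "programming it" would actually mean, here is a deliberately toy Python sketch; every name and weight below is invented for illustration, not anything a real vehicle runs. The point it makes concrete: whoever writes the harm weights is answering the Trolley Problem on behalf of every future passenger.

```python
# Toy illustration only: a hypothetical "crash planner" that makes the
# trolley-problem choice explicit. The weights below are invented
# assumptions, not anything a real vehicle uses.

def choose_maneuver(options):
    """Pick the maneuver with the lowest expected harm.

    Each option carries an estimated casualty count for the occupants
    and for bystanders. Whoever sets these weights is deciding the
    trolley problem in advance, in code.
    """
    OCCUPANT_WEIGHT = 1.0    # value placed on the people inside
    BYSTANDER_WEIGHT = 1.0   # value placed on the people outside

    def expected_harm(option):
        return (OCCUPANT_WEIGHT * option["occupant_casualties"]
                + BYSTANDER_WEIGHT * option["bystander_casualties"])

    return min(options, key=expected_harm)

# Swerve (kills the passenger) vs. plow on (kills five pedestrians):
options = [
    {"name": "swerve", "occupant_casualties": 1, "bystander_casualties": 0},
    {"name": "continue", "occupant_casualties": 0, "bystander_casualties": 5},
]
print(choose_maneuver(options)["name"])  # -> "swerve" under equal weights
```

Change either weight and the "ethical" answer changes with it, which is exactly why who controls that code matters.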
There’s an obvious answer, which is the iPhone model. Design the car so that it only accepts software that’s been signed by the Ministry of Transport (or the manufacturer), and make it a felony to teach people how to override the lock.
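Mechanically, the "iPhone model" is just signature verification before installation. Below is a minimal Python sketch using the third-party cryptography package; the key handling and update format are simplifying assumptions for illustration, not any manufacturer's actual scheme.

```python
# Minimal sketch of "only accept signed software": the vehicle flashes a
# firmware image only if it carries a valid Ed25519 signature from the
# manufacturer. Requires the third-party "cryptography" package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In reality the private key stays with the manufacturer and only the
# public key is baked into the car's boot ROM; both are generated here
# so the sketch is self-contained and runnable.
manufacturer_key = Ed25519PrivateKey.generate()
vehicle_trusted_key = manufacturer_key.public_key()

def accept_firmware(image: bytes, signature: bytes) -> bool:
    """Flash the update only if the manufacturer's signature checks out."""
    try:
        vehicle_trusted_key.verify(signature, image)
        return True
    except InvalidSignature:
        return False

firmware = b"v2.1 brake controller image"
good_sig = manufacturer_key.sign(firmware)
print(accept_firmware(firmware, good_sig))           # True
print(accept_firmware(b"tampered image", good_sig))  # False
```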
This is the current statutory landscape for iPhones, games consoles and many other devices that are larded with digital locks, often known by the trade-name “DRM”. Laws like the US Digital Millennium Copyright Act (1998) and directives like the EUCD (2001) prohibit removing digital locks that restrict access to copyrighted works, and also punish people who disclose any information that might help in removing the locks, such as vulnerabilities in the device.
There’s a strong argument for this. The programming in autonomous vehicles will be in charge of a high-speed, moving object that inhabits public roads, amid soft and fragile humans. Tinker with your car’s brains? Why not perform amateur brain surgery on yourself first?
But this obvious answer has an obvious problem: it doesn’t work. Every locked device can be easily jailbroken, for good, well-understood technical reasons. The primary effect of digital-lock rules isn’t to keep people from reconfiguring their devices – it’s just to ensure that they have to do so without the help of a business or a product. Recall the years before the UK telecoms regulator Ofcom clarified the legality of unlocking mobile phones in 2002; it wasn’t hard to unlock your phone. You could download software from the net to do it, or ask someone who operated an illegal jailbreaking business. But now that it’s clearly legal, you can have your phone unlocked at the newsagent’s or even the dry-cleaner’s.
If self-driving cars can only be safe if we are sure no one can reconfigure them without manufacturer approval, then they will never be safe.
But even if we could lock cars’ configurations, we shouldn’t. A digital lock creates a zone in a computer’s program that even its owner can’t enter. For it to work, the lock’s associated files must be invisible to the owner. When they ask the operating system for a list of files in the lock’s directory, it must lie and omit those files (because otherwise the user could delete or replace them). When they ask the operating system to list all the running programs, the lock program has to be omitted (because otherwise the user could terminate it).
All computers have flaws. Even software that has been used for years, whose source code has been viewed by thousands of programmers, will have subtle bugs lurking in it. Security is a process, not a product. Specifically, it is the process of identifying bugs and patching them before your adversary identifies them and exploits them. Since you can’t be assured that this will happen, it’s also the process of discovering when your adversary has found a vulnerability before you and exploited it, rooting the adversary out of your system and repairing the damage they did.
When Sony-BMG covertly infected hundreds of thousands of computers with a digital lock designed to prevent CD ripping, it had to hide its lock from anti-virus software, which correctly identified it as a program that had been installed without the owner’s knowledge and that ran against the owner’s wishes. It did this by changing its victims’ operating systems to render them blind to any file that started with a special, secret string of letters: “$sys$.” As soon as this was discovered, other malware writers took advantage of it: when their programs landed on computers that Sony had compromised, the program could hide under Sony’s cloak, shielded from anti-virus programs.
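The cloaking trick itself is embarrassingly simple to express. The real rootkit hooked Windows kernel file-enumeration calls; the Python sketch below is a high-level stand-in showing the same idea, filtering the secret prefix out of directory listings.

```python
import os

# Rough illustration of the Sony-BMG cloak: the actual rootkit hooked
# Windows kernel file-enumeration calls; this stand-in wraps os.listdir
# to show the same idea at a much higher level.
CLOAK_PREFIX = "$sys$"

_real_listdir = os.listdir  # keep a reference to the honest version

def cloaked_listdir(path="."):
    """Return the directory listing minus any cloaked entries."""
    return [name for name in _real_listdir(path)
            if not name.startswith(CLOAK_PREFIX)]

# Once installed, *any* file named "$sys$..." vanishes from listings,
# which is exactly what later malware exploited to hide from scanners.
os.listdir = cloaked_listdir
```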
A car is a high-speed, heavy object with the power to kill its users and the people around it. A compromise in the software that allowed an attacker to take over the brakes, accelerator and steering (such as last summer’s exploit against Chrysler’s Jeeps, which triggered a 1.4m vehicle recall) is a nightmare scenario. The only thing worse would be such an exploit against a car designed to have no user-override – designed, in fact, to treat any attempt from the vehicle’s user to redirect its programming as a selfish attempt to avoid the Trolley Problem’s cold equations.
Whatever problems we will have with self-driving cars, they will be worsened by designing them to treat their passengers as adversaries.
It’s likely that we’ll get calls for a lawful interception capability in self-driving cars: the power for the police to send a signal to your car to force it to pull over. This will have all the problems of the Trolley Problem and more: an in-built capability to drive a car in a way that its passengers object to is a gift to any crook, murderer or rapist who can successfully impersonate a law enforcement officer to the vehicle – not to mention the use of such a facility by the police of governments we view as illegitimate – say, Bashar al-Assad’s secret police, or the self-appointed police officers in Isis-controlled territories.
That’s the thorny Trolley Problem, and it gets thornier: the major attraction of autonomous vehicles for city planners is the possibility that they’ll reduce the number of cars on the road, by changing the norm from private ownership to a kind of driverless Uber. Uber can even be seen as a dry-run for autonomous, ever-circling, point-to-point fleet vehicles in which humans stand in for the robots to come – just as globalism and competition paved the way for exploitative overseas labour arrangements that in turn led to greater automation and the elimination of workers from many industrial processes.
If Uber is a morally ambiguous proposition now that it’s in the business of exploiting its workforce, that ambiguity will not vanish when the workers go. Your relationship to the car you ride in, but do not own, makes all the problems mentioned even harder. You won’t have the right to change (or even monitor, or certify) the software in an Autonom-uber. It will be designed to let third parties (the fleet’s owner) override it. It may have a user override (Tube trains have passenger-operated emergency brakes), possibly mandated by the insurer, but you can just as easily see how an insurer would prohibit such a thing altogether.
Police in California pulled over one of Google's self-driving cars after it was driving far slower than the speed limit. An officer in Mountain View, California, near Google's headquarters, stopped the prototype vehicle for holding up traffic by driving 24 mph in a 35 mph zone.
California is trying to do something unusual in this age of rapidly evolving technology - get ahead of a big new development before it goes public. By the end of the year, the Department of Motor Vehicles must write rules to regulate cars that rely on computers - not the owner - to do the driving. That process began Tuesday, when the DMV held an initial public hearing in Sacramento to puzzle over how to regulate the vehicles that haven't been fully developed yet.
Among the complex questions officials sought to unravel:
How will the state know the cars are safe?
Does a driver even need to be behind the wheel?
Can manufacturers mine data from onboard computers to make product pitches based on where the car goes or set insurance rates based on how it is driven?
Do owners get docked points on their license if they send a car to park itself and it slams into another vehicle?
Once the stuff of science fiction, driverless cars could be commercially available by decade's end. Under a California law passed in 2012, the DMV must decide by the end of this year how to integrate the cars - often called autonomous vehicles - onto public roads. That means regulators will post draft language around June, then revise the rules in response to public comment by fall, in order to finalize them by the end of 2014.
Three other states have passed driverless car laws, but those rules mostly focus on testing. California has mandated rules on testing and public operation, and the DMV expects within weeks to finalize regulations dictating what companies must do to test the technology on public roads. Those rules came after Google Inc. had already driven its fleet of Toyota Priuses and Lexuses, fitted with an array of sensors including radar and lasers, hundreds of thousands of miles on California roads. Major automakers also have tested their own models. Now, the DMV is scrambling to regulate the broader use of the cars. With the federal government apparently years away from developing regulations, California's rules could effectively become the national standard.
Much of the initial discussion Tuesday focused on privacy concerns.
California's law requires autonomous vehicles to log records of operation so the data can be used to reconstruct an accident.
But the cars "must not become another way to track us in our daily lives," John M. Simpson of the nonprofit Consumer Watchdog said at the hearing. Simpson called out Google, saying the Internet giant rebuffed attempts to add privacy guarantees when it pushed the 2012 legislation mandating rules on testing and public operation.
Seated across from Simpson at the hearing's head tables was a representative from Google, who offered no comment on the data privacy issue.
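There is a standard building block for squaring those two demands: make the log tamper-evident without dictating how much it records. The Python sketch below chains each record to the hash of its predecessor, so an accident investigator can detect any after-the-fact edit; the record fields here are invented for illustration, not anything in the California rules.

```python
import hashlib
import json

# Minimal sketch of a tamper-evident event log: each record carries the
# hash of its predecessor, so editing or deleting any record breaks the
# chain. Field names are invented for illustration.
def append_record(log, event):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(log):
    """Recompute every hash; any after-the-fact edit is detected."""
    prev_hash = "0" * 64
    for rec in log:
        body = json.dumps({"event": rec["event"], "prev": prev_hash},
                          sort_keys=True)
        if rec["prev"] != prev_hash or \
                rec["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = rec["hash"]
    return True

log = []
append_record(log, {"t": 1404172800, "speed_mph": 24, "brake": False})
append_record(log, {"t": 1404172801, "speed_mph": 18, "brake": True})
print(verify_chain(log))           # True
log[0]["event"]["speed_mph"] = 60  # tamper with the first record...
print(verify_chain(log))           # False: the chain no longer verifies
```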
Discussion also touched on how to know a car is safe, and whether an owner knows how to properly operate it.
Ron Medford, Google's director of safety for its "self-driving car" project, suggested that manufacturers should be able to self-certify that their cars are safe. He cautioned that it would get complicated quickly if the state tried to assume that role.
In initial iterations, human drivers would be expected to take control in an instant if the computer systems failed. Current technology can help park a car or keep it in its freeway lane; eventually, owners might be able to read, daydream or even sleep while the car did the work.
Responding to a question received over Twitter, DMV attorney Brian Soublet acknowledged that the department is still grappling with the most fundamental question of whether a person will need to be in the driver's seat.
Maybe not, by the time the technology is safe and reliable, he said.
Soublet asked who would ensure that owners know how to use the new technology. Should the onus be on dealers, manufacturers or owners? Representatives of automakers suggested they shouldn't be asked to guarantee the competence of owners. John Tillman of Mercedes-Benz said the DMV could test owners on basics such as starting and stopping the automated driving function. Automaker representatives also expressed concern that other states could pass regulations substantially different from California's, creating the kind of patchwork rules that businesses hate. States outside California have been in touch and are following California's rule-making process closely, said Bernard Soriano, a deputy director at the DMV.
Other discussion centered on how vulnerable the cars could be to hackers, who might wrest control of the vehicles. Industry representatives said that while that is a concern, they would vigilantly guard against such vulnerabilities because an exploit would be disastrous.
California requires autonomous cars to have humans behind the wheel, and those humans will need licenses issued specifically for driverless vehicles. The California DMV has already started preparing for the arrival of driverless cars by writing draft regulations to govern them. While that's a step forward for manufacturers working on the technology, the proposed rules are rather strict and will force Google (and perhaps other manufacturers) to change its car design. See, the DMV wants a human driver behind the wheel despite driverless cars' capabilities. That driver would have to undergo training from car companies on how to use autonomous vehicles and get a special state-issued license. If you recall, the big G has decided to remove steering wheels from its prototypes.
Besides requiring a human driver, the DMV wants driverless cars to undergo testing by a third-party organization to assess their performance and to verify that they work as intended. It plans to require automakers to submit monthly reports detailing the performance and safety of their products. Finally, the DMV wants car manufacturers to disclose if they're collecting information from users and to make sure their vehicles are equipped with the technology to detect and fend off cyberattacks.
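What "detect and fend off" might look like at the protocol level: one common building block is requiring every remote command to carry a message authentication code and a forward-only counter, so forged and replayed messages are both rejected. Below is a minimal Python sketch; the command format and key handling are invented for illustration, not anything the DMV specifies.

```python
import hmac
import hashlib

# Sketch of one anti-tampering building block: every remote command must
# carry an HMAC computed with a key the attacker doesn't have, plus a
# counter that only moves forward, so captured messages can't be replayed.
SHARED_KEY = b"provisioned-at-the-factory"  # hypothetical; never hard-code keys
last_counter = 0

def sign_command(command: bytes, counter: int) -> bytes:
    msg = counter.to_bytes(8, "big") + command
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()

def accept_command(command: bytes, counter: int, tag: bytes) -> bool:
    """Reject commands that are forged (bad MAC) or replayed (old counter)."""
    global last_counter
    expected = sign_command(command, counter)
    if counter <= last_counter or not hmac.compare_digest(tag, expected):
        return False
    last_counter = counter
    return True

tag = sign_command(b"unlock_doors", 1)
print(accept_command(b"unlock_doors", 1, tag))            # True
print(accept_command(b"unlock_doors", 1, tag))            # False: replay rejected
print(accept_command(b"apply_brakes", 2, b"\x00" * 32))   # False: forged tag
```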
(unquote)
Photo courtesy Engadget and Mercedes-Benz
I personally would not trust a self-driving car; I would rather a person be at the wheel. Who cares about the hierarchy of who is in charge of setting the controls and who should have managed the instructions to the machine better? At the end of the day, it doesn't matter who accepts the blame when a life is lost in a car crash.