Friday, November 25, 2016

We Probably Should Trust Driverless Cars – But We Don't.

More than 30,000 people are killed each year in car crashes in the United States, and human error is to blame in roughly 90% of those crashes. Most experts therefore agree that self-driving car technology will reduce the number of crashes and fatalities. Self-driving cars could save up to 1.5 million lives in the United States alone and close to 50 million lives globally over the next 50 years. Yet in a March 2016 poll by the American Automobile Association, 75% of respondents said they are not ready to embrace self-driving cars.

It's understandable that people are skeptical of handing their keys over to an algorithm. But algorithms have come a long way in the last decade: they can take in data, learn, and generate more sophisticated versions of themselves. We rely on algorithms for many of our decisions and actions these days, from low-risk activities such as deciding what to watch on Netflix or buy on Amazon to high-stakes decisions such as how to invest our savings. We are even OK with autopilot features controlling airplanes. This skepticism toward self-driving cars thus raises a question: why do we trust algorithms in some cases, but not in others?
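As an aside, the "up to 1.5 million lives" figure is simple arithmetic on the numbers above. Here is a back-of-the-envelope sketch in Python; the inputs are the article's own figures, and the assumption that self-driving cars could prevent all fatal crashes (or only the human-error share of them) is an illustrative simplification, not a result from any study.

    # Back-of-the-envelope check on the lives-saved figures above.
    us_deaths_per_year = 30_000   # US road fatalities per year (the article's figure)
    human_error_share = 0.90      # share of crashes blamed on human error
    years = 50

    # Upper bound: essentially all fatal crashes prevented.
    print(f"All crashes prevented:    {us_deaths_per_year * years:,}")  # 1,500,000
    # More conservative: only human-error crashes prevented.
    print(f"Human-error crashes only: {us_deaths_per_year * human_error_share * years:,.0f}")  # 1,350,000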


It all comes down to this: people trust algorithms for more objective decisions, and trust them less for subjective ones. Could it be that people are hesitant about self-driving cars because they view driving as a more subjective, personal experience?

Consider findings from researchers at the Wharton School of the University of Pennsylvania: people lose confidence in algorithms much faster than in human forecasters when they watch the two make the same mistake. People were also less likely to choose an algorithm over a human forecaster even when the algorithm outperformed the human on the whole. In short, we are not very forgiving of mistakes made by algorithms, even though we make the same mistakes more often ourselves. Perhaps we assume that an algorithm's mistake is baked into the program, and that the program therefore cannot be trusted.

The implication is chilling for self-driving car manufacturers and proponents: people may rapidly lose trust in the technology after enough incidents, even when it is proven safer in the aggregate. Early fatalities could turn the general public against self-driving cars very quickly, so manufacturers have to think harder about when and how to introduce driverless features.
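To make the "safer in the aggregate, yet incidents still happen" point concrete, here is a minimal sketch in Python. The per-mile fatality rates and fleet mileage are made-up illustrative numbers, not figures from the Wharton research or any real fleet.

    # Illustrative per-mile fatality rates, made up for this sketch (not study data).
    human_rate = 1.2e-8       # ~1.2 deaths per 100 million miles, the rough US order of magnitude
    autonomous_rate = 0.3e-8  # suppose the algorithm is 4x safer per mile

    fleet_miles = 2e11        # hypothetical annual miles for a large self-driving fleet

    human_expected = human_rate * fleet_miles
    autonomous_expected = autonomous_rate * fleet_miles

    print(f"Expected fatalities, human drivers: {human_expected:,.0f}")      # ~2,400
    print(f"Expected fatalities, self-driving:  {autonomous_expected:,.0f}")  # ~600

    # Even a 4x-safer system still produces hundreds of fatal crashes a year at
    # this scale; each one is a visible "algorithm mistake" that can erode trust.

The point is not the specific numbers but the asymmetry: aggregate statistics favor the algorithm, while each individual incident is attributed to it directly.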

As Artificial Intelligence (AI) advances and deep learning – a branch of machine learning loosely modeled on networks of neurons in the brain – matures, algorithms will run a greater share of our lives. But this skepticism shows that good technology alone does not ensure success. AI and smart algorithms need to be introduced in ways that win the trust and confidence of their human users.
