Client: Artificial Solutions
(Published on the Artificial Solutions blog on October 24, 2012. Read the original here.)
Assignment: Blog Post
Sometimes the future arrives without us realizing it. Driverless cars are no longer a subject for science fiction writers but a reality in three US states: Nevada, Florida and most recently California.
The advent of driverless vehicles is likely to provide a big boost for natural language interaction as carmakers look for easier ways for human occupants to interact with their computer-driven vehicles.
California's admission to the select club of states whose laws permit driverless vehicles is of critical importance for the uptake of this technology.
California is by far the most populous U.S. state and the third largest by area. It also suffers from chronic traffic congestion, particularly around Los Angeles.
So if driverless cars are going to be shown to be useful in solving real-world problems, the best place to prove their value is not the empty desert roads of Nevada but LA’s congested freeways or the Paris Périphérique.
But what exactly are the benefits of driverless cars?
There are several, of course, but the one that I would argue is of overriding importance is reducing accidents due to driver distractions.
In 2011, more than 30,000 people died on the roads of the European Union and a significant proportion of these deaths were due to driver distraction.
One US study found that distractions internal to the vehicle were a critical factor in 11 percent of crashes studied.
Cell phones often get blamed for distraction-related crashes, but many distractions predate today’s smartphone era: talking to passengers, adjusting the radio, or swatting at a wasp are all significant causes of in-car distraction and crashes.
A driverless car, needless to say, does not have a human driver and so is immune to distractions such as wasps flying in through open windows or kids fighting on the back seat.
Autonomous cars use a combination of radar sensors in the front, video cameras to scan the surrounding area, and other sensors and artificial intelligence software to help steer and navigate the road.
As well as reducing accidents due to distractions, autonomous vehicles will give independence to people with disabilities, such as those with impaired vision. They will also let commuters stuck in traffic use that time productively and, more generally, should reduce congestion, pollution and accidents caused by human error.
Too good to be true? For some time, Google has been testing driverless cars on California’s roads. Seven test cars have driven 1,000 miles without human intervention and more than 140,000 miles with only occasional human control. One even drove itself down San Francisco’s famous crooked Lombard Street – which I can testify is quite a challenge for human drivers.
The only accident involving Google’s autonomous vehicles happened, ironically, when a human driver was behind the wheel.
The key developments that make it possible for autonomous vehicles to drive safely on congested public roads are the huge increase in computing power available to designers and advances in artificial intelligence software.
Together, these make it feasible to detect pedestrians, traffic lights and lane markers, as well as to resolve conflicting sensor data.
With all that intelligence on board, the logical next step is to allow the vehicle’s human occupants to interact with the onboard computer using their voice, rather than fiddling with touchscreens, keypads and buttons.
While the tests of autonomous vehicles have involved pre-programmed journeys, real-life driving is not always planned.
For example, you receive a call from your spouse while your “self-driven” vehicle takes its regular route home from work. Your spouse wants you to pick up some groceries – you know where to go, but how do you tell the computer?
The quickest and easiest way to program these on-the-fly route changes is with spoken commands – “turn left at the next traffic light and drive straight while I look for a store.”
Or perhaps you have been to the store before so the computer will know its location, even though you don’t recall the name. No problem: you simply say “Take me to the grocery store we went to last Saturday”.
These are ideal applications for natural language interaction (NLI) technology, which is designed to understand everyday human conversation. And because the computer is driving, there is no risk of distraction as the human occupant scans the street for the right store.
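To make the idea concrete, here is a minimal sketch of how a spoken instruction like the ones above might be mapped to a navigation intent. Everything here is a hypothetical illustration – the function name, the intent labels and the pattern-matching approach are mine, not any carmaker’s or vendor’s real API – and a production NLI system would use far more sophisticated language understanding than these simple patterns.

```python
import re

def parse_command(utterance: str) -> dict:
    """Map a spoken instruction to a simple navigation intent.

    A hypothetical sketch: real NLI systems do not rely on
    hand-written regular expressions like these.
    """
    text = utterance.lower().strip()

    # Relative maneuver, e.g. "turn left at the next traffic light"
    m = re.search(r"turn (left|right)", text)
    if m:
        return {"intent": "maneuver", "direction": m.group(1)}

    # Destination recalled from trip history, e.g.
    # "take me to the grocery store we went to last Saturday"
    m = re.search(r"take me to (.+?) we went to (.+)", text)
    if m:
        return {"intent": "navigate_history",
                "place": m.group(1), "when": m.group(2)}

    return {"intent": "unknown"}

print(parse_command("Turn left at the next traffic light"))
# {'intent': 'maneuver', 'direction': 'left'}
```

The second pattern hints at why trip history matters: the car can resolve “the grocery store we went to last Saturday” against its own log of past destinations, something a human passenger giving directions takes for granted.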
Being able to give natural language directions to your car may sound like science fiction. But so did self-driven vehicles only a few years ago.
Read this Forbes article for more on autonomous vehicles in California.
Geoff Nairn is a freelance journalist who specializes in business and technology. For the past decade he has been a regular contributor to Europe’s leading business newspapers such as the Wall Street Journal Europe, the Times and the Financial Times, writing on a broad spectrum of other topics ranging from management consultancy and risk management through to renewable energy and superconductors. He has lived for extended periods in three European countries — Spain, Italy and the UK — and knows at first hand many of the issues that characterize and shape Europe’s hi-tech industry.