Artificial intelligence may still be quite a ways off, but the push to make driverless cars a regular sight on the roads continues regardless. Just look at Google, which has its driverless cars rolling across testing grounds in Mountain View, California; if the company has its way, we could see many more of these vehicles on public roads in the near future.
As you can probably expect, liability is a major concern for any autonomous process, and with self-driving technology the question of responsibility becomes blurry at best. As the feds put it in their letter to Google, “If no human occupant of the vehicle can actually drive the vehicle, it is more reasonable to identify the ‘driver’ as whatever (as opposed to whoever) is doing the driving.” When something goes wrong, people want to know who (or what) is at fault, and a vehicle capable of driving itself makes that much harder to determine.
Another huge issue is how well Google’s autonomous cars fit into the current Federal Motor Vehicle Safety Standards. In particular, the regulations describe how a motor vehicle should be controlled in terms of specific human body parts and the actions they perform. As reported by WIRED:
The rule regarding the car’s braking system, for example, says it “shall be activated by means of a foot control.” The rules around headlights and turn signals refer to hands. NHTSA can easily change how it interprets those rules, but there’s no reasonable way to define Google’s software—capable as it is—as having body parts. All of which means, the feds “would need to commence a rulemaking to consider how FMVSS No. 135 [the rule governing braking] might be amended in response to ‘changed circumstances,’” the letter says. Getting an exemption to one of these rules is a long and difficult process, Walker Smith says. But “the regular rulemaking process is even more onerous.”
While liability will remain a major hurdle for autonomous cars, this federal acknowledgement is still a significant step in the right direction. It means that a computer can be considered the driver of a vehicle, or at least treated as human-like in the eyes of the law. That should make life somewhat easier for developers of artificially intelligent systems, though the road ahead will still likely be filled with legal maneuvering. And while Google has slated its automated cars to be available to the public by 2020, we may have to wait a bit longer than that, even for the most basic forms of AI.
Would you trust an autonomous car to get you from point A to point B safely? Let us know in the comments!