Self-driving car discussion catch-all

As others said earlier in this thread, the auto-driving cars can't get here soon enough.

Stele wrote:

As others said earlier in this thread, the auto-driving cars can't get here soon enough.

As I just got rear-ended in slow-moving traffic by a jackass who wasn't paying attention on my way home today, I wholeheartedly agree. I've been in three accidents as a driver now. In all three cases, I was rear-ended by someone who wasn't paying attention.

Tanglebones wrote:
Stele wrote:

As others said earlier in this thread, the auto-driving cars can't get here soon enough.

As I just got rear-ended in slow-moving traffic by a jackass who wasn't paying attention on my way home today, I wholeheartedly agree. I've been in three accidents as a driver now. In all three cases, I was rear-ended by someone who wasn't paying attention.

My fiancee got rear-ended two days in a row last month while at a dead stop at an intersection, both times by young ladies who we highly suspect were paying more attention to their phones than to the road.

Fortunately, virtually no damage was incurred in either case (not even a fender-bender, really), but that's pure dumb luck.

If you spend enough time on the road, accidents will happen whether you’re in a car or a self-driving car. Over the 6 years since we started the project, we’ve been involved in 11 minor accidents (light damage, no injuries) during those 1.7 million miles of autonomous and manual driving with our safety drivers behind the wheel, and not once was the self-driving car the cause of the accident.

I have zero doubt that a self-driving car is superior at following road rules and not causing accidents versus human drivers.

One question I have, though, is how do the self-driving cars fare at accident avoidance when a human driver is going way off-script?

This seems like it would be trickier for the computer to interpret. Does it recognize that, when on icy roads, the driver that zooms past the car at high speed is more likely to become a spin-out hazard up ahead, and adjust? If someone comes careening across multiple lanes, at what distance does the car recognize the threat? How does it deal with the various ways an irrational crazy tailgater might behave?

I don't doubt the cars can react to imminent hazards intelligently, but a lot of accident avoidance is recognition of a potential hazard and adjusting before it becomes an imminent one. That's the area I'd like to hear more about.

*Legion* wrote:

I don't doubt the cars can react to imminent hazards intelligently, but a lot of accident avoidance is recognition of a potential hazard and adjusting before it becomes an imminent one. That's the area I'd like to hear more about.

Agreed, and it seems like gathering the data needed to do that is part of this project. Also, there's this from the article:

With 360 degree visibility and 100% attention out in all directions at all times, our newest sensors can keep track of other vehicles, cyclists, and pedestrians out to a distance of nearly two football fields.

There were some diagrams in the article about how they avoid nearby drivers and cyclists.

I would think they would reduce speed in the rain, etc., following the rules of the road better than people do. I'm always amazed at how many people don't leave extra following distance in adverse conditions.

Stele wrote:

There were some diagrams in the article about how they avoid nearby drivers and cyclists.

I would think they would reduce speed in the rain, etc., following the rules of the road better than people do. I'm always amazed at how many people don't leave extra following distance in adverse conditions.

Try any conditions... I always try to leave at least a two-second gap between me and the car in front of me on a normal day, and everyone thinks it's an invitation to cut in.

Imagine how much simpler this problem would be to solve if ALL cars on the road were self-driving.
You take out the need to predict all sorts of random behaviors [from cars, at least] since every car would be following the rules of the road precisely and most likely communicating with each other in some way.
I could see this maybe starting by having 'Autonomous-only Zones' in cities or on freeways and possibly expanding out from there.
Also, the presence of autonomous cars eliminates the need to own your own car and could reduce the amount of space we have to set aside for parking. Just open up the Uber app on your phone, request a car and a self-driving vehicle shows up to take you to your destination. You don't have to find parking once you get there and the car just drives off to pick up another passenger.
I'd still want to have the ability to drive my own car [I have a restored '77 Lotus Esprit], but more for pleasure than as a way of getting from one place to another.
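
Back to the communication point: even a minimal broadcast of each car's state and intent would remove most of the guesswork. Purely as a hypothetical sketch (the message fields are my invention, not any real V2V standard):

from dataclasses import dataclass, asdict
import json

# Hypothetical minimal car-to-car broadcast. If every vehicle announced
# this a few times per second, predicting your neighbours would mostly
# reduce to reading messages instead of inferring intent from behaviour.

@dataclass
class V2VMessage:
    vehicle_id: str
    lat: float
    lon: float
    speed_mps: float
    heading_deg: float
    intent: str  # e.g. "cruising", "braking", "lane_change_left"

msg = V2VMessage("car-42", 37.4219, -122.0841, 13.4, 90.0, "braking")
print(json.dumps(asdict(msg)))  # what nearby cars would receive and act on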

*Legion* wrote:

I have zero doubt that a self-driving car is superior at following road rules and not causing accidents versus human drivers.

One question I have, though, is how do the self-driving cars fare at accident avoidance when a human driver is going way off-script?

This seems like it would be trickier for the computer to interpret. Does it recognize that, when on icy roads, the driver that zooms past the car at high speed is more likely to become a spin-out hazard up ahead, and adjust? If someone comes careening across multiple lanes, at what distance does the car recognize the threat? How does it deal with the various ways an irrational crazy tailgater might behave?

I don't doubt the cars can react to imminent hazards intelligently, but a lot of accident avoidance is recognition of a potential hazard and adjusting before it becomes an imminent one. That's the area I'd like to hear more about.

I have no doubt that the computer would be MUCH better at this than humans. Conjecturing from what I've seen of the technology, they now have it to the point where it identifies all the entities involved in traffic, applies expectations to them based on what they are, and then judges their behaviour against those expectations. It is also capable of doing that with 100% attention and 360-degree awareness, as mentioned above.

This means that when something goes off-script, the computer has a LOT more options than a human would. It knows the current behaviour, and can predict the future behaviour, of all agents in its vicinity. It can therefore choose a course of action much more quickly, and with a much better probability of a positive outcome, than a human driver can. The limitation lies in the algorithmic description of how to identify entities and how to assign expected behaviour to them. I suspect this would be fairly easy for normal cars and trucks, and possibly bicycles/pedestrians. But if you have an oddly shaped vehicle, or an unusual object like a log falling off a truck, or an animal the system hasn't really encountered/been coded for before, how does it recognize such things and form expectations about their behaviour? That, I think, is the big question. And I believe that this is what they are answering by driving these millions of miles at the moment.
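
To make that concrete, here is roughly the pipeline I'm picturing, as a toy Python sketch (the classes, numbers, and thresholds are all mine, not anything from the actual project):

from dataclasses import dataclass

# Toy entity-expectation pipeline: classify what's around the car,
# attach an expected-behaviour profile, then flag anything that deviates.

@dataclass
class Track:
    kind: str        # e.g. "car", "truck", "cyclist", "pedestrian", "unknown"
    speed: float     # m/s, from sensor fusion
    distance: float  # metres from our vehicle

# Made-up expectations per class: (typical max speed m/s, erraticness 0..1)
EXPECTATIONS = {
    "car":        (40.0, 0.2),
    "truck":      (30.0, 0.1),
    "cyclist":    (12.0, 0.5),
    "pedestrian": ( 3.0, 0.7),
}

def assess(track: Track) -> str:
    """Judge a tracked entity against the expectations for its class."""
    if track.kind not in EXPECTATIONS:
        # The hard case I mentioned: a log off a truck, an unfamiliar animal.
        return "unknown entity: give a wide berth"
    max_speed, erraticness = EXPECTATIONS[track.kind]
    if track.speed > max_speed:
        return "off-script: increase caution and buffer distance"
    if erraticness > 0.5 and track.distance < 30.0:
        return "erratic class nearby: slow down pre-emptively"
    return "within expectations: carry on"

print(assess(Track("pedestrian", 2.0, 20.0)))  # erratic class nearby
print(assess(Track("car", 55.0, 80.0)))        # off-script speeder
print(assess(Track("unknown", 5.0, 40.0)))     # the interesting failure mode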

MoonDragon wrote:

I have no doubt that the computer would be MUCH better at this than humans.

I have no doubt that the computer CAN be much better at it; it's just a question of how far the development has come. I would expect this to be one of the more challenging areas of improvement.

Bear in mind, I still think self-driving vehicles are a big net gain even if their detection of not-yet-hazards is not as keen as an astute human's. I just think it's one of the more interesting areas of the problem space, and one of the places where I expect a lot of the future development/improvement of the technology to focus.

Mostly it's just that we've heard how awesome the technology is at performing the routine tasks of driving and at dealing with clear, somewhat predictable hazards. I want to see how it behaves when you throw crazy crap at it, and how well it can be taught to sense conditions that are more likely to become hazards (e.g. the overly fast driver on ice) but that are, at that point, still operating within parameters.

To a certain point, if the lowest level of the calculations is examining the physics of objects around the car and predicting their future behavior, I'd expect a bit of automatic robustness. Robots make different assumptions than humans; I'd expect a properly programmed robot to react much more effectively to a completely unexpected event like a tree falling in front of the car. Provided it can detect the obstacle, of course.
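
To sketch what I mean by "examining the physics": even a dumb constant-velocity extrapolation flags the falling-tree case, provided the sensors see it. Toy numbers throughout:

# Toy physics-level hazard check: extrapolate every tracked object forward
# under constant velocity and see if it enters our lane corridor.

def will_intersect(obj_pos, obj_vel, horizon_s=3.0, step_s=0.1,
                   lane_half_width=2.0, lookahead_m=60.0):
    """obj_pos/obj_vel are (x, y): x along our lane, y lateral offset."""
    x, y = obj_pos
    vx, vy = obj_vel
    t = 0.0
    while t <= horizon_s:
        if 0.0 < x < lookahead_m and abs(y) < lane_half_width:
            return True, t  # object occupies our corridor at time t
        x += vx * step_s
        y += vy * step_s
        t += step_s
    return False, None

# A "tree" tipping in from the roadside: 20 m ahead, 5 m to the right,
# moving laterally toward the lane at 2.5 m/s.
hit, when = will_intersect((20.0, 5.0), (0.0, -2.5))
print(hit, when)  # True at ~1.3 s: time to brake/steer, no panic required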

Gremlin wrote:

To a certain point, if the lowest level of the calculations is examining the physics of objects around the car and predicting their future behavior, I'd expect a bit of automatic robustness. Robots make different assumptions than humans; I'd expect a properly programmed robot to react much more effectively to a completely unexpected event like a tree falling in front of the car. Provided it can detect the obstacle, of course.

Right. But also what I mean is something like, "wow, the wind is blowing HARD and that tree looks like it's about to uproot. I better move over just in case."

There are things that human intuition detects and prepares to avoid before they cross the line into being an active hazard. I'm just curious to know where the current thresholds of recognition are. No doubt it could predict the tree's fall better than a human, but can it recognize that the tree is teetering on the edge of possibly falling? It's those fuzzy areas, where something may or may not become a hazard and which we humans (well, some of us) recognize and start avoiding before they become an issue, that I think pose one of the more fascinating questions for this technology.

*Legion* wrote:
Gremlin wrote:

To a certain point, if the lowest level of the calculations is examining the physics of objects around the car and predicting their future behavior, I'd expect a bit of automatic robustness. Robots make different assumptions than humans; I'd expect a properly programmed robot to react much more effectively to a completely unexpected event like a tree falling in front of the car. Provided it can detect the obstacle, of course.

Right. But also what I mean is something like, "wow, the wind is blowing HARD and that tree looks like it's about to uproot. I better move over just in case."

There are things that human intuition detects and prepares to avoid before they cross the line into being an active hazard. I'm just curious to know where the current thresholds of recognition are. No doubt it could predict the tree's fall better than a human, but can it recognize that the tree is teetering on the edge of possibly falling? It's those fuzzy areas, where something may or may not become a hazard and which we humans (well, some of us) recognize and start avoiding before they become an issue, that I think pose one of the more fascinating questions for this technology.

While you have a point, I think that the math is heavily in favor of the robot in this case, for the following reasons:

(1): Even assuming that your point holds true, the robot is going to be several orders of magnitude better at reacting to the hazard even if it is late in detecting it, for the simple reason that robots don't panic. My instantaneous response to a tree falling onto my side of the road is potentially going to be (a) screaming and (b) swerving into oncoming traffic. Is a perfect response that comes later better than an imperfect one that comes sooner? Often, yes.

(2): The kind of intuitive hazard detection that we humans do is very prone to false positives. OHMYGODTHATTREEISFALLINGONME, oh no, never mind, it was just a gust of wind. But again, if I *think* the tree is falling, I might swerve into oncoming traffic to avoid the non-falling tree. Eliminating that set of false positives might actually be a net gain on the robot's part.

(3): The kind of hazards you're describing are statistical edge cases. It's fairly indisputable at this point that the robot is going to be significantly better at avoiding hazards that make up the vast majority of traffic incidents. If it outperforms the human in 99.9% of hazards, then I'd call that a win.
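
If it helps, here's the back-of-envelope version of points (1) and (2), with every number invented purely to show the shape of the argument:

# Back-of-envelope for points (1) and (2): panicky early swerves vs calm
# late stops. All probabilities and costs below are made up to illustrate
# the shape of the argument, not measured from anywhere.

P_TREE_ACTUALLY_FALLS = 0.02   # how often the scary-looking tree really falls
COST_PANIC_SWERVE     = 30.0   # swerving into oncoming traffic is itself risky
COST_CONTROLLED_BRAKE = 5.0    # a clean, late, computed stop

# Human: swerves on *every* scary tree (real or not), so pays the risky
# evasion cost every time, false positive or not.
human_cost = COST_PANIC_SWERVE

# Robot: waits until the tree is confirmed falling, then brakes cleanly,
# paying a small controlled-braking cost only in the rare real case.
robot_cost = P_TREE_ACTUALLY_FALLS * COST_CONTROLLED_BRAKE

print(f"human expected cost: {human_cost:.2f}")  # 30.00
print(f"robot expected cost: {robot_cost:.2f}")  # 0.10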

Yeah, I think you're giving human drivers too much credit. I'd wager freak accidents like a falling limb make up less than 1% of human accidents. Heck, probably a near-zero fatality rate.

Jonman wrote:

While you have a point, I think that the math is heavily in favor of the robot in this case, for the following reasons:

You're arguing against a point I am not making. I'm interested in how well the technology does at this particular kind of recognition. I am NOT arguing that doing less well at it than a human is an indictment of the technology, or some kind of failing, or that it in any way invalidates all the other aspects of the task where the technology is superior to the human driver.

What I am is a software developer wondering, "how far have they come with the ability to recognize these sorts of fuzzy things?"

Stele wrote:

Yeah, I think you're giving human drivers too much credit. I'd wager freak accidents like a falling limb make up less than 1% of human accidents. Heck, probably a near-zero fatality rate.

I don't just mean freak accidents. I mean the kind of interpretation of things that takes some human context into account.

Example: someone is running along the side of the road. How I adjust my driving differs depending on if this is an adult in full runner's gear, or a child chasing a ball that is bouncing around erratically.

Of course, if the person does end up in the road, the self-driving car will react swiftly, and far better than a human driver would. But does the self-driving car recognize the child as the more likely source of an unexpected hazard, and adjust accordingly before it darts into the road, the way I do? If the technology does not do that now, I would bet money that future development will go into improving this kind of recognition. These suckers are going to get smart. What I'm curious about is how far down that path they've gone now.
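
Since I'm a software developer, here's the shape of what I'm asking about, as a toy sketch (the contexts and risk numbers are entirely made up; I have no idea whether the real stack does anything like this):

# Hypothetical context-aware speed adjustment: same object class
# ("pedestrian"), different risk depending on who it appears to be.

RISK_BY_CONTEXT = {
    "adult_runner_on_shoulder": 0.2,  # predictable, holding their line
    "child_chasing_ball":       0.9,  # liable to dart into the road
    "pedestrian_unknown":       0.5,  # default when context is unclear
}

def target_speed(base_speed_kmh: float, context: str) -> float:
    """Scale speed down as the assessed darting risk goes up."""
    risk = RISK_BY_CONTEXT.get(context, 0.5)
    return base_speed_kmh * (1.0 - 0.6 * risk)

print(target_speed(50, "adult_runner_on_shoulder"))  # ~44 km/h
print(target_speed(50, "child_chasing_ball"))        # ~23 km/h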

*Legion* wrote:

Of course, if the person does end up in the road, the self-driving car will react swiftly, and far better than a human driver would. But does the self-driving car recognize the child as the more likely source of an unexpected hazard,

Around here, the runner is way more likely to be the hazard :p

and adjust accordingly before it darts into the road, the way I do?

...you're a way better driver than me, that's for sure.

I am with the majority of folks who say that the car will almost certainly be a better driver than the human driver in all the scenarios that statistically matter. That said, I think what Legion is really saying is that he is curious to see what the edge limitations of the AI are. I can agree that that is a useful exercise, but it should never be used as some kind of justification for keeping human control over multi-ton death machines a moment longer than absolutely necessary.

Even at the edge cases, the AI will be better than 95% of the population.

Paleocon wrote:

I am with the majority of folks who say that the car will almost certainly be a better driver than the human driver in all the scenarios that statistically matter. That said, I think what Legion is really saying is that he is curious to see what the edge limitations of the AI are. I can agree that that is a useful exercise, but it should never be used as some kind of justification for keeping human control over multi-ton death machines a moment longer than absolutely necessary.

Bruce wrote:

Even at the edge cases, the AI will be better than 95% of the population.

In case it wasn't clear, I am in full agreement with all of this. I am very much on board with self-driving vehicles, and think they far exceed the threshold required to justify putting them on the road. But the day they hit the road isn't the end of their development. Work will continue to make them even safer, even smarter. What I'm interested in is how well the AI holds up in the areas where human cognition would seem to have its biggest advantage.

If it seems like these scenarios are a reach, that's because they are: self-driving cars have pretty much overwhelmingly answered the question for the more common cases.

But there will be a case, somewhere, where a self-driven car is in an accident and someone will say, "a human would have recognized the danger and avoided it ahead of time".

The Oatmeal: 6 Things I learned from riding in a Google Self-driving car

He has some really good points in the article.

The car we rode in did not strike me as dangerous. It struck me as cautious. It drove slowly and deliberately, and I got the impression that it’s more likely to annoy other drivers than to harm them. Google can adjust the level of aggression in the software, and the self-driving prototypes currently tooling around Mountain View are throttled to act like nervous student drivers.
Some of the scenarios autonomous vehicles have the most trouble with are the scenarios human beings have the most trouble with, such as traversing four-way stops or handling a yellow light (do you brake suddenly, or floor it and run the light?). At one point during the trip, we were attempting to make a right turn onto a busy road. Everyone’s attention was directed to the left, waiting for an opening. When the road cleared and it was safe to turn right, the car didn’t budge. I thought this was a bug at first, but when I looked to my right there was a pedestrian standing very close to the curb, giving the awkward body language that he was planning on jaywalking. This was a very human interaction: the car was waiting for a further visual cue from the pedestrian to either stop or go, and the pedestrian waiting for a cue from the car. When the pedestrian didn’t move, the self-driving car gracefully took the lead, merged, and entered the roadway.
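
That standoff reads like a simple cue-and-timeout loop. Something like this toy sketch would produce the behaviour he describes (my guess at the logic, not Google's actual code):

# Toy version of the right-turn standoff: wait for the ambiguous pedestrian
# to commit; if they stay put past a patience window, take the lead.

def right_turn_decision(road_clear, ped_near_curb, ped_moving,
                        patience_ticks=20):
    """Each argument is a callable, sampled once per tick (~0.1 s)."""
    for _ in range(patience_ticks):
        if not road_clear():
            return "hold: waiting for a gap in traffic"
        if ped_near_curb() and ped_moving():
            return "hold: pedestrian is committing to cross"
        if not ped_near_curb():
            return "go: curb is clear"
        # ambiguous body language: keep watching for another visual cue
    return "go: pedestrian hasn't moved, gracefully take the lead"

# The scenario from the article: road clear, pedestrian hovering, not moving.
print(right_turn_decision(lambda: True, lambda: True, lambda: False))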

manta173 wrote:

farley3k wrote:

This is why I can't stand the news... my parents always wonder why I don't pay attention and it is because of this. Literally cannot trust the news to get basic facts correct.

Indeed. I remember when Smart Cars (one of which I currently drive) started coming out in the U.S. There was a big story one day about a man dying in a car accident not far from where I live. The headline, of course, was that he was driving a Smart Car; the details that he was A) drunk and B) not wearing a seat belt were buried well into the article.

I can't wait for self-driving cars myself, though I do have a couple of concerns. One is that, being a gamer, I know how some people will act. A self-driving car is just an AI... how long until people figure out how it works and reacts, and start driving around these cars in ways that force a particular response?

Apologies if that was brought up already. I'm not going to read through all these pages of posts tonight. :p

Nevin73 wrote:

Google can adjust the level of aggression in the software

IMAGE(http://i.imgur.com/MMEPqjJ.jpg)

More likely the basic trim package will never overtake. If you want to get places faster, you're going to have to pay for an upgrade.

Something I've long suspected: self-driving cars that refuse to go above the speed limit will annoy a rather large group of people.

Gremlin wrote:

Something I've long suspected: self-driving cars that refuse to go above the speed limit will annoy a rather large group of people.

Self-driving cars would hopefully allow us to raise or even abolish the speed limit.

*Legion* wrote:

I don't just mean freak accidents. I mean the kind of interpretation of things that takes some human context into account.

Example: someone is running along the side of the road. How I adjust my driving differs depending on if this is an adult in full runner's gear, or a child chasing a ball that is bouncing around erratically.

Of course, if the person does end up in the road, the self-driving car will react swiftly, and far better than a human driver would. But does the self-driving car recognize the child as the more likely source of an unexpected hazard, and adjust accordingly before it darts into the road, the way I do? If the technology does not do that now, I would bet money that future development will go into improving this kind of recognition. These suckers are going to get smart. What I'm curious about is how far down that path they've gone now.

Self-driving cars already track cyclists, pedestrians, road hazards, etc.

I'm not sure your example pans out, though. What you're basically saying is that you pay extra attention in some circumstances and less in others. Self-driving cars don't do that. They pay extra attention *all* the time and, thanks to 360 degrees of sensor coverage, their baseline attention and ability to perceive dangers is going to be far superior to any human's.

The self-driving car would simply deal with a child darting into the street as it would with a car that suddenly stops or pulls out in front of it. Additionally, the self-driving car wouldn't be doing what many humans would, such as traveling above the speed limit in a residential area or school zone.

Demyx wrote:
Gremlin wrote:

Something I've long suspected: self-driving cars that refuse to go above the speed limit will annoy a rather large group of people.

Self-driving cars would hopefully allow us to raise or even abolish the speed limit.

Which will absolutely lead to 'pay more to get there faster' without regulatory oversight. Also, speed limits are often in place to keep non-car road and sidewalk users safe. Stopping distance is stopping distance whether a computer is driving or not.
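
To put rough numbers on that: the braking component really is identical; a computer only trims the reaction-time slice. A quick sketch using standard kinematics (the friction value and reaction times are my assumptions):

# Total stopping distance = reaction distance + braking distance.
# Braking distance v^2 / (2*mu*g) is the same for human and computer;
# only the reaction-time component differs.

MU, G = 0.7, 9.81  # dry asphalt friction estimate, gravity (m/s^2)

def stopping_distance(v_kmh, reaction_s):
    v = v_kmh / 3.6                  # km/h -> m/s
    reaction = v * reaction_s        # distance covered before braking starts
    braking  = v**2 / (2 * MU * G)   # kinematic braking distance
    return reaction + braking

for label, t in [("human (~1.5 s)", 1.5), ("computer (~0.2 s)", 0.2)]:
    print(f"{label}: {stopping_distance(50, t):.1f} m at 50 km/h")
# human: ~34.9 m, computer: ~16.8 m -- same braking, different totals,
# but neither number is zero, which is why limits still matter.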

Self-driving cars are going to be an intellectual property minefield. Should car owners be allowed to customise the software running their car? If not, then the current John Deere tractor licensing/ownership debacle appears to be the first shot across the bows on this sort of issue. If yes, then making the system safe is going to be crazy complicated.