Self-driving car discussion catch-all

Jonman wrote:
ActualDragon wrote:

I think we're saying the same thing in different ways. My takeaway here is that Uber is not responsible enough to be testing on public roads. I know that development will take time and I believe the tech will work eventually, but I don't believe the Silicon Valley philosophy "move fast and break things" has any place on public roads. Yes, testing has to happen on public roads eventually because you can't replicate a lot of traffic in a lab environment, but it has to be done responsibly.

:thumbsupemoji:

While "move fast and break things" is the absolutely wrong philosophy when it comes to safety-critical systems, there's also an element of not making omelettes without breaking eggs.

Even when you move slowly and try to break zero things, you need to learn what you don't know before the product gets out into the wild, and you sometimes learn those lessons when things explode/disintegrate/crash.

To put it another way, it's better this bug is found now, with 1 fatality, than missed out of an overabundance of caution in development, and subsequently found in service, with thousands of those cars out on the road with this vulnerability.

Valid points, but if the eggs you are breaking are pedestrians, your company is too incompetent to be making omelets and shouldn't be doing it. There has got to be some level of error assessment in your project management process that comes ahead of "dead pedestrian," and if not... should we really be doing this development at all?

Paleocon wrote:

I saw an article that said that Uber was looking at automated flying cars.

Great. Just what I need. A company that can't even get wheeled vehicles right dropping a multiton projectile onto my house.

Huh! Probably a diversionary tactic meant to distract from the self-driving fatality.

Flying as a means of personal transportation is a sci-fi fantasy that even our grandchildren aren’t likely to experience. Flying machines are incredibly fuel-inefficient. And even if the fuel-efficiency problem gets solved, there are still more hurdles to clear before flying cars will be viable. For instance, the ability to stop quickly; with today’s tech, that’s not really possible.

But I could be wrong. About 30 years ago, I dreamed about a “walkman” that was slightly larger and thicker than a credit card, that enabled you to listen to any song ever recorded. I didn’t think I would live long enough to experience that kind of tech. And I would have never dreamed that it would also be a phone that you could play video games on.

The bit that stood out for me was this:

"Uber had been racing to meet an end-of-year internal goal of allowing customers in the Phoenix area to ride in Uber’s autonomous Volvo vehicles with no safety driver sitting behind the wheel," Efrati added.

i.e. Someone's bonus was riding on getting this working on an arbitrary timeline, no matter what the actual readiness was. Of course they were cutting corners.

thrawn82 wrote:

Valid points, but if the eggs you are breaking are pedestrians, your company is too incompetent to be making omelets and shouldn't be doing it. There has got to be some level of error assessment in your project management process that comes ahead of "dead pedestrian," and if not... should we really be doing this development at all?

Sorry, but no.

Zero risk is a myth. It doesn't exist. You can employ a million engineers with infinite budget for a century, and still not be certain that no-one will be killed when you start testing your incredibly complex system. Never mind the fact that testing is one of the methods by which you drive that risk down further.

Testing is risky. It's why flight test pilots get paid the big bucks.

I'm not being callous. A good, sane development strategy will absolutely minimize the risks involved, will move slowly and carefully, expand the operational envelope of the system under test incrementally, and when something does go wrong, stop and understand it fully before continuing.

And even when you do that, sometimes you Have A Bad Day, and terrible things happen.

Bottom line, if you're going to insist on zero risk, we'll never have self-driving cars. We'll just keep on keeping on with > 30,000 people a year dying in the US in human driven cars.

Jonman wrote:
thrawn82 wrote:

Valid points, but if the eggs you are breaking are pedestrians, your company is too incompetent to be making omelets and shouldn't be doing it. There has got to be some level of error assessment in your project management process that comes ahead of "dead pedestrian," and if not... should we really be doing this development at all?

Sorry, but no.

Zero risk is a myth. It doesn't exist. You can employ a million engineers with infinite budget for a century, and still not be certain that no-one will be killed when you start testing your incredibly complex system. Never mind the fact that testing is one of the methods by which you drive that risk down further.

Testing is risky. It's why flight test pilots get paid the big bucks.

I'm not being callous. A good, sane development strategy will absolutely minimize the risks involved, will move slowly and carefully, expand the operational envelope of the system under test incrementally, and when something does go wrong, stop and understand it fully before continuing.

And even when you do that, sometimes you Have A Bad Day, and terrible things happen.

Bottom line, if you're going to insist on zero risk, we'll never have self-driving cars. We'll just keep on keeping on with > 30,000 people a year dying in the US in human driven cars.

I'm not pushing for zero risk; sorry if it came off that way. I realize even perfected live systems will result in casualties. I was criticizing Uber for (apparently) jumping directly to testing against live pedestrians without making an attempt to minimize risk.

thrawn82 wrote:

I'm not pushing for zero risk; sorry if it came off that way. I realize even perfected live systems will result in casualties. I was criticizing Uber for (apparently) jumping directly to testing against live pedestrians without making an attempt to minimize risk.

Fair enough.

I'm not here to defend Uber's actions (realistically, I don't know anywhere near enough about how they've been working to have a considered opinion on that), but I will say that I don't envy the corner they've painted themselves into. Accurately assessing the level of risk in a developmental system is technically challenging. Defining an acceptable level of risk to move forward with when there's significant commercial pressure on the schedule is a hard balancing act. And working under intense media scrutiny has got to be unpleasant, knowing that even if you did everything right, it'll be reported like you didn't when anything goes wrong.

Jonman wrote:
thrawn82 wrote:

Valid points, but if the eggs you are breaking are pedestrians, your company is too incompetent to be making omelets and shouldn't be doing it. There has got to be some level of error assessment in your project management process that comes ahead of "dead pedestrian," and if not... should we really be doing this development at all?

Sorry, but no.

Zero risk is a myth. It doesn't exist. You can employ a million engineers with infinite budget for a century, and still not be certain that no-one will be killed when you start testing your incredibly complex system. Never mind the fact that testing is one of the methods by which you drive that risk down further.

Testing is risky. It's why flight test pilots get paid the big bucks.

Testing is indeed risky. Where it gets sticky here is that everyone else on the street didn't sign on to be a part of this.

A concept that's been taking root among traffic engineers recently is called Vision Zero (or, in some places, Towards Zero Deaths). It's based around rethinking the traffic fatality not as an inevitable side effect of cars, but as an engineering/policy/public health problem that can and should be solved. Autonomous vehicles are going to be a huge boon to that effort once they are reliable.

Bruce wrote:

The bit that stood out for me was this:

"Uber had been racing to meet an end-of-year internal goal of allowing customers in the Phoenix area to ride in Uber’s autonomous Volvo vehicles with no safety driver sitting behind the wheel," Efrati added.

i.e. Someone's bonus was riding on getting this working on an arbitrary timeline, no matter what the actual readiness was. Of course they were cutting corners.

This does not match with a Vision Zero ethos. That's my sticking point with the whole thing. It's not inevitable for people to die while these vehicles are being tested. It may still happen - we aren't perfect, and certainly in-development systems won't be - but it makes a difference to approach the engineering task with the idea that fatal mistakes can be prevented.

Basically, you've gotta break some eggs to make an omelette, but the eggs are people, so what we're doing here is more like those egg drop challenges. No eggs have to break, not if we do our engineering well.

+1 to what ActualDragon said.

Though it got me to thinking...

One of the weakest parts of our current system of auto travel is the moron behind the wheel. Removing them from the equation is a net good. Fewer accidents, fewer deaths, yadda, yadda, yadda.

But I started ruminating on the real-far-downstream effects and “freakonomics” kind of related effects. Not just “truckers will be out of work”, but what happens to a society and culture when you remove the aggravation of a daily commute? Decrease in stress-related illnesses? Increase in videogame sales (‘cos everyone now has an hour a day to themselves)? Drop in gun sales because road rage isn’t a thing anymore? Rise in popularity of adrenaline sports to counterbalance the loss of excitement?

Likewise, what’s the impact of removing the barriers to mobility for those unable to drive a car themselves? Are able-bodied folk going to be surprised to see “more” disabled people who otherwise would have found it difficult to get out in public?

Is it going to have an effect on demographics over time? If you negate a primary source of mortality over the space of a generation, the subsequent generations will live longer.

In most collisions (likely a number approaching 100%), the problem is the human. We just can't maintain 100% attention, 100% of the time. The elapsed time from event > notice > decision > reaction > execution is too slow at the speeds we drive. That's not even counting the intentionally reckless.

Even in cases of "mechanical failure," it's more often a lack of maintenance than a manufacturing defect. So, again... the human.
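
To put a rough number on that reaction-time point, here's a back-of-envelope sketch; the 1.5-second perception-plus-reaction figure is a commonly cited ballpark I'm assuming for illustration, not a measurement.

# Distance covered before a human driver even touches the brake.
# The 1.5 s perception-reaction time is an assumed ballpark figure.
MPH_TO_MS = 0.44704
REACTION_TIME = 1.5  # seconds from event to foot on the brake (assumed)

for mph in (30, 45, 70):
    speed_ms = mph * MPH_TO_MS
    travelled = speed_ms * REACTION_TIME
    print(f"{mph} mph: ~{travelled:.0f} m travelled before braking even starts")

A computer doesn't blink, doesn't check its phone, and reacts in milliseconds.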

The problem is "people".

Spoiler:

Did I just convince myself that Skynet is right?

Wink_and_the_Gun wrote:

In most collisions (likely a number approaching 100%), the problem is the human.

Here's a diagram on crash causes from the Highway Safety Manual (though when I see this I always wonder if it's changed since the 1979 study):
IMAGE(https://lh4.googleusercontent.com/-s8rXGT5f4L4/TzglQnUtqaI/AAAAAAAAADQ/iCyefu_Etz0/s320/Factors.JPG)

Removing the human from the equation is going to have a huge impact. And as Jonman mused, many rippling impacts. It's going to be some exciting times.

Jonman wrote:

But I started ruminating on the real-far-downstream effects and “freakonomics” kind of related effects. Not just “truckers will be out of work”, but what happens to a society and culture when you remove the aggravation of a daily commute?

More time/opportunity to eat buckets of chicken during that commute.

Likewise, what’s the impact of removing the barriers to mobility for those unable to drive a car themselves? Are able-bodied folk going to be surprised to see “more” disabled people who otherwise would have found it difficult to get out in public?

Okay, serious answer along those lines: I think nothing will decide the debate over self-driving cars like the Baby Boomers becoming seniors. They're not going to accept less and less transportation freedom like other generations, and while they're not all the economic stereotype, they've got the purchasing power to make it happen.

This is a tricky issue, because on one hand there are a lot of emotions that come into play when human lives are at stake, but on the other hand, we can use statistical data to figure out the impact of driverless car tech on human safety.

It’s my opinion that, when driverless car tech can show robust evidence of being significantly safer than human drivers, then it's time to go live. Of course “significant” is a relative term, and one person’s definition of significant can be much different than another’s.

For me, I feel that 30% is significant. What I mean is, when driverless cars can be proven to be at least 30% safer than human drivers, that’s when I’m OK with driverless cars going mainstream. But I’m talking about at least 30% safer across all conditions and circumstances: best-case scenarios, worst-case scenarios, and everything in between. And there needs to be millions of data points supporting that finding.

I’m not sure we’re there yet. And testing the safety of driverless cars must NOT be done on the public. Which makes testing them really difficult.

RawkGWJ wrote:

And testing the safety of driverless cars must NOT be done on the public. Which makes testing them really difficult.

I suspect that the reality is that a lot of the testing ISN'T being done on public roads. It's done virtually in simulation. The car doesn't even touch a public road until it's driven millions (billions?) of virtual miles.

But in the regulatory environment, you have to demonstrate the system in real-world situations. To wit, how do you show that 30% improvement in safety without directly demonstrating it by driving thousands (millions?) of miles on public roads?

To provide a counterpoint to your notion that safety testing mustn't take place on public roads, I'll mention that if you live in the Seattle area, there are developmental airplanes being tested above your head most days.
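
Back to the "how do you show that 30% improvement" question: here's a crude back-of-envelope sketch. It treats fatalities as rare independent events, uses a human baseline of roughly 12.5 deaths per billion miles (the commonly quoted US figure), and picks a 95% confidence bar purely for illustration; none of these numbers come from any regulator or manufacturer.

# Rough estimate of how many real-world miles it takes to statistically
# demonstrate a 30% lower fatality rate than human drivers.
HUMAN_RATE = 12.5e-9            # deaths per mile (~12.5 per billion miles)
TARGET_RATE = 0.7 * HUMAN_RATE  # what "30% safer" means as a rate
Z = 1.96                        # ~95% confidence

# With k observed deaths, the relative error of the rate estimate is ~1/sqrt(k).
# To bound the estimate below the human rate we need roughly:
#   TARGET_RATE * (1 + Z / sqrt(k)) < HUMAN_RATE  =>  sqrt(k) > Z * 0.7 / 0.3
k_needed = (Z * 0.7 / 0.3) ** 2
miles_needed = k_needed / TARGET_RATE

print(f"fatalities you'd expect to observe: ~{k_needed:.0f}")
print(f"miles of driving needed: ~{miles_needed / 1e9:.1f} billion")

That comes out to a couple of billion miles of real-world driving, which is exactly why so much of the burden has to fall on simulation and incremental public testing rather than on waiting for the fatality statistics to mature.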

Jonman wrote:
RawkGWJ wrote:

And testing the safety of driverless cars must NOT be done on the public. Which makes testing them really difficult.

I suspect that the reality is that a lot of the testing ISN'T being done on public roads. It's done virtually in simulation. The car doesn't even touch a public road until it's driven millions (billions?) of virtual miles.

But in the regulatory environment, you have to demonstrate the system in real-world situations. To wit, how do you show that 30% improvement in safety without directly demonstrating it by driving thousands (millions?) of miles on public roads?

To provide a counterpoint to your notion that safety testing mustn't take place on public roads, I'll mention that if you live in the Seattle area, there are developmental airplanes being tested above your head most days.

I figured that they were testing them in simulators. Hopefully they have some really talented people who are able to build super accurate computer models. I honestly don't know where they stand on AI piloted cars vs human piloted. I've heard claims that the AI is safer than humans, and I've heard the opposite too.

Once we have the factual evidence, then that's where we should look. Maybe that evidence already exists? I'm not an expert on the topic. I do feel that fatalities due to AI driven cars are tolerable, but only if they are significantly lower than human drivers. Which for me is at 30% safer or better.

It's almost exactly like the train switcher thought experiment. Imagine that there is a train that has lost its brakes and can't stop. It's heading straight towards a group of ten people who are standing on the track ahead of it. You are standing next to a switch lever that will put the train on a different track, thus avoiding the deaths of those ten people. But... there is a single person standing on the other track. The train will inevitably kill either ten people or one person. Do you switch the train, thus causing the death of that one person, or do you do nothing and allow the train to kill ten people?

RawkGWJ wrote:
Jonman wrote:
RawkGWJ wrote:

And testing the safety of driverless cars must NOT be done on the public. Which makes testing them really difficult.

I suspect that the reality is that a lot of the testing ISN'T being done on public roads. It's done virtually in simulation. The car doesn't even touch a public road until it's driven millions (billions?) of virtual miles.

But in the regulatory environment, you have to demonstrate the system in real-world situations. To wit, how do you show that 30% improvement in safety without directly demonstrating it by driving thousands (millions?) of miles on public roads?

To provide a counterpoint to your notion that safety testing mustn't take place on public roads, I'll mention that if you live in the Seattle area, there are developmental airplanes being tested above your head most days.

I figured that they were testing them in simulators. Hopefully they have some really talented people who are able to build super accurate computer models. I honestly don't know where they stand on AI piloted cars vs human piloted. I've heard claims that the AI is safer than humans, and I've heard the opposite too.

Once we have the factual evidence, then that's where we should look. Maybe that evidence already exists? I'm not an expert on the topic. I do feel that fatalities due to AI driven cars are tolerable, but only if they are significantly lower than human drivers. Which for me is at 30% safer or better.

It's almost exactly like the train switcher thought experiment. Imagine that there is a train that has lost its brakes and can't stop. It's heading straight towards a group of ten people who are standing on the track ahead of it. You are standing next to a switch lever that will put the train on a different track, thus avoiding the deaths of those ten people. But... there is a single person standing on the other track. The train will inevitably kill either ten people or one person. Do you switch the train, thus causing the death of that one person, or do you do nothing and allow the train to kill ten people?

Does it make me a monster that my response to The Trolley Problem is always to intervene if inaction results in a higher death toll? Even when the problem is posed in the most morbid way possible, with a fat guy and a bridge?

The Trolley Problem is nonsense, because it implies prescience.

It's an interesting thought experiment but has little to no real-world application.

Jonman wrote:

The Trolley Problem is nonsense, because it implies prescience.

It's an interesting thought experiment but has little to no real-world application.

How do people actually behave when faced with trolley problems?

Also, given there are _roughly_ 100 US motor deaths per day, and it's been over 50 days since the first self-driving car fatality, we've seen around 5,000 US deaths due to motorists, one way or another in that time.

What are the odds that one or more of those fatalities was due to someone hiring a human driver who really shouldn't have been allowed to professionally drive (drunk driving history, bad medical condition, history of unsafe driving, etc etc) but who was hired because someone couldn't be bothered to find a better driver, didn't care that much, or because that driver was cheaper and they were cutting corners, or something of that nature?

Self-driving cars are new and therefore get tons of attention, but whoever was killed by someone cutting corners on a human driver is still just as dead as the person killed by the first self driving car.


For the record, I’m extremely pro driverless cars. In fact I think it’s inevitable that they will go mainstream in the next 30 years. Probably sooner than that.

IMAGE(https://imgs.xkcd.com/comics/fatal_crash_rate.png)

Spoiler:

Fixating on this seems unhealthy. But in general, the more likely I think a crash is, the less likely one becomes, which is a strange kind of reverse placebo effect.

Ars Technica: NTSB: Uber’s sensors worked; its software utterly failed in fatal crash: Driver says she was looking at an Uber touchscreen, not a smartphone, before crash.

The National Transportation Safety Board has released its preliminary report on the fatal March crash of an Uber self-driving car in Tempe, Arizona. It paints a damning picture of Uber's self-driving technology.

The report confirms that the sensors on the vehicle worked as expected, spotting pedestrian Elaine Herzberg about six seconds prior to impact, which should have given it enough time to stop given the car's 43mph speed.

The problem was that Uber's software became confused, according to the NTSB. "As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path," the report says.

Things got worse from there.

Any of those things in the road should have led to the car stopping. In fact, the car was fully capable of emergency braking, but Uber disabled it. And on top of that, Uber required the driver to not look at the road:

Dashcam footage of the driver looking down at her lap has prompted a lot of speculation that she was looking at a smartphone. But the driver told the NTSB that she was actually looking down at a touchscreen that was used to monitor the self-driving car software.

"The operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review," the report said.

The article describes more stuff that they did wrong.

This was gross negligence, and people should be going to jail over it.

"There's a reason Uber would tune its system to be less cautious about objects around the car," Efrati added. "It is trying to develop a self-driving car that is comfortable to ride in."

"Uber had been racing to meet an end-of-year internal goal of allowing customers in the Phoenix area to ride in Uber’s autonomous Volvo vehicles with no safety driver sitting behind the wheel," Efrati wrote.

So really, it's not the "software's" fault, but rather the upper management decision makers' fault.
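
To put rough numbers on the "six seconds at 43 mph" detail, here's a quick sanity-check sketch; the deceleration and latency figures below are my own illustrative assumptions, not numbers from the NTSB report.

# How much room did the car have vs. how much it needed to stop?
MPH_TO_MS = 0.44704

speed = 43 * MPH_TO_MS    # ~19.2 m/s
available = speed * 6.0   # distance to the pedestrian at detection, ~115 m
latency = 1.0             # assumed delay before braking begins (s)
decel = 7.0               # assumed emergency braking deceleration (m/s^2)

needed = speed * latency + speed ** 2 / (2 * decel)

print(f"distance available: ~{available:.0f} m")
print(f"distance needed to stop: ~{needed:.0f} m")

Even with a generous allowance for latency, the car needed well under half the distance it had. That's what makes the disabled emergency braking, and the silent hand-off to a human operator, so damning.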

The NTSB report wrote:

According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.

Wait, what? The system is built to require human intervention at literally the worst times possible (and after lulling said human into inaction by, you know, not needing to do anything), and it's built to not alert that human? Who signed off on these as requirements??

Chumpy_McChump wrote:
The NTSB report wrote:

According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.

Wait, what? The system is built to require human intervention at literally the worst times possible (and after lulling said human into inaction by, you know, not needing to do anything), and it's built to not alert that human? Who signed off on these as requirements??

We'll patch it in 1.1.

Chumpy_McChump wrote:

Wait, what? The system is built to require human intervention at literally the worst times possible (and after lulling said human into inaction by, you know, not needing to do anything), and it's built to not alert that human? Who signed off on these as requirements??

Managers from three different departments, I'm sure.

Walmart will chauffeur shoppers in self-driving Waymo cars

Walmart shoppers in Chandler, Arizona will glimpse the future of transportation later this week when the mega-retailer starts chauffeuring customers in Waymo's autonomous cars.
farley3k wrote:

Walmart will chauffeur shoppers in self-driving Waymo cars

Walmart shoppers in Chandler, Arizona will glimpse the future of transportation later this week when the mega-retailer starts chauffeuring customers in Waymo's autonomous cars.

The footage of the people in those cars would be internet gold. Hell, people barely put on clothes now to go to Walmart. Imagine if they don't even need to drive themselves.

https://www.reddit.com/r/Futurology/...

Tesla vehicles have driven well over 1.2 billion miles on Autopilot, and during that time there have been only 3 fatalities. The average is 12.5 deaths per billion miles, so Tesla Autopilot is over 4 times safer than human drivers.
farley3k wrote:

https://www.reddit.com/r/Futurology/...

Tesla vehicles have driven well over 1.2 billion miles on Autopilot, and during that time there have been only 3 fatalities. The average is 12.5 deaths per billion miles, so Tesla Autopilot is over 4 times safer than human drivers.

I'm really skeptical of this comparison - is the 12.5/billion miles statistic also from the extremely limited conditions in which Autopilot operates (highways, good weather, good pavement/markings)?

Don't get me wrong, people are bad at driving and robots will be better, but I don't trust companies with a financial stake in being first to market to make the responsible call.
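
To make that skepticism concrete, here's the arithmetic behind the claim, plus what happens if you compare against a baseline limited to Autopilot-friendly conditions. The 2.5-per-billion figure follows directly from the quote; the highway-only baseline below is a made-up illustrative number, not a real statistic.

# Arithmetic behind the Reddit claim, and why the choice of baseline matters.
autopilot_miles = 1.2e9
autopilot_deaths = 3
overall_human_rate = 12.5  # deaths per billion miles, all roads and conditions

autopilot_rate = autopilot_deaths / (autopilot_miles / 1e9)  # 2.5 per billion
print(f"Autopilot: {autopilot_rate:.1f} deaths per billion miles, "
      f"{overall_human_rate / autopilot_rate:.0f}x better than the overall average")

# Autopilot mostly runs on highways in good weather with clear markings, which
# are already much safer per mile than the overall average. If the human rate
# under those conditions were, say, 4 per billion (hypothetical), the gap shrinks:
highway_human_rate = 4.0
print(f"vs a hypothetical highway-only baseline: "
      f"{highway_human_rate / autopilot_rate:.1f}x better")

Same numerator, very different headline depending on which denominator you think is fair.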