Self-driving car discussion catch-all

The metal of the bike does show up on radar, unlike a person, but I have absolutely no idea whether a bicycle gives a strong enough radar return to be reliably recognized as an obstacle. I've seen videos overlaying the different things a self-driving car is sensing that show it detecting USPS dropboxes, for example, but those are big and flat, unlike the gaps and tubes of a bicycle. Unfortunately I can't find that clip now; it was interesting because it showed the car noticing all of the signs and traffic lights, and paying attention to lights 2-3 intersections down.

I did find this video where they have stripped out all the metadata except for cars, people, and perhaps bikes (no signs, lights, or non-moving obstacles), although at 3:30 it does seem to notice the bikes tied up off to the side. That's almost certainly from vision processing, not radar or lidar.

jbavon wrote:

At the same time the culprit in this road design is clearly poor lighting with too long a distance between lights creating dark spots in between. This is not possible where I live but perhaps there aren't uniform codes for this in the US. It's also unfortunate that the lights were not placed where pedestrians are crossing the road (illegally). The road designers obviously knew where this was as they had made walkable pavement in the median. Seems like an obvious oversight.

There are a lot of guidelines for lighting in the US, but the actual standards vary from place to place, and from what I've seen lighting is usually upgraded gradually alongside other road projects. It's bizarre to have the walkable pavement in the median without a crossing, and the "no pedestrians" sign shows that they know it's a problem. The design of the road here is definitely not friendly to pedestrians, and that's a huge problem in the US.

The car's LiDAR and short-range radar sensors should have been able to identify the pedestrian regardless of the level of light. And something about the size of a pedestrian unexpectedly entering the road should have been an obvious case for designers to consider (see also: children playing, deer crossings). Theoretically, this system should have several redundancies. I'm sure the NTSB will figure out what went wrong, and it will be interesting to see whether it was a failure of detection, classification, or something else.
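To be clear about what I mean by redundancy, here's a toy sketch (purely illustrative, not Uber's actual architecture) where any one of several independent channels reporting an in-path object is enough to trigger braking:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # e.g. "lidar", "radar", "camera" (illustrative channels)
    range_m: float     # distance to the object along the travel path
    confidence: float  # detector confidence in [0, 1]

def should_emergency_brake(detections, stopping_distance_m, min_confidence=0.3):
    """OR-fusion: any single channel reporting an in-path object inside the
    stopping distance is enough to brake. Thresholds are made up for
    illustration; a real system would be far more sophisticated."""
    return any(
        d.range_m <= stopping_distance_m and d.confidence >= min_confidence
        for d in detections
    )

# Example: the lidar sees something the camera missed in the dark.
frame = [
    Detection("camera", range_m=999.0, confidence=0.0),  # nothing visible
    Detection("lidar", range_m=28.0, confidence=0.9),    # object ahead
    Detection("radar", range_m=30.0, confidence=0.4),
]
print(should_emergency_brake(frame, stopping_distance_m=35.0))  # True
```

The point of OR-ing the channels is that darkness blinding the camera shouldn't matter if the lidar or radar still sees the object.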

The safety driver not watching the road is a failure of that system, even if she wouldn't have seen the pedestrian in time. It really underscores the question of whether Level 2-4 vehicles should ever be on public roads - anything short of Level 5 requires a driver that is ready to assist in emergency situations, and this and the Tesla crash underline that humans are not good at that. However, the tech counterpoint is that they can't get the tech to Level 5 without testing in situ.

I agree with krev about not wanting this to turn into fear-mongering over AVs, especially when compared to human drivers. But each case is an important learning moment for everyone building these vehicles, and for better or worse it will have impacts on policy, liability, and yes, public perception of the tech. The Silicon Valley "move fast and break things" approach becomes a little less appropriate when public safety is on the line.

If we take autonomous driving out of the picture, the pedestrian clearly would have been at fault. With level 2 automation, it's up for some debate. I seem to recall that Uber is working on level 4 automation. One of the main arguments for a fully autonomous system is that it can do a better job than a human being.

The video that was released was just low-quality dashcam. It's not used by the driving system, afaik. There are higher-quality cameras adjusted for different exposure levels to handle the "vision", as well as radar and lidar systems. Given that the pedestrian was in motion and had crossed an entire lane within line of sight of the radar and lidar, I see this as a major failure of the autonomous driving system.

ActualDragon wrote:

The safety driver not watching the road is a failure of that system, even if she wouldn't have seen the pedestrian in time. It really underscores the question of whether Level 2-4 vehicles should ever be on public roads - anything short of Level 5 requires a driver that is ready to assist in emergency situations, and this and the Tesla crash underline that humans are not good at that. However, the tech counterpoint is that they can't get the tech to Level 5 without testing in situ.

Slight correction: level 4 is a fully autonomous vehicle--no human required. The limitation on level 4 is driving conditions. It can only operate in a limited set of conditions. But it should be able to detect that its conditions are not correct and stop itself from operating when that happens. Think self-driving subway trains, or inter-terminal buses in airports.
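To make that concrete, here's a toy sketch of the "only operate inside your design domain, otherwise pull over" logic; the condition names are invented for illustration and aren't any vendor's actual ODD definition:

```python
from dataclasses import dataclass

@dataclass
class Conditions:
    # Invented operational-design-domain inputs, not any real vendor's list.
    on_mapped_route: bool
    daylight: bool
    weather_ok: bool

def level4_decision(c: Conditions) -> str:
    """A Level 4 system needs no human fallback, but only inside its design
    domain. Outside it, the vehicle has to bring itself to a safe stop (a
    'minimal risk maneuver') rather than expect a driver to take over."""
    if c.on_mapped_route and c.daylight and c.weather_ok:
        return "drive autonomously"
    return "refuse to engage / pull over and stop"

print(level4_decision(Conditions(True, True, True)))   # drive autonomously
print(level4_decision(Conditions(True, False, True)))  # refuse to engage / pull over and stop
```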

I still claim that the root cause of this problem was licensing Uber to do in-situ testing. That's like licensing Facebook to hold your health records. And then being surprised when it turns out that insurance companies have been mining your data for years and hiking your premiums without you knowing or having any control over it.

Good catch on that distinction - I don't spend a lot of time talking about the 4/5 end of the scale so I glossed over it. Point still stands.

And yes, Uber is just about the last company I want testing on public roads.

ActualDragon wrote:

The car's LiDAR and short-range radar sensors should have been able to identify the pedestrian regardless of the level of light. And something about the size of a pedestrian unexpectedly entering the road should have been an obvious case for designers to consider (see also: children playing, deer crossings). Theoretically, this system should have several redundancies. I'm sure the NTSB will figure out what went wrong, and it will be interesting to see whether it was a failure of detection, classification, or something else.

Lidar Vendor Defends Technology

Assuming no LiDAR malfunction (which should have shut the entire AV system down), I tend to agree that LiDAR should have seen the pedestrian crossing from left to right with a bicycle on a multi-lane road. What the AV system did with the environmental data is now suspect. As an aside, I do not want an AV to be merely as good as a human driver; I want it to be much, much, much better. Regardless of the physics involved, this is not a good example of that.

The plot thickens... This Ars Technica article argues that the posted video is misleading: the stretch of road in question is quite well lit.

--Edit--
Actually, while on Ars, I found this much better article that goes into detail about a lot of things that make me feel very anti-Uber when it comes to AVs (yaay for Confirmation Bias).

Consider the source of the original video: it has to be Uber. How hard is it to just apply a darkening filter to it? Given what we know about Uber, and how weirdly lax Arizona is (they don't even require incident reports), there's real potential that the Uber software is grossly, maybe even intentionally, negligent in order to boost stats, and that big ol' payoffs are being made to Arizona to not pay attention.

polypusher wrote:

how weirdly lax Arizona is (they don't even require incident reports)

Some places, like Arizona and Pittsburgh, have taken this regulatory "hands-off" approach to encourage companies to test there, hoping for economic boosts. There are some good reasons not to overburden the space with regulation (tech moves much faster than governments), but arguably they've left too much leeway. In Arizona's case, it was a calculated move to take business from California, which was starting to crack down on these things.

Also Uber (and Lyft and others) probably has some good lobbyists in those states. I work with some where I am and it's enlightening as to the process.

ActualDragon wrote:

Also Uber (and Lyft and others) probably has some good lobbyists in those states. I work with some where I am and it's enlightening as to the process.

Any stories you can share?

jrralls wrote:
ActualDragon wrote:

Also Uber (and Lyft and others) probably has some good lobbyists in those states. I work with some where I am and it's enlightening as to the process.

Any stories you can share?

Nothing terribly interesting - in my state they're currently more concerned with standardizing regulation for normal operations. No autonomous cars here yet, but they've unsurprisingly cautioned against regulation in the space. It's my job in this group to describe how connected/autonomous vehicles will affect the state in the next twenty to thirty years, so there's some interesting tension there. I believe safety-minded regulation is necessary both to protect the public and encourage faith in the tech.

My position on autonomous vehicles is, and has always been, that the manufacturer is ultimately liable for any incident involving the vehicle. This is not an impossible standard: many manufacturers already want this, because this Uber scenario is exactly what they're afraid will happen: someone gets sloppy, and it leads to a regulatory crackdown.

I regard Uber as absolutely at fault here. They've been reckless before, and this suggests that they're not taking enough precautions.

As to the speed limit, I think this is an area where our current laws are possibly making things worse: perhaps the natural safe speed for a stretch of road is far below the posted limit. If we let the autonomous system find the safe speed it might be better all around--this is one area where working with human drivers gets complicated.

Similarly, I regard all the trolley problem stuff as nonsense: in 99.99% of incidents, the correct response is to slow down or stop--trying to change lanes just makes things worse and is usually a sign that you're going too fast. (e.g. Never swerve to avoid a deer.) If you can't stop, something went wrong earlier and should have been dealt with then.

That's the case here: the car either didn't sense the obstacle in time, didn't recognize the danger, or made the wrong decision. Whichever layer the fault was at, it's important to trace the incident's causality back from the point of failure to the root causes. (As NASA did with Challenger, for example. I'm fond of informally using the Five Whys, but there are many more rigorous approaches.)

An Uber autonomous car killing someone isn't an endpoint, it's a symptom. A system exists that led to this outcome. Now, maybe we'll investigate and determine that this is an outlier, one that is unlikely to recur or that for other reasons we can't fix (such as deciding that it is unacceptable to have autonomous cars slow down enough to avoid injuring pedestrians).

But even in the case of pilot error, it can usually be traced back to larger systemic problems. We shouldn't be treating autonomous vehicles like we treat auto collisions, we should be treating them like airline safety.

One thing that autonomous vehicles need to get better at, and that I'm not seeing very many people working on, is communicating their intentions. Existing signals on the cars are inadequate for this, and in practice they're supplemented by making eye contact or inferring whether the drivers around you have understood your intentions. Autonomous vehicles will likely need additional signaling to make their intentions clear, both to the cars around them and to their own passengers.
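As a rough illustration of what "additional signaling" might look like, here's a hypothetical intent-broadcast message; the fields are made up and don't correspond to any real V2X message set:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class IntentMessage:
    # Hypothetical fields; real V2X message sets are structured very differently.
    vehicle_id: str
    maneuver: str          # e.g. "yielding_to_pedestrian", "lane_change_left"
    time_horizon_s: float  # how far ahead the stated intent applies
    confidence: float      # how committed the planner is to this maneuver

def broadcast(msg: IntentMessage) -> str:
    """Serialize the intent so nearby vehicles (and an in-cabin display for
    passengers) could render it."""
    return json.dumps(asdict(msg))

print(broadcast(IntentMessage("AV-042", "yielding_to_pedestrian", 3.0, 0.95)))
```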

Excellent points.

Gremlin wrote:

As to the speed limit, I think this is an area where our current laws are possibly making things worse: perhaps the natural safe speed for a stretch of road is far below the posted limit. If we let the autonomous system find the safe speed it might be better all around--this is one area where working with human drivers gets complicated.

Posted speed limits are traditionally set near the 85th-percentile running speed for a road. Roads are also built with a factor of safety, so they can be traveled safely at speeds above the posted limit; that's what leads to roads posted at 35 mph that feel like they should be 45: that higher speed is what they were designed for. Also, speeds on suburban roads often don't account for pedestrians and cyclists being able to safely navigate the space. Slower car speeds would go a long way toward improving life for non-car users.
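To make the 85th-percentile rule concrete, here's a toy calculation on a made-up spot-speed study (the numbers are invented):

```python
import statistics

# Hypothetical spot-speed study: observed free-flow speeds in mph.
observed_speeds = [31, 33, 34, 35, 36, 36, 37, 38, 38, 39,
                   40, 40, 41, 42, 43, 44, 45, 46, 48, 52]

# quantiles(n=100) returns the 1st..99th percentile cut points;
# index 84 is the 85th percentile, the traditional basis for posted limits.
p85 = statistics.quantiles(observed_speeds, n=100)[84]
print(f"85th-percentile speed: {p85:.1f} mph")
```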

This points to another issue we have to tackle with autonomous vehicles - everything about the driving infrastructure was designed with human strengths and limitations in mind. However, states don't generally have the funding available to enhance infrastructure for autonomous and connected vehicles, though there's awareness that this will have to be implemented in the near future.

Gremlin wrote:

Similarly, I regard all the trolley problem stuff as nonsense: in 99.99% of incidents, the correct response is to slow down or stop--trying to change lanes just makes things worse and is usually a sign that you're going too fast. (e.g. Never swerve to avoid a deer.) If you can't stop, something went wrong earlier and should have been dealt with then.

I can't underscore this enough: laypeople spend 95% of their time thinking about self-driving cars in terms of stupid edge cases that are completely unimportant.

Here is a great video on self-driving cars done by Google:

This video has some really interesting clips that apply to this crash too, because both Uber and Google apparently use the same LIDAR system from the same company. At 20:00 they show the raw data of how the lidar actually sees cars and pedestrians. The data is waaaaay higher resolution than I imagined.

It's not just stills either, later on they show feeds of crazy situations their cars have seen, and they show a super clear picture.

Yonder wrote:

This video has some really interesting clips that apply to this crash too, because both Uber and Google apparently use the same LIDAR system from the same company

That's because Velodyne owns the 360-degree LIDAR market. The guys that started the subwoofer company were doing DARPA challenges and came up with it. There are other companies promising LIDAR systems because Velodyne has had problems scaling up to meet all this new demand, but it's pretty much all still Velodyne, afaik.

Notably, just Uber. Are any other companies testing in Arizona right now?

If you own stock in Uber, the next thing to watch is their delivery from Volvo. They've bet their timeline on getting those 24,000 cars modified and up and running, but Volvo is fairly safety-conscious and is working on its own autonomous driving technology, so Volvo is likely to be very upset by this.

Uber's Autonomous Test Cars Had Volvo's Safety Systems Disabled

As directly mentioned or at least implied previously in this thread, Arizona seems to be a hotbed for the testing of (semi)autonomous vehicles. This looks like a very understandable reaction by the state's government in an effort to save face in the public forum, as it were.

I would fully expect the testing of (semi)autonomous vehicles to continue in Arizona by other companies, assuming Arizona wants to keep earning whatever rewards (financial or otherwise) come with allowing the testing. Plus, I'm sure there are other states with vast expanses of open area that would be suitable for similar testing.

Former Uber Backup Driver: 'We Saw This Coming'

Without a doubt, Uber and other companies find themselves in a challenging position, said Rajkumar. Autonomous vehicle developers need real-world mileage and data in order to remove humans from the driver’s seat, eventually. Without people behind the wheels of developmental vehicles, the cars would clearly be more dangerous; on the other hand, those operators, being people, will always be distracted. If society wants the promised safety benefits of fully autonomous vehicles, “the only option we have is to work through the transition, slowly and steadily, until the technology is reliable,” he said.
Yet if Uber has appeared impatient in its pursuit of self-driving development, that may be explained by the fact that the world’s largest ride-hailing company, which launched in 2011 and has been recently valued at $72 billion, is still not profitable. Eliminating the considerable overhead represented by the roughly 600,000 drivers it contracts in the U.S. alone may be the company’s best path to profitability: It could help Uber reduce costs enough to stop subsidizing the cheap, on-demand rides that helped build up its enormous market share. Uber’s former CEO, Travis Kalanick, once called the autonomous vehicle project “existential” to the company’s survival. Dara Khosrowshahi, Uber’s current chief executive, has reportedly become similarly convinced. In November, the company placed an order for 24,000 Volvo SUVs to speed up testing and readiness of autonomous cars for its broader ride-hailing market.

There is a difference, in other words, between Uber and many of its major competitors in the self-driving race: Firms like Waymo, GM, and Ford are either carmakers selling private vehicles or software companies betting that this technology will help them secure space in the automated mobility market of the future. Uber, on the other hand, may need self-driving to happen as soon as possible, at scale, to bring in revenue.

Given how much research and regulation there is on truck drivers' hours of service, I'm a little surprised that hasn't been applied to safety drivers. Arguably they should be on duty for even shorter periods, given that they have even less to do. On the other hand, I'm not at all surprised that Uber is pushing the envelope here. It may turn out to be a good thing that it's their name on the first fatality: public opinion of them has soured in the last couple of years, and regardless of the legal fault findings, it's clear that their safety systems failed. It's also a good sign that Arizona has, for the moment, banned only Uber.

These kinds of stories make me think Uber is criminally negligent here.

Uber reportedly reduced the number of sensors on autonomous cars

polypusher wrote:

These kinds of stories make me think Uber is criminally negligent here.

Uber reportedly reduced the number of sensors on autonomous cars

If a full, public investigation ever happens I fully expect that it'll turn up multiple ways they were criminally negligent.

They've bet the entire company on their unrealistic timeline. That works for a tech company that's deploying an app--which was also rushed to market and required engaging in illegal behavior. I am not surprised that a company that had tech specifically aimed to cover up how they skirted regulations may have cut corners in a way that risked causing death.

Looks like Uber was cutting corners:
Report: Software bug led to death in Uber’s self-driving crash: Sensors detected Elaine Herzberg, but software reportedly decided to ignore her.

The fatal crash that killed pedestrian Elaine Herzberg in Tempe, Arizona, in March occurred because of a software bug in Uber's self-driving car technology, The Information's Amir Efrati reported on Monday. According to two anonymous sources who talked to Efrati, Uber's sensors did, in fact, detect Herzberg as she crossed the street with her bicycle. Unfortunately, the software classified her as a "false positive" and decided it didn't need to stop for her.

Distinguishing between real objects and illusory ones is one of the most basic challenges of developing self-driving car software. Software needs to detect objects like cars, pedestrians, and large rocks in its path and stop or swerve to avoid them. However, there may be other objects—like a plastic bag in the road or a trash can on the sidewalk—that a car can safely ignore. Sensor anomalies may also cause software to detect apparent objects where no objects actually exist.

Software designers face a basic tradeoff here. If the software is programmed to be too cautious, the ride will be slow and jerky, as the car constantly slows down for objects that pose no threat to the car or aren't there at all. Tuning the software in the opposite direction will produce a smooth ride most of the time—but at the risk that the software will occasionally ignore a real object. According to Efrati, that's what happened in Tempe in March—and unfortunately the "real object" was a human being.

"There's a reason Uber would tune its system to be less cautious about objects around the car," Efrati wrote. "It is trying to develop a self-driving car that is comfortable to ride in."

False positives are a problem...but false negatives are worse. I don't know how they're using image recognition here -- if they're classifying objects by type that strikes me as the exact wrong way to go about it. A thing in the road that you don't recognize but will interfere with the car is still bad. All kinds of things might end up on the road: imagine a truck spill that covers the road with slime eels, rubber ducks, or an exploding whale. Proper use of LIDAR can probably give a better threat estimate of actual hazards as three-dimensional moving objects, but only if they're measuring the right thing.
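Here's a toy version of that "physics, not classification" idea, assuming tracked objects with positions and velocities already extracted from the LIDAR (all numbers and names invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    # What LIDAR tracking can give you without knowing *what* the object is.
    x_m: float     # longitudinal distance ahead of the car
    y_m: float     # lateral offset from the car's path (0 = dead ahead)
    vy_mps: float  # lateral velocity (negative = moving toward our path)

def is_hazard(obj: TrackedObject, ego_speed_mps: float,
              corridor_half_width_m: float = 1.8) -> bool:
    """Flag anything whose predicted position is inside our corridor by the
    time we reach it, regardless of whether it's a pedestrian, a bike, or a
    plastic bag we misjudged. Err on the side of false positives."""
    if obj.x_m <= 0 or ego_speed_mps <= 0:
        return False
    time_to_reach_s = obj.x_m / ego_speed_mps
    predicted_y = obj.y_m + obj.vy_mps * time_to_reach_s
    return abs(predicted_y) <= corridor_half_width_m

# A pedestrian-sized track crossing from the left, 50 m ahead, car at ~17 m/s (38 mph).
crossing = TrackedObject(x_m=50.0, y_m=5.0, vy_mps=-1.4)
print(is_hazard(crossing, ego_speed_mps=17.0))  # True: it will be in our path
```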

What I want to see for the tech on the road right now is a real-time heatmap of where the car thinks the hazards are, so the driver can anticipate the upcoming decisions that AI is going to try to make.

Uber wasn't going to spring for that, because their survival depends on eliminating drivers before their funding runs out.

Gremlin wrote:

Looks like Uber was cutting corners:
Report: Software bug led to death in Uber’s self-driving crash: Sensors detected Elaine Herzberg, but software reportedly decided to ignore her.

The fatal crash that killed pedestrian Elaine Herzberg in Tempe, Arizona, in March occurred because of a software bug in Uber's self-driving car technology, The Information's Amir Efrati reported on Monday. According to two anonymous sources who talked to Efrati, Uber's sensors did, in fact, detect Herzberg as she crossed the street with her bicycle. Unfortunately, the software classified her as a "false positive" and decided it didn't need to stop for her.

Distinguishing between real objects and illusory ones is one of the most basic challenges of developing self-driving car software. Software needs to detect objects like cars, pedestrians, and large rocks in its path and stop or swerve to avoid them. However, there may be other objects—like a plastic bag in the road or a trash can on the sidewalk—that a car can safely ignore. Sensor anomalies may also cause software to detect apparent objects where no objects actually exist.

Software designers face a basic tradeoff here. If the software is programmed to be too cautious, the ride will be slow and jerky, as the car constantly slows down for objects that pose no threat to the car or aren't there at all. Tuning the software in the opposite direction will produce a smooth ride most of the time—but at the risk that the software will occasionally ignore a real object. According to Efrati, that's what happened in Tempe in March—and unfortunately the "real object" was a human being.

"There's a reason Uber would tune its system to be less cautious about objects around the car," Efrati wrote. "It is trying to develop a self-driving car that is comfortable to ride in."

False positives are a problem...but false negatives are worse. I don't know how they're using image recognition here -- if they're classifying objects by type that strikes me as the exact wrong way to go about it. A thing in the road that you don't recognize but will interfere with the car is still bad. All kinds of things might end up on the road: imagine a truck spill that covers the road with slime eels, rubber ducks, or an exploding whale. Proper use of LIDAR can probably give a better threat estimate of actual hazards as three-dimensional moving objects, but only if they're measuring the right thing.

What I want to see for the tech on the road right now is a real-time heatmap of where the car thinks the hazards are, so the driver can anticipate the upcoming decisions that AI is going to try to make.

Uber wasn't going to spring for that, because their survival depends on eliminating drivers before their funding runs out.

I saw an article that said that Uber was looking at automated flying cars.

Great. Just what I need. A company that can't even get wheeled vehicles right dropping a multiton projectile onto my house.

I would be perfectly OK with a car that can't properly detect spilled eels or exploded whales; that's unfortunate for all three cars affected this decade. Not properly detecting a bicyclist in the road is probably 4 or 5 orders of magnitude more severe.

The phrasing "software bug" makes me really angry here. A bug in software is something in the code itself not working as intended. What seems to be at fault here is a deliberate tuning of the false positive threshold without sufficient testing or backup, prioritizing a smooth ride over safety. Calling it a bug minimizes the decisions being made.

ActualDragon wrote:

The phrasing "software bug" makes me really angry here. A bug in software is something in the code itself not working as intended. What seems to be at fault here is a deliberate tuning of the false positive threshold without sufficient testing or backup, prioritizing a smooth ride over safety. Calling it a bug minimizes the decisions being made.

The phrasing is apt. The requirement is "don't run people over". The software isn't meeting that requirement, presumably because there's a trade-off with the other requirement "provide a smooth ride unimpeded by false positives". Software is not performing per design intent, which is a long way of saying "it's bugged".

What gets me is the assumption that developmental systems should be perfect during development. By all means get angry at the fact that Uber's development is taking place on public roads with an insufficient level of system maturity, that's a perfectly reasonable position. It's the immediate jump to "this technology will never work, self driving cars are awful" that is nonsense.

Yonder wrote:

I would be perfectly OK with a car that can't properly detect spilled eels or exploded whales; that's unfortunate for all three cars affected this decade. Not properly detecting a bicyclist in the road is probably 4 or 5 orders of magnitude more severe.

Oh, obviously. My point was that a robust system should be operating on the level of detecting the physics of the objects around it, not on the level of "is this a person on a bicycle or is this a zebra": either way there's something in the road that would be bad to hit. Worse if it is a person, of course, but the correct course of action is the same in both cases: brake to a stop.
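And "brake to a stop" is just kinematics; the only question is whether the object is beyond your stopping distance. A back-of-the-envelope check (the numbers are illustrative, not from the investigation):

```python
def stopping_distance_m(speed_mps: float, decel_mps2: float = 7.0,
                        reaction_time_s: float = 0.5) -> float:
    """Distance covered during the system's reaction time plus the braking
    distance v^2 / (2a). 7 m/s^2 is roughly a hard stop on dry pavement;
    the 0.5 s latency is a made-up placeholder for perception + actuation."""
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2 * decel_mps2)

speed = 17.0  # ~38 mph
print(f"Stopping distance at {speed} m/s: {stopping_distance_m(speed):.1f} m")
# Roughly 29 m, so something first flagged 50 m out is comfortably avoidable.
```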

My view is probably colored by how much I know about how current vision classification systems work.

Self-driving cars can work, but only if they're measuring the correct things. This leads me to believe that Uber is measuring exactly the wrong things.

Jonman wrote:
ActualDragon wrote:

The phrasing "software bug" makes me really angry here. A bug in software is something in the code itself not working as intended. What seems to be at fault here is a deliberate tuning of the false positive threshold without sufficient testing or backup, prioritizing a smooth ride over safety. Calling it a bug minimizes the decisions being made.

The phrasing is apt. The requirement is "don't run people over". The software isn't meeting that requirement, presumably because there's a trade-off with the other requirement "provide a smooth ride unimpeded by false positives". Software is not performing per design intent, which is a long way of saying "it's bugged".

What gets me is the assumption that developmental systems should be perfect during development. By all means get angry at the fact that Uber's development is taking place on public roads with an insufficient level of system maturity, that's a perfectly reasonable position. It's the immediate jump to "this technology will never work, self driving cars are awful" that is nonsense.

I think we're saying the same thing in different ways. My takeaway here is that Uber is not responsible enough to be testing on public roads. I know that development will take time and I believe the tech will work eventually, but I don't believe the Silicon Valley philosophy "move fast and break things" has any place on public roads. Yes, testing has to happen on public roads eventually because you can't replicate a lot of traffic in a lab environment, but it has to be done responsibly.

Gremlin wrote:

Oh, obviously. My point was that a robust system should be operating on the level of detecting the physics of the objects around it, not on the level of "is this a person on a bicycle or is this a zebra": either way there's something in the road that would be bad to hit. Worse if it is a person, of course, but the correct course of action is the same in both cases: brake to a stop.

and to add to your point, hitting a deer isn't going to be a fun time for the person *in* the self-driving car, either.

ActualDragon wrote:

I think we're saying the same thing in different ways. My takeaway here is that Uber is not responsible enough to be testing on public roads. I know that development will take time and I believe the tech will work eventually, but I don't believe the Silicon Valley philosophy "move fast and break things" has any place on public roads. Yes, testing has to happen on public roads eventually because you can't replicate a lot of traffic in a lab environment, but it has to be done responsibly.

:thumbsupemoji:

While "move fast and break things" is the absolutely wrong philosophy when it comes to safety-critical systems, there's also an element of not making omelettes without breaking eggs.

Even when you move slowly and try to break zero things, you need to learn what you don't know before the product gets out into the wild, and you sometimes learn those lessons when things explode/disintegrate/crash.

To put it another way, it's better that this bug was found now, with one fatality, than missed because of an overabundance of caution during development and only discovered in service, with thousands of these cars on the road carrying the same vulnerability.