Self-driving car discussion catch-all

I feel like the very fact that this is newsworthy should be celebrated as a win for how safe self-driving cars are.

I read the Jalopnik article, and it seems like it is also a case of Tesla being stubborn and not using the better tech that could detect situations like these.

Shadout wrote:

I read the Jalopnik article, and it seems like it is also a case of Tesla being stubborn and not using the better tech that could detect situations like these.

Elon should really ask himself "what if X Æ A-12 was On Board?"

I googled that thinking it was going to be some kind of LIDAR thingie. Wat.

Was anyone killed? How bad were the injuries?

RawkGWJ wrote:

Was anyone killed? How bad were the injuries?

No injuries. In fact, the Tesla's airbags didn't even deploy. Watching the video, it does look like the car braked hard enough just before impact to momentarily lock up the wheels.

Tesla's Autopilot is basically deceptive advertising at this point, since a lot of people have the impression that it's Level 4 (the car genuinely drives itself under defined conditions) when it's barely Level 2 (driver assistance that needs constant supervision).

And as someone who has worked in machine learning, let me say that I'm disturbed by all of the companies who think they can rely on camera-only image recognition to decide when to stop the vehicle. Your captcha-trained bicycle detector has a disturbingly high false negative rate.
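
For anyone who wants the "false negative rate" bit made concrete: it's just the fraction of real bicycles the detector fails to flag. A minimal Python sketch, with entirely made-up labels and predictions:

    # Hypothetical evaluation of a binary "bicycle in frame?" detector.
    # The labels and predictions below are invented for illustration.
    def false_negative_rate(labels, predictions):
        """Fraction of actual positives the detector missed."""
        missed = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 0)
        positives = sum(labels)
        return missed / positives if positives else 0.0

    labels      = [1, 1, 1, 1, 0, 0, 1, 0, 1, 1]  # 1 = bicycle actually present
    predictions = [1, 0, 1, 1, 0, 0, 0, 0, 1, 1]  # what the detector said

    print(f"FNR: {false_negative_rate(labels, predictions):.0%}")  # FNR: 29%

At highway speeds, even a few percent of missed detections means the car regularly fails to brake for real obstacles, which is exactly the worry with camera-only stacks.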

The IIHS estimates that self-driving cars will prevent only about a third of accidents, which is not nothing but isn't utopia.

Gremlin wrote:

And as someone who has worked in machine learning, let me say that I'm disturbed by all of the companies who think that they can use image recognition via camera by itself to figure out when to stop the vehicle.

As someone who also works in machine learning... when you look at adversarial techniques in ML, it blows my mind that you'd trust a neural-network-based computer vision system. Sometimes you can make single-pixel changes to an image and get the wrong result.

Almost certainly 'bad actors' could design printouts that cause decision faults in computer vision AI. And it is not at all clear to me that these ML systems are hardened against bad actors to a sufficient degree. I can already imagine an attack where a graffiti artist paints a series of dots on a freeway bridge that causes self-driving cars to perform an emergency stop.
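
If anyone wants to see how cheap these attacks are, the textbook version is the fast gradient sign method (FGSM) from Goodfellow et al. Here's a toy NumPy sketch against a linear classifier; the weights and the "image" are fabricated, and a real attack would backpropagate through a deep net, but the punchline is the same:

    import numpy as np

    # Toy linear "classifier": score > 0 means "obstacle ahead", score <= 0
    # means "clear road". Everything here is fabricated for illustration.
    rng = np.random.default_rng(0)
    w = rng.normal(size=100)           # classifier weights

    x = rng.normal(size=100)           # a random 100-"pixel" input
    x -= w * (w @ x) / (w @ w)         # remove its component along w...
    x += w * (0.5 / (w @ w))           # ...then set its score to exactly +0.5

    # FGSM step: nudge every pixel by epsilon against the gradient of the
    # score. For a linear model, the gradient of (w @ x) w.r.t. x is just w.
    epsilon = 0.01
    x_adv = x - epsilon * np.sign(w)

    print(f"original score:    {w @ x:+.3f}")      # +0.500 -> "obstacle"
    print(f"adversarial score: {w @ x_adv:+.3f}")  # goes negative -> "clear road"
    print(f"per-pixel change:  {epsilon} (pixels here are ~1.0 in magnitude)")

A per-pixel change of 0.01 on pixels of magnitude ~1 is invisible to a human, and yet the decision flips. That's the property that makes the painted-dots scenario plausible.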

[quote="LouZiffer"]

Baron Of Hell wrote:
Paleocon wrote:
Baron Of Hell wrote:

Self-driving car crashes into a wrecked truck.

Can we avoid Russia Today as a source of anything please?

Why?

Because that supports a propaganda arm of the Russian government.

There's a decent Jalopnik article with video sourced from Taiwan (where this happened).

And yet... three people quoted the video.

...

There's a recent NPR report about RT, and how the YouTube algorithm is pairing Trump advertisements with RT content.

DanB wrote:

Almost certainly 'bad actors' could design printouts that cause decision faults in computer vision AI.

They can do that to people too, by changing street signs, raising posted limits, putting up fake detour signs, or even just throwing a bag of flour at someone's windshield. But folks don't generally do that, because they're not assholes.

And it is not at all clear to me that these ML systems are hardened against bad actors to a sufficient degree. I can already imagine an attack where a graffiti artist paints a series of dots on a freeway bridge that causes self-driving cars to perform an emergency stop.

As opposed to putting up something distracting that makes human drivers crash? People's vision systems can be messed with too, but mostly that doesn't happen, because again, there aren't that many assholes.

Also, highways and most other high-speed zones are often very well covered by cameras, sometimes simply from cars driving by, as well as by humans with their Mark One Eyeballs. It's not easy to do that kind of thing without getting caught.

ML driving systems aren't ready yet, and probably won't be for another decade at least, maybe two. But they're a huge boon to a driver who stays attentive; they can take care of easy stuff like maintaining a following distance, while the driver pays attention to possible issues further out.

Machine driving can be dangerous if people abdicate their responsibility to be drivers, but can help a lot if they stay alert. Having two sets of eyes on the road is better than one, even if one set of eyes isn't very good.

Malor wrote:

Machine driving can be dangerous if people abdicate their responsibility to be drivers, but can help a lot if they stay alert. Having two sets of eyes on the road is better than one, even if one set of eyes isn't very good.

Yeah, I'm very happy about all of the driver-assist features that come standard on a lot of new cars now. I wish people understood the distinction between Level 1 or 2 and Level 4, though, because it's dangerous to get it wrong and think the car is going to protect you when it can't.

Musk is saying that he will have Level 5 self-driving cars this year:

https://www.forbes.com/sites/johnkoe...

And given the recent rise in Tesla's stock, the market seems to believe him.

Musk is an asshole.

Musk is a pedo guy.

Wow, I haven't thought about self-driving cars since their promise went from lived experience to abstract thought experiment. Funny how much can change in a year.

They are going to have to be self-driving and self-cleaning cars from now on.

Musk smells.

jrralls wrote:

Musk is saying that he will have Level 5 self-driving cars this year:

https://www.forbes.com/sites/johnkoe...

And given the recent rise in Tesla's stock, the market seems to believe him.

My guess: not happening. I think it eventually will, but not this year.

Counter-guess: Tesla will ship a working level-5 car, but every trip will have a 1% chance of killing someone.

(The article says it's more like "level 5 in the lab but not in the deployed cars" which is slightly more realistic but a long way from actually shipping a level 5 car.)

Gremlin wrote:

Counter-guess: Tesla will ship a working level-5 car, but every trip will have a 1% chance of killing someone.

(The article says it's more like "level 5 in the lab but not in the deployed cars" which is slightly more realistic but a long way from actually shipping a level 5 car.)

Still better odds than human drivers.

The car will be capable of Level 5, but the software will not be up to snuff. He will try to get it to pass safety tests the same way Volkswagen and others were able to pass emissions tests.

Stengah wrote:

The car will be capable of Level 5, but the software will not be up to snuff. He will try to get it to pass safety tests the same way Volkswagen and others were able to pass emissions tests.

There's a political predictions thread for stuff like this.