Ways to deter motorists?

Edwardoka

Shambling ruin of a man
I'm with Drago on this one. As for the trolley problem, it's a red herring: if the vehicle is never put into a scenario where there is an unavoidable collision, it's a non-problem.

Why would it not be in that position? It's not human. It has multiple inputs - visual, IR, etc. It can see further. It can use prediction modelling to work out what is going on. It has reaction times vastly faster than a human. It can talk to other vehicles which are also autonomous. The more cars you have talking to each other, the more information the system can have about danger vectors. The development goal is for the car to take action *before* such a scenario ever arises.

The trolley problem is a binary choice. Autonomous AIs will never face a binary choice, and all AI development is around the car learning to read the world around it using its enhanced senses. That's why it is still some time away. Tesla have the biggest archive of input data in the world from their vehicles. Elon Musk has stated that Tesla will have Level 5 capable vehicles by the end of next year. That's probably over-optimistic, but within five years? I wouldn't bet against it - and once Tesla has Level 5 autonomous vehicles, expect networks of quick-hire self-driving Tesla taxis to become commonplace.
Quite a lot to unpick here so I've warmed up the waffle iron.

Your central argument seems to rest on the theory that everyone on the same road as AVs is a model citizen and rational actor and that the system is infallible. But the road network is not a closed loop, not everyone using it is a rational actor, and unless you make wearing beacons mandatory for every person and animal there will still be unexpected elements that the system has to deal with on the fly.

I can think of several real-world no-win situations that an AV cannot prevent but where the AV can alter the outcome by making a decision.

Here's just one, I can provide more if necessary.

You don't have to go far back to see a massive pileup on a motorway caused by people driving blindly into fog. Yes, an AV can see the obstructions through the fog and will react accordingly, but:
- what about the non-AV car behind that has no way of receiving data from the AV's beacon and fails to slow down? The AV will certainly be aware of it
- what evasive manoeuvres will it take? If it's only going to be a minor collision, probably none
- what if, in this scenario, it's not a car but a truck behind the AV that fails to slow down, and the AV slowing down to avoid the crash ahead would cause the death of the AV's occupants?
- what if the AV detects someone standing on the shoulder that it could otherwise use as an escape route?

These are not hypothetical scenarios but real-world ones that AVs need to solve via computation and analysis, rather than humans making split-second decisions with imperfect information and relatively terrible reaction times.

Also, any machine learning trained with real-world data will pick up the unconscious biases of the people who provide the training data. There's a lot of research into this; in particular, look up computer vision and racism.
Do you really want life-or-death decisions to be made by an AI trained by a data set comprised mainly of those who drive Teslas? ;)

If I'm going to be killed on the road, I'd really rather it not be because a techbro with terrible ethics used a training methodology that taught an AI that, in the event of a no-win situation, I was expendable.
 

mudsticks

Über Member
Quite a lot to unpick here so I've warmed up the waffle iron.

Your central argument seems to rest on the theory that everyone on the same road as AVs is a model citizen and rational actor and that the system is infallible. But the road network is not a closed loop, not everyone using it is a rational actor, and unless you make wearing beacons mandatory for every person and animal there will still be unexpected elements that the system has to deal with on the fly.

I can think of several real-world no-win situations that an AV cannot prevent but where the AV can alter the outcome by making a decision.

Here's just one, I can provide more if necessary.

You don't have to go far back to see a massive pileup on a motorway caused by people driving blindly into fog. Yes, an AV can see the obstructions through the fog and will react accordingly, but:
- what about the non-AV car behind that has no way of receiving data from the AV's beacon and fails to slow down? The AV will certainly be aware of it
- what evasive manoeuvres will it take? If it's only going to be a minor collision, probably none
- what if, in this scenario, it's not a car but a truck behind the AV that fails to slow down, and the AV slowing down to avoid the crash ahead would cause the death of the AV's occupants?
- what if the AV detects someone standing on the shoulder that it could otherwise use as an escape route?

These are not hypothetical scenarios but real-world ones that AVs need to solve via computation and analysis, rather than humans making split-second decisions with imperfect information and relatively terrible reaction times.

Also, any machine learning trained with real-world data will pick up the unconscious biases of the people who provide the training data. There's a lot of research into this; in particular, look up computer vision and racism.
Do you really want life-or-death decisions to be made by an AI trained by a data set comprised mainly of those who drive Teslas? ;)

If I'm going to be killed on the road, I'd really rather it not be because a techbro with terrible ethics used a training methodology that taught an AI that, in the event of a no-win situation, I was expendable.
I can totally see this is a tangled web of ethics that's going to need a lot of sorting out.

But I'd have thought before anything it's the lorries and other large vehicles that need the strictest controls first.

Lorries cause a disproportionately high number of fatalities compared with their numbers on the roads.

Not surprising given their size, and how the drivers are under pressure to meet their schedules while probably bored with the endless driving.

We should preferably have a lot more freight on rail.

And maybe (whisper it) we just don't need so much stuff shifting about anyhow.

 

GM

Legendary Member
I rather like the idea someone posted a few years ago, can't remember who it was. Brilliant idea: do away with the driver's seat belt, keep the passengers' belts and stick a dirty great big spike in the centre of the steering wheel. That should do the trick!
 

mudsticks

Über Member
Or we do, but with longer timeframes, so it can all be bundled together by a more ecological method: trains, canals, something new?
But if we don't move stuff about then how will everyone get the latest smart phone to replace their perfectly serviceable existing model?
We definitely don't need so much stuff.

I'm a very light shopper by most folks standards..
And I've still got way too much cr@p kicking about the place.

I always thought that after about the age of forty the best 'present' would be for someone to come round your house and relieve you of at least five unnecessary items..

My smartphone has just passed its third birthday!!:dance:

And despite having been taken on lots of hiking and biking and tractor trips..
It's still in one piece..

Cue - catastrophic accident this avo :whistle:
 

icowden

Senior Member
Location
Surrey
Quite a lot to unpick here so I've warmed up the waffle iron.

You don't have to go far back to see a massive pileup on a motorway caused by people driving blindly into fog. Yes, an AV can see the obstructions through the fog and will react accordingly, but:
- what about the non-AV car behind that has no way of receiving data from the AV's beacon and fails to slow down? The AV will certainly be aware of it
- what evasive manoeuvres will it take? If it's only going to be a minor collision, probably none
- what if, in this scenario, it's not a car but a truck behind the AV that fails to slow down, and the AV slowing down to avoid the crash ahead would cause the death of the AV's occupants?
- what if the AV detects someone standing on the shoulder that it could otherwise use as an escape route?

If I'm going to be killed on the road, I'd really rather it not be because a techbro with terrible ethics used a training methodology that taught an AI that, in the event of a no-win situation, I was expendable.
All of those scenarios are possible, but are they likely? How likely is it that a driverless car needs to make that decision, and how likely is it that a human driver could make a better one? You have already demonstrated that the human driver has no time to make a decision, due to reaction times. The AI that has time to think about it could make a better-informed decision.

But in essence, you seem to be saying it is much better for 40,000 people to be killed by human drivers in a year than for two people to be killed as the result of a driverless car making a bad decision (stats based on the USA, 2018)?
 

winjim

Iron pony
Tough crowd. I quite like the idea of generalized motorist deterrents.
Yes, but you can't fit a farking massive tax hike in a bottle cage.
 

Phaeton

Guru
Location
Oop North (ish)
But in essence, you seem to be saying it is much better for 40,000 people to be killed by human drivers in a year than for two people to be killed as the result of a driverless car making a bad decision (stats based on the USA, 2018)?
Wow, that's a nice out-of-context quote. How many miles did the non-autonomous cars travel compared with the autonomous cars in that year?
 

icowden

Senior Member
Location
Surrey
Wow, that's a nice out-of-context quote. How many miles did the non-autonomous cars travel compared with the autonomous cars in that year?
Hi @Phaeton, there's a good article here which explains why that's quite difficult to calculate:

https://medium.com/@mc2maven/a-closer-inspection-of-teslas-autopilot-safety-statistics-533eebe0869d

Tesla themselves report 1 death per 325 million miles traveled for a "driverless equipped vehicle" vs 1 per 86 million miles for ordinary vehicles. The article above goes into some detail as to why the comparison is not really valid, but still estimates that Autopilot has a 35% lower crash rate than a human driver. It is still a fact, though, that every death due to a driverless car has been reported internationally, and can be listed on one very short Wikipedia page.
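For what it's worth, here's a quick back-of-the-envelope check of those quoted figures. It computes only the naive ratio; as the linked article explains, the comparison is confounded (Autopilot miles are mostly motorway miles driven in newer cars), so treat this as illustrating the arithmetic, not as a fair like-for-like rate.

```python
# Naive comparison of the fatality rates quoted above.
autopilot_miles_per_death = 325e6   # Tesla's figure for Autopilot-equipped vehicles
ordinary_miles_per_death = 86e6     # Tesla's figure for ordinary vehicles

# Convert to deaths per 100 million miles, the usual reporting unit.
ap_rate = 1e8 / autopilot_miles_per_death    # ~0.31 deaths per 100M miles
ordinary_rate = 1e8 / ordinary_miles_per_death  # ~1.16 deaths per 100M miles

# Naive ratio: how many times "safer" the headline numbers suggest.
naive_ratio = ordinary_rate / ap_rate        # ~3.8x

print(f"Autopilot: {ap_rate:.2f} deaths per 100M miles")
print(f"Ordinary:  {ordinary_rate:.2f} deaths per 100M miles")
print(f"Naive ratio: {naive_ratio:.1f}x")
```

Note the gap between this naive 3.8x and the article's adjusted estimate of a 35% lower crash rate - most of the headline difference disappears once you control for road type and vehicle age.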

I have found no statistical evidence to back up the assertion that humans with two eyes, two ears and poor reaction times are better at driving than an AI which has many sensors and far superior reaction times. AI doesn't get drunk, or angry, or upset. It follows the rules it is given. The biggest risk is hacking.
 

YukonBoy

The Monch
Location
Inside my skull
I have found no statistical evidence to back up the assertion that humans with two eyes, two ears and poor reaction times are better at driving than an AI which has many sensors and far superior reaction times. AI doesn't get drunk, or angry, or upset. It follows the rules it is given. The biggest risk is hacking.
An AI doesn't follow rules it is given, else it isn't an AI. Who sets the rules, what are they, and are they infallible? We can't judge how well a given AI will do in a given situation if its decision-making is opaque. It's no good being fast at something if you keep coming up with the wrong answer.
 

classic33

Legendary Member
Hi @Phaeton, there's a good article here which explains why that's quite difficult to calculate:

https://medium.com/@mc2maven/a-closer-inspection-of-teslas-autopilot-safety-statistics-533eebe0869d

Tesla themselves report 1 death per 325 million miles traveled for a "driverless equipped vehicle" vs 1 per 86 million miles for ordinary vehicles. The article above goes into some detail as to why the comparison is not really valid, but still estimates that Autopilot has a 35% lower crash rate than a human driver. It is still a fact, though, that every death due to a driverless car has been reported internationally, and can be listed on one very short Wikipedia page.

I have found no statistical evidence to back up the assertion that humans with two eyes, two ears and poor reaction times are better at driving than an AI which has many sensors and far superior reaction times. AI doesn't get drunk, or angry, or upset. It follows the rules it is given. The biggest risk is hacking.
How many of those miles were on roads where pedestrians could reasonably be expected to be in conflict with cars, though?

Even the US has limits on where a driverless car (driver still required if on the road) can legally be used.
 