

How does it decide the car is uncontrolled? That’s a failure scenario, too.
I’m not even sure what you’re arguing. I said from the get-go that there are niche cases where AI is nothing but positive. You seem to be arguing that there are a bunch more cases. Fine. Maybe the niche is slightly less thin and narrow than I think. Cool.



My scenario was a safety device that prevented cars from hitting pedestrians. You’re stuck on autonomous self-control in the event of loss of human control, and it seems like you’re interpreting what I’m saying in that context, which I wasn’t. I presented one scenario where it’s a good idea and one where it isn’t. Neither has anything to do with your autonomous control scenario.
But let’s see. Say you’ve got a drone that can fly itself for a few seconds or minutes after losing signal, simply loitering until control resumes, or continuing on its flight path until it’s out of jamming range. The alternative is an uncontrolled crash, so any chance of avoiding that is nothing but upside, whether the success rate is 10% or 90%. It’s a good example of the type of scenario I was describing with the smart mine.
I wasn’t trying to address your scenario because it already falls into the niche I was describing. I was trying to demonstrate how to separate scenarios where AI is good, where the consequences of failure don’t outweigh the benefits when it gets it right, from ones where that tradeoff is unacceptable.
So I think we were talking past each other, and if my communication was unclear then I apologize. In my defense, it’s 2AM here.