In the piece, titled “Can You Fool a Self Driving Car?”, Rober found that a Tesla on Autopilot was fooled by a Wile E. Coyote-style wall painted to look like the road ahead, with the electric vehicle plowing right through it instead of stopping.

The footage was damning enough, with slow-motion clips showing the car crashing not only through the styrofoam wall but also through a mannequin of a child. The Tesla was also fooled by simulated rain and fog.

  • FuglyDuck@lemmy.world · 24 hours ago

    So, as others have said, it takes time to brake. But also, generally speaking, autonomous cars are programmed to dump control back to the human when they hit a situation they can't find an ‘appropriate’ response to.

    What’s happening here is the ‘oh shit, there’s no action that can stop the crash’ moment, because braking takes time (hell, even coming to that decision takes time, and activating the whoseitwhatsits that actually apply the brakes takes time). The normal thinking is: if there’s something the car can’t figure out on its own, it’s best to let the human take over. It’s supposed to make that decision well before the last moment, though.
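    For a rough illustration, here's a minimal Python sketch of that time-budget logic. This is not Tesla's actual code; the function names and the latency and deceleration figures are made-up assumptions, but it shows how "can we still stop?" collapses into "hand it back to the human" once the budget runs out.

        # Minimal sketch of the handover logic described above.
        # NOT Tesla's code; all figures below are illustrative assumptions.

        def stopping_distance_m(speed_mps, decision_latency_s=0.1,
                                actuation_latency_s=0.2, decel_mps2=8.0):
            """Distance covered while deciding, actuating, then braking hard."""
            latency = decision_latency_s + actuation_latency_s
            # Full speed during the latency window, then uniform
            # deceleration: v^2 / (2a).
            return speed_mps * latency + speed_mps ** 2 / (2 * decel_mps2)

        def plan_response(speed_mps, obstacle_distance_m):
            if stopping_distance_m(speed_mps) <= obstacle_distance_m:
                return "BRAKE"
            # No action left that prevents the crash: hand control back
            # to the driver (ideally well before this point).
            return "HANDOVER_TO_DRIVER"

        print(plan_response(30.0, 80.0))  # BRAKE (~65 m needed at 30 m/s)
        print(plan_response(30.0, 40.0))  # HANDOVER_TO_DRIVER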

    However, as for why Tesla hands control back when there’s not enough time left to actually take it?

    It’s because liability is a bitch. Given how many Teslas are on the road, even a single ruling of “yup, it was Tesla’s fault” is going to start creating precedent, and that gets very expensive, very fast, especially for something that can’t really be fixed.

    For some technical perspective, I pulled up the frame rates on the camera system (I’m not seeing a frame rate for the cabin camera specifically, but it seems to be either 36 fps in older models or 24 fps in newer ones).

    14 frames @ 24 fps is about 0.6 seconds; @ 36 fps, it’s about 0.4 seconds. For comparison, the average human reaction time to just see a change and click a mouse is about 0.3 seconds. If you add in needing to assess the situation, that’s going to take significantly more time.
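    To double-check that arithmetic in Python (the 14-frame count and the ~0.3 s reaction figure are the ones above; the snippet itself is just division):

        # Time represented by 14 frames at each reported frame rate,
        # next to a ~0.3 s see-and-click human reaction time.
        FRAMES = 14
        for fps in (24, 36):
            print(f"{FRAMES} frames @ {fps} fps = {FRAMES / fps:.2f} s")
        # 14 frames @ 24 fps = 0.58 s
        # 14 frames @ 36 fps = 0.39 s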