• vacuumflower@lemmy.sdf.org
    1 day ago

    Well, from this description it’s still usable for problems too complex to just brute-force with Monte Carlo, provided the results can be verified. It may even be efficient. But that seems like a narrow niche.

    BTW, even ethical automated combat drones. I know one word there seems out of place, but if we have an “AI” for target/trajectory/action suggestion, and something more complex/expensive for verification, ultimately with a human in charge, then it’s possible both to increase the efficiency of combat machines and not to increase the chances of civilian casualties and friendly fire (when somebody is at least trying not to cause those).

    • pinball_wizard@lemmy.zip
      15 hours ago

      it’s possible both to increase the efficiency of combat machines and not to increase the chances of civilian casualties and friendly fire (when somebody is at least trying not to cause those).

      But how does this work help next quarter’s profits?

      • vacuumflower@lemmy.sdf.org
        12 hours ago

        If each unplanned death that wasn’t the result of an operator’s mistake led to confiscation of one month’s profit (not margin), then I’d think it would help very much.