• RunawayFixer@lemmy.world
    15 days ago

    You can’t create an automated machine, let it run loose without supervision and then claim to not be responsible for what the machine does.

    Maybe, just maybe, this was the very first instance of their AI malfunctioning (which I don’t believe for a second). In that case, the correct response from Brandshield would have been to announce that they would temporarily suspend the activities of this particular program and promise to implement improvements so that it would not happen again. Brandshield has done neither, which tells me that it’s not the first time and that Brandshield has no intention of preventing it from happening again in the future.

    • JackbyDev@programming.dev
      15 days ago

      I’m not trying to exonerate them of any blame, I’m just saying “knowingly” implies a human looking at something and making a decision as opposed to a machine making a mistake.

      • RunawayFixer@lemmy.world
        15 days ago

        I made an automaton. I set its parameters so that there is a large variability in the actions it can take; my parameters do not preclude it from taking certain illegal actions. I set my automaton loose. After some time, it turns out that my automaton has taken an illegal action against a specific person.

        Did I know that my automaton was going to commit an illegal action against that specific person? No, I did not. Did I know that my automaton was sooner or later going to commit certain illegal actions? Yes, I did, because those actions fall within its parameters. I know my automaton is capable of illegal actions, and given enough instances there is an absolute certainty that it will take them. I do not need to interact with my automaton in any way to know that some of its actions will be illegal.

          • RunawayFixer@lemmy.world
            15 days ago

            And I’m not saying that you are. I tried to show with a parable that they do not need to see their machine’s actions to know that some of its actions are illegal. That’s what we were disagreeing on: whether they know.