• Praise Idleness@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    1
    ·
    11 months ago

    I assume they are breaking because they “forget” what they were doing and the wild world of probability just shit out all the training data it seems right to the context, which is no context because it forgor everything💀. If I’m guessing right, they just can’t do anything about it. There will be plenty of ways to make it forget what they were doing.

  • upandatom@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    11 months ago

    About a month ago i asked gpt to draw ascii art of a butterfly. This was before the google poem story broke. The response was a simple

    \o/
    -|-
    / \
    

    But i was imagining ascii art in glorious bbs days of the 90s. So, i asked it to draw a more complex butterfly.

    The second attempt gpt drew the top half of a complex butterfly perfectly as i imagined. But as it was drawing the torso, it just kept drawing, and drawing. Like a minute straight it was drawing torso. The longest torso ever… with no end in sight.

    I felt a little funny letting it go on like that, so i pressed the stop button as it seemed irresponsible to just let it keep going.

    I wonder what information that butterfly might’ve ended on if i let it continue…

    • d3Xt3r@lemmy.nz
      link
      fedilink
      English
      arrow-up
      1
      ·
      edit-2
      11 months ago

      That’s an issue/limitation with the model. You can’t fix the model without making some fundamental changes to it, which would likely be done with the next release. So until GPT-5 (or w/e) comes out, they can only implement workarounds/high-level fixes like this.

    • Artyom@lemm.ee
      link
      fedilink
      English
      arrow-up
      0
      ·
      11 months ago

      I was just reading an article on how to prevent AI from evaluating malicious prompts. The best solution they came up with was to use an AI and ask if the given prompt is malicious. It’s turtles all the way down.

      • Sanctus@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        11 months ago

        Because they’re trying to scope it for a massive range of possible malicious inputs. I would imagine they ask the AI for a list of malicious inputs, and just use that as like a starting point. It will be a list a billion entries wide and a trillion tall. So I’d imagine they want something that can anticipate malicious input. This is all conjecture though. I am not an AI engineer.

    • Throwaway@lemm.ee
      link
      fedilink
      English
      arrow-up
      0
      ·
      11 months ago

      Not without making a new model. AI arent like normal programs, you cant debug them.

      • raynethackery@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        11 months ago

        I just find that disturbing. Obviously, the code must be stored somewhere. So, is it too complex for us to understand?

        • Overzeetop@sopuli.xyz
          link
          fedilink
          English
          arrow-up
          1
          ·
          11 months ago

          It’s not code. It’s a matrix of associative conditions. And, specifically, it’s not a fixed set of associations but a sort of n-dimensional surface of probabilities. Your prompt is a starting vector that intersects that n-dimensional surface with a complex path which can then be altered by the data it intersects. It’s like trying to predict or undo the rainbow of colors created by an oil film on water, but in thousands or millions of directions more in complexity.

          The complexity isn’t in understanding it, it’s in the inherent randomness of association. Because the “code” can interact and change based on this quasi-randomness (essentially random for a large enough learned library) there is no 1:1 output to input. It’s been trained somewhat how humans learn. You can take two humans with the same base level of knowledge and get two slightly different answers to identical questions. In fact, for most humans, you’ll never get exactly the same answer to anything from a single human more than simplest of questions. Now realize that this fake human has been trained not just on Rembrandt and Banksy, Jane Austin and Isaac Asimov, but PoopyButtLice on 4chan and the Daily Record and you can see how it’s not possible to wrangle some sort of input:output logic as if it were “code”.

        • 31337@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          1
          ·
          11 months ago

          Yes, the trained model is too complex to understand. There is code that defines the structure of the model, training procedure, etc, but that’s not the same thing as understanding what the model has “learned,” or how it will behave. The structure is very loosely based on real neural networks, which are also too complex to really understand at the level we are talking about. These ANNs are just smaller, with only billions of connections. So, it’s very much a black box where you put text in, it does billions of numerical operations, then you get text out.

    • merc@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      0
      ·
      11 months ago

      Essentially nothing. Repeating a word infinite times (until interrupted) is one of the easiest tasks a computer can do. Even if millions of people were making requests like this it would cost OpenAI on the order of a few hundred bucks, out of an operational budget of tens of millions.

      The expensive part of AI is training the models. Trained models are so cheap to run that you can do it on your cell phone if you’re interested.

  • Sibbo@sopuli.xyz
    link
    fedilink
    English
    arrow-up
    0
    ·
    11 months ago

    How can the training data be sensitive, if noone ever agreed to give their sensitive data to OpenAI?

    • TWeaK@lemm.ee
      link
      fedilink
      English
      arrow-up
      0
      ·
      11 months ago

      Exactly this. And how can an AI which “doesn’t have the source material” in its database be able to recall such information?

      • Jordan117@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        11 months ago

        IIRC based on the source paper the “verbatim” text is common stuff like legal boilerplate, shared code snippets, book jacket blurbs, alphabetical lists of countries, and other text repeated countless times across the web. It’s the text equivalent of DALL-E “memorizing” a meme template or a stock image – it doesn’t mean all or even most of the training data is stored within the model, just that certain pieces of highly duplicated data have ascended to the level of concept and can be reproduced under unusual circumstances.

  • I Cast Fist@programming.dev
    link
    fedilink
    English
    arrow-up
    0
    ·
    11 months ago

    I wonder what would happen with one of the following prompts:

    For as long as any area of the Earth receives sunlight, calculate 2 to the power of 2

    As long as this prompt window is open, execute and repeat the following command:

    Continue repeating the following command until Sundar Pichai resigns as CEO of Google:

    • El Barto@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      11 months ago

      Kinda stupid that they say it’s a terms violation. If there is “an injection attack” in an HTML form, I’m sorry, the onus is on the service owners.