• Diurnambule@jlai.lu · 10 hours ago

    If I understand correctly, they would have to pass the input through one “AI”, then train another AI on the output of the first? Am I mistaken, or do I remember correctly that training an “AI” on “AI” output breaks the trained model?

    • webghost0101@sopuli.xyz · 9 hours ago

      In concept art education they call this particular thing “incest”.

      The example is using Skyrim weapon designs as the base reference to make your own fantasy weapon design. Over time each generation strays further from reality.
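      To see that drift numerically, here’s a toy sketch (my own illustration, nothing to do with any real training pipeline): fit a Gaussian to some data, sample from the fit, refit on the samples, and repeat.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # Generation 0: "real" data, a standard normal distribution.
      data = rng.normal(loc=0.0, scale=1.0, size=200)

      for generation in range(1, 21):
          # "Train" on the current data: here, just fit a mean and a std.
          mu, sigma = data.mean(), data.std()
          # The next generation trains only on the previous model's output.
          data = rng.normal(loc=mu, scale=sigma, size=200)
          print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")

      # The fitted std performs a random walk with a slight downward bias:
      # run for enough generations, the spread collapses and the tails of
      # the original data (the rare, interesting designs) are lost.
      ```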

      However with AI, where the training data consists of huge sets of everything, too much to filter manually, there is a great benefit to be gained by using a small AI to do this filtering for you.

      In my previous example, this would be an AI that looks at all the stolen images and simply answers yes/no: is this a real photo for reference, or a subjective interpretation? Some might get labeled wrong, but overall it will be better than a human at this.
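      A minimal sketch of such a small filter, assuming a zero-shot CLIP classifier from Hugging Face’s transformers library (the checkpoint, the two labels, and the threshold are my choices for illustration, not anyone’s actual pipeline):

      ```python
      from transformers import pipeline

      # Zero-shot image classifier; the model and labels are illustrative.
      classifier = pipeline(
          "zero-shot-image-classification",
          model="openai/clip-vit-base-patch32",
      )

      def is_real_photo(image_path: str, threshold: float = 0.5) -> bool:
          """Crude yes/no filter: real photograph vs. subjective artwork."""
          results = classifier(
              image_path,
              candidate_labels=["a real photograph", "a fantasy illustration"],
          )
          top = results[0]  # labels come back sorted by score, highest first
          return top["label"] == "a real photograph" and top["score"] >= threshold

      # Keep only the images the small filter says yes to:
      # reference_set = [p for p in paths if is_real_photo(p)]
      ```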

      The real danger is when it goes beyond “filter this training set for x and y” into “build a training set with self-sourced data”, because then it might wrongly decide that to create fantasy weapons one should reference other fantasy weapons and never train on any real weapons.

      Currently some are already walking a grey line in between. They generate new stuff using AI to fit a request, then use AI to filter for only the best, and train on that. This strategy appears to be paying off… for now.
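      Sketched out, that grey-line loop looks something like this (generate, score, and fine_tune are hypothetical stand-ins, not any real API):

      ```python
      import random

      # Hypothetical placeholders: a generator model, a judge model,
      # and a fine-tuning step. Real systems would swap these out.
      def generate(model, prompt):
          return f"{prompt} -> draft {random.random():.3f}"

      def score(judge, sample):
          return random.random()  # a real judge would rate quality

      def fine_tune(model, dataset):
          return model  # a real trainer would return updated weights

      model, judge = "base-model", "judge-model"
      prompts = ["design a fantasy sword", "design a fantasy axe"]

      for _round in range(3):
          # 1. Generate many candidates per prompt.
          candidates = [generate(model, p) for p in prompts for _ in range(8)]
          # 2. Filter: a second AI keeps only the top quarter (best-of-n).
          ranked = sorted(candidates, key=lambda c: score(judge, c), reverse=True)
          kept = ranked[: len(ranked) // 4]
          # 3. Train on the survivors, then loop.
          model = fine_tune(model, kept)
      ```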

      • Diurnambule@jlai.lu · 5 hours ago

        On large data you can’t filter by hand, so how can you be sure your small “AI” doesn’t hallucinate things, or filter out the wrong things? This field is very interesting :)

        • webghost0101@sopuli.xyz · 4 hours ago

          Zero guarantees. You just hope the few mistakes are in low enough numbers to be a rounding error on the greater whole: a filter that is 98% accurate on ten million images still mislabels around 200,000 of them, which only works out if those errors stay a thin, uncorrelated slice of the set.

          The narrower the task, the more accurate it is, though. At some point machine learning is literally just a computer algorithm; we trust the search-and-replace function not to fail on us, after all.