• MyTurtleSwimsUpsideDown@fedia.io · 72 points · 6 days ago

      Even analytical AI needs to be questioned and validated before use.

      1. I wouldn’t trust an AI to ID mushrooms for consumption.

      2. I forget the details, but there was a group training a diagnostic model (this was before “AI” became the popular term), and it was giving a lot of false positives. They eventually teased out that it was flagging low-quality images: most of the unhealthy examples it was trained on came from poorer countries with less robust healthcare systems, hence both the higher rates of the disease and the lower-quality images from older technology.

      • Ageroth@reddthat.com · 35 points · 6 days ago

        I’ve seen a similar thing where a machine learning model started associating rulers with cancer, because the images it was fed with confirmed cancer almost always also included a ruler to provide scale for measuring the tumor.
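        That ruler effect is a textbook case of shortcut learning. Below is a minimal synthetic sketch of it, not taken from any of the studies mentioned here: the feature names (a lesion measurement plus a “ruler visible” flag) are invented, and it assumes NumPy and scikit-learn are available. The test set deliberately breaks the ruler/label correlation, which is roughly what deployment on cleaner data does.

        ```python
        # Toy shortcut-learning demo: a "ruler visible" flag is spuriously
        # correlated with the tumor label in the training data, so the model
        # leans on it and degrades once that correlation disappears.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        def make_data(n, p_ruler_given_tumor):
            """y = 1 means tumor; the ruler flag is a nuisance feature."""
            y = rng.integers(0, 2, n)
            # Weak genuine signal: a noisy measurement of the lesion itself.
            lesion = y + rng.normal(0.0, 2.0, n)
            # Spurious signal: rulers show up mostly in the tumor images.
            ruler = np.where(
                y == 1,
                rng.random(n) < p_ruler_given_tumor,
                rng.random(n) < 1.0 - p_ruler_given_tumor,
            ).astype(float)
            return np.column_stack([lesion, ruler]), y

        # Training set mimics the archive: a ruler almost always means tumor.
        X_train, y_train = make_data(5000, p_ruler_given_tumor=0.95)
        # Deployment-like test set: ruler presence is uninformative.
        X_test, y_test = make_data(5000, p_ruler_given_tumor=0.5)

        clf = LogisticRegression().fit(X_train, y_train)
        print("train accuracy:", clf.score(X_train, y_train))  # looks impressive
        print("test accuracy: ", clf.score(X_test, y_test))    # drops sharply
        print("weights:", clf.coef_)  # ruler weight dwarfs the lesion weight
        ```

        The point of the 50/50 ruler rate in the test set is just to decorrelate the nuisance feature from the label; the same idea applies to image quality or camera artifacts standing in for geography.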

      • shrugs@lemmy.world · 14 points · 6 days ago

        It’s like those GeoGuessr models that guess the country not by the plants, streets, or houses, but by the camera angle and certain imperfections that only occur in pictures taken in that country.

        “When a measure becomes a target, it ceases to be a good measure”