• MyTurtleSwimsUpsideDown@fedia.io
      link
      fedilink
      arrow-up
      72
      ·
      4 days ago

      Even analytical AI needs to be questioned and validated before use.

      1. I wouldn’t trust an AI to ID mushrooms for consumption.

      2. I forget the details, but there was a group training a diagnostic model (this was before “AI” became the popular term), and it was giving a lot of false positives. They eventually teased out that it was flagging low-quality images, because most of the unhealthy examples it was trained on came from poorer countries with less robust healthcare systems; hence the higher rates of the disease and the lower-quality images from older technology.

      • Ageroth@reddthat.com
        link
        fedilink
        arrow-up
        35
        ·
        4 days ago

        I’ve seen a similar thing where a machine learning model started associating rulers with cancer, because the images it was fed of known cancers almost always also included a ruler to provide scale for measuring the size of the tumor.

      • shrugs@lemmy.world
        link
        fedilink
        arrow-up
        14
        ·
        4 days ago

        It’s like those GeoGuessr models that guess the country not by the plants and streets or houses, but by the camera angle and certain imperfections that only occur in pictures taken in that country.

        “When a measure becomes a target, it ceases to be a good measure”

  • s@piefed.world
    link
    fedilink
    English
    arrow-up
    54
    ·
    4 days ago

    Definitely do not use AI or AI-written guidebooks to differentiate edible mushrooms from poisonous mushrooms

  • lavander@lemmy.dbzer0.com
    link
    fedilink
    arrow-up
    12
    ·
    3 days ago

    One of the issues with LLMs is that they attracted all the attention. Classifiers are generally cool and cheap, and they saved us from multiple issues (ok, face recognition aside 🙂)
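
    The “cheap” part is easy to demonstrate: a working classifier can be a few lines with no GPU at all. A toy sketch (the measurements and species labels below are made up for illustration, not a real dataset — and obviously don’t forage with anything like this):

```python
import math

# Toy 1-nearest-neighbour classifier: label a sample with the label of
# its closest training example. No training step, no GPU, no LLM.
def classify(sample, training_data):
    return min(
        training_data,
        key=lambda item: math.dist(sample, item[0]),
    )[1]

# Hypothetical (cap_width_cm, stem_height_cm) measurements with labels.
training_data = [
    ((8.0, 12.0), "species A"),
    ((2.5, 4.0), "species B"),
    ((7.5, 11.0), "species A"),
    ((3.0, 5.0), "species B"),
]

print(classify((7.0, 10.0), training_data))  # closest neighbours are species A
```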

    When the AI bubble bursts (because LLMs are expensive and not good enough to replace a person, even if they are good at pretending to be one), all AI will slow down… including classifiers, NLP, etc.

    All this because the AI community was obsessed with the Turing test/imitation game 🙄

    Turing was a genius, but heck if I am upset with him for coming up with this BS 🤣

    • skisnow@lemmy.ca
      link
      fedilink
      English
      arrow-up
      5
      ·
      edit-2
      3 days ago

      I am upset with him for coming up with this BS

      It made sense in the context it was devised in. Back then we thought the way to build an AI was to build something that was capable of reasoning about the world.

      He could hardly have predicted that a significant percentage of the world’s population would spend a few decades typing their thoughts into networked computers, generating a massive amount of text, coupled with the digitisation of every book ever written. Nor that it could all be stitched together into a 1,000,000,000,000-byte model that just spat out the word with the highest chance of being next, based on what everyone else in the past had written, producing the illusion of intelligence.
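
      The “word with the highest chance of being next” mechanism, stripped to a toy (hypothetical counts over a ten-word corpus, nothing like a real model, but the output step is the same idea):

```python
from collections import Counter

# Toy next-word predictor: count which word follows each word in a corpus,
# then always emit the most frequent follower. Real LLMs compute learned
# probabilities over ~100k tokens, but the final step is analogous.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = {}
for word, nxt in zip(corpus, corpus[1:]):
    followers.setdefault(word, Counter())[nxt] += 1

def next_word(word):
    return followers[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" follows "the" most often in this corpus
```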

      Remember, Moore’s Law wasn’t coined for another 15 years, and personal computers didn’t even exist as a sci-fi concept until later still.

    • brucethemoose@lemmy.world
      link
      fedilink
      arrow-up
      1
      ·
      3 days ago

      I dunno about that. We got a pile of architecture research out of it just waiting for some more tests/implementations.

      And think of how cheap renting compute will be! It’s already basically subsidized, but imagine when all those A100s/H100s are dumped.

  • Cevilia (she/they/…)@lemmy.blahaj.zone
    link
    fedilink
    English
    arrow-up
    22
    arrow-down
    5
    ·
    4 days ago

    With respect to the original-original poster, this is wrong. AI plant identification is terrible. It gives you confidence but not enough nuance to know that there are similar plants, some of which look almost but not quite identical, some of which will provide some really nice sustenance, some of which will literally kill you and it’ll hurt the whole time you’re dying.

    It’s almost as bad as those AI-written foraging guides that give you enough information to feel confident but not enough to be able to tell toxic or even deadly plants apart from the edible ones.

    • odelik@lemmy.today
      link
      fedilink
      arrow-up
      4
      ·
      3 days ago

      Word to the wise.

      If it looks like a carrot, don’t touch it, don’t dig it up, and especially don’t eat it. There are tons of plants in the same family that look nearly identical or extremely similar and will give you an extremely bad day, month, or year, or kill you.

  • punkfungus@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    35
    ·
    4 days ago

    People are saying you shouldn’t use AI to identify edible mushrooms, which is absolutely correct, but remember that people forage fruits and greens too. Plants are deadly poisonous at a higher rate than mushrooms, so plant ID AI has the potential to be more deadly too.

    And then there’s the issue that these ID models are very America and/or Europe centric, and will fail miserably most of the time outside of those contexts. And if they do successfully ID a plant, they won’t provide information about it being a noxious invasive in the habitat of the user.

    Like essentially all AI, even when it works it’s barely useful at the most surface level only. When it doesn’t work, which is often, it’s actively detrimental.

    • greedytacothief@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      5
      ·
      3 days ago

      I actually think AI for mushroom identification is okay, but as a step in the process. Sometimes you see a mushroom and you’re like “what is that?”. Do a little scan to see what it might be. Okay, now you have an idea of what it is, but then comes the next part! At https://mushroomexpert.com/ you can go through the list and see if you get a positive ID.

      Like, if you’re not 100% positive you know what you’re foraging, why would you take the risk?

  • ZombiFrancis@sh.itjust.works
    link
    fedilink
    arrow-up
    8
    ·
    3 days ago

    If you ever are an executive and you need to explain a product idea you made up, and you don’t want to bother with an actual proof of concept, then AI has got you.

    If you want custom porn, AI has got you.

    These are its two competing functions. And judging by my last foray into what AI is about, the latter is winning hard.

  • ThePantser@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    29
    ·
    4 days ago

    Just dealt with an AI bot this morning when I called a law office. They try so hard to mimic humans; they even added background sounds of people talking. But it gave itself away 100% when it repeated the same response to my asking to speak with a human: “I will gladly pass on your message to (insert weird pause) ‘Bill’”

  • brbposting@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    4
    ·
    3 days ago

    Plant ID is soooo disappointing - works sometimes though.

    Always gotta run the ID, web search for images of the recommendation, compare images to plant.

    Semantic search can be helpful:

    Search mockup showing difference between a lexical search of “Daniel Radcliffe” compared to semantic search of “how rich is the actor who played Harry Potter“ which translates to “net worth Daniel Radcliffe“, sourced from seobility.net
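
    A minimal sketch of that difference, with tiny hand-made vectors standing in for real embeddings (an actual system would use a learned embedding model with hundreds of dimensions):

```python
import math

# Lexical search matches the literal words; semantic search compares
# embedding vectors so that paraphrases land near each other.
# These 3-d vectors are invented for illustration only.
docs = {
    "Daniel Radcliffe net worth": [0.9, 0.8, 0.1],
    "growing sunflowers at home": [0.1, 0.0, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# "how rich is the actor who played Harry Potter" shares no keywords
# with the first doc, but its (made-up) embedding sits close to it.
query = [0.85, 0.75, 0.15]
best = max(docs, key=lambda d: cosine(query, docs[d]))
print(best)  # the net-worth doc wins despite zero keyword overlap
```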

    Guess the OP image could be about, e.g., Perplexity repeatedly HAMMERING (no caching?) the beautiful open web and slopping out poor syntheses.

  • The Velour Fog @lemmy.world
    link
    fedilink
    arrow-up
    22
    ·
    4 days ago

    Idk, when I Google Lensed a Sunflower plant, the AI told me it was a Peruvian ground apple…

    It also has a lot of trouble identifying lamb’s quarters and other common wild weeds.

    • renzhexiangjiao@piefed.blahaj.zone
      link
      fedilink
      English
      arrow-up
      22
      arrow-down
      3
      ·
      4 days ago

      probably because Google Lens is made to be all-purpose. If you had a model that had been specifically trained to recognize plants, it wouldn’t make such obvious mistakes

        • kazerniel@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          3 days ago

          Try out Flora Incognita, it usually works pretty well, at least in the UK. Though I’m kinda bemused: often I give it one blurry photo of a leaf and it immediately spits out a 97%-confidence answer (and yes, the result’s example photos do match the real plant’s other parts too), but occasionally it has me take painstaking photos of like 5 different aspects of the plant, then thinks for more than a minute and goes “eeh, 37% that it’s this [completely different looking] plant, otherwise no clue”.

      • skavj@lemmy.zip
        link
        fedilink
        arrow-up
        3
        ·
        3 days ago

        Flora Incognita is a nice one I’ve tried; it’s a spinout from a university. The best use is to take its guess and then click through: you can see example images of the plant it is guessing, so you can judge for yourself.

  • SoftestSapphic@lemmy.world
    link
    fedilink
    arrow-up
    15
    ·
    4 days ago

    “AI”'s best uses are the Machine Learning aspects we were already using before we started calling things AI

    It’s so painful watching this tech be forced down our throats by marketing departments despite us already discovering all of its best use cases long before the marketing teams got ahold of the tech.

    • nfreak@lemmy.ml
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      1
      ·
      4 days ago

      Exactly this. GenAI is pretty much dogshit all around. Machine learning as a concept is nothing new at all, and shoving it under the same marketing umbrella does so much more harm than good.

  • Jankatarch@lemmy.world
    link
    fedilink
    arrow-up
    14
    arrow-down
    1
    ·
    edit-2
    4 days ago

    Honestly, a good rule about machine learning is just “predicting = good, generating = bad.” The rest are case by case, but usually bad.

    Predict inflation in 3 years - cool.
    Predict chance of cancer - cool.

    Generate image or mail or summary or tech article - fuck you.

    Generating speech from text/images is also cool, but that’s kind of a special case.

  • Aeao@lemmy.world
    link
    fedilink
    arrow-up
    14
    arrow-down
    1
    ·
    4 days ago

    It didn’t mention the star finder apps!

    Save yourself the money and time. It’s Venus. That cool star you’re looking at? Yeah that’s Venus. Just trust me.

      • Aeao@lemmy.world
        link
        fedilink
        arrow-up
        3
        ·
        4 days ago

        Yeah I actually paid for the full version of mine… even though it’s always Venus

  • ILikeBoobies@lemmy.ca
    link
    fedilink
    arrow-up
    3
    ·
    3 days ago

    It’s a decent evolution of the search engine, but you have to ask it for sources, and it’s way too expensive for its use case.

    • odelik@lemmy.today
      link
      fedilink
      arrow-up
      1
      ·
      3 days ago

      Is it?

      I’ve found so many fucking errors in AI summaries that I don’t trust shit from AI searches when a direct link to a source or wiki could give me better summarized info.

      I guess it’s an evolution, but I’m really hoping these mutations prove inferior and it dies off already. But capitalism won’t have that, with its sunk-cost-fallacy-driven insistence that I just use the inferior product.

      • ILikeBoobies@lemmy.ca
        link
        fedilink
        arrow-up
        1
        ·
        3 days ago

        That’s why I said you have to ask for the source. Its summary isn’t good, but the fact that you can describe something to it in human language instead of focusing on keywords to start your search, and that the sources it gives aren’t just ads yet, is useful.

  • CompactFlax@discuss.tchncs.de
    link
    fedilink
    English
    arrow-up
    9
    arrow-down
    1
    ·
    4 days ago

    Friendly reminder that automatic transmissions were sometimes considered to be artificial intelligence.

    “Fuck business idiots waxing poetic about the inestimable value of LLMs” isn’t a good community name though.