• rumba@lemmy.zip · 2 points · 22 minutes ago

      You can make something AI-based that does this, but it’s not cheap or easy. You have to build agents that handle data retrieval and programmatically steer the LLM to choose the right agent. We set one up at work; it took months. If it can’t find the data with high certainty, it tells you to ask the analytics dept.
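      A minimal sketch of the shape (the agent catalog, the call_llm() helper, and the 0.8 cutoff are illustrative stand-ins, not our actual implementation):

        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class Agent:
            name: str
            description: str            # what data this agent can retrieve
            run: Callable[[str], str]   # performs the retrieval

        def call_llm(prompt: str) -> tuple[str, float]:
            """Hypothetical wrapper around your model provider.
            Returns (chosen_agent_name, confidence in [0, 1])."""
            raise NotImplementedError

        def route(question: str, agents: list[Agent], cutoff: float = 0.8) -> str:
            catalog = "\n".join(f"- {a.name}: {a.description}" for a in agents)
            choice, confidence = call_llm(
                "Pick the single best agent for this question.\n"
                f"Agents:\n{catalog}\nQuestion: {question}"
            )
            # Below the cutoff we refuse rather than guess.
            if confidence < cutoff:
                return "Not confident enough -- please ask the analytics dept."
            agent = next(a for a in agents if a.name == choice)
            return agent.run(question)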

    • GalacticSushi@lemmy.blahaj.zone · 9 points · 2 hours ago

      Bro, just give us a few trillion dollars, bro. I swear, bro. It’ll be AGI this time next year, bro. We’re so close, bro. I just need some money, bro. Some money and some god-damned faith, bro.

      • vaderaj@lemmy.world · 2 points · 1 hour ago

        User: Hi Big Corp AI (LLM), do this task.
        Big Corp AI: Here is output.
        User: Hi Big Corp, your AI’s output is not up to standard. I guess it’s a waste of…
        Big Corp: Use this agent, which ensures correct output (for more energy).
        User: It still doesn’t work… guess I was wrong all along, let me retry…

        And the loop continues until they get a few trillion dollars

  • Bubbaonthebeach@lemmy.ca · 16 points · 4 hours ago

    To everyone I’ve talked to about AI, I’ve suggested a test: take a subject you know you’re an expert in, ask the AI questions you already know the answers to, and see what percentage it gets right, if any. Often people find that plausible-sounding answers are produced, but if you know the subject, you can tell that what comes out isn’t quite fact. A recovery from an injury might be listed as 3 weeks when the average is 6-8, or similar. Someone who did not already know the correct information could be harmed by the AI’s “guessed” response. AI can have uses, but everything it generates needs to be heavily scrutinized before being passed on. If you’re already good at something, using AI usually just means wasting time checking its work.

    • laranis@lemmy.zip · 1 point · 29 minutes ago

      Happy cake day, and this, absolutely. I figured out its game the first time I asked it for a spec on an automotive project I was working on. I asked it the torque spec for some head bolts and it gave me the wrong answer. Not just the wrong number, the wrong procedure altogether. Many modern engines use torque-to-yield specs, meaning you torque the bolt to a number and then add additional rotation, permanently stretching the bolt past its yield point to lock it in. This engine was absolutely not that, and when I explained the error back to it, IT DID IT AGAIN. It sounded very plausible, but anyone following those directions would likely have ruined the engine.

      So, yeah, test it and see how dumb it really is.

    • NABDad@lemmy.world · 12 points · 3 hours ago

      I had a very simple script. All it does is trigger an action on a monthly schedule.
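      Something of this general shape (details changed; the script here is a stand-in, not my actual code):

        import datetime
        import subprocess

        # Runs once a day (e.g. from cron); fires the action only on the 1st.
        # The command path is a made-up placeholder.
        if datetime.date.today().day == 1:
            subprocess.run(["/usr/local/bin/monthly-action.sh"], check=True)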

      I passed the script to Copilot to review.

      It caught some typos. It also said the logic of the script was flawed and it wouldn’t work as intended.

      I didn’t need it to check the logic of the script. I knew the logic was sound because it was a port of a script I was already using. I asked because I was curious about what it would say.

      After restating the prompt several times, I was able to get it to confirm that the logic was not flawed, but the process did not inspire any confidence in Copilot’s abilities.

  • Lemminary@lemmy.world · 17 points · edited · 4 hours ago

    Our AI that monitors customer interactions sometimes makes up shit that didn’t happen during the call. Any agent smart enough could probably fool it into giving the wrong summary with the right key words. I only caught on when I started reading the logs carefully, but I don’t know if management cares so long as the business client is happy.

  • CaptPretentious@lemmy.world · 20 points · 6 hours ago

    At my workplace, senior management is going all in on Copilot. So much so that at the end of last year they told us to use Copilot for year-end reviews! They even provided a prompt to use and told us to link it to Outlook (not sure why, since our email retention isn’t very long)… but whatever.

    I tried it out of curiosity, because I had no faith. It started printing out stats for things that never happened: a 35% increase here, a 20% decrease there, blah blah blah. It didn’t actually highlight anything I do or did. And I’m banking on a human at least partially reading my review, not just running it through AI.

    If a person reads it, I’m good. If AI reads it, I do wonder if I screwed myself, since senior mgmt is just offloading to AI…

  • Kokesh@lemmy.world · 4 points · 4 hours ago

    I must say I love this very much. Maybe only this will push the idiots leading companies that use this crap into ditching it.

  • SocialMediaRefugee@lemmy.world · 13 points · 6 hours ago

    I see this happening more and more as corporate USA throws itself blindly into AI dependency. Basic facts and information will become corrupted, maybe hopelessly so, as it infuses itself into our systems.

  • Strider@lemmy.world · 33 points · 8 hours ago

    It doesn’t matter. Management wants this and will not stop until they run against a wall at full speed. 🤷

  • Anna@lemmy.ml · 61 points (1 downvote) · 10 hours ago

    I work in a regulated sector and our higher-ups are pushing AI so much. Their response to AI hallucinations is to just put a banner on all internal AI tools telling you to cross-verify, plus some stupid quarterly “trainings”, but almost no one I know ever checks and verifies the output. And I know of at least 2 instances where, because AI hallucinated some numbers, we sent extra money out to a third party.
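    Even a dumb automated guard would have caught it: compare the model’s number against the amount parsed straight from the source document before anything goes out. A sketch (the parse_invoice_total helper is made up; this is not our real pipeline):

      from decimal import Decimal

      def parse_invoice_total(invoice_text: str) -> Decimal:
          """Hypothetical deterministic parser for the source document."""
          raise NotImplementedError

      def approve_payment(llm_amount: Decimal, invoice_text: str) -> Decimal:
          source_amount = parse_invoice_total(invoice_text)
          if llm_amount != source_amount:
              # Mismatch: never auto-pay; escalate to a human.
              raise ValueError(f"model said {llm_amount}, source says {source_amount}")
          return llm_amount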

    • SocialMediaRefugee@lemmy.world · 8 points · edited · 6 hours ago

      It reminds me of when the internet exploded in the 90s and everyone “needed” a website. Even my corner gas station had a web presence for some reason. Then with smartphones, everyone needed their own app. Now with AI, everyone MUST use AI everywhere! If you don’t, you’re a fool and going to get left behind! Do you know what you actually need it for? Not really, but some article you read said you could fire 50% of your staff if you do.

      • NannerBanner@literature.cafe · 4 points · 4 hours ago

        I would quite honestly prefer every place to have its own website instead of the ginormous number of places that only have Facebook pages.

    • wizardbeard@lemmy.dbzer0.com · 31 points · 9 hours ago

      My workplace (a finance company) bought out an investment company for a steal: they were having legal troubles, managed to pin them on a few individuals, and then fired the individuals under scrutiny.

      Our leadership thought the income and amount of assets they controlled was worth the risk.

      This new group has been the biggest pain in the ass. Complete refusal to actually fold into the company culture, standards, even IT coverage. Kept trying to sidestep even basic stuff like returning old laptops after upgrades.

      When I was still tech support, I had two particularly fun interactions with them.

      One was when it came out that one of their top earners had been fired for shady shit, and a month later they discovered he had set his mailbox to auto-reply to every email, pointing his former clients to his personal address. Then they hired the guy back, and he lasted a whole day before they caught him trying to steal as much private company info as he could grab.

      The other was a call from a poor intern they’d hired and then dumped with responsibility for an awful home-grown mess of Microsoft Access, Excel, and Word docs all linked over ODBC. Our side of IT refused to support it and kept asking them to meet with project management and our internal developers to get it brought up into this century. They refused to let us help.

      In the back half of last year, our circus of an Infosec Department finally locked down access to unapproved LLMs and AI tools. Officially we had been restricted to one specific one by written policy, signed by all employees, for over a year, but it took someone getting caught by a coworker putting private info into a free public chatbot for them to enforce it.

      Guess which sub-company is hundreds of thousands of dollars into a shadow IT project that has gone through literally none of the proper channels, to start using an explicitly disallowed LLM to process private customer data?

      • ChickenLadyLovesLife@lemmy.world · 9 points · 6 hours ago

        My last job was with a very large west coast tech giant (its name is a homonym of an equally large food-services company). The mandatory information-security training was a series of animated shorts featuring talking bears, which you could fast-forward through and still get credit for completing. Not surprisingly, we had major data thefts every few months; or, more accurately, we admitted to major data thefts that often.

  • sp3ctr4l@lemmy.dbzer0.com · 47 points · edited · 10 hours ago

    As an unemployed data analyst / econometrician:

    lol, rofl, perhaps even… lmao.

    Nah though, it’s really fine. My quality of life is enormously superior barely surviving off of SSDI and not having to explain data analytics to thumb-sucking morons (VPs, 90% of other team leads), or having to fix or cover for all their mistakes.

    Yeah, sure, just have the AI do it, go nuts.

    I am enjoying my unexpected early retirement.

  • tover153@lemmy.world · 36 points · 9 hours ago

    Before anything else: whether the specific story in the linked post is literally true doesn’t actually matter. The following observation about AI holds either way. If this example were wrong, ten others just like it would still make the same point.

    What keeps jumping out at me in these AI threads is how consistently the conversation skips over the real constraint.

    We keep hearing that AI will “increase productivity” or “accelerate thinking.” But in most large organizations, thinking is not the scarce resource. Permission to think is. Demand for thought is. The bottleneck was never how fast someone could draft an email or summarize a document. It was whether anyone actually wanted a careful answer in the first place.

    A lot of companies mistook faster output for more value. They ran a pilot, saw emails go out quicker, reports get longer, slide decks look more polished, and assumed that meant something important had been solved. But scaling speed only helps if the organization needs more thinking. Most don’t. They already operate at the minimum level of reflection they’re willing to tolerate.

    So what AI mostly does in practice is amplify performative cognition. It makes things look smarter without requiring anyone to be smarter. You get confident prose, plausible explanations, and lots of words where a short “yes,” “no,” or “we don’t know yet” would have been more honest and cheaper.

    That’s why so many deployments feel disappointing once the novelty wears off. The technology didn’t fail. The assumption did. If an institution doesn’t value judgment, uncertainty, or dissent, no amount of machine assistance will conjure those qualities into existence. You can’t automate curiosity into a system that actively suppresses it.

    Which leaves us with a technology in search of a problem that isn’t already constrained elsewhere. It’s very good at accelerating surfaces. It’s much less effective at deepening decisions, because depth was never in demand.

    If you’re interested, I write more about this here: https://tover153.substack.com/

    Not selling anything. Just thinking out loud, slowly, while that’s still allowed.

    • plenipotentprotogod@lemmy.world · 6 points · 4 hours ago

      Very well put. This is a dimension of the ongoing AI nonsense that I haven’t seen brought up before, but it certainly rings true. May I also say that “They already operate at the minimum level of reflection they’re willing to tolerate” is a hell of a sentence, and I’m a little jealous that I didn’t come up with it.

      • tover153@lemmy.world · 4 points · 3 hours ago

        Thanks, I really appreciate that. I’ve been getting a little grief this weekend because some of my posts are adapted from essays I’ve been working on for Substack, and apparently careful editing now makes you suspect as an actual person.

        I’m very real, just flu-ridden and overthinking in public. Glad the line landed for you.