• Jankatarch@lemmy.world
    42 minutes ago

    To be completely honest, the $20 was just the token costs.

    If the service charged a profitable price that actually accounted for the training and hosting costs…

  • T156@lemmy.world
    1 hour ago

    Why even use an LLM for that? That seems like the completely wrong use-case for an LLM.

  • AeonFelis@lemmy.world
    1 hour ago

    Imagine if every time the kids ask you “are we there yet” during a long road trip you’d be charged $0.75.

  • Dogiedog64@lemmy.world
    3 hours ago

    Motherfucker blew $20 in a night and extrapolated it to several hundred bucks a month. All for what is essentially a labeled alarm. You know, something your phone can already do, no AI necessary, for FREE.

    This technology is a bad joke. It needs to die.
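For contrast, the "labeled alarm" a phone already provides is a few lines of purely local code. A minimal sketch in Python (the function name, label, and delay are made up for illustration):

```python
import threading

def set_labeled_alarm(label: str, delay_seconds: float) -> threading.Timer:
    """Fire a labeled reminder after a delay: entirely on-device, no tokens, no network, no cost."""
    timer = threading.Timer(delay_seconds, lambda: print(f"Reminder: {label}"))
    timer.start()
    return timer

# Demo: remind about a (hypothetical) meeting half a second from now.
alarm = set_labeled_alarm("stand-up meeting", 0.5)
alarm.join()  # Timer is a Thread subclass, so we can wait for it to fire
```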

  • Bazell@lemmy.zip
    3 hours ago

    People who mastered calendar, clock and notes apps in their smartphones be like:

  • Seefoo@lemmy.world
    3 hours ago

    How did he rack up 120k tokens in a single convo about setting an alarm/reminder?

    I literally feed full services to Claude for 1/10th of that context size.
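The thread's own numbers make the arithmetic easy to check: a back-of-the-envelope sketch in Python, using the $0.75-per-check and $20-per-night figures mentioned above (the 30-night month is an assumption):

```python
cost_per_check = 0.75   # dollars per "is it time yet?" LLM call (figure from the thread)
nightly_spend = 20.00   # dollars burned in one night (figure from the thread)

checks_per_night = nightly_spend / cost_per_check
monthly_spend = nightly_spend * 30  # assuming the same usage every night

print(f"{checks_per_night:.0f} checks in a night")  # ~27 checks
print(f"${monthly_spend:.0f}/month")                # $600/month: "several hundred bucks"
```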

  • RattlerSix@lemmy.world
    6 hours ago

    Pairing an automated process with something that costs money without error checking is like putting a credit card on file with a hooker. You’re definitely running the risk of waking up broke.

  • Furbag@lemmy.world
    7 hours ago

    Why does it seem like he repeats himself in a slightly different way? Did he get an LLM to summarize what happened, and then summarize the summary? Who talks like this?

  • Zink@programming.dev
    5 hours ago

    Over-designing something using trendy technologies while also spending far more money than it would cost to go with the existing solution that is also more reliable – this can be a valid plan. But it is called a hobby, not a business!

    Has anybody told the techbros?

  • starman2112@lemmy.world
    6 hours ago

    I still have the old school Google assistant on my phone, and it manages to remind me of things all the time without costing anything

  • pseudo@jlai.lu
    11 hours ago

    Why use an LLM to solve a problem you could solve with an alarm clock and a Post-it?

    • Rooster326@programming.dev
      3 hours ago

      Nooo, you don’t understand. It needs to be wrong up to 60% of the time. He would need a broken clock, a window, and a Post-it note.

    • enbiousenvy@lemmy.blahaj.zone
      9 hours ago

      programming nitpicks (for lack of a better word) that I used to hear:

      • “don’t use u32, you won’t need that much data”
      • “don’t use using namespace std”
      • “sqrt is expensive; if necessary, cache it outside the loop”
      • “I made my own vector type because the one from the standard lib is inefficient”

      then this person implements time checking via an LLM over the network that costs $0.75 per check lol
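For comparison, the sqrt nitpick is about hoisting a loop-invariant computation out of a loop. A toy before-and-after sketch in Python (function names are made up):

```python
import math

def norms_naive(values, scale):
    # Recomputes the invariant sqrt on every iteration.
    return [v / math.sqrt(scale) for v in values]

def norms_hoisted(values, scale):
    # The "cache it outside the loop" version: one sqrt total.
    root = math.sqrt(scale)
    return [v / root for v in values]
```

A micro-optimization like this saves nanoseconds per iteration; the time check in the post costs $0.75 and a network round trip.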

      • AnyOldName3@lemmy.world
        7 hours ago

        using namespace std is still an effective way to shoot yourself in the foot, and if anything is a bigger problem than it was in the past now that std has decades worth of extra stuff in it that could have a name collision with something in your code.

      • cecilkorik@piefed.ca
        9 hours ago

        We used to call that premature optimization. Now we complain tasks don’t have enough AI de-optimization. We must all redesign things that we have done in traditional, boring not-AI ways, and create new ways to do them slower, millions or billions of times more computationally intensive, more random, and less reliable! The market demands it!

        • very_well_lost@lemmy.world
          2 hours ago

          I call this shit zero-sum optimization. In order to “optimize” for the desires of management, you always have to deoptimize something else.

          Before AI became the tech craze du jour, I had a VP get obsessed with microservices (because that’s what Netflix uses, so it must be good). We had to tear apart a mature and very efficient app and turn it into hundreds of separate microservices… all of which took ~100 milliseconds to interoperate across the network. Pages that used to take 2 seconds to serve now took 5 or 10 because of all the new latency required to do things they used to be able to do basically for free. And it’s not like this was a surprise. We knew this was going to happen.
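The latency arithmetic behind that slowdown is mundane. A rough sketch in Python using the ~100 ms per-hop figure from the comment (the call-chain depth is an illustrative assumption, not a measurement):

```python
in_process_call_ms = 0.001  # a local function call inside the monolith: effectively free
network_hop_ms = 100        # per-hop cost quoted in the comment
sequential_hops = 50        # hypothetical depth of the microservice call chain

monolith_overhead_ms = sequential_hops * in_process_call_ms
microservice_overhead_ms = sequential_hops * network_hop_ms

print(f"monolith overhead: {monolith_overhead_ms:.2f} ms")
print(f"microservice overhead: {microservice_overhead_ms / 1000:.0f} s")  # 5 s of pure latency
```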

          But hey, at least our app became more “modern” or whatever…