• pseudo@jlai.lu · 45 points · 13 hours ago

    Why use an LLM to solve a problem you could solve with an alarm clock and a Post-it note?

    • Rooster326@programming.dev · 4 points · 5 hours ago

      Nooo, you don’t understand. It needs to be wrong up to 60% of the time. He’d need a broken clock, a window, and a Post-it note.

    • enbiousenvy@lemmy.blahaj.zone · 30 points · 11 hours ago

      programming nitpicks (for lack of a better word) that I used to hear:

      • “don’t use u32, you won’t need that much data”
      • “don’t use using namespace std”
      • “sqrt is expensive, if necessary cache it outside loop” (see the sketch after this list)
      • “I made my own vector type because the one from standard lib is inefficient”
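
      A minimal sketch of what that sqrt nitpick is getting at (hypothetical function and variable names, not anyone’s real code):

      ```cpp
      #include <cmath>
      #include <vector>

      // Hypothetical example: hoist the loop-invariant sqrt so it runs
      // once, instead of once per element.
      void scale_by_inv_sqrt(std::vector<double>& values, double n) {
          const double inv = 1.0 / std::sqrt(n);  // computed once, outside the loop
          for (double& v : values) {
              v *= inv;  // the per-element work no longer touches sqrt
          }
      }
      ```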

      then this person implements time checking via an LLM over the network at $0.75 per check lol
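
      For contrast, a minimal sketch of the non-LLM way to check the time (zero network calls, zero dollars per check; a hypothetical standalone program, not the code being mocked):

      ```cpp
      #include <chrono>
      #include <ctime>
      #include <iostream>

      int main() {
          // Hypothetical example: one local clock read, no network round
          // trip, no per-check bill.
          const auto now = std::chrono::system_clock::now();
          const std::time_t t = std::chrono::system_clock::to_time_t(now);
          std::cout << std::ctime(&t);  // e.g. "Thu Jun  5 14:03:27 2025"
          return 0;
      }
      ```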

      • AnyOldName3@lemmy.world · 6 points · 8 hours ago

        using namespace std is still an effective way to shoot yourself in the foot, and if anything it’s a bigger problem than it was in the past, now that std has decades’ worth of extra stuff in it that could have a name collision with something in your code.
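
        A minimal sketch of the kind of collision being described, assuming a pre-C++20 codebase with its own lerp helper (the helper is hypothetical; std::lerp really was added to <cmath> in C++20):

        ```cpp
        #include <cmath>      // C++20 added std::lerp to this header
        using namespace std;  // pulls every std name into scope

        // A perfectly reasonable (hypothetical) helper name in pre-C++20 code...
        double lerp(double a, double b, double t) { return a + t * (b - a); }

        int main() {
            // With the using-directive above, an unqualified lerp(0.0, 10.0, 0.5)
            // would be ambiguous between ::lerp and std::lerp on a C++20 compiler,
            // so the call has to be qualified to keep building.
            return static_cast<int>(::lerp(0.0, 10.0, 0.5));
        }
        ```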

      • cecilkorik@piefed.ca · 16 points · 10 hours ago

        We used to call that premature optimization. Now we complain that tasks don’t have enough AI de-optimization. We must all redesign the things we used to do in traditional, boring, non-AI ways and invent new ways to do them slower, millions or billions of times more computationally intensive, more random, and less reliable! The market demands it!

        • very_well_lost@lemmy.world · 10 points · 4 hours ago

          I call this shit zero-sum optimization. In order to “optimize” for the desires of management, you always have to deoptimize something else.

          Before AI became the tech craze du jour, I had a VP get obsessed with microservices (because that’s what Netflix uses, so it must be good). We had to tear apart a mature and very efficient app and turn it into hundreds of separate microservices… all of which took ~100 milliseconds to interoperate across the network. Pages that used to take 2 seconds to serve now took 5 or 10 because of all the new latency required to do things they used to be able to do basically for free. And it’s not like this was a surprise. We knew this was going to happen.

          But hey, at least our app became more “modern” or whatever…