• gmtom@lemmy.world · 5 points · 4 hours ago

    Oh man, you guys should see what I was cooking up at my old place.

    Head office was too shitty to give us an actual asset management solution, but we did have full access to the Microsoft suite. So I used SharePoint lists as databases, Power Apps running on iPads for all the data-entry UX, and about two dozen hacked-together Power Automate flows linking them all together, as well as pulling any info out of the actual IT systems head office used. And since we didn't have API access to those systems, any data feeding back into them took the form of automated emails that the poor 1st-line techs at head office had to sort through and process manually.

  • rumba@lemmy.zip · 40 points · 12 hours ago

    I don’t see the alias in your .bashrc

    Yeah, um, about that. I have no idea where it comes from. We can type alias and see what it is, so if it's ever lost, we can recreate it. But I looked for 30 minutes yesterday, even did a grep -R, and I have NO IDEA where it comes from, or why it's named electricboogaloo.

    • pinball_wizard@lemmy.zip · 31 points · 12 hours ago

      Well, here’s a sentence I haven’t been tempted to use before:

      “I believe that may be too many crontab entries.”

      • DickFiasco@sh.itjust.works · 16 points · 12 hours ago

        Any problem in server administration can be solved with an additional crontab entry. Except for the problem of too many crontab entries.

        • Opisek@piefed.blahaj.zone · 7 points · edited · 9 hours ago

          And that’s why I added a crontab entry that periodically purges my cron configuration. That way, I’m forced to re-add only the truly necessary cron jobs, successfully reducing the number of crontab entries.
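          A minimal sketch of such an entry, with a hypothetical schedule: at midnight on the first of every month, wipe the installed crontab (crontab -r asks for no confirmation), forcing a deliberate re-add of whatever still matters:

```shell
# m h dom mon dow  command
0   0 1   *   *    crontab -r   # remove ALL entries; only the truly needed ones come back
```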

      • rumba@lemmy.zip · 2 points · 12 hours ago

        pshaw, just drop in there and combine a few

        /etc/cron.d/first25 /etc/cron.d/second25 …

        • j_z@feddit.nu · 1 point · 4 hours ago

          This is the way. Exactly what we did, plus we migrated 80% of everything to k8s CronJobs and Argo Workflows.

    • marcos@lemmy.world · 5 points · 10 hours ago

      At some point it may be good to migrate to airflow or something similar.

      It’s not the number of entries that makes it bad. It’s the fact that if you run crontab, they are gone…

      • bleistift2@sopuli.xyz · 7 points · edited · 8 hours ago

        At first I thought you missed the -r. Then I checked. Defaulting to STDIN here is very, very dumb, IMHO. Almost as bad as putting the “edit” flag right next to the “delete everything without confirmation” flag on a Western keyboard (-e vs -r).
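        A sketch of the footgun being described — the flags sit one keystroke apart, and a bare crontab silently replaces everything from stdin (don't run these on a box you care about):

```shell
crontab -l > ~/crontab.backup   # -l: list the installed crontab (safe)
crontab -e                      # -e: edit interactively
crontab -r                      # -r: remove EVERYTHING, no confirmation
crontab ~/crontab.backup        # install from a file; with no file argument,
                                # it reads stdin and REPLACES the whole crontab
```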

        • marcos@lemmy.world · 6 points · 8 hours ago

          Crontab is a really badly designed program that we just can’t fix, because everybody depends on its WTFs for something.

        • marcos@lemmy.world · 7 points · 9 hours ago

          Make the rule start a secondary cron system. Otherwise it won’t run after you erase the crontab.

          • dondelelcaro@lemmy.world · 6 points · edited · 9 hours ago

            Here you go:

            # with-lock-ex (from chiark-utils) runs the command under an
            # exclusive lock; -q makes it exit quietly if the lock is held.
            with-lock-ex -q /path/to/lockfile sh -c '
            while true; do
                crontab cronfile;  # reinstall the crontab from the file of record
                sleep 60;
            done;'
            
  • r00ty@kbin.life · 89 points · 17 hours ago

    I have a tool that I wrote, probably 5+ years ago. It runs once a week, collects data from a public API, and translates it into files usable by the Asterisk phone server.

    I totally forgot about it. Checked. Yep, up to date files created, all seem in the right format.

    Sometimes things just keep working.

    • RustyNova@lemmy.world · 49 points · 16 hours ago

      Meanwhile, I had to debug a script that zipped a zip recursively, with the new data appended. The server had barely any storage left, as the zip took almost 200 GB (the data is only 3 GB). I looked at the logs; last successful run: 2019.

      • r00ty@kbin.life · 17 points · 16 hours ago

        Yes, had the same happen. Something that should be simple failing for stupid reasons.

        • RustyNova@lemmy.world · 11 points · 15 hours ago

          Well, it’s not that simple… because whoever wrote it made it way too complicated (and the production version had been tweaked without updating the dev version too).

          A clean rewrite with some guard clauses helped remove the hadouken ifs, and actually zipping the file outside of the zipped directory helped a lot.
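          That last fix can be sketched in a few lines (function name hypothetical, with tar standing in for the zip step): build the archive in a temp directory outside the tree being archived, so an overlapping run can never swallow the previous archive into the new one:

```shell
# Create the archive OUTSIDE the source directory, so a later run that
# archives the same directory can never pick up an earlier archive.
make_archive_safely() {
    src="$1"
    tmp="$(mktemp -d)"
    # -C "$src" archives the directory's contents without its absolute path
    tar -czf "$tmp/upload.tar.gz" -C "$src" .
    echo "$tmp/upload.tar.gz"   # upload this, then remove "$tmp"
}
```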

          • r00ty@kbin.life · 6 points · 13 hours ago

            I mean, I have to say I’ve hastened my own demise (in program terms) by over-engineering something that should be simple. Sometimes adding protective guardrails actually causes errors when something changes.

          • Quantenteilchen@discuss.tchncs.de · 2 points · 12 hours ago

            Am I understanding that last part correctly?

            […] and actually zipping the file outside of the zipped directory helped a lot

            Did they just automatically create a backup zip-bomb in their script‽

            • RustyNova@lemmy.world · 8 points · 11 hours ago

              I oversimplified it, but the actual process was to zip files and send them to an FTP server.

              The cron job created the zip in the same directory as the files being zipped, then sent the zip, then deleted the zip.

              Looks fine, right? But what if the FTP server is slow and uploading takes more time than the hourly cron dispatch? You now have a second script zipping the whole folder, previous zip included, which slows down the upload even more, etc…

              I believe it may have been started by an FTP upload erroring out and forcing an early return without a cleanup, and it progressively got worse.

              … I suppose that’s what happened. The logs were actually broken: they didn’t include the message part of the error object, only the memory address of it.

        • RustyNova@lemmy.world · 13 points · 15 hours ago

          Oh, no need. The client didn’t notice anything in 6 years, and the only reason we had to check is that they wanted us to see if we could add this feature… which already existed.

          • Elvith Ma'for@feddit.org · 7 points · 13 hours ago

            My favorite part is when you do some extensive analysis from time to time (e.g. to prepare an upgrade to a new major version) and, as a side effect, stumble upon some workflows/pipelines/scripts that have been constantly failing (and alerting the process owner) every five minutes for… at least a few months already.

            Then you go and ask the process owner and they’re just like, “yeah, we were annoyed by the constant error notification mails, so we made a filter that auto-deletes them”…

    • Gonzako@lemmy.world · 20 points · 17 hours ago

      Yeah, all these simple data processing scripts will keep working as long as both sides stay the same, or at least compatible.

    • cenzorrll@lemmy.ca · 13 points · edited · 13 hours ago

      Ha, loser.

      *glances over at 6 bash scripts and 2 cron jobs*

      Not you, you’re perfect

  • MonkderVierte@lemmy.zip · 10 points · edited · 16 hours ago

    How can a shell alias be undocumented? Type alias and there’s the one-liner, which can’t be too complicated given the lack of variables.
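    For instance (alias name and body taken from the thread, the definition itself is hypothetical), the running shell will happily reveal an alias even when no dotfile seems to mention it:

```shell
# Define an alias (normally this would come from some sourced file)...
alias electricboogaloo='echo some-complicated-oneliner'
# ...and recover its definition from the running shell, no dotfile needed:
alias electricboogaloo
```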

  • AnanasMarko@lemmy.world · 7 points · 16 hours ago

    Since I’m somewhat of a simpleton… isn’t that how pipelines actually work? The only difference being that all the scripts are available from a centralized system and triggered e.g. by webhooks?

    Instead of a local script on a server, the system opens e.g. an SSH session and runs the script step by step remotely?

    So is that the joke, or am I missing something?

    • orhtej2@eviltoast.org · 13 points · 15 hours ago

      Pipelines are meant to be versioned and replicable, as opposed to a hack job that only runs on a forgotten server in someone’s closet, as depicted in the meme.

    • Trainguyrom@reddthat.com · 1 point · 10 hours ago

      As much as I love the magic of working and attending meetings in your undies, I’ve found I’m a far better professional if I’m actually fully dressed while I work. And when I go into the office, I always wear something with a collar, even at workplaces where that’s overdressed. It just puts me in the right mindset to be the best I can be at what I do.

      • phutatorius@lemmy.zip · 4 points · 10 hours ago

        I’m always fully dressed while working remotely. That is, if wearing a bow on my winkie counts as “dressed.”