Oh man, you guys should see what I was cooking up at my old place.
Head office was too shitty to give us an actual asset management solution, but we did have full access to the Microsoft suite. So I used SharePoint lists as databases, Power Apps running on iPads for all the data entry UX, and then like two dozen hacked-together Power Automate flows linking them all together, as well as pulling any info out of the actual IT systems head office used. And since we didn't have API access to those systems, any data feeding back into them took the form of automated emails that the poor 1st-line techs in head office had to sort through and process manually.
Nah bro, that bash alias is FULLY documented in .bashrc! Idiot.
I don’t see the alias in your .bashrc
yeah, um, about that. I have no idea where it comes from. We can type alias and see what it is, so if it's ever lost we can recreate it, but I looked for 30 minutes yesterday, even did a grep -R, and I have NO IDEA where it comes from, or why it's named electricboogaloo
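If anyone else ends up hunting a ghost alias: you can make bash trace every startup file as it sources it, then grep the trace for the alias name. A sketch, assuming bash (the PS4/xtrace trick is standard; electricboogaloo is just this thread's alias):

PS4='+ ${BASH_SOURCE}:${LINENO}: ' bash -lixc exit 2>&1 | grep electricboogaloo

The -l and -i make it read the whole login + interactive startup chain, -x prints every line it executes, and the PS4 prefix tags each one with the file and line number it came from.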
My current project has a crontab with 216 entries.
Well, here’s a sentence I haven’t been tempted to use before:
“I believe that may be too many crontab entries.”
Any problem in server administration can be solved with an additional crontab entry. Except for the problem of too many crontab entries.
And that’s why I added a crontab entry that periodically purges my cron configuration. That way, I’m forced to re-add only the truly necessary cron jobs, successfully reducing the number of crontab entries.
just randomly delete 50 of them.
Yes. The strongest crontab entries will probably restore themselves. (For anyone reading along, this is sarcasm. Don’t do this.)
a crontab can regenerate from bisection to form two whole crontabs
pshaw, just drop in there and combine a few
/etc/cron.d/first25 /etc/cron.d/second25 …
This is the way. Exactly what we did + migrated 80% of everything to k8s cronjobs and Argo workflows
At some point it may be good to migrate to Airflow or something similar.
It’s not the number of entries that makes it bad. It’s the fact that if you run crontab, they are gone… At first I thought you missed the -r. Then I checked. Defaulting to STDIN here is very, very dumb, IMHO. Almost as bad as putting the “edit” flag right next to the “delete everything without confirmation” flag on a Western keyboard (-e vs -r). Crontab is a really badly designed program that we just can’t fix, because everybody depends on its WTFs for something.
That’s why there’s a crontab rule to load the crontab from a file. Cronception if you will.
Make the rule start a secondary cron system. Otherwise it won’t run after you erase the crontab.
Here you go:
with-lock-ex -q /path/to/lockfile sh -c 'while true; do crontab cronfile; sleep 60; done'
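(For context: with-lock-ex comes from Debian's chiark-utils, if I remember right. It holds an exclusive lock on the lockfile while the command runs, so only one copy of that loop can exist, and crontab cronfile reinstalls the whole crontab from cronfile every minute, undoing any purge.)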
Use SystemD timers, you animal
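To be fair, the timer version doesn't even need unit files; systemd-run can spawn a transient timer. A sketch (the unit name and script path are invented):

systemd-run --on-calendar=hourly --unit=ship-zips /usr/local/bin/ship-zips.sh
systemctl list-timers ship-zips.timer   # last/next run times
journalctl -u ship-zips                 # captured output, for free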
the final part of that is “written by a person that left the company ten years ago”
I have a tool that I wrote, probably 5+ years ago. Runs once a week, collects data from a public API, translates it into files usable by the Asterisk phone server.
I totally forgot about it. Checked. Yep, up to date files created, all seem in the right format.
Sometimes things just keep working.
Meanwhile, I had to debug a script that zipped a zip recursively, with the new data appended. The server had barely enough storage left, as the zip took almost 200 GB (the data is only 3 GB). I looked at the logs; last successful run: 2019
Yes, had the same happen. Something that should be simple failing for stupid reasons.
Well, it’s not that simple… because whoever wrote it made it way too complicated (and the production version has been tweaked without updating the dev version, too)
A clean rewrite with some guard clauses helped remove the hadouken ifs, and actually zipping the file outside of the zipped directory helped a lot
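For anyone who hasn't met the term: “hadouken ifs” are the ever-deepening nested conditionals; guard clauses flatten them by bailing out early. A rough shape of that rewrite in shell (the condition checks and the upload helper are made up):

# before: hadouken ifs
if [ -d "$SRC" ]; then
  if [ -n "$(ls -A "$SRC")" ]; then
    if zip -r "$TMP/batch.zip" "$SRC"; then
      upload "$TMP/batch.zip"
    fi
  fi
fi

# after: guard clauses, happy path stays flat
[ -d "$SRC" ] || { echo "no source dir" >&2; exit 1; }
[ -n "$(ls -A "$SRC")" ] || exit 0     # nothing to ship
zip -r "$TMP/batch.zip" "$SRC" || exit 1
upload "$TMP/batch.zip"                # hypothetical upload helper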
I mean, I have to say I’ve hastened my own demise (in program terms) by over-engineering something that should be simple. Sometimes adding protective guardrails actually causes errors when something changes.
Am I understanding that last part correctly?
[…] and actually zipping the file outside of the zipped directory helped a lot
Did they just automatically create a backup zip-bomb in their script‽
I oversimplified it, but the actual process was zipping files to send to an FTP server.
The cron job created the zip in the same directory as the files being zipped, then sent the zip, then deleted it.
Looks fine, right? But what if the FTP server is slow and uploading takes more time than the hourly cron interval? You now have a second run that zips the whole folder, previous zip file included, which slows down the upload even more, etc…
I believe it may have been started by an FTP upload erroring out and forcing an early return without any cleanup, and it progressively got worse
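In code, the failure mode looks roughly like this (a sketch; the paths and the upload helper are made up):

# buggy: the archive is written INSIDE the directory being archived
cd /var/outgoing || exit 1
zip -r batch.zip .          # '.' sweeps up any leftover batch.zip from a stuck run
upload_to_ftp batch.zip     # hypothetical upload step; can take over an hour
rm -f batch.zip             # never reached if the upload errors out early

# fixed: write the archive outside the directory it captures
zip -r /var/spool/outgoing-batch.zip /var/outgoing
upload_to_ftp /var/spool/outgoing-batch.zip
rm -f /var/spool/outgoing-batch.zip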
… I suppose this happened. The logs were actually broken: they didn’t actually include the message part of the error object, only the memory address.
Need some monitoring!
Oh, no need. The client didn’t notice anything in 6 years, and the reason we had to check is because they wanted us to see if we could add this feature… that already existed.
My favorite part is when you do some extensive analysis from time to time (e.g. to prepare an upgrade to a new major version) and, as a side effect, stumble upon some workflows/pipelines/scripts that have been constantly failing (and alerting the process owner) every five minutes for… at least a few months already.
Then you go and ask the process owner and they’re just like “yeah, we were annoyed by the constant error notification mails, so we made a filter that auto-deletes them”…
Yeah, all these simple data processing scripts will always work as long as both sides stay the same/compatible
Yep. It seems they haven’t changed a thing about the format. Probably a script much older than mine on their end is generating it too.
Isn’t that true for all of data processing?
Maybe. But webdevs have made it their mission to make it seem otherwise
I’ll hear NO aspersions against my precious Cron!
Cron is magic. Cron is civilization!
This might come in handy.
Naw, mate, that’s Crom.
OK, I got called out
Ha, loser.
*glances over at 6 bash scripts and 2 cron jobs*
Not you, you’re perfect
I feel attacked
Suck my dick, O’Leary
I know there’s a meme here, but as a Canadian, I’m sorry about that traitorous asshat.
A self-written shell script “daemon” that tails & greps log output for “ERR|FAIL”
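Which, for the record, usually amounts to something like this (a sketch; the log path and mail address are invented, and it assumes a working local mail setup):

# hand-rolled "monitoring": follow the log across rotations and
# mail every line that matches the error pattern
tail -Fn0 /var/log/app/app.log \
  | grep --line-buffered -E 'ERR|FAIL' \
  | while read -r line; do
      printf '%s\n' "$line" | mail -s 'app error' ops@example.com
    done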
How can a shell alias be undocumented? Type alias, and there is the one-liner, which can’t be too complicated due to the lack of variables. Nobody writes down that if you run the stuff on a different machine, you have to create the alias first.
And once you lose the machine and are trying to restore your backups, you can’t run alias and discover what doThingy actually does.
alias thisdoessomething='cd /home/linuxuser/ && ./myscript.sh'
alias cd='echo "command not found"'
If you try hard enough
Is this a joke? ’Cause that won’t show up on another machine. Of course it’s undocumented.
Since I’m somewhat of a simpleton… isn’t that how pipelines actually work? The only difference being that they’re all (the scripts) available from a centralized system and triggered, e.g., with webhooks?
Instead of a local script on a server, the system opens, e.g., an SSH session and runs the script step by step remotely?
So is that the joke or am I missing something?
Pipelines are meant to be versioned and replicable, as opposed to a hack job that only runs on a forgotten server in someone’s closet, as depicted in the meme.
This used to be my remote work wardrobe. But now I dress more casually.
As much as I love the magic of working and attending meetings in your undies, I’ve found I’m a far better professional if I’m actually fully dressed while I work. And when I go into the office, I always wear something with a collar, even at workplaces where that’s overdressed. It just puts me in the right mindset to be the best I can be at what I do.
I’m always fully dressed while working remotely. That is, if wearing a bow on my winkie counts as “dressed.”