Devs used to have to consider deployment and uptime! They still should. We as an industry became arbitrarily segmented and irresponsible. I have never gotten used to this tossing shit over the fence.
Yeah that’s some Pope shit to do
Didn’t even think of that. You’re right.
themaninblack@lemmy.world to Technology@lemmy.world • U.S. Tech Layoffs Hit Two-Decade High in October (English)
7 · 2 days ago
Dual citizen with Australia, sorry. Though it's fairly light paperwork for Americans in tech: from the U.S., your best chance is to get hired stateside by a big company with an Aussie HQ (Atlassian, Xero, Canva, FAANG, etc.) and then transfer.
themaninblack@lemmy.world to politics@lemmy.world • Trump Thinks Canceled Flights, Long TSA Lines, and Chaotic Airports Will End the Shutdown. He's Wrong.
2 · 2 days ago
Much better political discussions here than on Reddit.
themaninblack@lemmy.world to Technology@lemmy.world • U.S. Tech Layoffs Hit Two-Decade High in October (English)
13 · 2 days ago
I left and got two Sr SWE positions within 3 months. It's like the '90s down here.
I’m actually removing this myself because the rule is no politics
I’m going to try. Could be:
- A long-running UPDATE, which can temporarily lock all of the data being updated. A lock is basically the relevant data being frozen while the transaction executes. This can happen at the field, row, or table level in most robust database management systems, but in SQLite, while a create, update, or delete is actually being written to disk, the whole file (i.e. the entire database) is locked, even for processes that only want to read (there's a sketch of this at the end).
The solution is to wait for completion, but your query could take 7 million years to complete, so… you might not have the patience. You could also just exhaust the machine's compute/memory resources.
This feels bad when you expected a simple transaction, or when you expected the update to apply to only a small subset of data. It's possible you're using a suboptimal query strategy (e.g. many JOINs, a lack of indices, not using WITH), or that you're running your UPDATE across a huge number of records instead of the three you expected to change.
And/or
- A deadlock, meaning the same data is being operated on at the same time, but the operations can't proceed because each transaction holds a lock the other one is waiting on (a circular wait).
BEGIN means the transaction has started; you then use COMMIT to actually finish and apply it. If another query is operating on the same data during this window, even data that's incidental and only touched to make a JOIN work, the transactions can "overlap": each one holds a lock the other needs, so neither can proceed (most engines detect this and abort one of the transactions as the victim).
SQLite is single-file based and has a more basic, broader lock than Postgres or other DBMSes. This means SQLite doesn't deadlock, because it processes write transactions one after another, but that paradigm can slow everything down vs. MariaDB, Postgres, etc.
Also see ACID compliance for further reading (https://en.wikipedia.org/wiki/ACID)
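Here's a minimal sketch of that whole-database write lock, using Python's stdlib sqlite3 module (the file name, table, timings, and busy timeout are all invented for the demo). One connection holds a long write transaction; a second writer with a short busy timeout then gets "database is locked" instead of queueing behind it:

```python
import sqlite3
import threading
import time

DB = "demo.db"  # hypothetical file, just for the demo

setup = sqlite3.connect(DB)
setup.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, qty INTEGER)")
setup.execute("INSERT INTO items (qty) VALUES (1)")
setup.commit()
setup.close()

def slow_writer():
    # BEGIN IMMEDIATE takes SQLite's single write lock up front; sleeping
    # inside the open transaction simulates a long-running UPDATE.
    w = sqlite3.connect(DB)
    w.execute("BEGIN IMMEDIATE")
    w.execute("UPDATE items SET qty = qty + 1")
    time.sleep(2)
    w.commit()
    w.close()

t = threading.Thread(target=slow_writer)
t.start()
time.sleep(0.5)  # let the slow writer grab the lock first

# There is only one write lock for the whole file, so with a short busy
# timeout this second writer errors out instead of waiting 2 seconds.
w2 = sqlite3.connect(DB, timeout=0.1)
try:
    w2.execute("UPDATE items SET qty = 0")
except sqlite3.OperationalError as err:
    print(err)  # -> "database is locked"
finally:
    w2.close()
    t.join()
```

In practice you'd raise the busy timeout, retry, or enable WAL mode (which at least lets readers proceed alongside a writer) rather than failing fast like this.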
All my homies work at Lawrence Livermore
themaninblack@lemmy.world to United States | News & Politics@midwest.social • Why Is There a "The Oval Office" Sign Outside The Oval Office?
7 · 5 days ago
Look at that subtle off-white coloring. The tasteful thickness of it.
I believe you just described MCP
And Australia! And I am from both.
You have me thinking. My gut tells me this is true.
For example, if you have a segmented auth service and someone gets root on it, they can act as anyone else, but they don't get the whole database if the auth service itself can't reach it.
If your load balancer gets compromised, you could cause denial of service or act as a man-in-the-middle for all requests.
If your database gets got, that’s the worst, but you generally can’t intercept web requests and other front-end facing things.
But, I’d like to play devil’s advocate here. I feel that most of these segmented architecture strategies may have negative security implications as well.
First, the overall attack surface increases. There are more redundant mechanisms, more links in the chain, and probably more differing types of security tokens/certificates that can get exploited. It also adds maintenance burden, which I believe reduces security, because when things are cumbersome, other priorities get in the way.
In my examples above, a compromise of the auth service in most cases pretty much means a complete compromise of whatever your system allows its highest-level users to do. Which is normally a lot.
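To make that concrete, here's a toy sketch in stdlib Python (the key, claim names, and token format are invented, not any particular auth stack): once an attacker with root has the auth service's signing key, every downstream service that trusts that key will accept identities the attacker minted themselves:

```python
import base64
import hashlib
import hmac
import json

# Hypothetical signing key as it might sit on a compromised auth box.
SIGNING_KEY = b"secret-key-stored-on-the-auth-service"

def mint_token(claims: dict) -> str:
    # Sign arbitrary claims with the auth service's key (HMAC-SHA256 here).
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_token(token: str) -> dict:
    # Downstream services only check the signature, so they trust anything
    # minted with the key -- including forged admin claims.
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(body))

# No admin ever logged in, yet this token verifies everywhere.
forged = mint_token({"sub": "attacker", "role": "admin"})
print(verify_token(forged))
```

Key rotation shortens the window, but it doesn't change what the compromise means while the key is valid.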
Getting the load balancer will let an attacker MITM if TLS termination happens there, which can basically mean the same as the auth-service case, plus XSS-type stuff.
If the service hosting the database is compromised, it’s kinda game over. Including XSS.
So what have we gained here?
A monolith hosting all of these has more or less the same consequences if compromised. However, when it's all together, it becomes everyone's responsibility and there are more eyes on each aspect of your application. You're more likely to update the things that need updating, and traffic can be analysed a little more easily.
Just wanted to jot down some notes because I have a talk coming up and need to prepare for this question. Please prod my thinking, it would really help me out!