The United States, stepping boldly into the 19th century…
What a contrast between the glorious race to the moon in the 60’s, and medicine by leeches under RFK Jr’s HHS and congresscritters wanting to bring back privateering in 2025… The Roman empire took centuries to collapse, and it only took the US 50 years. Quite stunning.
The mistake we made was thinking they want to return to the 1950s.
I can’t say that I’m against this, if used appropriately (which of course Trump won’t). For example, against entities whose respective jurisdictions do nothing about them, like North Korean hackers or Russian bot farms. It’s no secret that Russia, China, North Korea and Iran, to name a few, are actively directing hacker groups.
Most European powers banned letters of marque in the 19th century (via the 1856 Declaration of Paris), but the US never signed it because we didn’t have the navy to compete with them otherwise. That’s not true anymore.
It’s technically still in the Constitution as something Congress is allowed to do. Every once in a while, a right wing Congress critter thinks that because Congress is allowed to, then therefore, it should. No thought as to why.
It is a bad idea. The problem is that counter-hacks don’t work.
Any somewhat decent hacker knows the secret of backups and botnets. They don’t attack using their own PCs but some random grandma’s hacked bot PC. So when you counter-hack them, you just nuke random useless bot PCs, which doesn’t harm the hacker at all. And if you manage to hack their own infrastructure, they just wipe it and upload the backup.
So what’s more likely to happen under a scheme like this is that US hackers will just hack Russian infrastructure or companies, doing the same thing we hate Russia for.
Also, stuff like that tends to decrease security for everyone.
You are correct, but I just want to mention that the guys operating botnets are not usually the smart ones; they’re just the skids who have the patience to actually do social engineering and phishing, or to come up with clever ways to hide malware.
A lot of the time, the operators of these large networks are caught simply because they didn’t think they needed to hide the IP, MAC address, or hostname of the orchestrating machine. Sometimes it is as easy as subpoenaing the purchase records for an off-the-shelf VPS. One time, an operator was caught because a captured text file was encoded using a keyboard layout specific to one country.
That might be outdated in many cases. Botnet operators usually are infrastructure providers nowadays. Hardly anyone operates and uses a botnet at the same time. Usually you have a very professional group who create and run the botnet, and they then rent it out to another person or group who then actually use the botnet.
Getting caught is also not really the issue at hand. We are talking about counter-hacking against hackers operating from countries that don’t care what those hackers are doing. You can’t rely on, e.g., Russian police to bust a group of Russian hackers who hack US companies.
So finding these hackers is hardly of help, and hacking them to destroy their setups or something like that also hardly matters, since they can quickly recreate anything that was destroyed.
> US hacker will likely just hack russian infrastructure or companies, so doing the same thing we hate about russia.
I mean, that’s not a bad thing with the war in Ukraine unfolding.
First of all, there are better ways to deal with anything than privateering. There’s a reason why all countries in the world have abandoned it.
Secondly, everybody is operating under the assumption that cybercrime is something that happens and there’s no way around it. I contend that if software vendors were criminally liable for vulnerabilities in their software, you’d see a dramatic reduction in hacks very, very quickly.
As in, if a piece of software is exploited, the engineers who worked on it, their managers and the CEO of their company had better come up with extensive documentation proving how they did their best to implement security before releasing the unfortunate piece of code, else one or all of that bunch gets to spend time in the slammer.
If this was implemented into law, I guarantee you software would become very secure across the board in no time flat.
But of course, in the age of tech monopolies and generalized corruption, it will never happen.
You don’t need to (and shouldn’t) hold the lowest-level employees responsible. Just hold the company responsible with severe fines, like their entire revenue for a year or something. For gross malfeasance, hold the CEO and CISO (or CTO) responsible. That would be enough to see budgets for application and network security teams and training skyrocket. Right now those budgets are kept as small as possible while still avoiding catastrophic brand risk.
> First of all, there are better ways to deal with anything than privateering. There’s a reason why all countries in the world have abandoned it.
I mean, mostly that is because the speed of communication increased significantly, and most of the major powers increasingly formed alliances that existed mostly to enhance trade. So having Captain Ron murder all the cargo ships on behalf of England would not only hurt England’s pockets, but word would also get out really fast.
> Secondly, everybody is operating under the assumption that cybercrime is something that happens and there’s no way around it. I contend that if software vendors were penally responsible for vulnerabilities in their software, you’d see a dramatic reduction in hacks very, very quickly.
A dedicated blackhat is like someone who is dedicated to getting into your house. You can take precautions but if they want in, they’ll get in. Which is why there is so much emphasis on threat detection and policies to react to them.
Probably closer to twenty years ago than not, there was a pretty fun show on Discovery (?) called “To Catch a Crook” or something. The premise was that two brothers used to be burglars but now help families improve their security by selling name-brand security systems. Every episode would begin with them breaking in and explaining how, and then the security system would be installed. Then they would try again and always get in because the family didn’t turn it on or left an upstairs window open or whatever. Except for one episode where the family DID actually follow all best practices… so they just smashed a window and stole shit before the cops got there.
> As in, if a piece of software is exploited, the engineers who worked on it, their managers and the CEO of their company had better come up with extensive documentation proving how they did their best to implement security before releasing the unfortunate piece of code
This already exists. And much of the software that the world actually runs on has regular security audits from third parties and even governments. They suck, but they are also what led to “We are going to make damned sure every merge request has a detailed review” and so forth.
This is why “supply chain hardening” is such a big deal and why Canonical and Red Hat exist.
> or all of that bunch gets to spend time in the slammer. (…) If this was implemented into law, I guarantee you software would become very secure across the board in no time flat.
If this was the law, I guarantee you that every software company subject to these laws would shut down instantly. And the ones that are left would be structured as a series of shell companies to minimize liability and flee the country.
Don’t get me wrong. I don’t think (official) letters of marque are at all a good idea, for the same reasons we migrated away from them as a people: they are just a way to make liability traceable and trigger a war. But basically giving hackers the PMC treatment (which Russia, China, North Korea, etc. already publicly do) and sending them after enemy infrastructure? Welcome to the cold war of the 21st century… assuming we don’t just go hot in the next year or two.
My god, exactly this. Instead of AI slop features being slammed into every nook and cranny, we’d see software release rate slow to a crawl. Features would take way longer to produce and that would be a good thing. Software engineering licensing should also be a thing, just like with other engineering disciplines. Imagine if your building or bridge were treated like a typical software product. God damn terrifying. It is time for this discipline to grow up.
And software like Lemmy and PieFed would become nonexistent because they can’t afford to meet the requirements of that regulatory capture.
This would be disastrous for us as users of software and would only benefit the big companies.
That’s what I’m thinking as well. I’m sure you know the attached meme: modern software would collapse if FOSS software had to be certified like that.
As if we wouldn’t have a voice/vote to prevent that regulatory capture? And as if social software in particular wouldn’t be reshaped to avoid those regulations? So much of the world runs on FOSS already. It would be a monumental shift in the landscape and I sincerely doubt corpos would be successful in that regulatory capture without shooting themselves in the face. They need FOSS to be profitable.