However, that’s not the point. The point is that companies, when brought into, say, the Pennsylvania Supreme Court on obscenity charges, can say they do everything technically possible to filter out these words, so they are not liable under whatever law applies – this usually makes prosecutors fail at trial, or reconsider bringing charges at all.
If you let anyone say the no-no words, and someone sends a rape threat to Nancy Pelosi on your platform, you could be liable for harassment, for hosting obscenity (a real charge in multiple US states), and for other such financial annoyances.
So it is cheaper to just have a set of policies and procedures in place, even if it objectively makes your platform worse, even if it objectively is not effective, simply because it looks better in court and gets you out of more fines. See the New Zealand law spurred on by the Christchurch shooter, which essentially requires every website to censor violent images and manifestos or pay a ridiculous NZ$5 million fine for every DAY the content stays up after being reported. If companies can use AI to scrub away anything that might break a law like that, even at the cost of harming other users, they are going to do so.
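To make the "objectively not effective" point concrete, here is a minimal sketch (entirely hypothetical – not any platform's actual code) of the naive blocklist filter this kind of policy tends to produce; the word list and messages are made up for the example:

```python
# Hypothetical example of a naive substring blocklist filter.
# Word list and test messages are invented for illustration.
BLOCKLIST = {"ass", "hell"}

def is_blocked(message: str) -> bool:
    """Flag a message if any blocklisted word appears as a substring."""
    lowered = message.lower()
    return any(word in lowered for word in BLOCKLIST)

# Classic failure modes: a harmless word containing a blocked substring
# gets flagged (the "Scunthorpe problem"), while trivially obfuscated
# text sails straight through.
print(is_blocked("what a classic movie"))  # True  - harmless, blocked
print(is_blocked("go to h3ll"))            # False - harmful, allowed
```

The filter fails in both directions at once, yet it is exactly the sort of documented "policy and procedure" that reads well in court.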
Companies, like all parasites, will actively shy away from poisonous food sources and pick other directions to go in.
Is a veiled threat not a threat that would hold up in (American) court?
It is, but it takes the blame off of the platform entirely.
To word it a different way: if I ran a service where, no matter what you wanted to say, I would write down your words, read them, and run through town shouting them until I found the person they were directed at, and then shout them at that person… I would be as liable for the words being said as the person paying me to say them.
If, however, I have a strict policy where I will only do the above after I thoroughly review and moderate your words, and you managed to sneak in a tongue twister that says something dirty that I didn’t catch until after I shouted it… I am no longer liable. I did everything a reasonable person could expect; you are the only one liable.
When people sue in the US (and when companies really fuck up), they sue the person liable plus every other party that could plausibly be included. The parties then shift blame around pretrial and try to prove they are not liable for reason xyz so they can get dismissed from the case. If that fails, each party sued essentially gets a trial on its specific liability, which has to be separately proved in court – and, if it makes it that far, in front of a jury (or a panel of judges, or a single judge, depending on the state and the kind of action).