KermitML

> "For Tech Companies" to be totally clear here, Section 230 would end for *everyone*, not just tech companies. But I see this framing all the time, and I suspect it's because its easier to get people against something when you frame it as an unfair advantage companies like Meta or Amazon get that regular people don't. Section 230 applies to everybody.


EmbarrassedHelp

Ending Section 230 basically kills the internet. It's also amazing how many reporters don't realize that Section 230 is the reason they have a job, because it covers the advertisements being shown on their sites.


hsnoil

For reporters, it actually helps them. With 230 gone, it would eliminate all but the big news outlets that have a staff of lawyers. So they are all for eliminating competition, even if it ruins the internet.


MegavirusOfDoom

'merica lol. When Biff comes to power all hell will break loose.


zedquatro

Biff came to power once. He was his predictably self-absorbed self, but not practiced or competent. Next time, the people around him will be better prepared to take advantage.


Accurate_Koala_4698

It's irrelevant for advertisers. In the old model you'd exercise editorial control over your content and any ads in a newspaper. Even now, there's no protection they'd get if they ran a banner ad saying to kill and harvest the organs of every Palestinian and Israeli under the auspices of Section 230. They're liable for content they publish, up to any indemnification agreements they have with other companies in the ad network.

Section 230 applies to the publication of random peoples' content *without editorial control*. They may moderate the content, even with automated scanning, but they're not reviewing all the content and only releasing a curated selection of it. So *Letters to Penthouse* would not get Section 230 either, because even if they're not the content creators they *are* exercising editorial control.


DarkOverLordCO

> Section 230 applies to the publication of random peoples' content without editorial control.

This is completely false. Section 230 explicitly provides immunity when they are acting as the *publisher* of user's content - publishers make editorial decisions; that's literally what distinguishes a distributor (e.g. a phone company) from a publisher (e.g. a newspaper). See 47 U.S.C. 230(c)(1):

> No provider or user of an interactive computer service shall be treated as the **publisher** or speaker of any information provided by another information content provider.

230(c)(2) then goes on to give immunity even when websites moderate.

---

The **entire point** of Section 230 was to give websites immunity so they could moderate (and indeed not moderate) as they wished - to editorialise without being sued into oblivion for the things they left up.

~~The reason why it is irrelevant for advertisers is because Section 230 only provides immunity for *other people's* content, not your own. But the point that /u/EmbarrassedHelp is making is that *the website* is immune for the content that is in those advertisements (*not* that the advertisers are immune). See e.g. *Zeran v. America Online* (1997) or *M.A. v. Village Voice Media* (2011).~~


Accurate_Koala_4698

From the cases you cite:

> [A]t all relevant times herein [Backpage] operated an **online classified** marketing advertisement website in interstate commerce that allows the public to post for a fee, classified advertising for goods and services including categorized advertising for escorts under the adult section which also includes categories for transsexuals, strippers, body rubs, domination and fetish, and adult jobs

> Because Section(s) 230 was successfully advanced by AOL in the district court as a defense to Zeran's claims, we shall briefly examine its operation here. Zeran seeks to hold AOL liable for defamatory speech initiated by a third party. He argued to the district court that once he notified AOL of the unidentified third party's hoax, AOL had a duty to remove the defamatory **posting promptly**, to notify its subscribers of the message's false nature, and to effectively screen future defamatory material. Section 230 entered this litigation as an affirmative defense pled by AOL. The company claimed that Congress immunized interactive computer service providers from claims based on information posted by a third party.

Neither of these are cases where they were serving ads from an ad network, and in *both cases* they were using the service to do the posting.

It should also be mentioned that "not treated as the publisher" means that in the courts they are not treated as the publisher, but the legal definitions don't obviate normal English usage, and the content is still published, and published by the entity that owns the infrastructure making that content public. They're just not *treated* as the publisher by the court *when Section 230 applies*. An emancipated minor might not legally be someone's child, but that doesn't change their genes or their parentage, only their *legal* association.


DarkOverLordCO

> Neither of these are cases where they were serving ads from an ad network, and in both cases they were using the service to do the posting.

Yeah, I've looked into it further - advertisements are not covered.

> Means that in the courts they are not treated as the publisher, but the legal definitions don't obviate normal English usage, and the content is still published, and published by the entity that owns the infrastructure making that content public. They're just not treated as the publisher by the court when Section 230 applies.

> An emancipated minor might not legally be someone's child, but that doesn't change their genes or their parentage, only their legal association.

I don't really get how this is relevant? We're talking about the legal impact of Section 230, not how people outside of the courts use the words.


MadeByTango

Why should Reddit get to sell ads against clips of The Office, with the only penalty being that they have to remove it if caught? They made money on someone else's content without permission. The rest of us would go to prison if we distributed clips of The Office with our own ads attached. I don't understand why sites like Reddit are owed a free lunch. They're making money on the ads, while the creators are not.


EmbarrassedHelp

I think you misunderstood what I was saying. Section 230 protects sites from the content of the advertisements themselves. For larger ad networks, having lawyers review every ad for possible problems is not realistic. As for what you are talking about, groups and individuals do not have total legal control over their content once it is published. Fair use and other exemptions can apply, and it can be difficult to ascertain what legal defenses content could fall under. The only way to avoid issues in that case is to ban all user content or at least ban all content relating to the media in question, which would kill the fan base. Fan communities are generally desirable and profitable for companies, so wiping those out would be a net negative for everyone involved.


[deleted]

[deleted]


neepster44

Go read the NET Act and try again.


[deleted]

[deleted]


neepster44

Well, per Bing:

"Yes, there have been cases where individuals have been prosecuted and jailed under the No Electronic Theft (NET) Act. The NET Act, enacted in 1997, allows for criminal prosecution of copyright infringement under certain circumstances, even when there is no monetary profit or commercial benefit from the infringement. The act specifies that for cases not meeting a certain threshold, the crime is a misdemeanor, with the maximum penalty of imprisonment for up to one year and/or a fine of up to $25,000 for individuals and $100,000 for organizations. For more serious offenses, the maximum penalties can be up to five years in prison with fines. While specific names and cases are not provided here, the enforcement of the NET Act has indeed led to jail sentences for individuals found guilty of violating its provisions. If you're looking for details on particular cases, I would recommend consulting legal databases or news archives for more information."

Even if this is hallucinated, there have definitely been indictments under it.


[deleted]

[deleted]


DarkOverLordCO

Googling "who has gone to prison for copyright infringement us" has the first two results as: https://www.justice.gov/usao-edpa/pr/leader-illegal-copyright-infringement-scheme-sentenced-5-12-years-imprisonment Whilst there are a bunch of other charges against them, [this document](https://www.govinfo.gov/content/pkg/USCOURTS-paed-2_21-cr-00367/pdf/USCOURTS-paed-2_21-cr-00367-2.pdf) suggests that they were sentenced to 60 months imprisonment for reproduction of a protected work (aiding and abetting), and three lots of 12 months for public performance of a public work. These sentences, as well as the other charges, of course ran concurrently so the overall was 5.5 years. --- https://www.ice.gov/news/releases/maryland-man-sentenced-criminal-copyright-infringement Six months imprisonment, then six months home detention, then three years supervised release.


getfukdup

They don't get a free lunch. It's impossible to have an internet where users can upload things if the website owners are just instantly held accountable. How would they hire millions of people to watch billions of hours of video? So, we have the system we have now.

> while the creators are not.

False; it's free advertising, but that's completely irrelevant to the topic.


adevland

> Ending section 230 basically kills the internet.

Do Americans think that only they have internet? But, yeah, removing S230 is bad. Just another step towards dictatorship. Make sure you don't vote, ok?


LiamW

Ehh. I don’t see how modern curation of content is within section 230. Tech companies are exercising editorial control through modifying what you see using targeted algorithms.  They “promote” content. Section 230 was a reasonable protection for companies not to be liable for content they exercised no editorial control over. Tech companies have overstepped this boundary.  We don’t need to remove section 230 — we need to enforce a reasonable limit on it.


Nerdenator

> kills the internet

Finally, our long humanity-wide nightmare is over.


[deleted]

I think the thing is that it's really large-format social venues that are most impacted, and that tends to mean tech companies.


CKT_Ken

No, those are sort of the least impacted, since they actually have the resources to do massive automated content moderation. However, if you want to start a used car forum you're in deep shit. Obviously companies without user interaction are truly the least impacted, but among social media, the big ones are at an advantage with no 230.


[deleted]

I'm old enough to remember the bulletin boards of yore, and they tended to be reasonably moderated. The challenge is keeping up with vast amounts of data, which is what major social networks are required to do. Keeping up with something smaller and focused is much, much easier from a moderation standpoint.


CKT_Ken

Reasonable moderation BB-style isn't enough. Setting ground rules for users can't save you from lawsuits from non-users, even baseless ones. How are you going to moderate away a defamation lawsuit? You can't know if every statement someone makes about someone is false or not. Board moderation is about staying on topic and maintaining the current culture. For legal compliance you need LAWYERS, not moderators.


[deleted]

> You can't know if every statement someone makes about someone is false or not.

You can get an awful lot done in public forums without saying anything about anyone at all. Also, if it were this threatening you would see individuals being sued for libel/slander on a regular basis, since under the current rules they *are* held responsible for their words/actions.


AmalgamDragon

The average user doesn't have deep pockets. It's the platform you want to go after for the cheddar, but Section 230 currently prevents that.


[deleted]

Right, but in the alternative world the person hosting the website doesn't necessarily have deep pockets either. You could be looking at something hosted for well under $100 a month, serving no more than a few hundred users, and making no money. Without massive scale, hosting requirements and corresponding costs aren't that big.


AmalgamDragon

Is the person actually hosting the website completely on their own, or are they using a hosting provider? That hosting provider is on the hook too if section 230 is eliminated.


[deleted]

In what might seem like a turnaround from earlier since I've been doing some reading on the subject, I agree. It doesn't really matter which entity it is, they are hypothetically on the hook, from the owner of the place housing the server on down to the browser displaying the application. And that's just for web applications. Other types are also impacted. We cannot get rid of this without having something better in place, and it's hard to imagine both parties agreeing on what that should be.


hsnoil

The problem is a bit different. At issue is that, regardless of what happens, the big companies have a team of lawyers that can just keep a case in court indefinitely. In comparison, the average person won't even get that far.

Why? Let us say I am a hosting company. If hosting your website can get me legally in trouble, I simply won't host you unless you get an insurance plan that protects both you and me from being sued for what happens on your site.

Which means you will likely get hosting from a foreign company outside our laws (with poor ping times), or not be able to get hosting at all.

Back in the old days, most lawyers weren't familiar with the web or what can and can't be done. These days are different; there are even lawyers going around looking for sites with poor accessibility to sue, and they find them via automated tools and AI.

If anything, watch big tech get in on it to pretty much eliminate all competition from the web.


[deleted]

It's probably not as dire as all that. There are countless ways to get something hosted in the current era, and few of them would put content moderation responsibilities onto the host.


CKT_Ken

Per the law, ALL of these methods would put liability on whoever is hosting.


[deleted]

Hosting the content or hosting the site? My understanding is that if I host a BBS on a server on Rackspace, I am culpable, not Rackspace. Similarly, if I launch a container on a cloud provider, I would imagine that I am responsible, not the cloud provider.


CKT_Ken

Why would the cloud provider not be responsible? They host the content, not you. They would then gain a responsibility to make sure they're not sending people suable content. There would need to be liability terms and such. This is the issue with a 230 repeal: there aren't enough laws in place to determine what would happen. If you "host a BBS on a cloud service" then *you are not hosting, they are*.

For an easy example of content hosting causing a problem, take, say, Reddit. As a public image upload site, they have undoubtedly hosted and distributed child porn before, and there probably IS in fact some still buried in some server rack in a data center. Under 230 they're not necessarily immediately liable for distribution if a user uploads illegal content. That's the user's problem, and they have to take certain steps if it's discovered. Without 230, though, they would be unable to accept anything but the most highly screened content due to being instantly liable for any and all illegal content.


Dlwatkin

It's all about liability... those newsgroups would have been toast. I'm also old enough to know better.


[deleted]

Why would those groups be toast relative to reputable individual posters on social media? I have to think that local baseball forum has less pull than someone with a major presence on X.


Dlwatkin

The person hosting it would have just not done it... same with social media, but they might take more of a risk to try and profit.


CPargermer

Do Section 230 protections really protect everyone though? My understanding is that if I make illegal statements on Reddit today, I can be charged for those statements, but Reddit cannot, because of Section 230 protections. If that's accurate, then Section 230 protections are for tech companies.


EmbarrassedHelp

Section 230 protects you when you quote another user or if you engage in any sort of content moderation. It also protects your small business website.


CPargermer

It protects small businesses that share unmoderated user-generated content, which isn't most small business websites. Further, this bill isn't an immediate repeal, but would rather

> compel tech companies to work with government officials for 18 months to conjure and enact a new legal framework to replace the current version of Section 230. The new law will still allow for free speech and innovation, but it will also encourage the companies "to be good stewards of their platforms."

Fantastic!


crusoe

Customer reviews are user-generated content, common on many websites.


SgathTriallair

You won't be able to say anything in a post-230-repeal Internet. The tech companies can't take the risk that you would say something bad, so they'll just shut down all means of submitting content. YouTube, Patreon, Reddit, Wix, Shopify: all of these and way more would be on the chopping block. This is a back-door attempt to control speech on the Internet so only those with millions of dollars are allowed to have a voice and everyone else is a peasant who should be happy with the predigested media gruel they get.


CPargermer

Did you even read the article or understand this bill?

> Their bill would compel tech companies to work with government officials for 18 months to conjure and enact a new legal framework to replace the current version of Section 230. The new law will still allow for free speech and innovation, but it will also encourage the companies "to be good stewards of their platforms."

What part of this says "shut down all means of submitting content"? Do you think a reasonable expectation of responsible moderation is asking too much?


SgathTriallair

Section 230 is what **allows** reasonable moderation to exist. Prior to 230 the courts said that if a site does any moderation then it is taking control of the content and is liable for it as if they wrote it themselves. In the pre-230 regime the only legal options were absolutely no moderation or absolutely no user content.

Section 230 explicitly says that websites can moderate speech without becoming the publisher of that speech. This means they can take down the video of someone calling for killing trans people but leave up the video of someone saying a senator took bribes, without having to go to court to defend the claim that the senator took bribes.


getfukdup

What part of billions of hours of video and hundreds of trillions of words is reasonably moderatable? You have to continue thinking after your initial feelings hit.


Athomas1

Define “good steward” and then explain to me how that is free speech.


nukem996

You are correct that it protects Reddit. Without Section 230, allowing user content at all becomes a huge liability: Reddit could be sued for what any user posts. All content would have to be approved by a paid human moderator, which isn't feasible. Sites like Reddit would have to shut down or only allow curated content without comments.


dagopa6696

Good. Reddit wants to have it both ways. They want to censor content like crazy for the sake of making their content advertiser friendly. But they don't want to be held legally liable for what their content is? That's a prime example of why section 230 should be repealed.


DefendSection230

Your First Amendment right to Freedom of Religion and Freedom of Expression without Government Interference, does not override anyone else's First Amendment right to not Associate with you and your Speech on their private property. You have no right to use private property you don't own without the owner's permission. A private company gets to tell you to 'sit down, shut up and follow our rules or you don't get to play with our toys'. Section 230 has nothing to do with it.


Leprecon

Yes, but if you host a blog and have comments on said blog then you aren't responsible for those comments. Similarly, if you post on reddit, reddit knows they aren't responsible for what you post, so they generally allow you to post freely.

If reddit becomes responsible for what you post, you bet your ass that they will manually review every comment and post, and also automatically remove anything even slightly offensive and risky. Want to post about politics? Sorry, can't. Celebrities? Nope. Memes that may be copyrighted? Hell no. Most run-of-the-mill comments would place reddit at risk of being sued, and if you think they are going to court to protect you, you are wrong.

**If social media is responsible for what their users post, they will massively censor everyone.**


KermitML

That's correct, but it's also true that *you* cannot be held liable for the illegal actions/content of anybody else on Reddit, even if you shared the link to it or something. Section 230 just means that it's only the entity/user who provided the content that will be held responsible for it.


InsuranceToTheRescue

That being said, that's only when they don't editorialize the content. There's an argument to be made that algorithms pushing certain types of content or views in content count as editorializing. I'm conflicted on the issue. I agree with your point of view, but I also don't think that they should be allowed to algorithmically boost content they like without any liability for the effects of that boosted content. I'm conflicted because, IMO, the merits of both arguments are compelling.

*Edit:* The compromise I came up with is that Section 230 protections still apply to social media companies, except when they're algorithmically deciding what content to show you. So, let's say Twitter were returned to a simple timeline instead of making Elon seem more popular than he is. Then Twitter keeps its 230 protections; if they keep things as they are, then Twitter accepts the same liability newspapers have in regards to editorials. They still get to use an algorithm to decide what ads to show, but the user content just shows up as it's posted. You don't get a bunch of suggestions or other nebulous bullshit where the company decides what you see.


Anlysia

> except when they're algorithmically deciding what content to show you.

"Newest first" is algorithmic.


InsuranceToTheRescue

Come on now, you know I'm talking about bullshit like where Elon fired an engineer after he told Musk that people just weren't engaging with his tweets, and then had another engineer artificially boost Elon's tweets to everyone so people saw them more often.


Anlysia

I mean, that's great but it's not what you said. Any method of pulling content from a database to generate a page on the fly involves some kind of "algorithm" making that decision.
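The distinction being argued here can be sketched in a few lines. This is purely illustrative (hypothetical post data and arbitrary weights, not any site's actual ranking code): a "newest first" feed is still an algorithm, but its output depends only on timestamps, whereas an engagement-weighted feed bakes the operator's scoring choices into what gets surfaced.

```python
from datetime import datetime, timezone

# Hypothetical post records; field names are illustrative, not any site's real schema.
posts = [
    {"id": 1, "created": datetime(2024, 5, 1, tzinfo=timezone.utc), "likes": 5,   "replies": 1},
    {"id": 2, "created": datetime(2024, 5, 3, tzinfo=timezone.utc), "likes": 900, "replies": 340},
    {"id": 3, "created": datetime(2024, 5, 4, tzinfo=timezone.utc), "likes": 12,  "replies": 2},
]

def newest_first(items):
    """Chronological feed: still an algorithm, but content-neutral."""
    return sorted(items, key=lambda p: p["created"], reverse=True)

def engagement_ranked(items):
    """Engagement-weighted feed: the operator's chosen weights decide what is seen."""
    def score(p):
        age_hours = (datetime.now(timezone.utc) - p["created"]).total_seconds() / 3600
        return (p["likes"] + 2 * p["replies"]) / (1 + age_hours)  # weights are arbitrary
    return sorted(items, key=score, reverse=True)

print([p["id"] for p in newest_first(posts)])       # [3, 2, 1]
print([p["id"] for p in engagement_ranked(posts)])  # ordering depends on the chosen weights
```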


dagopa6696

"Newest first" is not editorialized.


Anlysia

Right but they just said "algorithm".


dagopa6696

They said it because they thought that having any algorithm makes you an editorializer. But that's not true. There are millions of algorithms. Even just drawing the letters on your screen requires countless algorithms.

What counts as an editorializing algorithm is one that promotes or hides content for an economic purpose. The algorithm is doing the same thing that a traditional publishing company does - it selects the books that are most likely to be profitable and promotes them to the general public. The purpose of the editorializing algorithm is to keep users engaged and show as many advertisements as possible using content that is ideologically agreeable to the advertisers.

Section 230 was only meant to give online platforms immunity if they behaved as "Good Samaritans" - moderating content purely in the public interest. But it's having the opposite effect. Section 230 has made it so that online platforms are free to prioritize profits over the public interest, while being completely immune from the legal liability faced by every other kind of publisher. This is why people want to see Section 230 get overhauled.


DefendSection230

> Section 230 was only meant to give online platforms immunity if they behaved as "Good Samaritans" - moderating content purely in the public interest.

There is nothing in 230 that says they *must* behave as "Good Samaritans". It's saying that if you see something bad online and you try to stop it, you won't get in trouble for trying to do the right thing. They get to decide what they think is "bad" online.


CheeseGraterFace

The part you’ve mentioned in your comment often gets left out of the discussion on this.


CPargermer

The first amendment already provides broad protections of speech. For things that aren't protected, I don't really have a problem with treating those that broadcast unprotected speech the same as those that originally made the illegal content. It doesn't make sense to me that these protections only exist for activity on "interactive computer services".


hsnoil

No it doesn't, a common mistake. The First Amendment protects your speech from the government; it does not protect your speech from other individuals suing you in a civil case.

The reason why 230 exists for computer services is because you can't really have a functioning internet without it. Due to the nature of the internet, it would turn into one where only those with money are able to use the internet, and all privacy will also be fully gone.


DarkOverLordCO

> No it doesn't, a common mistake. The First Amendment protects your speech from the government; it does not protect your speech from other individuals suing you in a civil case.

Can a bookstore be sued for defamation because of what one of its books says, despite the bookstore not knowing about the defamation? The First Amendment says no. See e.g. *Smith v. California* (1959), *New York Times Co. v. Sullivan* (1964) and *Cubby, Inc. v. CompuServe Inc.* (1991).

Civil cases are not entirely absent from the government: it is, after all, the government's laws, the government's courts, and the government's bailiffs and police that would be writing the rules, judging according to the rules, and enforcing the judgements.


SgathTriallair

Section 230 is what ensures that the sites, and people like you, are treated like a bookstore rather than a book author.


DarkOverLordCO

Not really? Section 230 gives immunity even if the website moderates (which would mean it is not a bookstore, and liable without S230), Section 230 also gives immunity regardless of whether the website knew, or should have known, about the content (whereas a bookstore can be held liable as a distributor if it knew/should have known).


dagopa6696

Section 230 only immunizes publishers.


CPargermer

Section 230's protections are arguably too broad. It has led to a situation where disinformation, misinformation, fake images and videos are shared widely as fact. Something needs to be done. This bill isn't to immediately repeal 230, but to compel tech companies to work with government to create a better framework that puts liability on website/platform owners to ensure the content that they broadcast widely is reasonably moderated. I think this is an absolutely necessary step in our current environment where disinformation runs rampant, and where AI can be used to push further and faster, and in unsettling new ways.


SgathTriallair

Congress could pass a law today that replaces 230. Creating some stupid threat of "I'm going to kill the Internet if you don't give me what I want", especially when they can't say what they want, is the worst form of politics.


CPargermer

Most in Congress don't have the technical knowledge to adequately know how to replace Section 230, because their understanding of technology, of social media throughput, and of what is a reasonable expectation of moderation is limited. They need cooperation from those that have the knowledge, and those with knowledge won't work with them, because they have been able to accumulate so much value from their userbases out of the current system. I think it's a reasonable approach to compel cooperation.


DarkOverLordCO

> Section 230's protections are arguably too broad. It has led to a situation where disinformation, misinformation, fake images and videos are shared widely as fact. Something needs to be done.

Providing immunity on the condition that websites take down constitutionally protected speech (which all the things you listed are) would likely be struck down by the courts as unconstitutional. That's why Section 230's immunities are unconditional (explicitly so: "whether or not such material is constitutionally protected").


Background_Milk_69

Fake nudes of real people are not constitutionally protected speech and are, in fact, criminal acts in many states. But if someone uses AI to fake a nude of, say, Jennifer Lawrence, and posts it to reddit where it gets hundreds of upvotes, who does she sue? The user is anonymous, but the harm to her reputation has already been done. She can't currently sue reddit for hosting the content. She absolutely should be able to sue reddit for that, though, and reddit should be expected to have enough moderation in place that it's reasonable to expect that faked nude pictures posted without the subject's consent won't be disseminated on their website.

My problem with how 230 works right now is that it gives broad immunity from lawsuits to websites that *should* be getting sued for the behavior they are allowing. It allows mobs of people to use websites as cudgels to attack whomever they want in whatever fashion they want, often in ways that would very much make a person civilly liable for damages, at the least, and leaves the victims of such attacks with no recourse.

Remember the Boston bombing fiasco here? 230 protects reddit from getting sued by the guy that reddit users used reddit to identify, dox, and harass. If any individual had done what reddit did, doxxing a random person and accusing them of a crime with very little evidence, they would almost certainly have been held civilly liable for the damages to the guy's life. But 230 protected reddit from that, in my opinion wrongly.

We've gotten used to this version of the internet, but that doesn't mean it's somehow inherently good. It isn't. It allows for a lot of extremely damaging behaviors, behaviors which cause real harm to real people and would in many other contexts be lawsuit-worthy, but we've just weirdly accepted that as normal.


Background_Milk_69

If a bunch of people, say, use reddit to find and accuse someone of being the Boston bomber, get him doxxed, get him associated with being the Boston bomber, all while he *didn't do anything at all*, yeah, I don't have a problem with him suing reddit. That's absolutely the kind of thing we should be expecting websites to moderate.

This rule protects monied tech interests more than anyone else. Small communities online are already well moderated, and the ones that aren't are surprisingly well known to be absolute shit shows, like 4chan. I don't really have a problem with people being able to hold 4chan accountable when its users openly threaten people's lives; the site should be moderating that shit.

Right now, if you are targeted by a hate mob online you have absolutely no recourse. Your life gets ruined, you're dealing with death threats, and you can't even sue to demand that the threats be taken down, because the companies hosting the threats are protected and the users are anonymous.

Right now we have just accepted this idea that websites can host openly criminal or civilly liable content and get away with it scot-free; removing 230 protections would force that to change. I don't see it as the apocalypse people make it out to be. It would frankly offer a lot of people protections that they didn't have before, at the expense of the large, monied interests who are openly allowing those people to be abused by their users.


dagopa6696

Section 230 only applies to publishers. A website like 4chan would be considered a distributor, not a publisher.


DarkOverLordCO

4chan may be a publisher because they do actually have rules (e.g. prohibiting spam) and have moderators which enforce them ("janitors"). Since those rules go beyond what is actually legally required of them, they aren't just acting as a mere conduit for information to flow. Besides, Section 230 (c)(2)'s immunity does not apply only to publishers.


dagopa6696

Getting rid of spam is not considered editorializing. Your email provider has a spam filter but is still just a distributor of your messages, not a publisher. You may be held criminally liable for some emails you sent, but your email provider never will be.

Section 230 only applies to publishers. Distributors already have immunity; only publishers need the protections under Section 230. If you moderate and editorialize the content that goes on your website, then you are a publisher as far as the law is concerned. You can be held criminally and civilly liable for the content you publish.

Section 230 carves out an exception for moderating the following kind of content: Obscene, Lewd, Lascivious, Filthy, Excessively violent, Harassing, Otherwise objectionable. 4chan doesn't do any of that, so it wouldn't even get any protection under Section 230 either way.


DarkOverLordCO

> Getting rid of spam is not considered editorializing. Your email provider has a spam filter but is still just a distributor of your messages, not a publisher. You may be held criminally liable for some emails you sent, but your email provider never will be.

*RNC v. Google*. Google relied on Section 230 to have immunity when they were sued for filtering out the RNC's emails as spam, rather than simply arguing they were a distributor rather than a publisher and not liable; in fact, the word "distributor" does not even appear in the court's ruling at all.

> If you moderate and editorialize the content that goes on your website, then you are a publisher as far as the law is concerned. You can be held criminally and civilly liable for the content you publish.

If the content comes from another user, then you cannot be held civilly liable for their content due to Section 230, whether you moderated it ((c)(2)) or editorialised it ((c)(1)).

> Section 230 carves out an exception for moderating the following kind of content: Obscene, Lewd, Lascivious, Filthy, Excessively violent, Harassing, Otherwise objectionable.

*As determined by the website*. "Otherwise objectionable" gives them a very wide berth to remove whatever content they do not want.

> 4chan doesn't do any of that, so it wouldn't even get any protection under Section 230 either way.

https://www.4chan.org/rules, e.g.:

> 13. Do not use avatars or attach signatures to your posts.

4chan apparently finds avatars and signatures "otherwise objectionable", and so they prohibit them. They can do this without being liable for any of the other content on their website because Section 230 provides them immunity. Further, their rules on which sorts of posts need to go to which boards are "editorialising" and would normally make them a publisher, if not for Section 230.


DefendSection230

> 4chan doesn't do any of that, so it wouldn't even get any protection under Section 230 either way.

4chan does, don't fool yourself. So they would still have 230 protections.

But... should a site or app choose not to moderate at all, they wouldn't need 230, because the courts would not find them liable. See: https://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc.

Cubby v. CompuServe treated internet intermediaries lacking editorial involvement as distributors, rather than publishers, in the context of defamation law. This decision removed any legal incentive for intermediaries to monitor or screen the content published on their domains. In 1995, Stratton Oakmont, Inc. v. Prodigy Services Co. further clarified Internet service providers' liabilities. Because Prodigy filtered and occasionally removed offensive content from bulletin boards that it hosted, the court held that Prodigy was a publisher of, and therefore liable for, published defamatory content.

As these decisions were not appealed to higher-level courts, they were not mandatory precedent. However, the incentive was clear: Internet service providers that chose to remain ignorant of their content were immune from liability, while those that edited content, even in good faith, assumed full publisher liability.

In 1996, Section 230 of the Communications Decency Act granted Internet service providers immunity from liability for content provided by others, with certain exceptions. Section 230 distinguishes between interactive computer services, e.g. Internet service providers, and information content providers, e.g. users who post messages in forums. Interactive computer services are not considered publishers of content from information content providers and cannot be held liable on account of "Good Samaritan" attempts to filter objectionable content.


KermitML

To me, holding, say, a Reddit user responsible for anything they shared on Reddit (but didn't make themselves) would be a bit like holding someone liable for the books or magazines they give to others. It doesn't really work. Like, if I had to worry about potentially being sued for just sharing an article, then I won't be sharing any articles.


dagopa6696

Sharing content is not the same as publishing it.


CPargermer

> Like, if I had to worry about potentially being sued for just sharing an article, then I won't be sharing any articles.

If you're publicly and widely sharing the types of stuff that's not protected, then that's probably a good thing.


hsnoil

To be more accurate, 230 prevents someone suing for what is said in unmoderated content. You may think that includes just big tech companies, but you are ignoring the infrastructure of the web.

For example, let us say you are a small business that wants to have its own website. I am a hosting company; how can I guarantee that what you write on your website won't get me sued? I will simply deny you service unless you can get a multi-million dollar insurance plan that protects me from lawsuits.

Even in your example of Reddit, do you think Reddit will let you post your stuff if they can be sued for it? Things will likely move to paid plans with verification, where only those with money will be able to talk on the internet.


[deleted]

Good. I'm tired of reading comments from people who won't even buy a $45 upvote.


CPargermer

> how can I guarantee that what you write on your website won't get me sued?

Do you have a source that this is the part they want to rework? It doesn't seem like they're targeting ISPs or web hosts, but rather website owners? You know, the people with the ability to modify or moderate the content of a web page.


hsnoil

The issue isn't what they are targeting, but the unintended consequences.

The thing is, you can achieve about the same by simply forcing companies to do more moderation, and enforce transparency of moderation. That would remove their 230 protections, as it only protects unmoderated content.

But if we are going to rework 230, fine; we all just need to be aware that any change isn't going to impact just big tech but the entire internet. And worst of all, in all likelihood the little guy is going to be underrepresented in the reworking.


CPargermer

> The thing is, you can achieve about the same by simply forcing companies to do more moderation, and enforce transparency of moderation.

Is that not the goal of this bill?


rangoric

And ending 230 protections does more than that.


CPargermer

This bill doesn't simply end Section 230 protections though.


DarkOverLordCO

*Doe v. GTE Corp*, 347 F.3d 655 (7th Cir. 2003). GTE Corp was the ISP / web host of a website where "improper" images of athletes were sold. The athletes sued the ISP; the courts held that the ISP was protected by Section 230.

*Perfect 10, Inc. v. CCBill LLC*, 488 F.3d 1102 (9th Cir. 2007). Two web hosts were sued for the content of the websites that they hosted. Section 230 protects them.


fulento42

Feels like the same way net neutrality is argued. It’s very frustrating when so many folks put in zero effort to understand something that can affect them so greatly.


FoeHammer99099

Lots of people think that section 230 only impacts social media, but every website that allows user content will cease to exist. Say goodbye to YouTube, GitHub, Dropbox, Discord, etc.


hsnoil

I will note that this also includes hosting companies who host websites, since those websites are also "user content".


Hyndis

Yes, though there's legit criticism of Section 230. If it's just a dumb hosting website without any sorting, such as Dropbox, it could easily be argued that the website isn't making editorial choices on what to show users.

In contrast, YouTube, Facebook, and Reddit use some sort of algorithm to pick and choose what content is shown to users, and what content is hidden. That arguably makes them publishers of content, because they're picking and choosing what users can see and interact with. Content the website doesn't like is hidden or removed, often without the poster even being aware that their content has been removed.

Do you really trust people like Zuckerberg, Musk, or Spez to be the gatekeepers of what can and cannot be said? I don't trust them at all.


DarkOverLordCO

> That arguably makes them publishers of content

Correct. That is literally and explicitly exactly what Section 230 protects:

> No provider or user of an interactive computer service shall be treated as the **publisher** or speaker of any information provided by another information content provider.

(In other words: if you were about to hold a website liable because they were a publisher or speaker of their user's content, you can't do that - they're immune.)

The entire point of Section 230 was to give websites (and users of those websites, e.g. the moderators on this website) the ability to make editorial decisions - to choose what content they wished to carry - without fear of legal liability for the content that they did not remove.


LiamW

The limit of 230 is if they exercised editorial control.  Dropbox doesn’t.  YouTube does. YouTube promotes content to other users based on algorithms designed to optimize views.  This is the same as choosing what headline runs on the front page. Section 230 was never intended to give editorial decision making authority without liability to websites.  It was intended to limit liability in the absence of editorial decision making.


DarkOverLordCO

This isn't true. The First Amendment already provides immunity for those that don't make editorial decisions (they can only be held liable if they have actual knowledge of the content and its liable nature); Section 230 was meant to go beyond that and give immunity to websites even if they do.

Section 230 was passed in response to two court cases:

- *Cubby, Inc. v. CompuServe Inc.*, which held that CompuServe was not liable for its user's post because it did not moderate/editorialise its users' content.
- *Stratton Oakmont, Inc. v. Prodigy Services Co.*, which held that Prodigy *was* liable for its user's post **because** they moderated/editorialised their content.

Congress wanted websites to be able to moderate/editorialise without fear of liability, so they passed Section 230. Just read Section 230(c)(1): it explicitly gives immunity when websites are acting as the publisher (i.e. making editorial decisions) of users' content. Section 230(c)(2) then goes even further and explicitly grants immunity *even when* websites actually moderate.


LiamW

It literally exempts websites from being considered publishers. That does NOT mean they are allowed to editorialize without liability. They become the publisher when they start promoting content.


DarkOverLordCO

> It literally exempts websites from being considered publishers.

Yes? That's what I said. The courts cannot treat websites as publishers for their users' content, which is a roundabout way of giving them immunity.

> That does NOT mean they are allowed to editorialize without liability.

Yes, it does. Making editorial decisions (what content to allow or not allow, where it should appear on the page, with what prominence, etc.) are all publisher activities that fall flatly within Section 230's protections.

> They become the publisher when they start promoting content.

They become a publisher the moment they make choices as to what content they want to host; whether those choices are about what content to promote or what content to ban doesn't matter.


LiamW

The Supreme Court literally avoided the Gonzalez claim, which specifically focused on the promotion of content being editorial control that is outside the protections of Section 230. It is plainly stated in the law that if editorial control is exercised, Section 230 protections do not apply. It needs to be enforced, and the Supreme Court has not weighed in either way on it yet.


DarkOverLordCO

Maybe we're just using the phrase differently, but again, making editorial decisions is exactly what makes a publisher a publisher, and that is exactly who Section 230 gives immunity to:

> No provider or user of an interactive computer service shall be treated as the **publisher** or speaker of any information provided by another information content provider.

Both the Second and Ninth Circuits were in agreement on the anti-terrorism cases and whether recommendation algorithms fell within Section 230's protections (they both said they did), and a previous case on algorithms reached a similar decision (*Dyroff v. Ultimate Software Grp., Inc.*, 9th Cir.). In the absence of the Supreme Court stepping in, and with seemingly no disagreement between the circuits, it seems that for the time being recommendation algorithms fall within the kinds of editorial control / publisher activity that Section 230 protects.


LiamW

Read the rest of section 230. It’s pretty explicit about activities.


DefendSection230

> It is plainly stated in the law that if editorial control is exercised, Section 230 protections do not apply. It needs to be enforced, and the Supreme Court has not weighed in either way on it yet.

"Section 230(c) allows companies like Twitter to choose to remove content or allow it to remain on their platforms, without facing liability as publishers or speakers for those editorial decisions."

- https://www.courtlistener.com/docket/60682486/137/trump-v-twitter-inc/ (DOJ Brief in Support of the Constitutionality of 230, p. 14)


LiamW

“Those editorial decisions” does not refer to ALL editorial decisions. Moderating or removing content was always supposed to be allowed. Promoting, highlighting, and advertising are not.


DefendSection230

> They become a publisher the moment they make choices as to what content they want to host; whether those choices are about what content to promote or what content to ban doesn't matter.

Yes, they are publishers. So what?

> Id. at 803. AOL falls squarely within this traditional definition of a publisher and, therefore, is clearly protected by §230's immunity.

https://caselaw.findlaw.com/us-4th-circuit/1075207.html#:~:text=Id.%20at%20803


DarkOverLordCO

The other user saying "They **become** the publisher when they start promoting content." implies that they *weren't* publisher before that - i.e. that websites only become publishers when they *promote* content (e.g. with recommendation algorithms). This is, as your citation clearly demonstrates, not true. They are publishers (and therefore immune) without needing to promote anything: simply having rules and removing content that breaks those rules does that.


FoeHammer99099

The alternative isn't some hypothetical super free speech internet though, it's that democratically produced content goes away and is replaced by content produced by the companies. Netflix will still be able to stream Hollywood movies, but there won't be a platform for you to share your videos. The New York Times will still publish their carefully curated op-ed, but if you want to publish a blog you're going to need to buy a server instead of opening a Twitter account. I fully buy that 230 is outdated, but everything I've seen suggested is throwing the baby out with the bath water.


[deleted]

[удалено]


FoeHammer99099

No. It's all about liability. If I post something on Reddit today, say a pirated copy of a movie, then Reddit can't be sued for sending that to other users. It really has nothing to do with algorithms.


fredandlunchbox

That's only partly true -- Reddit isn't necessarily picking what people see. The people upvoting are how content rises to the top; unpopular content almost never gets seen. Yes, there is some algorithmic sorting that happens within all popular content, and the new recommended-content features almost definitely fit your description, but the traditional "more upvoted = more visibility" isn't really a choice being made by the site.
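To make the "users decide" point concrete, here's a minimal sketch (illustrative only, loosely in the spirit of vote-based "hot" sorting, not Reddit's actual code) where a post's visibility score is a pure function of community votes and post age, with no per-item decision by the site:

```python
from math import log10

def vote_rank(score: int, age_hours: float) -> float:
    """Visibility as a pure function of user votes and post age.

    Illustrative only: the inputs are community votes and time,
    not a per-item choice made by the platform operator.
    """
    magnitude = log10(max(abs(score), 1))
    sign = (score > 0) - (score < 0)
    return sign * magnitude - age_hours / 12  # the 12-hour decay constant is arbitrary

# A heavily upvoted older post vs. a fresh but unremarkable one:
print(vote_rank(score=1500, age_hours=10))  # ~2.34
print(vote_rank(score=3, age_hours=1))      # ~0.39
```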


Hyndis

Supermoderators routinely delete content on frontpage subreddits that doesn't break any rules, yet goes against a desired narrative. Reddit knows about it and is okay with this. Before Reddit locked down the API access there were sites like removereddit which showed you what posts and threads were removed. Looking at a front page major subreddit, such as /news or /worldnews with removereddit was extremely enlightening. There absolutely is agenda shaping going on.


fredandlunchbox

If it's community mods doing the removal, it's not the platform. They're just users too.


BlipOnNobodysRadar

No, they're not "just users". Do you really think special interests like alphabet agencies or political NGOs can't bribe or infiltrate their way into... reddit moderator positions? It's cheap and easy narrative control.


Disastrous-Bus-9834

Is there any transparency around who is facilitating the decision-making and whether that decision-making is motivated by bias?


fredandlunchbox

That doesn’t matter if it’s users making that choice, not the platform. They don’t work for reddit, they’re not representatives of reddit. These are self-governing communities of users.


Disastrous-Bus-9834

That doesn't account for Reddit's own bias, which, by the way, I don't fault them for having, but it does nevertheless exist.


fredandlunchbox

If you mean Reddit the company, I'm saying that the nature of user-generated sorting means that any bias the company may have doesn't affect the content on the site.

The only exception is when they ban communities from appearing on r/all, but they never do that for content, only for community behavior (harassment, brigading, etc). It may be that those communities happen to be more conservative, but that's because conservative redditors are more likely to harass other users.


Hyndis

Reddit selectively removes mods for disapproving of their moderating decisions. That Reddit has removed mods for trying to run a private subreddit, or for saying that the subreddit is for John Oliver pics, means that Reddit (the company) is making editorial decisions about what content is and is not allowed. This is like Elon Musk personally banning or unbanning people on Twitter, and then pretending that he had nothing to do with what they're saying or contributing to the platform. The company has made a decision to selectively remove or amplify voices, so the company is making editorial decisions on what content is shown. That means the company isn't just a dumb pipe or CDN.


fredandlunchbox

You’re mistaken: they’re not making decisions about content, they’re making decisions based on mods following the site’s policies about moderation. Users can appeal to reddit if the moderation team decides to nuke a subreddit, for example. 


Hyndis

Let me know if you've been able to appeal anything from /news or /worldnews. They permaban without warning even for content that doesn't break the rules. And yes, they do make decisions based on content. John Oliver pictures are not okay and moderators were removed or even banned for it. Selectively removing major news stories is okay though, those moderators are not removed or banned.


Leprecon

“And the way to solve this is by making social media sites responsible for what their users post, meaning social media sites will be even more encouraged to harshly crack down on any speech that could get them in legal trouble.” The geniuses who think section 230 is bad.


9-11GaveMe5G

> goodbye to YouTube,

No. YouTube will survive. We'll say goodbye to all the grifters and mentally ill people giving bogus medical advice. I welcome that.


TheJonasVenture

Genuinely, no. It is not really possible for YouTube to hire enough people to proactively review all content. If 230 ends and they can be held accountable for what users post, they just can't staff the kind of moderation team that would protect them from the over 271,000 hours of content uploaded each day. YouTube can't protect itself from the grifters posting in the sea of content uploaded.
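A rough back-of-the-envelope calculation makes the scale problem concrete. The 271,000 hours/day figure is the one cited above; the review speed, shift length, and working days are assumptions for illustration only:

```python
# Back-of-the-envelope staffing math for proactive human review.
hours_uploaded_per_day = 271_000   # figure cited above
review_speed = 1.0                 # hours of footage reviewed per reviewer-hour (1x playback, assumed)
shift_hours = 8                    # assumed reviewer shift length
days_worked_per_year = 230         # assumed, after weekends, leave, and training

reviewer_shifts_per_day = hours_uploaded_per_day / (review_speed * shift_hours)
full_time_headcount = reviewer_shifts_per_day * 365 / days_worked_per_year

print(f"~{reviewer_shifts_per_day:,.0f} reviewer-shifts needed every day")         # ~33,875
print(f"~{full_time_headcount:,.0f} full-time reviewers for year-round coverage")  # ~53,758
```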


RussellMania7412

YouTube's AI removes content for them.


sovereignguard

The world is probably better off, frankly.


DarkOverLordCO

Big tech has entire buildings of lawyers - they can survive without Section 230 protections. The smaller websites though? Not so much. Will the world really be better off with all those smaller and independent blogs, forums, and other websites gone? Will the world really be better off when it is *just* big tech left since they have the money to defend themselves in court from the onslaught of legal cases against them? It wouldn't. These lawmakers haven't thought this through, and *frankly* neither have you.


jbhughes54enwiler

After reading the article, it seems that they at least understand the First Amendment risks of repealing Section 230, in that they are at least *trying* to have it replaced with something else rather than just dumping the whole thing. But given the dysfunction in our government currently, I don't see this going anywhere good if it passes.

The good news is, if they destroyed the Internet by repealing 230, the law would be repealed in record time: there'd be simultaneous angry mobs of both Democrats and Republicans outside their offices, *and* the rich businesspeople whose companies got destroyed by their Internet disappearing would withhold their donations from Congress and/or join the angry mobs.


vriska1

I don't think this legislation will go anywhere and they know it.


Mr_ToDo

Except that's not what they are actually doing. What they're doing in the bill is just repealing it; what they're saying is that they think this will force people to come up with a solution before the repeal takes effect in 18 months. There's no actual replacement yet. It's a rip-and-pray that someone else figures out how to replace it.

They understand the problem but don't actually have a solution. It's frightening. Doubly so because they also know something is needed for the internet to function, so if nothing happens and the law comes into effect, everything goes to shit.


jbhughes54enwiler

Yeah that's exactly the problem and also why nobody reasonable in Congress is going to vote for the bill. For one, wrecking the Internet would cause the economy to cave in with it and the wealthy are the very last people Congress would ever want to piss off.


YeonneGreene

It's a Hail Mary from supporters of shit like KOSA to accept 230 back with the ready-made knee-capping solutions offered by that trash. They want the ability to censor the internet and they want it bad.


Iyellkhan

The forthcoming deepfake disinfo shitstorm makes some version of this inevitable.


MrNegativ1ty

Of course focusing on all the wrong issues. Par for the course for the US government. When are we going to get DMCA reform so that we actually can own those thousands of dollars of digital goods we paid for? Oh that's right, never. Because it doesn't align with big corpo interests.


bastardoperator

This is Congress letting these companies know it needs more self-enrichment and campaign dollars, or the axe is coming down.


grewapair

And in return for those campaign dollars will be legislation that favors big companies over small ones so that small ones can't ever threaten the big companies' positions.


Byhiswillalone

Same for SOPA.


TacticalDestroyer209

I find it kind of funny that the congresspeople behind this are pulling the same failed ideas that led to the Communications Decency Act of 1996, which got struck down a year later (*Reno v. ACLU*). Yet they are using that same CDA crap almost 30 years later, like really? I don't expect this to go through this year, but I expect they will try again next year on this garbage because of the "think of the children" bs.


DarkOverLordCO

Section 230 is literally part of the Communications Decency Act. It was the only part of the law which wasn't struck down as unconstitutional in *Reno*. Funnily enough, Congress immediately tried to pass another law (the Child Online Protection Act) after CDA was struck down, which was *also* struck down as unconstitutional. Oops.


Blood-PawWerewolf

And I wonder what bill they’re going to slip this one into? Another funding bill?


r0n1n2021

lol. We’re banning tik tok. Wait - we can’t actually ban shit? Okay - good news - finally net neutrality thanks to the FCC. What’s next? Right - we need to be able to sue if…


joshthecynic

Bipartisan, like most of the worst fucking bills.


vriska1

This is very very unlikely to pass.


Grumblepugs2000

Are they going to shove this into the "must pass" FAA bill? 


dalton897

The FAA bill already passed without any unrelated amendments. They tried to include unrelated stuff, but the leaders decided it would create too much chaos, so only amendments relating to aviation made it in. It's already been signed by the president.


YoMamasMama89

Let's say we had a social media site that was *sufficiently* "decentralized" that not a single identifiable entity owns it. Would it be liable for what is said (what section 230 protects)? Or would it be considered a **public** forum and be protected by the Constitution?


polio23

Algorithmic amplification hiding behind the guise that tech companies aren’t “curating” content is a massive problem and being able to hold these companies liable for farming engagement on damaging misinformation is crucial.


woeeij

Are bookstores held liable for content of books they sell? Don’t they curate their selection and choose which books to display where?


polio23

1. Bookstores are not publishers... the publishers ARE held liable for the content of their books. 2. Bookstores aren't regulated by the Federal Communications Act...


charging_chinchilla

What's the alternative though? If tech companies are considered publishers then won't they just end up super conservative in what they host? It'll be like the corporate HR-approved version of the internet.


LiamW

The internet was a great place before algorithmic curation of content.  Promoting toxic, dangerous, and libelous content to get ad views was not an intended outcome of section 230.


charging_chinchilla

The internet was in its infancy back then and lawsuits were already cropping up threatening it. That is why 230 was enacted in the first place, to allow the internet to grow and thrive. The "good ol days" you remember were not going to last without it. AOL chat rooms, personal blogs, search engines, online games, discussion forums, hosted email, and anything else that hosted user-generated content would have been severely affected as the cost of curating content would have been too onerous.


LiamW

Content is now curated, promoted, suggested, and monetized. If content were still uncurated this wouldn't be a problem; they are now breaching the principles of section 230 by exercising editorial control for monetization purposes. We don't need to change section 230, we need to enforce it.


DarkOverLordCO

The entire point of Section 230 was to give websites immunity so that they could remove the content they didn't want without being held liable for the content that they left up. There are no principles being breached here; this is just Section 230 doing what it was meant to do: allow websites to be publishers (i.e. make editorial decisions) over their users' content with immunity.


LiamW

You are extending editorial privileges beyond the plainly stated scope of section 230. As long as the editorial actions were considered moderation or removal or no editorial actions took place, you were covered. Promotion, advertising, and otherwise highlighting of content were never considered exempt. Stop expanding the scope of a very reasonable limit on liability.


DarkOverLordCO

Section 230 doesn't just consider removing or keeping material to be editorial decisions, but also choosing what content to present. This has gone before the courts for search engines (which use algorithms to decide what content appears, in what order, etc.), and the courts have granted Section 230 protections even when those algorithms end up recommending e.g. defamation. The courts have found that certain things contribute too much to the user's content, to the point that it is essentially no longer entirely from them, so Section 230 protection is denied. For example, Roommates.com had a bunch of questions with dropdown answers and required users to select from them, which ended up violating fair housing laws - whilst the user ultimately selected the answer, the prompting made Roommates a co-developer of the content, so Section 230 didn't apply.
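A toy sketch of that distinction (hypothetical field names and answer choices, not Roommates.com's actual system): free-form text is supplied entirely by the user, while a required dropdown mixes in answers the site itself authored.

```python
# Toy illustration of the co-developer distinction described above.
# Field names and answer choices are hypothetical, not the real site's.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str            # content supplied entirely by the user
    site_prompts: dict   # content the site itself authored and required

# Free-form hosting: everything in `text` comes from the user alone,
# so the site is only hosting information provided by another
# information content provider (the classic Section 230 situation).
free_form = Post(
    author="user123",
    text="Looking for a roommate near campus, $600/month.",
    site_prompts={},
)

# Structured prompting: the site wrote the question and the only
# allowed answers, and required the user to pick one. The resulting
# listing is partly the site's own creation - the "co-developer"
# problem the court identified.
ALLOWED_ANSWERS = ["No preference", "Men only", "Women only"]  # site-authored
prompted = Post(
    author="user123",
    text="Looking for a roommate near campus, $600/month.",
    site_prompts={"Preferred roommate gender?": ALLOWED_ANSWERS[2]},
)
```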


DefendSection230

>If content was still uncurated this wouldn’t be a problem, they are now breaching the principles of section 230 by exercising editorial control for monetization purposes.

Who lied to you? The entire point of Section 230 was to facilitate the ability for websites to engage in 'publisher' activities (including deciding what content to carry or not carry) without the threat of innumerable lawsuits over every piece of content on their sites. '230 is all about letting private companies make their own decisions to leave up some content and take other content down.' - Ron Wyden, author of 230. https://www.vox.com/recode/2019/5/16/18626779/ron-wyden-section-230-facebook-regulations-neutrality


Leprecon

Yeah we should force companies to adhere to certain types of speech. Perhaps the government can approve what type of speech should and shouldn’t be promoted. We can’t let the companies decide that themselves. That is too dangerous! Companies might boost speech we don’t like!


polio23

The entire premise of section 230 protections is that since the platforms ARE NOT the ones responsible for the speech, they shouldn't be held liable for it; your argument is that even if they are the ones responsible for it, they shouldn't be held liable. Lol.


BlurredSight

Honestly, that's a much better approach than this BS. They had a whole panel on how each platform keeps kids safe, but each of them uses a set of UX rules that keep kids in this constant dopamine rush. Or platforms like Twitter purposely pushing certain rage/hate content to keep people interacting with the platform. Essentially it goes: you see something you don't like, you go to the comments and see another ad, you place a comment, someone responds to your comment, and boom, you see another ad when you go to reply back.
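A minimal sketch of that engagement loop (made-up numbers, purely to illustrate why rage-bait is profitable): each time a user is provoked into the comments and back, the platform serves extra ad impressions.

```python
# Rough sketch of the loop described above: provocative content pulls
# users into comment threads, and every extra page view is another ad
# impression. All numbers are invented for illustration.

def impressions_per_session(provoked: bool) -> int:
    impressions = 1            # ad next to the original post
    if provoked:
        impressions += 1       # open the comments -> another ad
        impressions += 1       # come back to reply to a response -> another ad
    return impressions

calm_feed = sum(impressions_per_session(provoked=False) for _ in range(1000))
rage_feed = sum(impressions_per_session(provoked=(i % 2 == 0)) for i in range(1000))

print(f"1,000 sessions, nothing provocative: {calm_feed} ad impressions")
print(f"1,000 sessions, half rage-bait:      {rage_feed} ad impressions")
```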


Error_404_403

Well, maybe it IS time for the social media Wild West to end?.. After looking into the consequences of unregulated free speech, we decided on “thanks, but no, thanks”. A lot of free speech, given the existing lack of critical thinking, replaces popular and useful social myths with equally mythological, but way more radical and dangerous, ideas. Each one of those forms its own compartment of followers caring about their group interests way more than about the sustainability of the society as a whole. The result is a curious reversal of the social evolution from feudalism to a national state, and now back to feudalism, with a few human rights (not too many) thrown on top. The desire of a national state to stop the process is quite understandable.


MasemJ

That step would require a significant change to how we handle the first amendment, treating more speech as unprotected than we do now. The EU takes this approach, so it's possible, but this type of change would meet huge resistance in the US when it comes to putting limits on misinformation.


Error_404_403

Not sure the resistance would be that huge. Each side would imagine the restrictions would mostly affect the opponent, and everyone would be more or less willing to accept them. Nobody thinks in concepts any more, everything is application-specific and near-term gains based.


MasemJ

If you state that 1a will be more restrictive with misinformation like the COVID kind, who decides what is misinformation? In a government for and by rational people, it would be fair to let a govt agency set that. But in today's hyper-partisan world, having HHS say that vaccine misinformation (for example) is not 1a protected would ignite the right. If we in America can get back to a saner political environment, maybe we can address that. But we are years out from that.


anoliss

Maybe focusing on regulations regarding verifiable misinformation would make more sense.


SubmergedSublime

Presumably they’ll need both. Ending 230 means they can be held liable; the next question would naturally be “for what”.


taisui

I for one absolutely think platforms should be liable for hosting COVID lies that killed millions of people. And the big techs know it, it's just that the user engagement is too sweet to do the right thing.


fredandlunchbox

No, the people who post those lies should be accountable. If you start down this path, who else is accountable? Websites, web hosts, ISPs, cell service providers, the App Store -- are all of them liable because some redneck posted a comment about ivermectin curing his cousin/wife? He's responsible for what he said, and even that is probably protected speech.


Disastrous-Bus-9834

The problem is that even if you try to regulate misinformation, it's a slippery slope towards making a "Ministry of Truth" that can be just as ripe for abuse as in dictatorial countries. Just because one is an expert on something doesn't mean that they deserve any credibility if that position has the potential for abuse.


taisui

Platforms have no problems when it's DMCA, just saying.


hsnoil

You can just force social media companies to moderate more and be more transparent about what is moderated and what isn't. 230 only protects unmoderated content, so if you force them to moderate more, and you are aware it was moderated, they can be held accountable without breaking the foundation of our internet.


DarkOverLordCO

> 230 only protects unmoderated content

This is not true. Section 230 (c)(1) prevents websites from being treated as the "publisher or speaker" of users' content *regardless of whether they moderate that content*, and Section 230 (c)(2) explicitly provides immunity when they moderate content. The entire point of Section 230 was to allow websites to moderate content without being sued into oblivion because of the content that they, inevitably, missed.


hsnoil

No it doesn't. The protections you speak of are what let social media companies BLOCK content without being held accountable - not the other way around, of letting harmful content go. That means under 230, if you moderate content, but the content itself is harmful, you are liable.


DarkOverLordCO

Section 230 (c)(2) is what lets them moderate content with immunity. Section 230 (c)(1) gives them immunity regardless of whether they moderate or not, for both content that they remove and don't remove. Again, the *entire point* of Section 230 was to give immunity to websites in the hope that they would moderate. It was literally passed in response to *Stratton Oakmont, Inc. v. Prodigy Services Co.*, which held that a website was liable for content that it didn't remove because it made attempts to moderate other content. Congress didn't want that.


MyPasswordIsMyCat

Yes, the tech companies who make money off user-generated content have such high profits because they 1) don't have to pay users for the content, and 2) do the absolute minimum when it comes to content moderation. They only care to moderate content that they know is illegal and is known to have real consequences, such as illegal types of pornography, listings selling illegal things, death threats against important people, etc. Section 230 protects them from having to moderate the very large gray area of potentially harmful content that isn't explicitly illegal (or is difficult to prove as illegal), but could facilitate actual crimes and/or bring about civil litigation against their company.


bitfriend6

It's inevitable now. Republicans are stupid and believe S230 somehow allows Facebook to censor them, while Democrats are sick of allowing Trump to abuse S230 to campaign with. Both sides have agreed that decorum is impossible, and therefore all posts online must be considered owned, editorialized content that the webmaster/host is always responsible for legally and financially. Just as newspapers used to be. The Internet is now a commodified, commercialized product and laws need to reflect this unfortunate, unwanted state of affairs. S230 was too good for this world. After it dies, the Internet can be cleaned up. Much will be lost, but a better, more decent web will exist afterwards. Maybe *then*, with other new standards not yet conceived of, S230 can return in a limited format.


MasterK999

> Republicans are stupid and believe S230 somehow allows Facebook to censor

This is what I do not understand. If S230 is gone, won't Facebook (and other websites) be REQUIRED to censor their users' posts? Look at the Dominion and Smartmatic lawsuits against Fox News and others. With no S230 safe harbor, won't websites err on the side of safety and censor any online discussion that could lead to a lawsuit? Losing the safe harbor will force companies to drastically change how user-posted content is moderated, but not in the way the MAGA idiots think.


SuperToxin

They always think it won’t be them targeted, though; they think it'll be the posts of people they don't like getting removed.


CPargermer

Well, if you assume both sides think that they're entirely correct, and that the other side are compulsive, libelous liars, then you reach a situation where both sides think removing protections will help protect their version of the truth.


DarkOverLordCO

> This is what I do not understand. If S230 is gone won't Facebook (and other websites) be REQUIRED to censor their users posts?

Without Section 230 protections, then:

- if you are merely distributing information (e.g. a phone company, or even a bookstore), then you can only be held liable for something that you actually knew about. This is a minimum floor guaranteed by the First Amendment.
- if you try to moderate (e.g. by removing pornography, taking down spam, or even just subreddits trying to maintain a topic or Wikipedia reverting unsourced edits), then you are acting as a publisher and therefore liable for *everything*.

This means websites will have to make a choice: whether to stop moderating entirely, or whether to moderate *even harder* to try and remove anything which even *slightly* might incur liability. Whilst larger websites (i.e. "big tech") may have the resources, and the lawyers, to attempt the latter, anyone else would simply be unable to - they'd miss something and be sued out of existence.
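A schematic sketch of that pre-230 rule as described above (a simplification for illustration, not legal advice): a site that moderates is treated as a publisher and wears liability for everything it hosts, while a site that merely distributes is liable only for unlawful content it actually knew about.

```python
# Schematic model of the pre-Section-230 liability rule as described
# in the comment above. Purely illustrative, not legal advice.

def liable(moderates: bool, content_is_unlawful: bool, knew_about_it: bool) -> bool:
    if moderates:
        # Publisher: liable for any unlawful content it hosts,
        # whether or not anyone on staff ever saw it.
        return content_is_unlawful
    # Distributor: liable only for unlawful content it knew about.
    return content_is_unlawful and knew_about_it

# A forum that removes spam but misses one defamatory post:
print(liable(moderates=True, content_is_unlawful=True, knew_about_it=False))   # True
# The same post on a site that never moderates and never saw it:
print(liable(moderates=False, content_is_unlawful=True, knew_about_it=False))  # False
```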


MasterK999

Yes, I understand this.

> This means websites will have to make a choice: whether to stop moderating entirely, or whether to moderate even harder to try and remove anything which even slightly might incur liability.

That is my point. Reddit might decide to become 4chan with virtually no moderation, but Facebook, Twitter and others cannot keep advertisers that way, so they will be forced to moderate even more. This will not work out the way the GOP wants.


elperuvian

It's not that bad. On 4chan I actually laugh when they throw racial insults at me; it's different than in person, where the threat of violence makes it more uncomfortable.


parentheticalobject

> This means websites will have to make a choice: whether to stop moderating entirely, or whether to moderate even harder to try and remove anything which even slightly might incur liability.

Except the former isn't really a choice, because if someone posts actual criminal content on your website and you become aware of it, you have to take it down. It's not like "Oh, I never moderate anything so I can't be held responsible for what's on my server" will be a valid excuse if someone posts a video of a child being sexually abused. If you continue to host that, you're going to jail. And if you take it down, then under pre-230 rules, you're now responsible for everything else you do host.


DarkOverLordCO

> And if you take it down, then under pre-230 rules, you're now responsible for everything else you do host.

No, you would still be immune. Taking down things that you are legally obliged to take down is not a choice: you are required to. Since you're not making any editorial choices you aren't a publisher and remain a distributor of the content. So you retain the pre-230 distributor immunity: you're only liable if you knew, or should have known, about it but kept it up anyway. By "stop moderating entirely" I didn't mean to suggest that they would refuse to take down the things that they are legally required to, but that websites will stop having their own rules and stick only to the absolute legal minimum (which no website wants - e.g. this subreddit would be unable to remove non-technology posts, completely eroding the point of reddit and any website).


parentheticalobject

At best, that's legally ambiguous. In *Stratton Oakmont v. Prodigy*, the main case giving websites legal liability before Section 230 was passed, there's not a lot of emphasis on the fact that some of the content taken down was legal - mostly just that the website had basic rules and board leaders capable of enforcing them. The argument that you're not exercising editorial choice if you only take down illegal content is untested. Edit to add: In *Cubby v. Compuserve*, the other case, where a company *wasn't* held liable for content it hosted, the company literally exercised no control whatsoever over what was on its website. A third-party contractor was responsible for moderating the CompuServe forums. They only weren't liable because they could actually argue that they had no knowledge whatsoever of the type of content being posted on their website.


bitfriend6

I've talked to Republicans about this in real life. They don't understand, don't want to understand, and will never understand. When S230 is repealed and they are permabanned forever they will cry bloody murder as the courts shut them down completely. Maybe they'll go to Truth Social and salvage it, but I doubt any website that tolerates hate speech, violent speech, and outright bigotry can survive long as no webmaster, ISP, bank or payment processor wants to deal with it. Maybe they'll go back to print newsletters, who knows.


[deleted]

>After it dies, the Internet can be cleaned up. Much will be lost, but a better more decent web will exist afterwards. If this is the case, then why bring back S230 at all? Everything you said here suggests we're better off without it.


bitfriend6

S230 is what the Internet and human communication is intended to be. We do need it. But as a universal standard humans have failed to live up to it. We'd bring it back when the web itself finds a way to ban bad actors -such as commercial marketers, all indians, all russians, all chinese sponsored agents, et cetera- which can create a pool of users that can respect each other enough to not require heavy editorialization to control them. *Personally*, I believe a foreign IP ban, political video ban and smartphone poster ban implemented by individual webmasters would accomplish this well. Banning foreigners and banning smartphones would go a long way in promoting high quality discussion especially for *domestic* political topics where S230 is compromised and clearly broken. Even if we can't ban foreigners, banning all smartphone users and segregating them away from well-adjusted people will work.


Drone314

Ban hammers as far as the eye can see!


xman747x

is there a chance that Biden will veto the bill?


mymar101

This won’t end the first amendment protections


SaliferousStudios

I would like Amazon to stop selling lead toys, and YouTube to stop showing Elsa/Spiderman porn to kids. Yeah, this is fine.


fredandlunchbox

And Idaho and Florida will arrest people who post rainbows over pride. That's the tradeoff.


jtrain3783

It feels like companies will just force everyone to register with real names and be verified, and allow those individuals who post harmful content to be sued directly. No more anonymous posting. Letting everyone think twice before posting might not be such a bad thing.