
X fights California regulation it says infringes on right to host ‘awful but lawful’ content

SF’s Ninth Circuit will hear company’s appeal to halt state law forcing platforms to disclose how they handle disinfo, hate speech

4:18 PM PDT on July 16, 2024

California’s legislative attempt to rein in hate speech and disinformation on social media faces a test Wednesday, as a federal appeals court weighs an argument by X — formerly Twitter — that the law constitutes a violation of free speech.

Enacted in 2022, Assembly Bill 587 requires big social media companies to publish their terms of service twice a year, along with content moderation policies and a description of what they’re doing to contain toxic and dangerous influences coursing through the platforms.

X sued California last year, arguing the measure is a violation of the First Amendment, in one of several free speech suits the company has filed since being taken over by Elon Musk. After a judge rejected the argument, X appealed to the San Francisco-based Ninth Circuit Court of Appeals.

On Wednesday, the appeals court will hear arguments from X, as well as from the state, which has held firm in its belief that the mandatory disclosure of such information is necessary for users to decide which platforms to use or avoid. The appeals court is weighing X’s request to temporarily block AB 587; no matter which side wins on Wednesday, the case will return to the lower court to be decided on the merits.

The arguments have taken on added significance in an election year — especially one in which Musk endorsed former president Donald Trump through a post on X. The owner and CEO has also used the platform to impugn incumbent President Joe Biden, spread disinformation about election fraud and the efficacy and dangers of Covid vaccines, and smear people like the cave diver he called “a pedo guy.”

(The case will likely continue whether or not the company actually leaves California, as Musk claimed Tuesday; the California law covers any platform with California-based users.)

While AB 587 represents a relatively modest step to stem the viral conspiracy theories and outright lies that misinform voters, other efforts to moderate content have similarly and inevitably collided with the First Amendment. A recent example is TikTok wielding free speech protections to stop legislation requiring it to be sold to a non-Chinese owner.

The case presents “strong, competing values that pull in opposite directions,” said Rory Little, a professor at UC Law San Francisco.

“We don't want hate speech, racist speech, disinformation in social media platforms where one message is read by millions of people,” he said. On the other hand, he added, “you have a really strong First Amendment interest in having a free marketplace of ideas.”

X "submits this report under protest"

When AB 587 was approved by California’s legislature, the bill’s author, Assembly member Jesse Gabriel of Woodland Hills, explained its origin. He cited studies linking hateful activity on social media to the perpetrators of mass shootings, and identifying misinformation on platforms as a driver of the refusal to accept Covid vaccines. Gabriel also pointed to the testimony of Facebook whistleblower Frances Haugen as evidence of social media companies’ refusal to police themselves.

So far, AB 587 remains in effect as the lawsuit advances. The law requires platforms to disclose, twice a year, their content moderation policies concerning extremism or radicalization; harassment; foreign political interference; and controlled substance distribution.

In April, more than three dozen companies filed their second set of reports, including X, TikTok, Meta, and Alphabet’s YouTube, covering October through December of last year. (X is the only company that has sued to block the requirement.)

The companies aren’t required to publish the content of offending posts. Instead, the reports offer basic facts about moderation policies, alongside bare-bones data about the number of posts flagged, restricted, and removed.

X’s most recent report has some of that — but it also begins with a gripe, in bold font:

“X Corp. maintains that AB 587 is unlawful, and submits this report — and all Terms of Service Reports under the law — under protest,” the note reads. There is also a single-sentence footnote letting readers know that the company has sued to block the law.

In its reports, X declines to categorize posts the way the law requires. X doesn’t categorize language as “hate speech” or “racism,” for example, but instead has policies addressing “Hateful Conduct” and “Abuse and Harassment.” X doesn’t define “disinformation” or “misinformation,” according to the report; instead it monitors those categories through “Civic Integrity” and “Synthetic and Manipulated Media” policies.

In the last three months of 2023, X flagged more than 25 million items in its “Abuse and Harassment” category, removing 616,226 of them. It reported zero flags in its “Civic Integrity” and “Synthetic and Manipulated Media” categories, and said it removed just 12 such posts combined.

Under the law, a platform that omits required information about its content moderation activity, or is misleading in its reports, may be liable for penalties of up to $15,000 per day. In an email to Gazetteer, the press office of California Attorney General Rob Bonta declined to say whether X’s submission complies with the law.

“Our office receives and posts the reports required by AB 587 so that they are available to researchers and the public,” Bonta’s office wrote. “We have not commented on the content of those reports.”

“The court clearly applied the wrong standard”

Bonta’s case relies on the argument that AB 587 is “only a transparency measure” aimed at informing users how social media companies moderate the content on their platforms.

Unlike laws enacted in Texas and Florida — both of which are currently being challenged in court — AB 587 doesn’t dictate how social media companies moderate their content, nor does it give the attorney general power to coerce the platforms to alter or enforce what gets posted to their platforms, Bonta argues.

“AB 587 is simply a disclosure statute intended to provide the public with information about social media platforms’ voluntarily-adopted content-moderation policies and practices,” Bonta wrote in a recent filing.

X, on the other hand, argues that AB 587 is California’s attempt to enlist public pressure to eliminate so-called “awful but lawful” speech that the state has deemed harmful. That’s a violation of its First Amendment rights, according to the company, and constitutes an effort to “pressure the platforms to limit or eliminate content within [the law’s] categories indirectly, because it would have been too obviously illegal for the state to do so directly.”

Neither X nor Joel Kurtzberg, a lawyer representing the company in its suit, responded to requests for comment.

Gill Sperlein, an attorney and president of the First Amendment Lawyers Association, gives X a good chance of winning. The lower court judge ruled against the company based on his conclusion that the policies and terms of service regulated by AB 587 constitute “commercial speech,” a designation generally reserved for advertising and solicitations.

Commercial speech gets fewer protections under the First Amendment, because the government has an obvious interest in preventing companies from deceiving consumers with false claims. If the Ninth Circuit decides that terms of service are not, in fact, commercial speech, then it’s likely to reverse the lower court decision and either block AB 587 or require the lower court to reconsider its rejection, according to Sperlein.

“The court clearly applied the wrong standard by treating what’s being regulated as commercial speech,” Sperlein said, referring to the lower court ruling. “It’s just a huge stretch.”

“In the long run, the truth doesn’t win”

The lawsuit before the Ninth Circuit isn’t the first X has filed over malicious speech on the platform. Last year, in a different but related case, the company sued the Center for Countering Digital Hate over claims that the non-profit’s reports about hate and disinformation on the platform drove advertisers away, costing X tens of millions of dollars in revenue.

But that lawsuit was quickly disposed of. In a blistering ruling throwing it out, lower court Judge Charles Breyer in San Francisco wrote that the platform’s case was really “about punishing the defendants for their speech.”

“X Corp. has brought this case in order to punish CCDH for CCDH publications that criticized X Corp.—and perhaps in order to dissuade others who might wish to engage in such criticism,” Breyer wrote.

While X has also appealed that decision to the Ninth Circuit, Wednesday’s case against Bonta is far more nuanced. Little, the UC Law San Francisco professor, said the case is similar to the lawsuits challenging the social media regulation laws in Florida and Texas. Although those laws are very different — both were written to prohibit social media companies from removing conservative candidates and viewpoints from their platforms — the arguments similarly hinge on the question of free speech.

Both of those cases ended up in front of the Supreme Court, which this month sent them back to the lower courts for a deeper consideration of First Amendment concerns. Little said that, in light of that ruling, the Ninth Circuit will likely follow suit and send X’s case back to the lower district court.

The force of the First Amendment in the U.S. has rested on the idea that, in a free marketplace of ideas, in the long run, the truth will win, Little said. But that position is “really under fire today,” he added.

“A lot of people think that in the long run, the truth doesn’t win — that the social media platforms have transformed the marketplace of exchange of ideas, and that states really need to be more careful,” he told Gazetteer. “The California statute is a good example of the state trying to honor the desire that most people have in our society to not have racist, extremist, disinformation kinds of speech.”
