SAN FRANCISCO — Earlier this month, the FBI charged a Florida man with making detailed online threats to murder 100 gay people. He had called previously for Black people to be killed and said that he planned to tear-gas a synagogue, according to the criminal complaint.
Suspect Sean Michael Albert, who said that he had been joking, has found more to appreciate on Twitter since Elon Musk took control. The last 11 tweets he liked before his arrest were either from Musk or by or about Andrew Tate, the kickboxer charged with human trafficking whom Musk recently let back on the platform.
There is no evidence that what Albert saw on Twitter inspired him to make his own posts, which court documents say were made on Discord, and his attorney didn’t respond to a request for comment. But former employees and online researchers say that physical attacks in the United States have tracked with spikes on Twitter in some categories of hate speech, notably antisemitic and anti-gay slurs and rhetoric.
New research to be released later this month by the misinformation tracker Network Contagion Research Institute suggests a connection between real-world incidents and variations of the word “groomer,” often aimed at gay people and suggesting that they are adults bent on seducing children. Although polls indicate a significant minority of the population believes otherwise, gay people are not more likely to be predators than straight people.
Pre-Musk, Twitter had classed the word “groomer” as hate speech. But usage began spiking not long after Musk said he would buy the platform, and it has surged repeatedly since, often after real-world incidents like the fatal shootings at a gay club in Colorado.
“In the past three to four months, we have seen an increase in anti-LGBTQ incidents, and you can see a statistical correlation between these real-world incidents and the increased use of the term ‘groomer’ on Twitter,” said Alexander Reid Ross, a Network Contagion analyst who shared the findings with The Washington Post. He did not say that use of the term had led to the violence.
The second-biggest spike in tweets with the word “groomer” came just after Musk took control of Twitter. The biggest, to more than 4,000 in a day, came in late November, shortly before a record seven daily anti-gay attacks were recorded by the Armed Conflict Location and Event Data Project, a nonprofit tracker of worldwide political violence, Ross said.
While hate language leading up to incidents raises the possibility that it inspired the violence, hate language afterward matters as well, experts say. Calling the victims of violence “groomers” is “feeding into this highly pressurized, toxic discourse that condemns the victims and thus justifies further activity,” said Ross.
Musk plays a role not just by loosening speech policies and slashing moderation staffing but also through his personal choices about whom he interacts with, researchers say.
Recent antisemitic incidents included direct references to rapper Ye, who tweeted against Jews after Musk welcomed him back to the platform following his suspension from Instagram. On his return to Twitter, Ye pledged to go “death con 3 On JEWISH PEOPLE.”
Even after Musk suspended Ye again, tweets referring to Jewish “privilege” or “supremacy” rose, according to Joel Finkelstein, director of Network Contagion and a senior fellow at Rutgers University.
An assailant who attacked a man in New York City’s Central Park last month shouted, “Kanye 2024” along with antisemitic comments as he did so, police said. In November, vandals spray-painted “Kanye was rite” along with swastikas on headstones in an Illinois Jewish cemetery.
Hate crimes against Jews in New York jumped from nine in September to 45 in November, making up more than half of the bias incidents in the city, according to New York Police Department statistics.
White nationalists and some Black Americans at times amplified one another, Finkelstein said. Neo-Nazi groups posted memes on image boards with Ye as a heroic new Hitler, while Cynthia McKinney of Georgia, a Black American Green Party activist who served six terms in Congress, tweeted that 2022 is “the year of #TheNoticing, the year that gaslighting finally began not to work!” That hashtag, driven by hardcore antisemites on Twitter and image board 4chan, refers to a supposed discovery that some Jews are in influential positions. McKinney did not respond to requests for comment.
Finkelstein has seen the same patterns before, including during an Israel-Hamas conflict in May 2021. A team of analysts from Network Contagion and elsewhere examined 6 billion tweets and Reddit posts and recently found that the volume of tweets using human rights language was a better predictor of both U.S. street protests and antisemitic incidents than was the actual fighting in the Middle East.
“We found that in parallel with fighting, there is a massive spike in words like colonialism and apartheid, and then there are incidents,” said Aviva Klompas, chief executive of Boundless, a nonprofit group that also worked on the study. “Then you see the long tail of that weaponized language, and the incidents keep coming.”
Twitter keeps both legitimate debate and hate alive longer and spreads them wider, Finkelstein said: “Wars going on in the world are also waged online, and social media has become the weapon to expand it from a local conflict to a clash of civilizations.”
Beyond dismissing most of Twitter’s trust and safety team and its outside safety advisory council, Musk has reinstated accounts that stoke extremism and tweeted an image of Pepe the Frog, mascot of the alt-right.
He also went out of his way to criticize former trust and safety chief Yoel Roth, who resigned after the November midterms and faulted Musk’s habit of deciding content rules on the fly. Musk took an old Roth tweet linking to an article about a criminal ruling against a teacher who had sex with an 18-year-old student and added “this explains a lot,” amid a push to portray himself as a great foe of child sex abuse images and Roth as someone who let them slide on Twitter. Hordes of Twitter users responded by calling Roth a “groomer.”
Though Musk has said that “incitement to violence” remains grounds for suspension, suggestions that Roth should be killed stayed on the site after being reported by a longtime researcher who uses the Twitter handle @conspirator0.
Roth fled his home as accounts tweeted images to him, including one of a man feeding another into a woodchipper, with the words “how else are you gonna dispose of stupid” and a plural epithet. Other images included a firing squad with no caption and containers of bullets, one marked “box of pills that cure pedophilia.”
Some of the tweets and accounts were removed a day later, the researcher said. But similar replies are still up. Roth has put his house up for sale and moved, according to someone in touch with him. He declined to comment.
Musk’s new head of trust and safety, Ella Irwin, did not respond to an email seeking comment.
Roth also was singled out in tweets by the influential @LibsofTikTok account, which is led by activist Chaya Raichik and has 1.7 million followers.
The account has long crusaded against transgender medical treatment of young people at hospitals. A focus on Boston Children’s Hospital in August preceded threats against doctors there, while a Wisconsin school, under fire over its investigation into the bullying of a transgender student, temporarily shut down in May because of bomb threats and harassment.
Her spotlighting of targets has also been followed by protests by the Proud Boys and other violent groups at parades and other events.
Task Force Butler Institute, a counter-extremism nonprofit group, last month found 281 LibsofTikTok tweets that mentioned a specific event, location or person between April and November. In 66 of those cases, reports followed of digital harassment or real-world incidents, including death threats and bomb threats. On several occasions, organizers canceled events in response.
Before Musk’s takeover, complaints about LibsofTikTok sometimes resulted in individual tweets being deleted or week-long suspensions, including twice in September.
After Musk’s takeover, there have been no such suspensions, and he has personally interacted with the account, convincing some critics that reporting it is futile. “There’s no point,” said activist Erin Reed, who follows the account closely. Asked to comment, Raichik responded by accusing The Post of inciting violence against her.
Favorite subjects for the account have been drag shows and book readings, especially those open to minors. In November, it pointed out an upcoming performance at the Sunrise Theater in Moore County, N.C.
Minutes after the Dec. 3 show began, the lights went out. Two separate electrical substations had been disabled by gunfire, leaving 40,000 people without power for days. The FBI is investigating the incident and declined to say whether it believes the blackout was aimed at the show.
Rumman Chowdhury, formerly Twitter’s director for machine-learning ethics and accountability, said that the escalation in hate speech and violence was a predictable result of Musk’s decisions but still deeply upsetting.
“It’s certainly very jarring. It’s very sad to see this thing that so many of us cared for and built being decimated piece by piece,” she told The Post. “It’s very hard to see where it’s headed and how bad it’s becoming.”
Cat Zakrzewski contributed to this report.