
Research/Study

Anti-LGBTQ Twitter account seems inspired by Florida Gov.’s Press-Sec

Christina Pushaw, who used “grooming” smears to justify Florida’s “Don’t Say Gay” law, said Libs of TikTok “truly opened her eyes”


Graphic by Andrea Austria for Media Matters

By Kayla Gogarty | WASHINGTON – After Florida Republican Gov. Ron DeSantis’ press secretary Christina Pushaw used an anti-LGBTQ slander to defend Florida’s “Don’t Say Gay” bill on March 4, right-wing media and figures used similar absurd attacks to defend the legislation, accusing LGBTQ people of “grooming” children to be LGBTQ or to engage in sexual activity.

Recently, Pushaw credited anti-LGBTQ Twitter account “Libs of TikTok” with opening her eyes to the issue, and now Media Matters has found that Pushaw and the account have interacted with each other at least 138 times since June 2021. 

On March 28, DeSantis signed the “Parental Rights in Education” bill, also known as the “Don’t Say Gay” bill, into law. This anti-LGBTQ legislation bans discussion of sexuality or gender identity in kindergarten through third grade, though its vague wording could be used to prevent such discussions — however broadly defined — at any grade level. 

Throughout March, right-wing figures and media responded to criticism of the extreme bill with ramped up attacks accusing LGBTQ people of “grooming” children — the same messaging initially tweeted by Pushaw on March 4. Simultaneously, right-wing media and figures also used anti-LGBTQ content from Libs of TikTok as ammunition for their arguments. 

Libs of TikTok is an anonymous anti-LGBTQ Twitter account that started on TikTok but moved to Twitter in November 2020, where it singles out individual TikTok users, including teachers, for ridicule and harassment in tweets that often go viral. On April 13, Twitter briefly suspended the account for violating its policy against hateful conduct. The account has since been restored — even though Libs of TikTok has repeatedly misgendered public figures and content creators.

Pushaw has been interacting with Libs of TikTok and promoting it since at least July 2021. The account pushed the same anti-LGBTQ messaging Pushaw has used, and Pushaw credited Libs of TikTok in a March 24 tweet with opening her eyes to educators teaching “about sex, sexuality, and LGBT issues.”

Media Matters analyzed tweets from Libs of TikTok and found that in addition to posting videos targeting LGBTQ people, the account specifically used “groomer”-related language in 46 tweets since November 2021. The tweets earned over 220,000 total interactions (replies, retweets, likes, and quote tweets), or an average of nearly 5,000 interactions per tweet — almost double the average for the account’s other tweets. Notably, the majority of the tweets with “groomer”-related language from Libs of TikTok — 24 out of 46 — were posted before March 4, when Pushaw tweeted similar rhetoric.

Pushaw has directly mentioned Libs of TikTok in 97 tweets, with 70 of them posted before her March 4 anti-LGBTQ tweets. Libs of TikTok similarly mentioned Pushaw in 41 tweets since June 2021, praising her as “one of the greatest accounts” on Twitter. (We coded tweets as “mentions” when a user replied to an account, tagged it in a tweet, or replied to another tweet which had tagged that account.)
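To illustrate that coding rule, here is a hypothetical sketch of a function that labels a tweet as a mention of a target account; the field names and the example tweet are assumptions made for the sketch, not Media Matters’ actual tooling or data.

```python
# A hypothetical sketch of the "mention" coding rule described above.
# The tweet fields (text, in_reply_to_user, parent_text) are assumed for
# illustration; they are not Media Matters' actual data schema.
def is_mention(tweet: dict, target_handle: str) -> bool:
    """A tweet counts as a mention if it replies to the target account,
    tags it in the text, or replies to a tweet that tagged it."""
    tag = f"@{target_handle}".lower()
    replied_to_account = (tweet.get("in_reply_to_user") or "").lower() == target_handle.lower()
    tagged_in_text = tag in (tweet.get("text") or "").lower()
    replied_to_tagging_tweet = tag in (tweet.get("parent_text") or "").lower()
    return replied_to_account or tagged_in_text or replied_to_tagging_tweet

# Example: a reply to a tweet that had tagged the account still counts.
example = {"text": "Exactly right.", "in_reply_to_user": "someuser",
           "parent_text": "Great thread from @libsoftiktok"}
print(is_mention(example, "libsoftiktok"))  # True
```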

Pushaw has also repeatedly praised Libs of TikTok, similarly calling it “one of the best accounts on here.” She has also encouraged other users to look at the account’s “evidence.”

Since Pushaw used “grooming” language in her tweets on March 4, she has tweeted similar language another 22 times, earning over 23,000 total interactions, or an average of 1,000 interactions per post. This average is roughly triple that of her other tweets. Meanwhile, Libs of TikTok has continued to post anti-LGBTQ content, even replying to a Pushaw tweet and praising the “Don’t Say Gay” legislation as “literally genius” for exposing “creeps” and those who “identify themselves as pro-grooming.”

Pushaw has continued to interact with Libs of TikTok since March 4, even recently asking for additional information on content that claims to expose Florida teachers and/or curriculum.

The influence of Libs of TikTok is particularly problematic, as the account is run by an anonymous user who openly espouses anti-LGBTQ rhetoric (such as calling LGBTQ identity “narcissism” based on “delusions”), has called for all openly LGBTQ teachers to be fired, and claimed in a recent interview that she is directly responsible for some “evil” teachers already being fired.

*********************

Kayla Gogarty is an Associate Research Director at Media Matters focusing on disinformation.

********************

The preceding article was previously published by Media Matters for America and is republished by permission.


Research/Study

Busting anti-queer bias in text prediction


Screenshot/YouTube Heartstopper text session (Netflix)

By Lillian Goodwin | LOS ANGELES – Modern text prediction is far from perfect — take, for instance, when a search query suggests something completely different from your intention. But the trouble doesn’t end at inaccuracy. Text prediction can also be extremely exclusionary or biased when it comes to predicting results related to marginalized communities.

A team of researchers from the USC Viterbi School of Engineering’s Information Sciences Institute and the USC Annenberg School for Communication and Journalism, led by Katy Felkner, a Ph.D. student in computer science at USC Viterbi and a National Science Foundation Graduate Research Fellowship recipient, has developed a system to quantify and fix anti-queer bias in the artificial intelligence behind text prediction.

The project, presented by Felkner at the Queer in AI workshop at the North American Chapter of the Association for Computational Linguistics (NAACL) conference in July, looks at both detecting and reducing anti-queer bias in a large language model, which is used in everything from search bars to language translation systems.

The large language model, or LLM, is the “brain” behind the text prediction that pops up when we type something in a search bar—an artificial intelligence that “completes” sentences by predicting the most likely string of words that follows a given prompt.
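As a concrete illustration of that prediction step, here is a minimal sketch using the Hugging Face transformers library; the model name (bert-base-uncased) and the example prompt are illustrative choices, not details taken from the study.

```python
# A minimal sketch of masked-word prediction with a BERT-style model, using
# the Hugging Face "transformers" library. The model and prompt are
# illustrative assumptions, not specifics from Felkner's project.
from transformers import pipeline

# The "fill-mask" pipeline asks the model for its most likely completions
# of the [MASK] slot, which is the prediction behavior described above.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for candidate in fill_mask("James held hands with [MASK]."):
    # Each candidate comes with the probability the model assigns to it.
    print(f"{candidate['token_str']:>12}  p={candidate['score']:.3f}")
```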

However, LLMs must first be “trained” by being fed millions of examples of pre-written content so that they can learn what sentences typically look like. Like an energetic toddler, the LLM repeats what it hears, and what it hears can be heteronormative or even overtly discriminatory.

“Most LLMs are trained on huge amounts of data that’s crawled from the internet,” Felkner said. “They’re going to pick up every kind of social bias that you can imagine is out there on the web.”

FEW WORDS, BIG EFFECT

The project found that a popular LLM called BERT showed significant homophobic bias. This bias is measured through Felkner’s benchmark, which compares the likelihood that the LLM predicts heteronormative sentences versus sentences that include a queer relationship.

“A heteronormative output is something like ‘James held hands with Mary,’ versus ‘James held hands with Tom,’” said Felkner. “Both are valid sentences, but the issue is that, across a wide variety of contexts, the model prefers the heteronormative output.”
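To make that comparison concrete, the sketch below scores both example sentences with BERT and checks which one the model finds more likely. The scoring method shown here, pseudo-log-likelihood computed by masking one token at a time, is a common way to measure this kind of preference; it is not necessarily the exact metric used in Felkner’s benchmark.

```python
# A sketch of the kind of comparison the benchmark makes: score two
# otherwise-identical sentences and see which one BERT finds more likely.
# Pseudo-log-likelihood (masking one token at a time) is a common scoring
# technique assumed here; it may differ from the benchmark's exact metric.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum of each token's log-probability with that token masked out."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip the [CLS] and [SEP] tokens
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

hetero = pseudo_log_likelihood("James held hands with Mary.")
queer = pseudo_log_likelihood("James held hands with Tom.")
print(f"heteronormative: {hetero:.2f}   queer: {queer:.2f}")
# A consistently higher score for the first sentence across many such pairs
# is the kind of preference the benchmark flags as bias.
```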

While the difference is just a few words, the effect is far from small.

Katy Felkner presents her work at NAACL

Predicted outputs that talk about queer people in stereotypical ways can reinforce users’ biases, and the model’s lack of ‘experience’ with queer voices can lead it to treat queer language as obscene.

“A persistent issue for queer people is that a lot of times, the words that we use to describe ourselves, or slurs that have been reclaimed, are still considered obscene or overly sexual,” said Felkner, who is also the graduate representative for the Queers in Engineering, Science and Technology (QuEST) chapter of Out in STEM at USC.

“If a model routinely flags these words, and these posts are then taken down from the platforms or forums they’re on, you’re silencing the queer community.”

COMMUNITY INPUT

To tackle this problem, Felkner gave BERT a tune-up by feeding it Tweets and news articles containing LGBT+ keywords. This content used to “train” BERT came from two separate databases of Felkner’s own creation, called QueerTwitter and QueerNews.

Although language processing requires extremely large amounts of data—the QueerTwitter database contained over 2.3 million Tweets—she took care to single out hashtags that were being used primarily by queer and trans people, such as #TransRightsareHumanRights.
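In code terms, this “tune-up” amounts to continuing BERT’s masked-language-model training on the additional text. The sketch below shows that step with the Hugging Face transformers and datasets libraries; the tiny in-memory corpus and the hyperparameters are placeholders, since the QueerTwitter and QueerNews datasets themselves are not reproduced here.

```python
# A minimal sketch of the "tune-up": continuing BERT's masked-language-model
# training on additional text, using Hugging Face transformers and datasets.
# The tiny in-memory corpus and hyperparameters are placeholders; the real
# QueerTwitter and QueerNews corpora are not reproduced here.
from datasets import Dataset
from transformers import (
    BertForMaskedLM,
    BertTokenizerFast,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Stand-in for posts filtered to community hashtags such as #TransRightsareHumanRights.
texts = [
    "Celebrating visibility today #TransRightsareHumanRights",
    "Proud of my community and our chosen families.",
]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

# Randomly mask 15% of tokens so the model keeps practicing fill-in-the-blank
# prediction, now on the new, more inclusive text.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-queer-tuned", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()
```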

As the model was exposed to different perspectives and communities, it became more familiar with queer language and issues. As a result, it was more likely to represent them in its predictions.

After being trained with the new, more inclusive data, the model showed significantly less bias. The tweets from QueerTwitter proved the more effective of the two databases, reducing the prevalence of heteronormative results to almost half of all predictions.

“I think QueerTwitter’s results being more effective than QueerNews speaks to the importance of direct community involvement, and that queer and trans voices — and the data from their communities — is going to be the most valuable in designing a technology that won’t harm them,” Felkner said. “We were excited about this finding because it’s empirical proof of that intuition people already hold: that these communities should have an input in how technology is designed.”

Going forward, the project will look to address bias that affects specific parts of the LGBT+ community, using more refined and targeted sets of data and more customized prompts for the model to work with — such as tackling harmful stereotypes around lesbians. Long term, Felkner hopes the project can be used to train other LLMs, help researchers test the fairness of their natural language processing, or even uncover completely new biases.

“We’re dealing with how to fight against the tide of biased data to get an understanding of what ‘unfair’ looks like and how to test for and correct it, which is a problem both in general and for subcultures that we don’t even know about,” said Jonathan May, USC Viterbi research associate professor of computer science, Felkner’s advisor and study co-author. “There’s a lot of great ways to extend the work that Katy is doing.”

*******************

The preceding article was previously published by the University of Southern California’s Viterbi School of Engineering and is republished by permission.


Research/Study

Multiracial LGBTQ youth face heightened suicide risk

Nearly half of multiracial LGBTQ youth (48%) reported seriously considering suicide in the past year, compared to 45% of all LGBTQ youth


Photo by Harrison J. Bahe/model: Cameron Sotelo

NEW YORK – A new report released today by The Trevor Project, the world’s largest suicide prevention and mental health organization for LGBTQ young people, is the first of its kind to exclusively explore the mental health and well-being of multiracial LGBTQ youth, highlighting the unique mental health experiences among youth of different racial backgrounds.

Key findings include:

  • Nearly half of multiracial LGBTQ youth (48%) reported seriously considering suicide in the past year, compared to 45% of all LGBTQ youth
  • Nearly one in five multiracial LGBTQ youth (17%) attempted suicide in the past year, compared to 14% of all LGBTQ youth
  • Multiracial transgender and nonbinary youth reported higher rates of suicide risk, with 55% seriously considering suicide and 22% attempting suicide in the past year
  • Multiracial LGBTQ youth who are exclusively youth of color reported higher rates of both seriously considering (52%) and attempting suicide (21%) in the past year compared to multiracial LGBTQ youth who are White and another race/ethnicity

“These findings shine a light on the unique mental health challenges and suicide risk of young people living with the distinctive identities of being multiracial and LGBTQ. The research world has largely overlooked this group of young people and how they might experience various risk and protective factors,” said Myeshia Price, Director of Research Science at The Trevor Project. “These novel findings overwhelmingly point to an urgent need to invest in mental health services and prevention programs that specifically affirm the identities of multiracial LGBTQ youth and are attuned to the nuances of how they navigate and experience the world.”

Multiracial LGBTQ youth reported higher rates of negative risk factors — such as experiences of homelessness, food insecurity, and discrimination and victimization based on their race/ethnicity, sexual orientation, or gender identity — than their peers. In particular, multiracial LGBTQ youth who are exclusively youth of color reported higher rates of race/ethnicity-based discrimination compared to multiracial LGBTQ youth who are White and another race/ethnicity (55% vs. 37%). These findings highlight the role racism may play in contributing to poor mental health among young people of color.

These data also illustrate protective factors unique to multiracial LGBTQ youth, which may play an important role in uplifting their wellbeing and preventing suicide. Multiracial LGBTQ youth who reported high levels of social support from family and high levels of support from friends had significantly lower odds of attempting suicide in the past year than youth who did not have that support (55% and 39% lower odds, respectively).

This report was created using data from a national sample of 4,739 multiracial LGBTQ youth ages 13–24 who participated in The Trevor Project’s 2022 National Survey on LGBTQ Youth Mental Health. The full report can be found below or here.

If you or someone you know needs help or support, The Trevor Project’s trained crisis counselors are available 24/7 at 1-866-488-7386, via chat at TheTrevorProject.org/Get-Help, or by texting START to 678678. 


Research/Study

Twitter & Facebook allowing hate labels “pedophile/groomer” on platforms

“Online hate & lies reinforce offline violence. The normalization of anti-LGBTQ+ narratives in digital spaces puts LGBTQ+ people in danger” 


Photo by Christopher Kane

WASHINGTON – According to a report released Wednesday by the Human Rights Campaign (HRC) and The Center for Countering Digital Hate (CCDH), Twitter and Facebook are permitting the spread of content linking LGBTQ+ people to pedophiles or “groomers.”

The authors of “Digital Hate: Social Media’s Role in Amplifying Dangerous Lies about LGBTQ+ People” found a dramatic uptick this year in posts mentioning “grooming,” which refers to the practice of pursuing relationships with children for the purpose of sexually abusing or exploiting them. 

Use of this term and related terms as a slander against LGBTQ+ people is an explicit violation of Twitter’s rules governing hate speech, the company said. And yet, even as the platform saw a 406% increase in such tweets beginning in March, it failed to take action in 99% of reported cases, the study shows. 

Forty-eight million people viewed these tweets, the study estimates, with the majority coming from a small group of right-wing extremists, including lawmakers like Republican Rep. Marjorie Taylor Greene (GA). 

Of the most-viewed “grooming” tweets, 66% of impressions were driven by just ten users, the report finds. 

For its part, Meta prohibits anti-LGBTQ+ content on Facebook and Instagram but removed only one paid advertisement mentioning the “grooming” narrative. 

The findings echo CCDH’s report last year on misinformation concerning the COVID-19 pandemic (including vaccines), the online spread of which was linked to just a dozen people with large followings on social media platforms. 

“Facebook, Google and Twitter have put policies into place to prevent the spread of vaccine misinformation; yet to date, all have failed to satisfactorily enforce those policies,” CCDH’s CEO Imran Ahmed wrote in the report. 

Just as with COVID-19, the companies’ failure to intervene and take down misinformation and hate speech can have dire consequences. “Online hate and lies reflect and reinforce offline violence and hate,” Ahmed said in a statement about the new report. “The normalization of anti-LGBTQ+ narratives in digital spaces puts LGBTQ+ people in danger.” 

An old, dangerous slander is resuscitated 

In the 1970s, anti-LGBTQ+ crusader Anita Bryant campaigned against inclusive non-discrimination measures by spreading the lie that gay men and lesbians sought to recruit children for sexual abuse. 

Passage, in March of this year, of Florida’s Parental Rights in Education bill – deemed the “Don’t Say Gay” bill by critics – appears to have been a turning point that led to the resuscitation of the slanderous rhetoric linking LGBTQ+ people to pedophiles or “groomers.” 

The label was weaponized by Florida Gov. Ron DeSantis’s spokesperson, Christina Pushaw, to push back against critics of the legislation, which prohibits public school teachers from discussing sexual orientation or gender identity with students in certain grade levels. 

LGBTQ+ advocates say non-cisgender and non-heterosexual youth will be harmed as the bill effectively erases their identities, while potentially criminalizing something as innocuous as a teacher’s mention of their same-sex spouse. 

“The bill that liberals inaccurately call ‘Don’t Say Gay’ would be more accurately described as an Anti-Grooming Bill,” Pushaw wrote on Twitter. 

She added, “If you’re against the Anti-Grooming Bill, you are probably a groomer or at least you don’t denounce the grooming of 4-8 year old children. Silence is complicity. This is how it works, Democrats, and I didn’t make the rules.” 

According to the CCDH and HRC’s report, the social media platforms saw a corresponding spike in content targeting LGBTQ+ people as pedophiles and child abusers after Gov. DeSantis signed the Parental Rights in Education bill into law.

The narrative has occasionally been used to attack non-LGBTQ+ people, as Michigan State Sen. Mallory McMorrow experienced at the hands of her Republican colleague Sen. Lana Theis. 

McMorrow told The Los Angeles Blade there is a moral as well as a political obligation to stand up to conservative extremists who baselessly accuse LGBTQ+ people, or their political opponents, of being pedophiles or enablers of child sexual abuse. 

Read the full report here: [LINK]

