
Online Culture

TikTok announces change in algorithm to prevent “content holes”

The company explained that “certain kinds of videos can sometimes inadvertently reinforce a negative personal experience for some viewers


Graphic by Molly Butler for Media Matters

CULVER CITY, Calif. – TikTok, the Chinese-owned video-sharing app that lets users create and share 15-second videos on any topic, announced this week that it is altering the algorithm behind its ‘For You’ recommendation feed.

TikTok, with more than 1 billion users, is one of the biggest social media networks globally. The company explained that “certain kinds of videos can sometimes inadvertently reinforce a negative personal experience for some viewers, like if someone who’s recently ended a relationship comes across a breakup video.”

The goal, according to a company spokesperson, is to prevent harmful “content holes,” in which the system inadvertently recommends only a very narrow range of content that, while not violating TikTok’s policies, could have a negative effect if it makes up the majority of what someone watches, such as content about loneliness.

Dr. Eiji Aramaki, a professor at the Nara Institute of Science and Technology (NAIST) in Ikoma, Nara Prefecture, Japan, whose background is in information science, explained that content holes are created when community-generated content, such as that posted on a social media platform, exploits a user’s unawareness of information.

Then, as the user seeks more similar content, algorithmic manipulation of the feed creates a “content hole search.”

According to TikTok’s explanation of how its ‘For You’ feed works, recommendations are based on a number of factors, including things like:

User interactions such as the videos you like or share, accounts you follow, comments you post, and content you create.

Video information, which might include details like captions, sounds, and hashtags.

Device and account settings like your language preference, country setting, and device type. These factors are included to make sure the system is optimized for performance, but they receive lower weight in the recommendation system relative to other data points we measure since users don’t actively express these as preferences.
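TikTok has not published its actual ranking formula, but the weighted-factors description above can be sketched as a simple linear scoring function. Everything in this sketch — the weights, field names, and signals — is an illustrative assumption, not TikTok's real system; it only shows how user interactions can dominate while device settings receive a deliberately low weight.

```python
# Illustrative sketch of a weighted recommendation score, loosely based on
# TikTok's public description of its "For You" factors. All weights and
# field names here are hypothetical.

def score_video(video, user):
    """Return a relevance score; higher means more likely to be recommended."""
    # User interactions: likes, shares, and comments carry the most weight.
    interaction = (
        2.0 * video["likes_from_similar_users"]
        + 1.5 * video["shares_from_similar_users"]
        + 1.0 * video["comments_from_similar_users"]
    )
    # Video information: overlap between the video's hashtags and hashtags
    # on content the user has previously engaged with.
    content_match = len(set(video["hashtags"]) & set(user["engaged_hashtags"]))

    # Device/account settings get a deliberately low weight, mirroring
    # TikTok's statement that they matter less than expressed preferences.
    settings_match = 1.0 if video["language"] == user["language"] else 0.0

    return 1.0 * interaction + 3.0 * content_match + 0.1 * settings_match

video = {"likes_from_similar_users": 10, "shares_from_similar_users": 2,
         "comments_from_similar_users": 5, "hashtags": ["cats", "comedy"],
         "language": "en"}
user = {"engaged_hashtags": ["comedy", "cooking"], "language": "en"}
print(score_video(video, user))
```

Under a scheme like this, a "content hole" forms naturally: once one topic dominates `engaged_hashtags`, the content-match term keeps boosting more of the same, which is the feedback loop TikTok says its change is meant to interrupt.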

The company insists that its recommendation system is also designed with safety in mind: “In addition to removing content that violates our Community Guidelines, we try not to recommend certain categories of content that may not be appropriate for a general audience.”

Social media companies, especially those with a younger user base such as TikTok, Snapchat, and Instagram, are under increasing pressure to implement greater safeguards against harmful content.

Last week, Instagram head Adam Mosseri, testifying before the Senate Commerce Committee’s Subcommittee on Consumer Protection, was grilled by senators angered over public revelations of how the photo-sharing platform can harm some young users.

This past September, The Wall Street Journal, drawing on internal research leaked by a Facebook whistleblower, reported that for some Instagram-devoted teens, the peer pressure generated by the visually focused app led to mental-health and body-image problems, and in some cases, eating disorders and suicidal thoughts.

It was Facebook’s own researchers who alerted the social network giant’s executives to Instagram’s destructive potential.

In July, Media Matters conducted independent research showing that TikTok’s “For You” page recommendation algorithm circulated videos promoting hate and violence targeting the LGBTQ community during Pride Month, even as the company celebrated the month with its #ForYourPride campaign.

The active spread of explicitly anti-LGBTQ videos isn’t a new problem for TikTok, but it appears that the platform has yet to stop it — even though the company claims to prohibit discriminatory and hateful content targeting sexual orientation and gender identity. TikTok also posted an update in early June celebrating Pride Month and promising to “foster a welcoming environment” and “remove hateful, anti-LGBTQ+ content or accounts that attempt to bully or harass people on our platform.” Given the content being circulated by the algorithm once a user begins interacting with anti-LGBTQ videos, it is clear that TikTok has yet to fulfill these promises. 

This week’s announcement by TikTok is seen by some tech industry observers as an effort by the company to change the system from within, avoiding the public outcry and the reactions from lawmakers that could lead to more restrictive oversight and regulation.



Anti-LGBTQ+ ‘grooming’ narrative surges more than 400% following Florida’s ‘Don’t Say Gay’ law

Human Rights Campaign & Center for Countering Digital Hate warn of growing influence extremists are wielding online


The Center for Countering Digital Hate (Photo Credit: Unsplash/Gilles Lambert)

By Henry Berg-Brousseau | WASHINGTON – In the wake of the passage of Florida’s discriminatory “Don’t Say Gay or Trans” bill, extremist politicians and their allies engineered an unprecedented and dangerous anti-LGBTQ+ misinformation campaign that saw discriminatory and inflammatory “grooming” content surge by over 400% across social media platforms, according to a new report released by the Human Rights Campaign and the Center for Countering Digital Hate. 

The report — Digital Hate: Social Media’s Role in Amplifying Dangerous Lies About LGBTQ+ People — reveals that the average number of tweets per day using slurs such as “groomer” and “pedophile” in relation to LGBTQ+ people surged by 406% in the month after the Florida bill was passed, resulting in a sharp spike in online homophobia and transphobia that social media platforms not only failed to crack down on, but also profited from.

The report also reveals that the anti-LGBTQ+ content was largely driven by a small group of extremist politicians and their allies who together are driving a coordinated and concerted campaign to attack LGBTQ+ kids in an effort to rile up extreme members of their base ahead of the midterm elections. According to the report’s findings:

  • In a matter of mere days, just ten people drove 66% of impressions for the 500 most viewed hateful “grooming” tweets — including Gov. Ron DeSantis’s press secretary Christina Pushaw, extremist members of Congress like Marjorie Taylor Greene and Lauren Boebert, and pro-Trump activists like “Libs of TikTok” founder Chaya Raichik.
  • Posts from these 10 people alone reached more than 48 million views, and the top 500 most influential “grooming” tweets all together were seen 72 million times.
  • The astonishing visibility these posts garnered is a direct result of Twitter’s failure to enforce its own policies banning anti-LGBTQ+ slurs. Twitter failed to act on 99% of the 100 hateful tweets reported to it anonymously by CCDH researchers, even after it had stated ‘grooming’ slurs were against its policies on hate speech.
  • On Facebook and Instagram, 59 paid ads promoted the same narrative. Despite similar policies prohibiting anti-LGBTQ+ hate content on both social media platforms, only one ad was removed.

“As social media platforms fail to enforce their own standards — enabling a wave of online anti-LGBTQ+ hate to grow without restraint — extremists are wielding dangerous influence, seeking to radicalize Americans, incite hate against LGBTQ+ people, and mobilize the extremists within their base ahead of the midterm elections,” said HRC Interim President Joni Madison. “But the rise of this online vitriol doesn’t just have political implications — there are deadly, real world consequences as violent rhetoric leads to stigma, radicalization, and ultimately violence. Nearly one-in-five of any type of hate crime is now motivated by anti-LGBTQ+ bias, and the last two years have been the deadliest for transgender people, particularly Black transgender women. HRC, along with our partners at the Center for Countering Digital Hate, urgently calls on social media companies to act swiftly and transparently to stop the spread of extremist and hateful misinformation, including the grooming narrative.”

“We’re in the middle of a growing wave of hate and demonization targeting LGBTQ+ people – often distributed digitally by opportunistic politicians and so-called ‘influencers’ for personal gain,” said Imran Ahmed, CEO of the Center for Countering Digital Hate. “Online hate and lies reflect and reinforce offline violence and hate. The normalization of anti-LGBTQ+ narratives in digital spaces puts LGBTQ+ people in danger. Facebook and Twitter claim in their rules to prohibit this kind of targeted hate and harassment but they simply don’t enforce those rules on bad actors — rules which are designed to protect others’ rights. The clear message from social media giants is that they are willing to turn a blind eye. LGBTQ+ rights have been transformed after decades of hard-won progress, but progress is fragile unless you continue to defend it.”

Key Findings of the Report

➤ Anti-LGBTQ+ ‘grooming’ rhetoric on social media platforms drastically increased following the passage of Florida’s Don’t Say Gay or Trans law.

  1. Researchers used the social analytics tool BrandWatch to collect a sample of 989,547 tweets posted between January 1 and July 27 that mention the LGBTQ+ community alongside slurs such as “groomer”, “predator” and “pedophile”.
  2. In the month following the passage of the ‘Don’t Say Gay or Trans’ law, the volume of ‘grooming’ related content increased by 406%.
    1. 6,607 tweets a day overall on average, up from 1,307 the month before
    2. 1,385 tweets a day using the phrase “OK groomer” on average, up from 54
    3. 4,053 tweets a day referring to Disney alongside slurs on average, up from 37
  3. In the week following Twitter’s statement that tweets calling transgender or nonbinary people “groomers” violate its policies on hate speech, there were 8,075 tweets per day on average mentioning the slurs alongside the LGBTQ+ community
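The headline 406% figure is consistent with the daily averages the report cites: going from 1,307 to 6,607 tweets a day is an increase of roughly 406%. A quick check, using only the numbers reported above:

```python
# Percent-increase check using the daily tweet averages cited in the report.
before = 1307   # average "grooming"-related tweets/day in the month before the law
after = 6607    # average tweets/day in the month after

pct_increase = (after - before) / before * 100
print(round(pct_increase))  # prints 406, matching the report's headline figure
```

The same formula applied to the “OK groomer” figures (54 to 1,385 a day) gives an even steeper rise of roughly 2,465%.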

➤ ‘Grooming’ rhetoric is being spread by a small group of radical extremists as part of a coordinated and concerted effort to attack LGBTQ+ kids to rile up extreme members of their base, the only voting bloc they are moving on these issues, ahead of the midterm elections.

  1. Researchers used BrandWatch to identify the 500 most-viewed hateful ‘grooming’ tweets from our wider sample, which were viewed an estimated 72 million times in total and received 399,260 likes and retweets.
  2. Within this smaller sample, tweets from just ten people were viewed an estimated 48 million times, equivalent to 66% of the reach of the 500 most-viewed tweets. Among the top ten people responsible for driving the ‘grooming’ narrative on Twitter are:
    1. Marjorie Taylor Greene – Representative for Georgia’s 14th Congressional District
    2. James Lindsay – “Anti-woke” activist and author
    3. Lauren Boebert – Representative for Colorado’s 3rd Congressional District
    4. Christina Pushaw – Press secretary to the Governor of Florida
    5. Frank Drew Hernandez – Contributor to Turning Point USA
  3. The top 500 ‘grooming’ tweets were viewed 72 million times

➤ Meta profits from ads promoting ‘grooming’ narrative on Facebook and Instagram.

  1. Using Meta’s Ad Library, researchers identified 59 ads promoting the narrative that the LGBTQ+ community and its allies are ‘grooming’ children.
  2. Meta accepted up to $24,987 for the ads, which have been served to users over 2.1 million times.
  3. 32 of the 59 ads, receiving 2 million impressions, focus ‘grooming’ accusations on Disney after the company came out in opposition to the ‘Don’t Say Gay or Trans’ bill.
  4. As of August 1, Meta continued to run ‘grooming’ ads despite stating on July 20 that baselessly calling LGBTQ people or the community “groomers” is covered by its hate speech policies.

➤ Hateful content has gone virtually unchecked, despite anti-discrimination policies at Facebook and Twitter.

  1. An audit found that Twitter failed to act on 99% of the 100 hateful tweets reported to it anonymously by CCDH researchers after the company had stated ‘grooming’ slurs were against its policies on hate speech.
  2. Just one of the 59 ads promoting the ‘grooming narrative’ was removed by Meta, and the platform has continued to accept such ads after it had stated ‘grooming’ slurs were against its policies on hate speech.

➤ There are real life consequences to anti-LGBTQ+ hate being spread online.

  1. Legislative — Legislators in state houses across the country introduced 344 anti-LGBTQ+ bills this session, and 25 of them passed. These bills and laws attack the LGBTQ+ community, particularly transgender and non-binary young people and their families, preventing them from accessing age-appropriate medical care, playing sports with their friends, or even talking about who they are in school.
  2. Anti-LGBTQ+ Violence — Nearly 1 in 5 hate crimes of any type is now motivated by anti-LGBTQ+ bias, and the last two years have been the deadliest for transgender people, especially Black transgender women, since the organization began tracking fatal violence against the community.
    1. Reports of violence and intimidation against LGBTQ+ people have been making news across the country: White nationalists targeted a Pride event in Idaho; Proud Boys crashed a Drag Queen story hour at a local library in California to shout homophobic and transphobic slurs.
    2. Mental Health Outcomes: More than 60 percent of LGBTQ+ youth said their mental health has deteriorated as a result of recent efforts to restrict access to things like gender-affirming care for transgender youth.

The full report and dataset can be found on HRC’s website.



Homophobic & racist posts were left in Disneyland social media hack

The hack, which occurred at around 4:30 a.m. Pacific, was allegedly committed by an individual who claimed his name was “David Do”


Los Angeles Blade graphic

BURBANK – A spokesperson for Disney Parks, Experiences and Products, Inc., acknowledged that the company’s Disneyland Facebook and Instagram pages had been hacked in the early morning hours on Thursday with a series of homophobic and racist posts.

In a statement released by the company confirming the hack, a spokesperson said: “Disneyland Resort’s Facebook and Instagram accounts were compromised early this morning. We worked quickly to remove the reprehensible content, secure our accounts, and our security teams are conducting an investigation.”

The hack, which occurred at around 4:30 a.m. Pacific, was allegedly committed by an individual who claimed his name was “David Do” and referred to himself as a “super hacker.”

Disneyland’s Instagram account has 8.4 million followers and regularly posts photos from attractions at the park and photos of guests, including families and young children.

From KABC 7 Los Angeles:



FCC asks Apple & Google to remove TikTok app from their stores

A documented pattern of surreptitious data practices shows TikTok is non-compliant with app store policies and practices


Graphic by Molly Butler for Media Matters

WASHINGTON – In a series of tweets Tuesday, Federal Communications Commissioner Brendan Carr disclosed a letter sent to both Apple and Google’s parent company Alphabet asking the two tech giants to remove TikTok from their app stores over his concerns that user data from the wildly popular social media platform is disclosed and used by bad actors in China.

In his letter dated June 24 to Apple CEO Tim Cook and Alphabet CEO Sundar Pichai, Carr noted that because of its pattern of surreptitious data practices, documented in reports and other sources, TikTok is non-compliant with the two companies’ app store policies and practices.

“TikTok is not what it appears to be on the surface. It is not just an app for sharing funny videos or memes. That’s the sheep’s clothing,” he said in the letter. “At its core, TikTok functions as a sophisticated surveillance tool that harvests extensive amounts of personal and sensitive data.”

Carr stated that if the companies do not remove TikTok from their app stores, they should provide statements to him by July 8.

The statements should explain “the basis for your company’s conclusion that the surreptitious access of private and sensitive U.S. user data by persons located in Beijing, coupled with TikTok’s pattern of misleading representations and conduct, does not run afoul of any of your app store policies,” he said.

Carr was appointed by former President Trump in 2018 to a five-year term with the FCC.

In March of this year, California Attorney General Rob Bonta announced a nationwide investigation into TikTok for promoting its social media platform to children and young adults while its use is associated with physical and mental health harms to youth.

The investigation will look into the harms using TikTok can cause to young users and what TikTok knew about those harms. The investigation focuses, among other things, on the techniques utilized by TikTok to boost young user engagement, including strategies or efforts to increase the duration of time spent on the platform and frequency of engagement with the platform.

TikTok’s computer algorithms pushing video content to users can promote eating disorders and even self-harm and suicide to young viewers. Texas opened an investigation into TikTok’s alleged violations of children’s privacy and facilitation of human trafficking last month.

TikTok has said it focuses on age-appropriate experiences, noting that some features, such as direct messaging, are not available to younger users. The company says it has tools in place, such as screen-time management, to help young people and parents moderate how long children spend on the app and what they see, the Associated Press reported.

“We care deeply about building an experience that helps to protect and support the well-being of our community, and appreciate that the state attorneys general are focusing on the safety of younger users,” the company said. “We look forward to providing information on the many safety and privacy protections we have for teens.”

TikTok has also had a problematic relationship with the LGBTQ+ community. Recently, The Washington Post confirmed that ‘Libs of TikTok,’ an influential anti-LGBTQ account, regularly targets LGBTQ individuals and their allies for harassment from its more than 640,000 Twitter followers, while serving as a veritable wire service for Fox News and the rest of the right-wing media to push anti-LGBTQ smears.

Libs of TikTok regularly targets individual teachers and their workplaces – releasing their personal information that includes school and individual names as well as social media accounts, and leading its audience to harass the schools on social media.

A year ago, an investigation by Media Matters found that TikTok’s “For You” page recommendation algorithm circulated videos promoting hate and violence targeting the LGBTQ community during Pride Month, while the company celebrated the month with its #ForYourPride campaign. 

Numerous LGBTQ+ content creators have shared stories with the Blade about TikTok’s seemingly arbitrary algorithms removing otherwise benign content that does not violate the platform’s policies. In many cases the posts are restored after appeals; in the worst cases, the users are banned.

