The post Facebook’s ‘state-controlled media’ labels appear to reduce engagement appeared first on Best News.
Researchers with Carnegie Mellon University, Indiana University and the University of Texas at Austin conducted the set of studies, which “explored the causal impact of these labels on users’ intentions to engage with Facebook content.” When users noticed a label, they tended to reduce their engagement with the content if they perceived the labeled country negatively.
The first experiment studied 1,200 people with US-based Facebook accounts — with and without state-controlled media labels. Participants’ engagement with posts originating from Russia and China went down, but only when they “actively noticed the label.” A second test in the series observed 2,000 US Facebook users and found that their behavior was “tied to public sentiment toward the country listed on the label.” In other words, they responded positively to media labeled as Canadian state-controlled and negatively toward Chinese and Russian government-run content.
Finally, a third experiment examined how broadly Facebook users interacted with state-controlled media before and after the platform added the labels. The researchers concluded the change had a “significant effect”: sharing of labeled posts dropped by 34 percent after the shift, and likes of tagged posts fell by 46 percent. The paper’s authors also noted that training users on the labels (“notifying them of their presence and testing them on their meaning”) significantly boosted their odds of noticing them.
“Our three studies suggest that state-controlled media labels reduced the spread of misinformation and propaganda on Facebook, depending on which countries were labelled,” Patricia L. Moravec, the study’s lead, wrote in the paper’s summary.
However, the studies ran into some limitations in distinguishing correlation from causation. The authors say they couldn’t fully verify whether their results were caused by the labels or by Facebook’s nontransparent newsfeed algorithms, which downrank labeled posts (and make related third-party research exceedingly difficult). The paper’s authors also note that the experiments measured users’ “beliefs, intentions to share, and intentions to like pages” but not their actual behavior.
The researchers (unsurprisingly, given the results) recommend social companies “clearly alert and inform users of labeling policy changes, explain what they mean, and display the labels in ways that users notice.”
As the world grapples with online misinformation and propaganda, the study’s leads urge Facebook and other social platforms to do more. “Although efforts are being made to reduce the spread of misinformation on social media platforms, efforts to reduce the influence of propaganda may be less successful,” suggests co-author Nicholas Wolczynski. “Given that Facebook debuted the new labels quietly without informing users, many likely did not notice the labels, reducing their efficacy dramatically.”
The post X reportedly cuts half of its election integrity team appeared first on Best News.
X reportedly cut all four Dublin, Ireland-based members of the team, including leader Aaron Rodericks. Yet only yesterday, CEO Linda Yaccarino said X was planning to expand its safety and election teams around the world, according to The Financial Times. And less than a month ago, the company was planning to hire a civic integrity and elections lead focused on combatting disinformation. “If you have a passion for protecting the integrity of elections and civic events, X is certainly at the center of the conversation,” said Rodericks in a LinkedIn post.
Rodericks was subsequently suspended for liking posts critical of X, Musk and Yaccarino. After The Information published its story and it was quoted by X News Daily, Musk responded: “Oh you mean the ‘Election Integrity’ Team that was undermining election integrity? Yeah, they’re gone.”
Yesterday, the EU released its first report on social media platforms’ handling of disinformation as part of the Digital Services Act (DSA), finding that X had much higher levels of mis- and disinformation than its peers. X said in a series of posts that it disputed the “framing” of the data and remained “committed to complying with the DSA” despite pulling out of a voluntary Code of Practice on disinformation. In a statement accompanying the report, European Commission Vice President Vera Jourova said that “my message for Twitter/X is you have to comply. We will be watching what you do.”
However, since Elon Musk purchased X (née Twitter) last October, the company has cut more than 80 percent of its staff, and it already had challenges staying on top of disinformation before his tenure. Under the DSA, X must comply with the stricter laws or face fines of up to 6 percent of its annual global revenue — though to date, Musk has faced very little pushback for all that’s happened with X.
The post YouTube warned by EU official to keep a close eye on Israel-Hamas war content appeared first on Best News.
The European Commission has been seeing a “surge of illegal content and disinformation” being disseminated via certain platforms, EU commissioner Thierry Breton said, telling Alphabet CEO Sundar Pichai that the company has an obligation to protect children and teens from “violent content depicting hostage taking and other graphic videos.” Breton also warned Pichai that if Alphabet receives notices of illegal content from the EU, it must respond in a timely manner. Finally, he reminded the CEO that the company must have mitigation measures in place to address “civic discourse stemming from disinformation.” The video-sharing service must also adequately differentiate reliable news sources from terrorist propaganda and manipulated content, such as clickbait videos.
YouTube spokesperson Ivy Choi told The Verge that the service has “removed tens of thousands of harmful videos and terminated hundreds of channels” following the attacks in Israel and the “conflict now underway in Israel and Gaza.” The platform’s systems, she added, “continue to connect people with high-quality news and information.” She also said that YouTube’s teams are “working around the clock to monitor for harmful footage and remain vigilant to take action quickly if needed on all types of content, including Shorts and livestreams.”
Breton was the same official who had previously sent Elon Musk an “urgent” letter about the spread of disinformation on X amid the Israel-Hamas war. He called out the spread of “fake and manipulated images and facts circulating on [the platform formerly known as Twitter] in the EU, such as repurposed old images of unrelated armed conflicts or military footage that actually originated from video games.” X CEO Linda Yaccarino published the company’s response a day later, claiming to have removed or labeled “tens of thousands of pieces of content” and to have deleted hundreds of Hamas-affiliated accounts from the platform. Even so, the European Union still opened an investigation into X for the lackluster moderation of illegal content and disinformation related to the war.
The EU commissioner also sent Meta a stern letter, voicing similar concerns about misinformation on its platforms. Meta responded by saying that “expert teams from across [its] company have been working around the clock to monitor [its] platforms while protecting people’s ability to use [its] apps to shed light on important developments happening on the ground.” Breton sent TikTok a letter about disinformation spreading on its platform related to the Israel-Hamas war, as well, giving the company 24 hours to explain how it’s complying with EU law.
In addition to asking YouTube to keep a close eye on Israel-Hamas disinformation, Breton also tackled the issue of election-related disinformation in his letter. He asked the service to notify his team of the measures it has taken to mitigate deepfakes “in light of upcoming elections in Poland, The Netherlands, Lithuania, Belgium, Croatia, Romania and Austria, and the European Parliament elections.”
The post TikTok adds comment filtering tools to better handle Israel-Hamas war content appeared first on Best News.
TikTok has also set up a new anti-hate and discrimination task force, in the hopes of proactively spotting antisemitism, Islamophobia and other hate trends before they get out of hand. The team will work with experts on improving training for moderators to better address hate speech, and next year it will expand its managed creator communities to Jewish and other interfaith communities, as well as Asian and Pacific Islander (API) and LGBTQ+ communities.
The Information added that TikTok plans to expand access to its research APIs to civil society groups — as the likes of the Anti-Defamation League have been requesting for years, apparently — so they can better understand the types of content spreading on TikTok. This comes in stark contrast to how X — well, Elon Musk, mostly — limited social media researchers’ access to its platform, while it continues to deny any wrongdoing over accusations of antisemitic content.
While TikTok’s stepped-up efforts may not convince those who still accuse its algorithm of bias, the platform has at least continued removing a staggering amount of offending content. Its latest tally of videos removed in the conflict region between October 7 and November 30 has hit 1.3 million, including “content promoting Hamas, hate speech, terrorism and misinformation.”