Media Laws. Ylenia Maria Citino. November 8, 2022
The Guardian. Dan Milmo & Alex Hern. October 28, 2022
Elon Musk’s Twitter acquisition has been polarizing, sparking reactions from politicians, regulators and non-profits across different continents.
Some have expressed concerns about potential changes to Twitter’s content moderation policies now that it’s in the hands of the Tesla billionaire, while others celebrated how they expect the platform’s newly minted leader will handle content and speech on Twitter.
Senior politicians in the UK and Europe on Friday warned Musk over content moderation on Twitter, with the EU stressing the platform will “fly by our rules” and a UK minister expressing concerns over hate speech under the billionaire’s ownership.
Center for American Progress. E. Simpson, A. Conner, A. Maciolek. November 3, 2022.
Social media companies continue to allow attacks on U.S. democracy to proliferate on their platforms, undermining election legitimacy, fuelling hate and violence, and sowing chaos.
This issue brief outlines what is needed from social media companies and identifies three of the top threats they pose to the 2022 midterm elections—the season opener for the 2024 presidential election.
The Intercept. Ken Klippenstein & Lee Fang. October 31, 2022
The Department of Homeland Security is quietly broadening its efforts to curb speech it considers dangerous, an investigation by The Intercept has found. Years of internal DHS memos, emails, and documents — obtained via leaks and an ongoing lawsuit, as well as public documents — illustrate an expansive effort by the agency to influence tech platforms.
PBS. David Klepper (AP). October 21, 2022
With less than three weeks before the polls close, misinformation about voting and elections abounds on social media despite promises by tech companies to address a problem blamed for increasing polarization and distrust.
While platforms like Twitter, TikTok, Facebook and YouTube say they’ve expanded their work to detect and stop harmful claims that could suppress the vote or even lead to violent confrontations, a review of some of the sites shows they’re still playing catch-up with 2020, when then-President Donald Trump’s lies about the election he lost to Joe Biden helped fuel an insurrection at the U.S. Capitol.
CIGI. Heidi Tworek. March 8, 2022
Much of the current misinformation online exists to scam and to manipulate through speed. TikTok has become a key platform for misleading content. TikTok’s algorithm appears to offer up many misleading videos alongside scam calls for donations. These videos often depict older conflicts or conflicts in other places; the posters claim they are occurring in Ukraine and can garner millions of views. Abbie Richards suggests that “TikTok’s platform architecture is amplifying fear and permitting misinformation to thrive at a time of high anxiety,” calling the platform’s design “incompatible with the needs of the current moment.” It is hard to resist the siren call of doom scrolling. But a slower accumulation of knowledge at moments of crisis can avoid hurtful faux pas and prevent inadvertent spreading of disinformation.
The Conversation. October 26, 2022
The 2016 U.S. election was a wake-up call about the dangers of political misinformation on social media. With two more election cycles rife with misinformation under their belts, social media companies have gained experience in identifying and countering misinformation. However, the nature of the threat misinformation poses to society continues to shift in form and targets. The big lie about the 2020 presidential election has become a major theme, and immigrant communities are increasingly in the crosshairs of disinformation campaigns – deliberate efforts to spread misinformation.
Social media companies have announced plans to deal with misinformation in the 2022 midterm elections, but the companies vary in their approaches and effectiveness. We asked experts on social media to grade how ready Facebook, TikTok, Twitter and YouTube are to handle the task.
CNN Politics. Zachary B. Wolf. October 31, 2022
Misinformation is trending now that Elon Musk, the self-described “Chief Twit,” has bought Twitter, his favourite social media platform.
Meanwhile, displays of hate are breaking out in public now that Kanye West, who now goes by Ye, has despicably fashioned himself as a folk hero for those spewing antisemitic messages, pushing his own anti-Jewish conspiracy theories.
The stories dovetail not just because they are built on the wild spread of false claims, but also because West’s Twitter account – locked in early October for an antisemitic tweet in which he said he was going “death con 3 on Jewish people” – was recently reactivated. More on that below.
Emerald Insight. November 7, 2022
Musk has repeatedly said he wants the platform to prioritize ‘free speech,’ but has also reassured European regulators that he will be complying with local laws, even where they involve content screening. Although Twitter’s policy has yet to be finalized, the turmoil highlights the risks of online disinformation.
The business models of social media companies and tech platforms contain strong incentives that promote misinformation and disinformation. Advertising comprises 80% of the income of Google's parent company Alphabet, and well over 90% for Twitter and for Meta, the owner of Facebook and Instagram.
Social media offer advertisers hundreds of millions of users who are difficult to reach through other media. High levels of engagement ensure that the audience becomes 'captive'. Moreover, using data collected on users enables platforms to match advertisers and potential customers efficiently.
Legal Defense Fund. October 13, 2022
Today, LDF and a coalition of civil rights, public interest, voting rights, and other organizations sent a letter urging social media companies to take immediate steps to curb the spread of voting disinformation in the midterms and future elections and to help prevent the undermining of our democracy. This letter is a follow-up to another sent last May. Many companies have announced updates to their voter interference and disinformation policies in recent weeks, but the policies have little effect unless enforced continually and consistently.
Fox News. Gabriel Hays. November 26, 2022
Former CIA analyst Bob Baer claimed that "Putin is going to be all over Twitter" thanks to billionaire owner Elon Musk’s policies for running the company.
He also stated that the "voice of the people" Musk claimed wants free speech is "Russian intelligence" looking to undermine American support for Ukraine.
During a recent segment on CNN, the analyst argued that Musk’s pro-free-speech attitude towards operating the company, particularly in the way he has decided to reinstate banned accounts and not suspend users for any speech, means Russian hackers will benefit.
International Forum for Democratic Studies. Elizabeth Kerley. October 13, 2022
Since launching its full-scale invasion of Ukraine, Russia has been putting its longstanding aspirations for “cyber sovereignty” to the test. In keeping with its objectives of “technological independence and information control,” the Kremlin has promoted homegrown tech in the face of sanctions while also halting the flow of independent information. Meanwhile this April, 61 mostly democratic countries signed a declaration articulating a vision for the internet that is “open, free, global, interoperable, reliable, and secure.” What does this clash of visions portend for the digital domain?
Tech Explore. Kelvin Chan. November 24, 2022
Twitter took longer to review hateful content and removed less of it in 2022 compared with the previous year, according to European Union data released Thursday.
The EU figures were published as part of an annual evaluation of online platforms' compliance with the 27-nation bloc's code of conduct on disinformation.
Twitter wasn't alone—most other tech companies that signed up to the voluntary code also scored worse. But the figures could foreshadow trouble for Twitter in complying with the EU's tough new online rules after owner Elon Musk fired many of the platform's 7,500 full-time workers and an untold number of contractors responsible for content moderation and other crucial tasks.
Security Org. Aliza Vigderman. November 4, 2022
As the popularity of social media surpasses traditional news sources, information has grown more unreliable, and “fake news” becomes harder to detect. The same digital platforms that empower global communication seed doubt and spread misinformation.
The misinformation and disinformation that have influenced elections and hampered public health policies also damaged faith in all forms of media. Meanwhile, political attacks on some news sources have divided Americans further into partisan camps.
The nation is united, however, in recognizing the problem. Our second annual study of more than 1,000 people revealed that nine out of 10 American adults fact check their news, and 96 percent want to limit the spread of false information.
As digital disinformation grows more and more prevalent, there’s one emerging technology with the potential to address many of the root causes of and risks associated with misleading and manipulated media: blockchain. While it’s no panacea, blockchain can help in three key areas: First, a blockchain-based system could offer a decentralized, trusted mechanism for verifying the provenance and other important metadata for online content. Second, it could enable content creators and sharers to maintain a reputation independent of any publication or institution. And finally, it makes it possible to financially incentivize the creation and distribution of content that meets community-driven standards for accuracy and integrity. Of course, any technological solution will have to be complemented by substantial policy and education initiatives — but in an ever-more complex digital media landscape, blockchain offers a promising starting point to ensure we can trust the information we see, hear, and watch.
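The first of the three areas above, a decentralized mechanism for verifying provenance, can be sketched in a few lines. The Python sketch below is purely illustrative; the class and field names are invented for this example, and real systems would add digital signatures, distribution, and consensus, none of which is modeled here. The core idea is simply that content is fingerprinted with SHA-256 and each ledger entry hashes the previous one, so tampering with either the content or the stored metadata is detectable.

```python
import hashlib
import json
import time

def content_fingerprint(content: bytes) -> str:
    """Hash the raw content so any later edit is detectable."""
    return hashlib.sha256(content).hexdigest()

class ProvenanceLedger:
    """Toy hash-linked ledger of content provenance records (illustrative only)."""

    def __init__(self):
        self.blocks = []

    def record(self, content: bytes, creator: str) -> dict:
        prev_hash = self.blocks[-1]["block_hash"] if self.blocks else "0" * 64
        entry = {
            "fingerprint": content_fingerprint(content),
            "creator": creator,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        # Each block hashes its own metadata plus the previous block's hash,
        # so altering any earlier entry breaks the chain.
        entry["block_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.blocks.append(entry)
        return entry

    def verify(self, content: bytes, fingerprint: str) -> bool:
        """Check that content still matches its recorded fingerprint."""
        return content_fingerprint(content) == fingerprint


ledger = ProvenanceLedger()
entry = ledger.record(b"Original article text", creator="newsroom@example.org")

print(ledger.verify(b"Original article text", entry["fingerprint"]))  # True: unaltered
print(ledger.verify(b"Edited article text", entry["fingerprint"]))    # False: tampered
```

A consumer checking an article would recompute the fingerprint of what they received and compare it against the ledger entry; a mismatch signals the content was changed after publication.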
PCMag. Nathaniel Mott. August 11, 2022.
By applying its Civic Integrity Policy to the upcoming US elections, Twitter is looking to 'enable healthy civic conversation' on its platform. (Don't laugh.)
The company expanded the Civic Integrity Policy ahead of the 2020 presidential election to "further protect against content that could suppress the vote and help stop the spread of harmful misinformation that could compromise the integrity of an election or other civic process." Now it's looking to apply those same measures to the 2022 midterms being held in November.
We're working to prepare for elections, elevate credible information, and help keep you safe on Twitter.
Our civic integrity policy aims to prevent the use of Twitter to share or spread false or misleading information about a civic process (e.g., elections or census) that may disrupt or undermine public confidence in that process.
This policy is enforced when the risk for manipulation or interference is highest — generally a few months before and a couple of weeks after election day, depending on local and external factors. This policy is an additional, temporary protection on top of all the Twitter Rules, which are enforced year-round.
New York Post. Jesse O’Neill. December 26, 2022.
The Biden White House pressured Twitter to both “elevate” and “suppress” users based on their stances on COVID-19 — ultimately “censoring info that was true but inconvenient” to policy makers, according to the latest edition of the “Twitter files”. The coercion campaign during the pandemic began with the Trump administration — which asked Twitter to crack down on stories about panic buying and “runs on grocery stores” in the early days of the outbreak — but was stepped up under Biden, whose administration was focused on the removal of “anti-vaxxer accounts,” according to The Free Press reporter David Zweig.
EdSurge. Nadia Tamez-Robledo. December 7, 2022.
TikTok may have started as the preferred social media platform for modern dance crazes, but the platform’s growth has made it a home for something else—misinformation. Add to that its popularity among teens and its powerful algorithm, and you have a mix that worries some educators about TikTok’s potential negative impacts for young users. A recent study from NewsGuard found that roughly one in five TikTok videos contain misinformation, whether the topic is COVID-19 vaccines or the Russia-Ukraine war.
Brennan Center for Justice. Maya Kornberg et al. January 5, 2023.
The Brennan Center has developed recommendations on how to fight misinformation based on analysis of how it takes root and circulates. Election-related falsehoods corrode American democracy. Since 2020, lies about a stolen presidential election cropped up in dozens of campaigns for election administrator positions and spurred unprecedented threats to election officials. The result has been a deluge of resignations that drained expertise from election offices across the country. Further, public trust in elections has plummeted amid disinformation promoted by Donald Trump and other prominent election deniers.
CTV News. David Klepper (AP Staff). January 19, 2023.
Search for the word "climate" on Twitter and the first automatic recommendation isn't "climate crisis" or "climate jobs" or even "climate change" but instead "climate scam." Clicking on the recommendation yields dozens of posts denying the reality of climate change and making misleading claims about efforts to mitigate it. Such misinformation has flourished on Twitter since it was bought by Elon Musk last year, but the site isn't the only one promoting content that scientists and environmental advocates say undercuts public support for policies intended to respond to a changing climate.
Debunk. Radovan Ognjenovic & Daniela Vukcevic. December 30, 2022.
Groups and pages on social media have been continuously spreading and amplifying misleading content on four different topics – Russia, NATO, LGBTQIA+, and COVID-19. Though not exclusively, administrators of these groups and pages shared misleading content reflecting all the aforementioned sentiments – pro-Russian, anti-NATO, anti-LGBT, and sceptical of the efficacy of COVID-19 measures – regardless of their country of origin or language.
Google. Annette Kroeber-Riel. May 4, 2023.
Today, we are announcing new long-term partnerships we've established across Central and Eastern Europe, a region considered highly vulnerable to disinformation and propaganda due to its geographic proximity to the war in Ukraine, an issue highlighted in a recent IPSOS survey conducted in cooperation with the Central European Digital Media Observatory (CEDMO). In the Baltics, we've entered into long-term partnerships with the Civic Resilience Initiative and the Baltic Center for Media Excellence. These two established and well-respected organizations will receive €1.3 million in funding from Google to build on their impactful work towards increasing media literacy, building further resilience and actively tackling disinformation in Lithuania, Latvia and Estonia.
The Hill. Shannon Jankowski. May 23, 2023.
Bot detection tools can be a game changer for exposing targeted falsehoods and conspiracy theories, especially for small, local newsrooms serving marginalized communities.
With the upcoming U.S. elections, the public will rely on journalists to detect and expose this disinformation in their reporting. But, under Elon Musk’s leadership — which, ironically, began with a focus on eliminating bots on the platform — Twitter’s newly amended application programming interface (API) policy may rob journalists of access to bot detection tools, which are critical to identifying and understanding the spread of disinformation on social media.
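As background, the bot detection tools referred to here typically score accounts on behavioral features. The toy Python sketch below is a hedged illustration only: the feature names and thresholds are assumptions invented for this example, not the logic of any real detector. Research-grade tools use classifiers trained on labeled accounts and far richer signals.

```python
def bot_score(account: dict) -> float:
    """Return a 0-1 heuristic score; higher suggests more bot-like behavior.

    Thresholds below are illustrative assumptions, not calibrated values.
    """
    score = 0.0
    # Very new accounts are weakly suspicious.
    if account["account_age_days"] < 30:
        score += 0.3
    # Extremely high posting rates are hard for humans to sustain.
    if account["posts_per_day"] > 100:
        score += 0.4
    # Following many accounts while having few followers is a common pattern.
    followers = max(account["followers"], 1)
    if account["following"] / followers > 20:
        score += 0.3
    return min(score, 1.0)

human = {"account_age_days": 900, "posts_per_day": 4,
         "followers": 250, "following": 300}
suspect = {"account_age_days": 5, "posts_per_day": 400,
           "followers": 3, "following": 800}

print(bot_score(human))    # → 0.0
print(bot_score(suspect))  # → 1.0
```

The point of the API policy change discussed above is that even such simple scoring depends on bulk access to account metadata, which journalists lose when programmatic access is priced out of reach.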
CIGI. May 2023.
Influence operations targeting liberal democratic regimes are deeply troubling. They disrupt the twin bedrocks of effective democratic governance: the free flow of information and trust. These campaigns can be undertaken by malicious foreign governments who aim to sow chaos, or by non-state actors, such as ISIS, who seek to radicalize disaffected individuals in the West. Countering these operations is both necessary and possible. Such efforts require the engagement of not only governments but also the platforms. Working together, these actors can preserve liberal democratic governance by minimizing exposure to fake news and other influence operations, promoting user immunity and promulgating counter narratives to misinformation.
The Nobel Prize. May 2023.
Twitter trends, TikTok videos, Instagram reels, Facebook posts, and WhatsApp forwards might have democratized the spaces of communication, but they have also become the most potent platforms to disseminate fake news. As technology continues to advance, the war against misinformation and fake news ironically gets tougher for the world.
Misinformation threatens to be the new ‘true information’ as it aids and enables the most anti-democratic values.
Associated Press. May 26, 2023.
Twitter has dropped out of a voluntary European Union agreement to combat online disinformation, a top EU official said Friday.
European Commissioner Thierry Breton tweeted that Twitter had pulled out of the EU's disinformation “code of practice” that other major social media platforms have pledged to support. But he added that Twitter's “obligation” remained, referring to the EU's tough new digital rules taking effect in August.
Euronews. Giulia Carbonaro. May 29, 2023.
The European Commission’s Vice-President for Values and Transparency bashed Twitter’s latest decision to leave the EU’s anti-disinformation code as “irresponsible” at a time when Russia’s disinformation is extremely dangerous.
Twitter’s decision to pull out of the EU’s voluntary code to fight the spread of disinformation and fake news in the bloc was announced by Thierry Breton, the EU’s internal market commissioner.
Euronews. Giulia Carbonaro & Sophia Khatsenkova. May 31, 2023.
Dozens of tech firms have voluntarily signed up to the EU’s anti-disinformation code revamped last year, including Meta (with Instagram and Facebook), TikTok, Google, Microsoft and Twitch.
Despite the fact that Twitter’s withdrawal could appear to be a major setback in the fight against disinformation and fake news in the EU, Jourova said that “the Code remains strong, sets high standards and is at the heart of our efforts to address disinformation”.
The Conversation. Laks V.S. Lakshmanan. June 7, 2023.
Fake news is a complex problem and can span text, images and video.
For written articles in particular, there are several ways of generating fake news. A fake news article could be produced by selectively editing facts, including people’s names, dates or statistics. An article could also be completely fabricated with made-up events or people.
Fake news articles can also be machine-generated as advances in artificial intelligence make it particularly easy to generate misinformation.
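The "selective editing of facts" described above is, in principle, machine-checkable: extract structured claims from an article and compare them against a trusted reference record. The sketch below is illustrative only; the field names are invented for this example, and real fact-checking pipelines extract claims with NLP rather than receiving them as ready-made dictionaries.

```python
def flag_edited_facts(article_claims: dict, reference: dict) -> list:
    """Return the fields where the article disagrees with the reference record."""
    return [
        field
        for field, value in article_claims.items()
        if field in reference and reference[field] != value
    ]

# Hypothetical reference record and an article with one selectively edited fact.
reference = {"winner": "Candidate A", "margin_pct": 4.5, "date": "2020-11-03"}
article = {"winner": "Candidate B", "margin_pct": 4.5, "date": "2020-11-03"}

print(flag_edited_facts(article, reference))  # → ['winner']
```

Fully fabricated or machine-generated articles are harder to catch this way, since there may be no reference record to compare against, which is why detection research also looks at stylistic and provenance signals.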
Columbia Journalism Review. Mathew Ingram. June 15, 2023.
YouTube, Twitter, and Meta (formerly Facebook) have eased restrictions on election denial content. YouTube announced it will no longer remove videos claiming the 2020 presidential election was fraudulent, while Twitter and Meta dismantled most of their restrictions related to election denial. These decisions have sparked debates about striking a balance between protecting users and fostering open discussion, as well as concerns about the potential spread of misinformation and its impact on democracy.
Axios. Sara Fischer. June 2, 2023.
In a reversal of its election integrity policy, YouTube will leave up content that says fraud, errors or glitches occurred in the 2020 presidential election and other U.S. elections, the company confirmed to Axios Friday.
Why it matters: YouTube established the policy in December 2020, after enough states had certified the 2020 election results. Now, the company said in a statement, leaving the policy in place may have the effect of "curtailing political speech without meaningfully reducing the risk of violence or other real-world harm."
Misinformation Review. Shane Littrell, Casey Klofstad, et al. August 25, 2023.
Some people share misinformation accidentally, but others do so knowingly. Using a 2022 U.S. survey, researchers found that 14 percent of respondents reported knowingly sharing misinformation, and that these respondents were more likely to also report support for political violence, a desire to run for office, and warm feelings toward extremists. Furthermore, they were also more likely to have elevated levels of a psychological need for chaos, dark tetrad traits, and paranoia. The findings illuminate one vector through which misinformation is spread.
The Guardian. Lisa O’Carroll. September 26, 2023.
The EU has issued a warning to Elon Musk to comply with sweeping new laws on fake news and Russian propaganda, after X – formerly known as Twitter – was found to have the highest ratio of disinformation posts of all large social media platforms. The finding comes from a new report laying bare for the first time the scale of fake news on social media across the EU, with millions of fake accounts removed by TikTok and LinkedIn.
Facebook was the second-worst offender, according to the first ever report recording posts that will be deemed illegal across the EU under the Digital Services Act (DSA), which came into force in August.
Time. Vera Bergengruen. August 31, 2023.
A sprawling network of fake accounts linked to Chinese law enforcement was taken down by Meta this week in what the social-media company called “the largest known cross-platform covert influence operation in the world.”
The operation was the largest the company has removed in its history: on Facebook alone, Meta says it removed 7,704 accounts, 954 pages, and 15 groups linked to the effort to push pro-China talking points and attack the government’s critics. But its fingerprints extended beyond Facebook and Instagram, the platforms owned by Meta. The Chinese influence operation targeted at least 50 other platforms and apps, including YouTube, Reddit, Pinterest, TikTok, Medium, and X, the company formerly known as Twitter, according to Meta's analysts.
Oxford Academic. Jim P. Stimpson and Alexander N. Ortega. September 26, 2023.
The study used recently released nationally representative data with new measures on health information seeking to estimate the prevalence and predictors of adult social media users’ perceptions of health mis- and disinformation on social media.
Their study identified specific population groups that could be the target of future intervention efforts, including individuals who rely on social media for decision-making. The perception among social media users that there is a high prevalence of false and misleading health information on these platforms may increase the need for urgent action to mitigate the dissemination of such harmful health misinformation that negatively affects public health.
NewsGuard. McKenzie Sadeghi, Jack Brewster, and Macrina Wang. September 2023.
Until April 20, 2023, users on X (formerly known as Twitter) were notified that China Daily and other state-run outlets that lack editorial independence are “state-affiliated.” But on April 21, X owner Elon Musk stripped the platform of labels indicating which accounts are state-run. This cleared the path for Chinese propaganda sources, as well as Russian and Iranian state outlets, to disseminate disinformation unchecked with X users no longer having transparent information about the nature of the source. The impact was immediate and dramatic.
Global Compact on Refugees. September 2023.
The rise of misinformation, disinformation and hate speech on digital platforms is causing real-world harm to the most vulnerable, especially refugees, displaced and stateless people.
These offline harms include xenophobia, racism, persecution, violence and killings. Misinformation, disinformation and hate speech can be a contributing factor in forced displacement. For people who are already displaced, harms can include trafficking, exploitation, and barriers to accessing rights and services.
The pledge will increase the number of stakeholders who are taking action to prevent the harmful impact on displaced and stateless populations, and on humanitarian action, of mis/disinformation and hate speech on their platforms.
Tech Policy Press. Gabby Miller. September 11, 2023.
Elon Musk, the self-proclaimed free speech absolutist, has once again ramped up attacks meant to silence his critics, this time while bolstering an online movement with ties to white nationalists and antisemitic propagandists. His latest target is the Anti-Defamation League (ADL), an anti-hate organization focused on combating antisemitism, which he threatened with legal action via Tweet early last week. Musk blames ADL for the exodus of advertisers from his rapidly deteriorating social media platform.
The platform formerly known as Twitter, referred to as "X," is now required by law to conduct its first annual risk assessment to demonstrate compliance with the European Union's Digital Services Act (DSA). The DSA applies to Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) like X, and it aims to combat disinformation of the very kind Musk promotes on the platform.
National Observer. Mickey Djuric. October 25, 2023.
A parliamentary committee is calling on Canada to hold tech giants accountable for publishing false or misleading information online, especially when it is spread by foreign actors.
That was among 22 recommendations the House ethics committee made in a report released Tuesday after its study into threats posed by foreign interference in Canada's affairs, with a focus on China and Russia.
The Guardian. Lisa O’Carroll. October 10, 2023.
The EU has issued a warning to Elon Musk over the alleged disinformation about the Hamas attack on Israel, including fake news and “repurposed old images”, on X, which was formerly known as Twitter.
The letter arrives less than two months after the EU's sweeping new laws regulating content on social media came into force under the Digital Services Act.
CIGI. Eric Jardine. October 2023.
Influence operations, whether launched by governments or non-state actors, existed long before social media, but what is new about contemporary influence operations is their scale, severity and impact, all of which are likely to grow more pronounced as digital platforms extend their reach via the internet and become ever more central to our social, economic and political lives. Such efforts represent a clear cyber security challenge.
Reuters. Charlotte Van Campenhout and Bart H. Meijer. October 19, 2023.
Meta (META.O) and TikTok have been given a week by the European Commission to provide details on measures taken to counter the spread of terrorist, violent content and hate speech on their platforms, a week after Elon Musk's X was told to do the same.
Google. Amanda Storey. October 26, 2023.
Google aims to balance access to information with safeguarding users and society. The company emphasizes the importance of ensuring that information is not only accessible but also safe to benefit users.
Google takes its responsibility seriously by prioritizing the provision of trustworthy information and content, safeguarding users from potential harm, ensuring the delivery of reliable information, and collaborating with experts and organizations to contribute to a safer internet.
Nature. October 4, 2023.
Access to social-media data is essential to those who research political campaigns and their outcomes. However, unlike in previous years, scientists will not have free access to data from X, previously known as Twitter. Many still consider X to be among the world’s most influential social-media platforms for political discussion, but the company has discontinued its policy of giving researchers special access to its data. Disinformation campaigns — some armed with AI-generated deepfakes — are likely to be rampant in the coming months, says Ulrike Klinger, who studies political communication at the European University Viadrina in Frankfurt (Oder), Germany. “And we cannot monitor them because we don’t have access to data.”
NewsGuard. Jack Brewster, Coalter Palmer et al. November 22, 2023.
On X, programmatic ads appear below viral posts spreading false claims about the Israel-Hamas war. Shockingly, a new ad revenue sharing program rewards these misinformation spreaders with a portion of income from major brands, governments, and non-profits.
CNN. Donie O’Sullivan, Curt Devine & Allison Gordon. November 13, 2023.
The Chinese government has built up the world’s largest known online disinformation operation and is using it to harass US residents, politicians, and businesses—at times threatening its targets with violence, a CNN review of court documents and public disclosures by social media companies has found.
UNESCO. Audrey Azoulay. November 6, 2023.
Digital technology has enabled immense progress on freedom of speech. However, social media platforms, in parallel, have expedited and intensified the dissemination of false information and hate speech, presenting considerable threats to societal cohesion, peace, and stability. In order to preserve access to information, there is a pressing need for regulation of these platforms without delay. Simultaneously, it is crucial to safeguard freedom of expression and human rights.
Reuters. Sheila Dang. November 6, 2023.
Social media researchers have canceled, suspended or changed more than 100 studies about X, formerly Twitter, as a result of actions taken by Elon Musk that limit access to the social media platform, nearly a dozen interviews and a survey of planned projects show.
Musk's restrictions on critical methods of gathering data on the global platform have suppressed the ability to untangle the origin and spread of false information during real-time events such as Hamas' attack on Israel and the Israeli airstrikes in Gaza, researchers told Reuters.
Politico. Mark Scott. November 9, 2023.
Social media platforms that don’t clamp down on illegal and hateful content will face the full force of the United Kingdom’s new online safety rules, according to Melanie Dawes, head of the country’s regulator in charge of the new regime.
Washington Post. Naomi Nix, Cat Zakrzewski. November 30, 2023.
The US federal government has stopped warning some social networks about foreign disinformation campaigns on their platforms, reversing a years-long approach to preventing Russia and other actors from interfering in American politics less than a year before the US presidential elections. Meta no longer receives notifications of global influence campaigns from the Biden administration, halting a prolonged partnership between the federal government and the world’s largest social media company. Federal agencies have also stopped communicating about political disinformation with Pinterest, according to the company. In July 2023, a federal judge limited the Biden administration’s communications with tech platforms in response to a lawsuit alleging such coordination ran afoul of the First Amendment by encouraging companies to remove falsehoods about COVID-19 and the 2020 election.
Nature. January 9, 2024.
In 2024’s super election year, providers of online search engines and their users need to be especially aware of how online misinformation can seem all too credible.
This year, countries with a combined population of 4 billion — around half the world’s people — are holding elections, in what is being described as the biggest election year in recorded history. Some researchers are concerned that 2024 could also be one of the biggest years for the spreading of misinformation and disinformation. Both refer to misleading content, but disinformation is deliberately generated.
CNN. Donie O’Sullivan & Clare Duffy. December 4, 2023.
A nationally recognized online disinformation researcher has accused Harvard University of shutting down the project she led to protect its relationship with mega-donor and Facebook founder Mark Zuckerberg.
The allegations, made by Dr. Joan Donovan, raise questions about the influence the tech giant might have over seemingly independent research. Facebook's parent company Meta has long sought to defend itself against research that implicates it in harming society, from the proliferation of election disinformation to the creation of addictive habits in children. Details of the disclosure were first reported by The Washington Post.
Al Jazeera. February 26, 2024.
Tech giant’s head of EU affairs says team will bring together experts from across the company.
Facebook owner Meta has unveiled plans to launch a dedicated team to combat disinformation and harms generated by artificial intelligence (AI) ahead of the upcoming European Parliament elections.
EuroNews. Cynthia Kroet. February 14, 2024.
Major online platforms must tackle disinformation under new EU digital services rules that take effect on Saturday. TikTok announced today (14 February) that it will set up what it calls in-app election centres for each of the 27 EU countries.
The move by the social media network is a bid to reduce the spread of online misinformation as the bloc goes to the polls in June. The tool will be available as of next month to ensure people can “separate fact from fiction”, Kevin Morgan, TikTok’s head of trust and safety for Europe, the Middle East and Africa, said in a statement.
EuroNews. Cynthia Kroet. February 26, 2024.
The online platform will add fact-checking organisations in Bulgaria, France, and Slovakia to its network ahead of the EU elections. US tech giant Meta, which owns Facebook and Instagram, is to set up an EU-specific 'operations centre' to combat misinformation around the European Parliament elections in June, the company has announced weeks after its Chinese rival TikTok made a similar move.
The Guardian. Rachel Leingang. February 10, 2024.
Innovation is outpacing our ability to handle misinformation, experts say. That makes falsehoods easy to weaponize. As the United States’ fractured political system prepares for a tense election, social media companies may not be prepared for an onslaught of viral rumors and lies that could disrupt the voting process – an ongoing feature of elections in the misinformation age.
Insider Intelligence. Sara Lebow. February 27, 2024.
Key stat: 64% of US adults think disinformation and “fake news” are most widespread on social media, according to a September 2023 survey from UNESCO and Ipsos.
It’s a presidential election year, which means the risk for misinformation and disinformation on social media is rampant. That presents a major brand safety challenge for marketers, whose content could end up next to unsavory posts.
Reuters. Andrew Chung & John Kruzel. March 18, 2024.
The U.S. Supreme Court justices on Monday appeared skeptical of a challenge on free speech grounds to how President Joe Biden's administration encouraged social media platforms to remove posts that federal officials deemed misinformation, including about elections and COVID-19.
Carnegie Mellon University. Maryam Saeedi. March 23, 2024.
In an era where social media platforms have become battlegrounds for information integrity, a new study sheds light on the mechanics of disinformation spread and offers innovative solutions to counteract it.
Conducted by a team of researchers from Brandeis University, George Mason University, the Massachusetts Institute of Technology, and Carnegie Mellon University, the study examined the dynamics of "disinformation wars", the intentional spread of fake news by accounts posing as ordinary users on platforms such as X (formerly Twitter). The method has proved alarmingly effective at misleading the public.
The Record. Suzanne Smalley. March 22, 2024.
Meta’s decision to close its CrowdTangle division — a tool that tracks content across social media — has raised the ire of more than 100 research and advocacy groups who say it will make it harder to fight disinformation.
Groups including the Mozilla Foundation, the Center for Democracy and Technology and Access Now sent the social media behemoth an open letter Thursday decrying the decision to shutter the unit in August, asking Meta to, at a minimum, invest in CrowdTangle through January. Meta announced it would close CrowdTangle last week.
Agence France-Presse. March 20, 2024.
The U.S. Treasury Department imposed sanctions Wednesday against two people and their Russia-based companies it accused of supporting a Kremlin-directed disinformation campaign involving the impersonation of legitimate news websites.
The sanctions targeted the Moscow-based company Social Design Agency and its founder, Ilya Andreevich Gambashidze, as well as the Russia-based Company Group Structura and its owner, Nikolai Aleksandrovich Tupikin, according to a statement from the Treasury Department.