Brookings Institution: The Role of Technology in Online Misinformation

This report outlines the logic of digital personalization, which uses big data to analyze individual interests and determine the types of messages most likely to resonate with particular demographics. Those same technologies can also operate in the service of misinformation through text prediction tools, which take user inputs and produce new text as credible as the original. The report addresses potential policy solutions that can counter digital personalization, closing with a discussion of regulatory or normative tools that are less likely to be effective in countering the adverse effects of digital technology.


Algorithmic Transparency

Algorithmic transparency is openness about the purpose, structure and underlying actions of the algorithms used to search for, process and deliver information. An algorithm is a set of steps that a computer program follows in order to make a decision about a particular course of action.
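
To make the definition concrete, here is a minimal, hypothetical sketch of what transparency about such an algorithm could look like in practice: a feed-ranking function whose scoring steps and weights are fully disclosed and therefore auditable. The signal names and weights are invented for illustration.

```python
# Hypothetical example of algorithmic transparency: a feed-ranking
# algorithm whose decision steps and weights are published so they can
# be audited. Signal names and weights are invented for illustration.

RANKING_WEIGHTS = {"recency": 0.5, "engagement": 0.3, "source_reliability": 0.2}

def score_post(post: dict) -> float:
    """Combine normalized signals (each in [0, 1]) into a single score."""
    return sum(RANKING_WEIGHTS[signal] * post[signal] for signal in RANKING_WEIGHTS)

def rank_feed(posts: list) -> list:
    """The 'set of steps' is fully visible: score each post, sort descending."""
    return sorted(posts, key=score_post, reverse=True)

posts = [
    {"id": "a", "recency": 0.9, "engagement": 0.2, "source_reliability": 0.8},
    {"id": "b", "recency": 0.4, "engagement": 0.9, "source_reliability": 0.3},
]
print([p["id"] for p in rank_feed(posts)])  # → ['a', 'b']
```

Publishing the weights does not by itself make a ranking fair, but it turns a black-box decision into one outsiders can inspect and contest.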


There’s a Fix to Disinformation: Make Social Media Algorithms Transparent

The author cites a number of examples and makes the case for considering algorithmic transparency as part of our national defence.


Algorithmic Transparency in the Public Sector

A YouTube video presented by Natalia Domagala of AI Ethics: Global Perspectives. Drawing on her professional experience working on data ethics, open data and open government, Domagala explains the concept of algorithmic transparency and why it is a critical need in our society today. She shares different examples of algorithmic transparency measures from Europe and North America with a special focus on the UK’s new Algorithmic Transparency Standard. She concludes her module with an outlook on the field of algorithmic transparency over the next few years and suggestions on what actors in the field ought to focus on going forward.


Tools That Fight Disinformation Online

A list of online tools available to help build understanding of techniques involved in the dissemination of disinformation; detection and tracking of trollbots and untrustworthy Twitter accounts; the tracking and detection of potential manipulation of information spreading on Twitter; tools designed for collaborative verification of internet content; fact-checking tools; verification tools; tools that rate news outlets based on “probability of disinformation on a specific media outlet”; and many more.


How Blockchain Can Help Combat Disinformation

As digital disinformation grows more and more prevalent, there’s one emerging technology with the potential to address many of the root causes of and risks associated with misleading and manipulated media: blockchain. While it’s no panacea, blockchain can help in three key areas: First, a blockchain-based system could offer a decentralized, trusted mechanism for verifying the provenance and other important metadata for online content. Second, it could enable content creators and sharers to maintain a reputation independent of any publication or institution. And finally, it makes it possible to financially incentivize the creation and distribution of content that meets community-driven standards for accuracy and integrity. Of course, any technological solution will have to be complemented by substantial policy and education initiatives — but in an ever-more complex digital media landscape, blockchain offers a promising starting point to ensure we can trust the information we see, hear, and watch.
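
As a rough illustration of the first idea ‒ a tamper-evident record of content provenance ‒ the following sketch hash-chains metadata records so that altering any earlier entry invalidates every later one. This is a simplified toy, not a real blockchain: it omits consensus, signatures, and decentralization, and all field names are assumptions.

```python
# Toy hash chain for content provenance metadata. Editing any record
# changes its hash and breaks the link stored in every later record.
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 digest of a metadata record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, content_digest: str, creator: str) -> list:
    """Link a new provenance record to the hash of the previous one."""
    prev = record_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "content": content_digest, "creator": creator})
    return chain

def verify_chain(chain: list) -> bool:
    """Tampering with any earlier record breaks every later link."""
    return all(chain[i]["prev"] == record_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_record(chain, hashlib.sha256(b"original photo bytes").hexdigest(), "photographer")
append_record(chain, hashlib.sha256(b"cropped photo bytes").hexdigest(), "news desk")
print(verify_chain(chain))  # → True; flips to False if any record is edited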


Is AI the only antidote to disinformation?

World Economic Forum. July 20, 2022.

The stability of our society is more threatened by disinformation than anything else we can imagine. It is a pandemic that has engulfed small and large economies alike. People around the world face threats to life and personal safety because of the volumes of emotionally charged and socially divisive pieces of misinformation, much of it fuelled by emerging technology. This content either manipulates the perceptions of people or propagates absolute falsehoods in society.


Break free from misinformation in an escape room (Video clip)

Center for an Informed Public. June 14, 2022.

Our mission is to resist strategic misinformation, promote an informed society, and strengthen democratic discourse.

A research project of the University of Washington’s Center for an Informed Public, in partnership with the UW Technology & Social Change Group, UW GAMER Research Group and Puzzle Break, immerses people in an interactive escape room of manipulated media, social media bots, deep fakes, and other forms of deception to learn about misinformation. These games are designed to improve people’s awareness of misinformation tactics and generate reflection on the emotional triggers and psychological biases that make misinformation so powerful.


Image Provenance Analysis for Disinformation Detection

Composite images are the outcome of combining pieces extracted from two or more other images, sometimes with the intent to deceive the observer and convey false narratives. Consider an image suspected of being a composite, and a large corpus of images that might have donated pieces to the composite (such as photos from social media). In this conversation, we will discuss our most recent advances in provenance analysis, concluding with our latest endeavours towards extending it to unveil disinformation campaigns.
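
One early step in provenance analysis of this kind is finding which corpus images are most similar to the suspect composite. The sketch below uses a toy 8x8 average hash and Hamming distance for that ranking; real provenance systems use far more robust matching, and all names here are illustrative.

```python
# Toy donor-image ranking via perceptual (average) hashing.
def average_hash(pixels: list) -> int:
    """64-bit hash of an 8x8 grayscale image: bit = 1 where pixel >= mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def rank_donors(suspect_hash: int, corpus: dict) -> list:
    """Corpus images closest in hash distance are the best donor candidates."""
    return sorted(corpus, key=lambda name: hamming(suspect_hash, corpus[name]))

suspect = [[r * 8 + c for c in range(8)] for r in range(8)]          # toy gradient
inverted = [[63 - (r * 8 + c) for c in range(8)] for r in range(8)]  # its opposite
corpus = {"same_scene.jpg": average_hash(suspect),
          "unrelated.jpg": average_hash(inverted)}
print(rank_donors(average_hash(suspect), corpus))  # → ['same_scene.jpg', 'unrelated.jpg']
```

In a real pipeline, the top-ranked candidates would then be fed to finer-grained matching to decide which regions of the composite each donor contributed.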

Video of event included in this site.


  • Walter Scheirer, Dennis O. Doughty Collegiate Associate Professor, University of Notre Dame
  • Daniel Moreira, Incoming Assistant Professor, Loyola University


Twitter Looks to Prevent a Disinformation Free-for-All Ahead of 2022 Midterms

PCMag. Nathaniel Mott. August 11, 2022.

By applying its Civic Integrity Policy to the upcoming US elections, Twitter is looking to 'enable healthy civic conversation' on its platform. (Don't laugh.) The company expanded the Civic Integrity Policy ahead of the 2020 presidential election to "further protect against content that could suppress the vote and help stop the spread of harmful misinformation that could compromise the integrity of an election or other civic process." Now it's looking to apply those same measures to the 2022 midterms being held in November.


TWITTER - The mission of our civic integrity work is to protect the conversation on Twitter during elections or other civic processes.

We're working to prepare for elections, elevate credible information, and help keep you safe on Twitter. Our civic integrity policy aims to prevent the use of Twitter to share or spread false or misleading information about a civic process (e.g., elections or census) that may disrupt or undermine public confidence in that process.

This policy is enforced when the risk for manipulation or interference is highest — generally a few months before and a couple of weeks after election day, depending on local and external factors. This policy is an additional, temporary protection on top of all the Twitter Rules, which are enforced year-round.


Justice Sees Fake News Disaster, and TSE Seeks Police Power to Act in The Final Stretch of Brazil's Election

UOL. Patricia Campos Mello. October 20, 2022.

Court will vote on a resolution that extends the power to act against misinformation and also ban paid advertising on the internet during the election period.

Chief Justice of the TSE (Supreme Electoral Court), Alexandre de Moraes, had a meeting this Wednesday (19) with representatives of the main social media platforms. At the meeting, he said that the platforms' performance was reasonably good in the first round, but that in this second round the fake news situation is disastrous.


Disinformation Day 2022 Considers Pressing Need for Cross-sector Collaboration and New Tools for Fact Checkers

University of Texas. Stacey Ingram-Kaleh. November 9, 2022

October 26, 2022 marked the first annual Disinformation Day hosted by Good Systems’ “Designing Responsible AI Technologies to Curb Disinformation” research team. Approximately 150 attendees from across the globe came together virtually to discuss challenges and opportunities in curbing the spread of digital disinformation. Thought leaders representing a range of disciplines and sectors examined the needs of fact checkers, explored issues of bias, fairness, and justice in mis- and disinformation, and outlined next steps for addressing these pressing issues together.

European Commission to revise Code of Practice against Disinformation

Lexology. Herbert Smith Freehills. March 31, 2022

The Code of Practice against Disinformation was published in September 2018 and was subsequently signed by Facebook, Google, Mozilla and Twitter, among others. The Code is a self-regulatory document and, following European Commission assessments and reports on adherence, guidance was issued in May 2021 to address shortfalls in the Code of Practice and provide a more robust monitoring framework. Most recently, the Commission announced that there will be 26 new signatories joining the drafting process for a revised version of the Code, expected to be released by the end of March 2022.

Brief: Disinformation Risk in the United States Online Media Market, October 2022

Global Disinformation Index. October 21, 2022

GDI’s research looked at 69 U.S. news sites, selected on the basis of online traffic and social media followers, as well as geographical coverage and racial, ethnic and religious community representation. The index scores sites across 16 indicators – indicators that themselves contain many more individual data points – and generates a score for the degree to which a site is at risk of disinforming its readers.

The data from the study corroborates today’s general impression that hyperbolic, emotional, and alarmist language is a feature of the U.S. news media landscape.
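
As a hedged sketch of how an indicator-based index like this can roll up into a single site-level score, the example below averages a few normalized indicators; the indicator names, scale and equal weighting are invented and do not reflect GDI's actual methodology.

```python
# Illustrative (not GDI's) indicator roll-up into one site risk score.
def site_risk_score(indicators: dict) -> float:
    """Average indicator values (each normalized to 0-100) into one risk score."""
    return sum(indicators.values()) / len(indicators)

# Three hypothetical indicators, for brevity; higher = riskier.
site = {"sensational_language": 80.0,
        "byline_transparency": 40.0,
        "sourcing_quality": 60.0}
print(site_risk_score(site))  # → 60.0
```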

The Truth in Fake News: How Disinformation Laws Are Reframing the Concepts of Truth and Accuracy on Digital Platforms

Brill, European Convention on Human Rights Law Review. Paolo Cavaliere. October 11, 2022

The European Union’s (EU) strategy to address the spread of disinformation, and most notably the Code of Practice on Disinformation and the forthcoming Digital Services Act, tasks digital platforms with a range of actions to minimize the distribution of issue-based and political adverts that are verifiably false or misleading. This article discusses the implications of the EU’s approach with a focus on its categorical approach, specifically what it means to conceptualize disinformation as a form of advertisement and by what standards digital platforms are expected to assess the truthful or misleading nature of the content that they distribute because of this categorization. The analysis will show how the emerging EU anti-disinformation framework marks a departure from the European Court of Human Rights’ consolidated standards of review for public interest and commercial speech and the tests utilized to assess their accuracy.


Disinformation and freedom of expression during armed conflict

UN Web TV. October 19, 2022

At the 77th Session of the UN Human Rights Council, the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression presented her new report on disinformation and freedom of opinion and expression during armed conflicts.

Hate and disinformation spiked after Musk's Twitter takeover | View

Euronews. Heather Dannyelle Thompson. November 24, 2022

In just over two weeks, Musk’s takeover of Twitter has rocked the internet. Hate speech and disinformation have already spiked in what appears to be mostly trolls and right-wing extremists seeking to test the boundaries of Musk’s approach to unchecked free speech on his newly acquired platform.

The chaos at Twitter comes at a distinct time of transformation of the internet. Not only is the online space facing regulation globally, but the advances in artificial intelligence that power tomorrow’s tools of disinformation are not slowing down either.


Artificial Intelligence and Deepfakes

Designing Responsible AI Technologies to Curb Disinformation

University of Texas. October 2022

The rise of social media and the growing scale of online information have led to a surge of intentional disinformation and incidental misinformation. It is increasingly difficult to tell fact from fiction, and the challenge is more complex than simply differentiating “fake news” from simple facts. This project uses qualitative methods and machine learning models to understand how digital disinformation arises and spreads, how it affects different groups in society, and how to design effective human-centred interventions.

What a Pixel Can Tell: Text-to-Image Generation and Its Disinformation Potential

Disinfo Rada: Democracy Reporting International. September 2022

In recent years, many new tools and tactics have been used to generate and spread disinformation online. While the wider public and experts grapple with the emergence of deepfakes ‒ images, video or audio altered using artificial intelligence (AI) that are difficult to detect as false ‒ a whole new threat is emerging on the horizon: fully synthetic content, such as hyperrealistic images created from text prompts, powered by AI. In contrast to current methods, this technology does not distort existing photos or videos ‒ it creates entirely new ones. When used for disinformation purposes, text-to-image generation models enable disinformation actors to produce imagery to support false narratives. To gain a better understanding of how much of a threat text-to-image generation poses to democracy, we interviewed leading global experts who work directly in the fields of AI, disinformation and text-to-image generation.

High-school students should be taught to spot fake videos and disinformation, public safety minister says

Globe and Mail. Marie Woolf. November 18, 2022

High-school students should be educated about how to spot fake videos and photos and disinformation, because they are so prevalent online, federal Public Safety Minister Marco Mendicino says.

Speaking from the G7 summit in Germany, the minister said disinformation is “one of the most pervasive threats to all our democracies right now” and more needs to be done to raise awareness and equip Canadians to navigate its dangers.

Is Europe ready for an information war?

Debating Europe. June 23, 2022

What does it mean to “win the information war”? During the Russian invasion of Ukraine, headlines have proclaimed Ukraine to be “winning” its information war against Russia. But what is an information war? Is it a fancy name for propaganda? Does it also include, for example, controlling the flow of information to open source platforms (which can then be geolocated using Open Source Intelligence (OSINT) techniques)? What might future information war mean in a world of the metaverse and Extended Reality (XR)?


Platforms, Algorithms and Blockchain

Twitter takeover: fears raised over disinformation and hate speech

The Guardian. Dan Milmo & Alex Hern. October 28, 2022

Elon Musk’s Twitter acquisition has been polarizing, sparking reactions from politicians, regulators and non-profits across different continents.

Some have expressed concerns about potential changes to Twitter’s content moderation policies now that it’s in the hands of the Tesla billionaire, while others celebrated how they expect the platform’s newly minted leader will handle content and speech on Twitter.

Senior politicians in the UK and Europe on Friday warned Musk over content moderation on Twitter, with the EU stressing the platform will “fly by our rules” and a UK minister expressing concerns over hate speech under the billionaire’s ownership.

Social Media and the 2022 Midterm Elections: Anticipating Online Threats to Democratic Legitimacy

Center for American Progress. E. Simpson, A. Conner, A. Maciolek. November 3, 2022.

Social media companies continue to allow attacks on U.S. democracy to proliferate on their platforms, undermining election legitimacy, fuelling hate and violence, and sowing chaos.

This issue brief outlines what is needed from social media companies and identifies three of the top threats they pose to the 2022 midterm elections—the season opener for the 2024 presidential election.

Truth Cops: Leaked Documents Outline DHS’s Plans to Police Disinformation

The Intercept. Ken Klippenstein & Lee Fang. October 31, 2022

The Department of Homeland Security is quietly broadening its efforts to curb speech it considers dangerous, an investigation by The Intercept has found. Years of internal DHS memos, emails, and documents — obtained via leaks and an ongoing lawsuit, as well as public documents — illustrate an expansive effort by the agency to influence tech platforms.

As 2022 midterms approach, disinformation on social media platforms continues

PBS. David Klepper (AP). October 21, 2022  

With less than three weeks before the polls close, misinformation about voting and elections abounds on social media despite promises by tech companies to address a problem blamed for increasing polarization and distrust.

While platforms like Twitter, TikTok, Facebook and YouTube say they’ve expanded their work to detect and stop harmful claims that could suppress the vote or even lead to violent confrontations, a review of some of the sites shows they’re still playing catch-up with 2020, when then-President Donald Trump’s lies about the election he lost to Joe Biden helped fuel an insurrection at the U.S. Capitol.

History Is a Good Antidote to Disinformation About the Invasion of Ukraine

CIGI. Heidi Tworek. March 8, 2022

Much of the current misinformation online exists to scam and to manipulate through speed. TikTok has become a key platform for misleading content. TikTok’s algorithm appears to offer up many misleading videos alongside scam calls for donations. These videos often depict older conflicts or conflicts in other places; the posters claim they are occurring in Ukraine and can garner millions of views. Abbie Richards suggests that “TikTok’s platform architecture is amplifying fear and permitting misinformation to thrive at a time of high anxiety,” calling the platform’s design “incompatible with the needs of the current moment.” It is hard to resist the siren call of doom scrolling. But a slower accumulation of knowledge at moments of crisis can avoid hurtful faux pas and prevent inadvertent spreading of disinformation.

Experts grade Facebook, TikTok, Twitter, YouTube on readiness to handle midterm election misinformation

The Conversation. October 26, 2022

The 2016 U.S. election was a wake-up call about the dangers of political misinformation on social media. With two more election cycles rife with misinformation under their belts, social media companies have experienced identifying and countering misinformation. However, the nature of the threat misinformation poses to society continues to shift in form and targets. The big lie about the 2020 presidential election has become a major theme, and immigrant communities are increasingly in the crosshairs of disinformation campaigns – deliberate efforts to spread misinformation.

Social media companies have announced plans to deal with misinformation in the 2022 midterm elections, but the companies vary in their approaches and effectiveness. We asked experts on social media to grade how ready Facebook, TikTok, Twitter and YouTube are to handle the task.

Misinformation and hate are trending in this election year

CNN Politics. Zachary B. Wolf. October 31, 2022

Misinformation is trending now that Elon Musk, the self-described “Chief Twit,” has bought Twitter, his favourite social media platform.

Meanwhile, displays of hate are breaking out in public now that Kanye West, who now goes by Ye, has despicably fashioned himself as a folk hero for those spewing antisemitic messages, pushing his own anti-Jewish conspiracy theories.

The stories dovetail not just because they are built on the wild spread of false claims, but also because West’s Twitter account – locked in early October for an antisemitic tweet in which he said he was going “death con 3 on Jewish people” – was recently reactivated. More on that below.

Musk’s Twitter takeover highlights disinformation risk

Emerald Insight. November 7, 2022

Musk has repeatedly said he wants the platform to prioritize ‘free speech,’ but has also reassured European regulators that he will be complying with local laws, even where they involve content screening. Although Twitter’s policy has yet to be finalized, the turmoil highlights the risks of online disinformation. The business models of social media companies and tech platforms contain strong incentives that promote misinformation and disinformation. Advertising comprises 80% of the income of Google's parent company Alphabet, and well over 90% for Twitter and for Meta, the owner of Facebook and Instagram.

Social media offer advertisers hundreds of millions of users who are difficult to reach through other media. High levels of engagement ensure that the audience becomes 'captive'. Moreover, using data collected on users enables platforms to match advertisers and potential customers efficiently.

Coalition Sends Letter Urging Social Media Platforms to Prevent Online Election Disinformation

Legal Defence Fund. October 13, 2022

Today, LDF and a coalition of civil rights, public interest, voting rights, and other organizations, sent a letter urging social media companies to take immediate steps to curb the spread of voting disinformation in the midterms and future elections and to help prevent the undermining of our democracy. This letter is a follow up to another sent last May. Many companies have announced updates to their voter interference and disinformation policies in recent weeks but the policies have little effect unless enforced continually and consistently.

CIA analyst decries free speech 'nonsense' on Musk's Twitter, claims it will benefit Russian disinformation

Fox News. Gabriel Hays. November 26, 2022

CIA analyst Bob Baer claimed that "Putin is going to be all over Twitter" thanks to billionaire owner Elon Musk’s policies for running the company.

He also stated that the "voice of the people" Musk claimed wants free speech is "Russian intelligence" looking to undermine American support for Ukraine.

During a recent segment on CNN, the analyst argued that Musk’s pro-free-speech attitude towards operating the company, particularly in the way he has decided to reinstate banned accounts and not suspend users for any speech, means Russian hackers will benefit.

What Russia’s Cyber Sovereignty Woes Tell Us About a Future “Splinternet”

International Forum for Democratic Studies. Elizabeth Kerley. October 13, 2022  

Since launching its full-scale invasion of Ukraine, Russia has been putting its longstanding aspirations for “cyber sovereignty” to the test. In keeping with its longstanding objectives of “technological independence and information control,” the Kremlin has promoted homegrown tech in the face of sanctions while also halting the flow of independent information. Meanwhile this April, 61 mostly democratic countries signed a declaration articulating a vision for the internet that is “open, free, global, interoperable, reliable, and secure.” What does this clash of visions portend for the digital domain?

Twitter, others slip on removing hate speech, EU review says

Tech Explore. Kelvin Chan. November 24, 2022  

Twitter took longer to review hateful content and removed less of it in 2022 compared with the previous year, according to European Union data released Thursday.

The EU figures were published as part of an annual evaluation of online platforms' compliance with the 27-nation bloc's code of conduct on disinformation.

Twitter wasn't alone—most other tech companies signed up to the voluntary code also scored worse. But the figures could foreshadow trouble for Twitter in complying with the EU's tough new online rules after owner Elon Musk fired many of the platform's 7,500 full-time workers and an untold number of contractors responsible for content moderation and other crucial tasks.

90% of People Claim They Fact-Check News Stories As Trust in Media Plummets

Security Org. Aliza Vigderman. November 4, 2022

As the popularity of social media surpasses traditional news sources, information has grown more unreliable, and “fake news” becomes harder to detect. The same digital platforms that empower global communication seed doubt and spread misinformation.

The misinformation and disinformation that have influenced elections and hampered public health policies also damaged faith in all forms of media. Meanwhile, political attacks on some news sources have divided Americans further into partisan camps.

The nation is united, however, in recognizing the problem. Our second annual study of more than 1,000 people revealed that nine out of 10 American adults fact check their news, and 96 percent want to limit the spread of false information.

Biden admin pushed to bar Twitter users for COVID ‘disinformation,’ files show

New York Post. Jesse O’Neill. December 26, 2022.

The Biden White House pressured Twitter to both “elevate” and “suppress” users based on their stances on COVID-19 — ultimately “censoring info that was true but inconvenient” to policy makers, according to the latest edition of the “Twitter files”. The coercion campaign during the pandemic began with the Trump administration — which asked Twitter to crack down on stories about panic buying and “runs on grocery stores” in the early days of the outbreak — but was stepped up under Biden, whose administration was focused on the removal of “anti-vaxxer accounts,” according to The Free Press reporter David Zweig.

Congressman Schiff, Senator Whitehouse Urge Meta to Maintain Policies on Election Misinformation, Uphold Trump Suspension

News Release. December 14, 2022.

Congressman Adam Schiff (D-Calif.) and Senator Sheldon Whitehouse (D-R.I.) sent a letter to Meta's President of Global Affairs, Nicholas Clegg, urging Meta to maintain its commitment to keeping dangerous election denial content off its platform.

“After each election cycle, social media platforms like Meta often alter or roll back certain misinformation policies, because they are temporary and specific to the election season,” Schiff and Whitehouse write. “Doing so in this current environment, in which election disinformation continuously erodes trust in the integrity of the voting process, would be a tragic mistake. Meta must commit to strong election misinformation policies year-round, as we are still witnessing falsehoods about voting and the prior elections spreading on your platform.”

For Teens (and Adults) Fighting Misinformation, TikTok Is Still ‘Uncharted Territory’

EdSurge. Nadia Tamez-Robledo. December 7, 2022.

TikTok may have started as the preferred social media platform for modern dance crazes, but the platform’s growth has made it a home for something else—misinformation. Add to that its popularity among teens and its powerful algorithm, and you have a mix that worries some educators about TikTok’s potential negative impacts for young users. A recent study from NewsGuard found that roughly one in five TikTok videos contain misinformation, whether the topic is COVID-19 vaccines or the Russia-Ukraine war.

ChatGPT: Faking it, a genuine artificial concern

The Economic Times. January 23, 2023.

Spread of misinformation can have serious consequences, from influencing public opinion to undermining trust in institutions. With the ability to generate large amounts of text quickly and convincingly, generative AI tools like ChatGPT could be used to create and disseminate fake news on a large scale.

3 Lessons on Misinformation in the Midterms Spread on Social Media

Brennan Center for Justice. Maya Kornberg et al. January 5, 2023.

The Brennan Center has developed recommendations on how to fight misinformation based on analysis of how it takes root and circulates. Election-related falsehoods corrode American democracy. Since 2020, lies about a stolen presidential election cropped up in dozens of campaigns for election administrator positions and spurred unprecedented threats to election officials. The result has been a deluge of resignations that drained expertise from election offices across the country. Further, public trust in elections has plummeted amid disinformation promoted by Donald Trump and other prominent election deniers.

Handbook to combat CBRN disinformation

United Nations Interregional Crime and Justice Institute. January 13, 2023.

To produce the Handbook to combat disinformation, UNICRI has monitored several social media platforms, paying specific attention to the role of violent non-state actors, namely: violent extremists; terrorist organizations (particularly those associated with ISIL, also known as Da’esh and Al-Qaida); and organized criminal groups. The Handbook aims at enhancing understanding of CBRN disinformation on social media while developing competencies to prevent and respond to disinformation with a specific focus on techniques for debunking false information. It also equips practitioners with the competencies to effectively analyse, understand and respond to CBRN disinformation in the media and on social media platforms.

Climate change misinformation 'rocket boosters' on Elon Musk's Twitter

CTV News. David Klepper (AP Staff). January 19, 2023.

Search for the word "climate" on Twitter and the first automatic recommendation isn't "climate crisis" or "climate jobs" or even "climate change" but instead "climate scam." Clicking on the recommendation yields dozens of posts denying the reality of climate change and making misleading claims about efforts to mitigate it. Such misinformation has flourished on Twitter since it was bought by Elon Musk last year, but the site isn't the only one promoting content that scientists and environmental advocates say undercuts public support for policies intended to respond to a changing climate.

As Deepfakes Flourish, Countries Struggle With Response

NYT. Tiffany Hsu. Jan 22, 2023.

Deepfake technology — software that allows people to swap faces, voices and other characteristics to create digital forgeries — has been used in recent years to make a synthetic substitute of Elon Musk that shilled a cryptocurrency scam, to digitally “undress” more than 100,000 women on Telegram and to steal millions of dollars from companies by mimicking their executives’ voices on the phone. In most of the world, the authorities can’t do much about it. Even as the software grows more sophisticated and accessible, few laws exist to manage its spread.

Over 1330 Facebook groups and pages spreading disinformation identified in the Balkan region

Debunk. Radovan Ognjenovic & Daniela Vukcevic. December 30, 2022.

Groups and pages on social media have been continuously spreading and amplifying misleading content on four different topics – Russia, NATO, LGBTQIA+, and COVID-19. Although not exclusively, administrators of these groups and pages shared misleading content containing all the aforementioned sentiments – pro-Russian, anti-NATO, anti-LGBT, and skepticism about the efficiency of COVID-19 measures – regardless of their country of origin or language.

DISINFORMATION: Top Risks of 2023

EURASIA GROUP. Ian Bremmer & Cliff Kupchan

Rapid-fire advancements in artificial intelligence could help misinformation thrive in the year ahead, a new report is warning. That’s according to the Top Risk Report for 2023, an annual document from the U.S.-based geopolitical risk analysts at the Eurasia Group. The “weapons of mass disruption” that are emerging from speedy technological innovations “will erode social trust, empower demagogues and authoritarians, and disrupt businesses and markets,” the report said.
