Browse Articles of Interest

 

Obama: I Underestimated the Threat of Disinformation.

April 2022. The Atlantic. By Jacob Stern

The Atlantic's editor in chief, Jeffrey Goldberg, in conversation with Barack Obama about the social web, Ukraine, and the future of democracy.

 

Feds inch closer to making social media less toxic.

March 2022. National Observer

Canada could soon see stricter rules to tackle disinformation, hate speech and other harmful content on social media and online platforms. Heritage Minister Pablo Rodriguez announced the creation of a group of experts who will advise the government on how best to deal with the problem while protecting freedom of speech. The 12-person committee will assess ways to address a host of problems, including hate speech, child exploitation and incitements to violence.

 

Aging in an Era of Fake News

Misinformation causes serious harm, from sowing doubt in modern medicine to inciting violence. Older adults are especially susceptible: they shared the most fake news during the 2016 U.S. election. The most intuitive explanation for this pattern lays the blame on cognitive deficits. Yet although older adults forget where they learned information, their fluency remains intact, and knowledge accumulated across decades helps them evaluate claims. Thus, cognitive declines cannot fully explain older adults' engagement with fake news.

 

Digital media and misinformation: An outlook on multidisciplinary strategies against manipulation

This survey presents a systematic review with emphasis on exploring interdisciplinary paradigms and the different strategies that have been used to contain the spread of misinformation. Through an analysis of the existing literature, five main approaches were identified, systematized, and characterized through examples of guidelines, actions, projects and systems designed to curb misinformation. The analysis covers perspectives on journalism, education, governmental responses, computational solutions, and digital platforms.

 

The Canadian government’s response to foreign disinformation: Rhetoric, stated policy intentions, and practices. Nicole J. Jackson, International Journal, 2022.

In recent years, governments have considered how to respond to “disinformation.” However, there is little academic literature on Canada’s response in the area of security and foreign policy. This paper addresses this gap by analyzing how and why Canadian government foreign and security actors have “securitized” foreign disinformation. It argues that, since 2014, they have increased awareness about disinformation and transformed it into a matter of “security” through rhetoric and discursive framing, as well as stated policy intentions and actions. This has occurred in response to perceived threats, but without coherent policy. The findings suggest that challenges are linked to persistent difficulties in defining and understanding disinformation. The result has been fragmented actions, some of which may legitimize measures that deviate from “normal political processes.” The implications are that definitional challenges need to be addressed, the role of security actors assessed, and a clearly articulated, holistic strategy drawn up.

 

How One Social Media App Is Beating Disinformation

Line, arguably Taiwan’s most popular messaging app, is the main battleground of disinformation in Taiwan. Line quickly took over the Taiwanese market after it was launched in Japan in 2011 by a subsidiary of the Korean tech giant Naver Corporation. In 2019, approximately 90 percent of Taiwanese used the app, sending more than 9 billion messages per day. Like WhatsApp, Line’s design makes it easy to rapidly disseminate harmful and false content: it offers a high degree of anonymity, as user profiles often have only a name and picture, and by combining features such as an integrated news platform with private, encrypted group chats, it encourages users to share articles within the app. Users have to take an extra step to share to other apps, and this friction point keeps them on Line.

 

How Facebook and Google fund global misinformation. Nov 2021

An MIT Technology Review investigation, based on expert interviews, data analyses, and documents that were not included in the Facebook Papers, has found that Facebook and Google are paying millions of ad dollars to the operators of clickbait pages, bankrolling the deterioration of information ecosystems around the world.

 

Misinformation, Disinformation, and Online Propaganda

The research literature on misinformation, disinformation, and propaganda is vast and sprawling. This chapter discusses descriptive research on the supply and availability of misinformation, patterns of exposure and consumption, and what is known about the mechanisms behind its spread through networks. It offers a brief overview of the literature on misinformation in political science and psychology, which provides a basis for understanding the phenomena discussed here. It then examines what we know about the effects of misinformation and how they are studied. It concludes with a discussion of gaps in our knowledge and future directions for research in this area.