
The Yale Cyber Leadership Forum held its second session last Friday with two panel discussions on “Disinformation and the Future of Democracy.”

The forum, which this year is centered on “Bridging the Divide: National Security Implications of Artificial Intelligence,” is a collaboration between the Jackson Institute for Global Affairs and the Yale Law School. It aims to connect law, technology, policy and business approaches to cybersecurity.

“Disinformation really was not considered part of cybersecurity as it may have been originally defined,” Executive Director of International Security Studies Ted Wittenstein said. “It’s not about the systems and networks themselves as much as it’s about the people and our perceptions, […] the human dimension of cybersecurity.”

The first panel featured John Ternovski GRD ’21, Joshua Lam GRD ’22 SOM ’23, Libby Lange GRD ’22 and Elizabeth Goldberg GRD ’18. Wittenstein moderated the panel.

The panel opened with a discussion of “white noise,” a concept that Lam and Lange investigated with regard to disinformation campaigns in Xinjiang, China. In 2021, Lam and Lange co-authored a paper titled “White Noise: Pro-Government Tactics to Shape Xinjiang Discourse Online are Evolving,” which included an algorithmic analysis of over 300,000 tweets using a technique called topic modeling, a method in which an algorithm identifies potential topics based on recurring terms. They found that the hashtag #Xinjiang was bombarded with alternative narratives, including the recurrent pairing of phrases like “joy” and “happiness” with “Xinjiang.”
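The article does not describe the paper’s exact pipeline; as a rough illustration of what topic modeling involves, the sketch below uses latent Dirichlet allocation from scikit-learn, with a handful of invented placeholder tweets standing in for the real dataset.

```python
# A minimal sketch of topic modeling with latent Dirichlet allocation (LDA).
# The tweets below are invented placeholders; the actual dataset, preprocessing
# and model choices used in the "White Noise" paper are not described here.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "So much joy and happiness in Xinjiang today",
    "Tourism brings joy and happiness to families in Xinjiang",
    "Cotton harvest celebrations across the region this week",
    # ... in practice, hundreds of thousands of documents
]

# Convert the tweets into a document-term matrix of word counts
vectorizer = CountVectorizer(stop_words="english")
doc_term_matrix = vectorizer.fit_transform(tweets)

# Fit an LDA model that groups recurring terms into a small number of topics
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term_matrix)

# Print the highest-weighted terms for each inferred topic
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {topic_idx}: {', '.join(top_terms)}")
```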

Lange suggested that social media platforms like Twitter could add a “flag” to such posts that users could click to see more information from credible sources, similar to what many platforms did for COVID-19 and the presidential election.

As graduate students at Jackson, Lam and Lange had done a smaller-scope version of the project as part of a class on the programming language Python, taught by Casey King, the director of the capstone program at the Jackson Institute.

“One great thing about Jackson [is that] we were given more flexibility to take technical courses in other departments,” Lam said.

Wittenstein, Lam and Lange also compared the disinformation created by the Chinese government to suppress the advocacy efforts of Uyghur Muslims with the disinformation spread by the Russian government about Ukraine, which they said has been comparatively less effective. Lam pointed out that this could be because Ukraine is a much more fast-moving story in the eyes of western media, while western media has moved much more slowly in covering the crisis in Xinjiang, allowing the Chinese government to consolidate its control of information about the region.

The discussion then moved from what Wittenstein called “manipulated media” to “actual forged media” like deep fakes.

Ternovski shared two side-by-side videos that he designed for a study, one of which was real while the other was a deep fake. When the audience was asked to vote which was the real video, only two of about 20 audience members were correct.

Ternovski highlighted that deep fakes are getting “more and more convincing,” and also that the tools are becoming much more accessible and easy to use.

“To this day, deepfake manufacturing is still a little bit of an art, […] the package will get you 90 percent there but the final 10 percent is a lot of tweaking options, it’s a lot of touching up frame by frame,” Ternovski said. “We’re not at the point where the average American is going to easily make [a deep fake], but we’re moving in that direction.”

His study focused on how warnings about deep fakes on a video might impact a viewer’s ability to discern what is real and what is forged. Ternovski demonstrated that while warnings increased people’s skepticism, they did not make viewers better able to distinguish real from fake. If a warning was shown on a real video, he found, participants were likely to disbelieve that video too — a phenomenon that Ternovski called a “false positive effect.”

In Ternovski’s second study, participants watched Sen. Mitt Romney (R-UT) giving a speech about a real, but lesser known, policy stance — in this case, a pro-choice stance on abortion — and were randomly assigned to receive or not receive a warning about potential deep fakes in the video. He found that participants who received the warning not only disbelieved the real video, but also interpreted Romney as holding the opposite policy stance from the one expressed in the video, namely thinking that he was anti-abortion.

Ternovski concluded by suggesting that his findings complicate Lange’s proposal of including warning flags on topics that may contain misinformation, as such warnings did not necessarily prime people to think more critically.

Goldberg described some of the strategies she and her coworkers at Jigsaw have been employing to educate people about disinformation and manipulated media online. She showed the audience a video on false dichotomies that Jigsaw had made, an example of something she called an “inoculation video.”

When Goldberg started at Jigsaw almost three and a half years ago, she said, their violent extremism portfolio was exclusively focused on groups like ISIS and al-Qaeda, but she wanted to draw attention to a range of ideological extremism outside of that. At that point, she noticed a lot of “rumblings online” about white nationalism and spent a year and a half speaking to 36 former white supremacists to study patterns of radicalization.

“The radicalization journey is accelerated by the internet — you can be bombarded with and immersed in so much more information and more communities than you ever could before the internet,” Goldberg said. “What we found was very few folks make it all the way down that funnel [and act on the ideas they spread] because there is increasing social costs to joining a group or taking action with a group, but there are so many people on the internet who can discover ideas and start to engage with ideas.”

In one of their studies, they showed participants two “false narratives,” one of white supremacy and one of male supremacy, which were accompanied by inoculation videos. What was interesting, she said, was that a vast majority of people in their sample found that these inoculation videos helped them to question what they were viewing, but the inoculation videos were significantly less effective for viewers who were already believers of those ideologies.

“When we talked about interventions, […] the best approaches are at the beginning. It’s really hard to dislodge people once they have bought into it,” she said.

The second panel focused on harnessing the social sciences and humanities to counter disinformation and online extremism. It featured Molly Crockett, an associate professor of psychology; L.A. Paul, the Millstone Family professor of philosophy and cognitive science at Yale Law School; and Jason Stanley, the Jacob Urowsky professor of philosophy. The panel was moderated by Asha Rangappa, director of admissions and a senior lecturer in Global Affairs at the Jackson Institute.

“The topic of disinformation often gets really focused on the technology or the platforms,” Rangappa said in her introduction. “And I think we fail to see sometimes that it’s cross-disciplinary and how those disciplines intersect with the technology.”

The panel began with a discussion of the link between disinformation and fascism, and the paradoxical nature of social media as both a democratizing force that threatens authoritarians and a weapon used by authoritarians and far-right movements around the world.

Stanley argued against the idea of a new age of disinformation, suggesting instead that the problem society faces is a historic one rooted in how we create social identity: certain groups or governments highlight an out-of-context crime in order to create a fascist social and political movement by unifying people against a chosen enemy.

He illustrated his point with two examples: the ethnic cleansing of Rohingya Muslims in Myanmar, where five years of genocide and “what we call disinformation” by the Myanmar government grew out of a single real crime in 2012, and Russia’s current effort to spread disinformation claiming that there is a genocide of Russians in eastern Ukraine.

“The problem we face is this kind of social identity connection, this kind of ‘Carl Schmitt point’ that you create a nation by choosing an enemy […] and that’s what we’re seeing again and again,” Stanley said. “It’s not disinformation at all, it’s social identity formation by vilification.”

Crockett chimed in with an examination of the “moralized nature of […] hate speech and propaganda.” Crockett’s research lab investigates how people learn and make decisions in social situations, and they have been tracking expressions of moral outrage on social media at a large scale.

What they found, she said, is that if you look at incidents of hate speech across a variety of topics, they overlap with expressions of moral outrage 75 percent of the time. Moral outrage has become a feature of many fascist movements that have gained traction online, because, as her study found, moral outrage is more likely to go viral online.

“You get the design of social media that really rewards and spreads engaging content,” Crockett said. “And it turns out the kind of content that is most conducive to the rise of these movements is exactly the type of information that, you know, inadvertently but unfortunately, these platforms are designed to amplify.”

Stanley and Crockett debated the value of moral outrage online, as it can be deployed for different ends. Rangappa pointed out that awe, alongside anger, is another emotion that generates significant online engagement, as is currently visible in the inspiring posts made by Ukrainians.

“My Twitter feed is filled with moral outrage by me,” Stanley joked. “You know, we need moral outrage, and we need moral outrage in the face of fascism. […] This is an extreme time and anti-fascism requires an extreme response, building moral outrage.”

Moral outrage itself is not inherently good or bad, Crockett responded; it depends on the target. Likewise, she said, the tendency of social media platforms to amplify moral outrage is neither inherently good nor bad — it depends on the outcome.

Crockett added that some data suggests that moral outrage from the right gets a bigger boost on social media than moral outrage from the left, although she emphasized that this is a coarse distinction. Furthermore, news posts from websites that spread disinformation generate more moral outrage, suggesting that groups willing to use disinformation can generate more engagement online.

“Disinformation is engineered to create moral outrage,” she said.

Paul pointed out the problem of certain elite groups having a monopoly on the understanding of emerging technologies and how to use them. When new technology comes on the scene, she said, there is a period of adaptation where the experts have an advantage over everyone else. 

A small group of people is therefore able to exploit the unknowns, which she said is currently occurring with the emergence of AI technology and other ground-shifting changes like the pandemic and climate change.

The panel discussed the question of whether the internet allows for the distortion of moral outrage, with Crockett citing a phenomenon called pluralistic ignorance, where individuals hold a private belief that they worry is at odds with their perception of a collective belief. In her research, she found that when people expressing outrage online were asked to rate how angry or passionate they felt about what they posted, they tended to rank it lower than what a group of observers reading the post thought the poster was feeling.

“I don’t know if algorithms can distinguish between good and bad moral outrage,” Rangappa said, to which Stanley quipped, “Education can.”

Stanley’s most recent book, “How Fascism Works,” was published in 2018 and zeroes in on strategies employed by fascist regimes.

The closing discussion, as well as the audience question-and-answer session that followed, largely centered on how much responsibility technology companies should bear for finding solutions to these problems. All three panelists were pessimistic about the role of these companies.

“The tech companies are the enemy, […] they only care insofar as it impacts their profits,” Stanley said. “What’s important for us as academics is not to be complicit.”

Crockett and Paul echoed these thoughts, sharing the difficulties that individual academics, particularly those coming from the humanities, face in navigating collaborative research with technology companies and gaining access to the data those companies collect. Crockett criticized industry figures like Peter Thiel and Mark Zuckerberg for conveniently looking toward technological solutions to problems precipitated or exacerbated by technology itself.

Paul said that she has deep concerns about the relationship between academia and the technological world. There is a bastion of intellectual thought, she said, that is free from the capitalism that drives the technological industry. That bastion is being eroded by the massive differential in resources — both monetary and data — between the technological industry and academic research.

“It is thinking in the humanities that reminds us where we come from and where we need to go,” Crockett emphasized.

Stanley agreed with these sentiments, but pointed out that Yale, as an “elitist” and “anti-democratic” institution, will always fold into the technology companies. The onus falls onto individual academics within the institution to protect the values that make academia central to a democracy, he said.

Hongyi Shen ’24, who attended the discussion, said she found the approach of looking to ancient political theory to consider issues in modern technology particularly compelling, as well as the argument that social media platforms cannot claim to perform only editorial functions as content on their platforms substantively changes over time.

“I thought it was interesting to frame disinformation in the specific language of propaganda, which is identified to be the central problem of democracy,” Shen said.

The third and final session of the forum will be held on April 1.

MIRANDA JEYARETNAM
Miranda Jeyaretnam is the University desk editor. She previously covered the Jackson Institute for Global Affairs and developments at the National University of Singapore and Yale-NUS. Formerly the opinion editor under the YDN Board of 2022, she co-founded the News' Editorial Board and wrote for her opinion column 'Crossing the Aisle' in 2019-20. From Singapore, she is a junior in Pierson College, majoring in English.