Misinformation isn't that bad! (Or maybe I just want you to believe that)
'In Conversation' with former NSA lawyer Susan Hennessey at Duke

INTRODUCTION
Is the U.S. media guilty of misinformation on disinformation?
As election day fast approaches, election hacking continues to attract lots of attention. It's a vague term that has been used to describe everything from nation-state actors changing voter registration records to politicians discrediting mail-in ballots.
Perhaps nothing has caused more hand-wringing than worries over disinformation campaigns that might confuse the American public. Bot armies have been used to influence elections around the world. (I recently did a podcast about alleged Chinese influence in South Korea's election, for example.) The Covid-19 era has been marred by fake cures and lies about treatments and vaccine development. All of it seems to serve the goal of eroding public confidence and trust in institutions that are vital to the future of American democracy.
Or, maybe not.
Former NSA lawyer and current Brookings fellow Susan Hennessey spoke at Duke University last week on a wide range of cybersecurity topics, but when it came to election hacking, she was decidedly, and refreshingly, non-alarmist. America's election infrastructure, while far from perfect, is more secure than it has been at any point since the 2000 election debacle, she argued. And she downplayed the impact of social media mayhem on American voters. There's a lot of social science suggesting disinformation isn't much of a driver of U.S. voter behavior, she said.
"We risk ... inflating the significance of it, and doing the adversary's work for them," she warned. By talking so much about disinformation, we risk amplifying what in reality is a relatively small problem, creating the very distrust that America's adversaries are shooting for. We also risk "allocating really limited resources on a problem that might not be having that much of an impact," she said.
In today's "In Conversation," we'll get more of Susan's perspective on this, along with Duke professors David Hoffman and Shane Stansbury.

From: Bob
To: Susan
cc: David and Shane
Susan: As someone who's written about election hacking for a couple of decades, I'm not sure how I feel about your assessment of the current state of play. It sure feels like problems abound.
I want to home in on your statements about misinformation. I feel like I can't look at Twitter or Facebook for a moment without seeing messages from BobHater73876001 saying, "All journalists are traitors," or FreedomLover$$$99 saying, "Look, there's 9,000 pounds of U.S. mail in a trash bin."
But you seem convinced that those kinds of messages don't really impact U.S. voter behavior, which I found reassuring. Could you talk more about that? Do voters see through these kinds of messages? What's the right way to handle them? Should people (and even journalists) just ignore all these posts as noise? Are you suggesting Twitter isn't real life?

From: Susan
To: Bob, David and Shane
Bob,
Thanks for your question. I don't mean to suggest that social media disinformation campaigns aren't problematic, or that we should ignore them entirely. But I think it is important to take an evidence-based approach to understanding their impact, both because we need to allocate limited resources wisely and because overstating the impact risks inadvertently achieving an adversary's goal by undermining confidence in the integrity of our electoral process.
If we take the 2016 Russian social media trolling efforts, for example, the preliminary social science doesn't suggest a very strong impact. [One good paper on the topic is "Exposure to untrustworthy websites in the 2016 U.S. election" by Andrew Guess, Brendan Nyhan, and Jason Reifler: http://www.dartmouth.edu/~nyhan/fake-news-2016.pdf] The public debate about the topic tends to conflate the number of inauthentic accounts or individual posts with the metrics that matter more in measuring impact, such as how many people actually saw the content in question and, of those people, how many changed their voting behavior. If the answer is ultimately that a very small percentage of individuals exposed to disinformation content are persuaded by it, then the efforts of social media platforms are likely best spent on methods to reduce the "viral" spread, not necessarily the more difficult task of preventing content from being posted in the first place.
On the other hand, we have evidence that leaking hacked materials can have a substantial impact on media coverage and public perception of campaign issues. So it is important to invest significantly in the communications security of campaign-affiliated individuals and officials.
Susan

From: Bob
To: Susan, Shane and David
Thanks, Susan. Right on cue, hacked emails related to an election are back in the news this week.
I do find it persuasive that few, if any, voters change their minds about a candidate because of social media posts. I suspect they just reinforce existing beliefs, perhaps influencing turnout around the edges. But who cares what I suspect? That paper is great, and its conclusions pretty solid. I do think it would be good to put to rest the idea that "fake news" elected Donald Trump. On the other hand, Channel 4's recent exposé on Project Alamo, the 2016 operation designed to suppress Black voter turnout via highly targeted ads, raises the impact question in another way: No one thinks advertising influences their own actions, but billions of dollars say otherwise. (I strongly recommend the Channel 4 report.) David and Shane, what do you think?
Either way, I certainly agree that many folks are worried about the wrong things. So, how do we keep elected officials from getting into email hacking trouble? And how do we keep social media platforms from turning the most wacko conspiracy theories into trending topics?

From: Shane
To: Susan, David and Bob
Thanks, Bob and Susan. You both make terrific points, and I couldn’t agree more with Susan’s call for an evidence-based approach. We are still in the early stages of really understanding the impact of online disinformation efforts, and of course, if we don’t fully understand the problem, we can’t implement the right solutions.
One of our obstacles is a lack of good data. Unfortunately, with a few exceptions, the major social media platforms have not released the kind of information that would allow researchers to fully understand the way disinformation operations work. We hear this from social scientists attempting to study state-sponsored campaigns. To be sure, there are good reasons for companies to remain protective of user data and the analytics they use to identify inauthentic or fraudulent accounts, and some companies have made great strides in sharing information. But collectively we haven’t developed standards for transparency that could help us better understand the problem. This affects not only efforts to understand how people respond to disinformation (which I agree is a critical question), but also efforts to figure out where it is coming from. (An example: A congressional inquiry led Twitter to disclose, in 2017, accounts it had linked to Russia’s Internet Research Agency (IRA); however, the company’s attribution of some of those accounts was later called into question by academic researchers who observed that they appeared to belong to genuine U.S. citizens with no obvious connection to the IRA.)
Relatedly, I’m not sure that the question of how to prevent harmful content from going viral (or, perhaps more importantly, how to prevent viral content from having harmful consequences) can be divorced from the question of attribution. Some early evidence may suggest that Russian campaigns to influence recent elections fell short, and that domestic variables were more important, but we have a relatively short history to work with. As we all know, nation-state techniques are always evolving. If we find, for example, that state-sponsored algorithmic manipulation is having a material impact on the way information spreads, then we should know that so we can develop the right tools to combat it. But again, we’ll need to figure out standards for harnessing that data so we can get better evidence.
There is also a larger question of what metrics we care about and want to prioritize. Disinformation is a big problem with many dimensions, and where we draw lines and how we identify the harms matter. In our current moment, a lot of attention is being paid to how disinformation efforts influence election outcomes, and the highly publicized Russian efforts to influence the 2016 election helped focus researchers and the public at large on that issue. Now, I’ll be the first to say that our democratic process should be our top priority, and in fact, if we are trying to build consensus around policy solutions, focusing on the impact of disinformation on voting and elections is a good place to start. But there are other equities at play as well. Bob’s point about efforts to suppress Black voter turnout is one example. More generally, I worry that we haven’t quite figured out a good metric for measuring, let alone combatting, the longer-term corrosive effects of disinformation efforts on public discourse and trust in our institutions. Even if we can’t show a causal link to any particular election outcome, I worry about the cumulative effect on the more elusive connective tissue that keeps our democracy healthy. If someone has figured out a good way to measure that one, I’m all ears.
Shane

From: David
To: Shane, Bob and Susan
I agree we need to gather more data about the impacts of covert foreign nation-state efforts to manipulate social media. However, I wonder what time horizon we should be looking at for that impact. If we take a very short time horizon and ask whether specific disinformation campaigns changed voting behavior, then we may find support for the idea that they do not make a significant impact. If, though, we instead ask whether these covert campaigns exacerbate social media algorithms’ tendency to drive political polarization over the longer term, then we may become convinced that the issue is a higher priority. At least partially due to the lack of algorithmic transparency by social media platforms, we have little understanding of how covert foreign nation-states may be either manipulating, or taking advantage of, those algorithms to further inflame divisions among the American people. While those efforts may not tip the results of a particular election, they may, over time, make it more likely that politicians with extreme positions gain popular support. I support gathering more data to determine the scope of these problems, but I am left with the opinion that no matter how much data is produced, allowing covert foreign nation-states to influence the content Americans encounter is a bad idea.