How do we combat 'Operation Infektion 2' in the age of misinformation?
In Conversation: Can tech help us find truth?
INTRODUCTION
Four decades ago, the Soviet Union spread a horrible lie around the world. Through what's now known as Operation Infektion, Russian agents pushed a conspiracy theory that AIDS had been created in a secret U.S. military biological weapons lab in Maryland. The goal was to create strife among U.S. allies, and perhaps persuade some of them that the U.S. military bases they hosted were unsafe. How well did Operation Infektion work? Those rumors and conspiracy theories about AIDS persist today.
We are now living in the age of Operation Infektion 2, in which Russian agents are actively spreading anti-vaxx disinformation, says Paladin Capital Group chief investment officer Chris Steed. During a recent talk at Duke University, Steed described a disinformation crisis that has impacted the U.S. response to Covid-19 and, most recently, coronavirus vaccination efforts.
A study published in 2019 by the National Institutes of Health, titled "Weaponized Health Communication," concluded that Russian trolls were actively working to promote discord in debates surrounding vaccines. "Accounts masquerading as legitimate users create false equivalency, eroding public consensus on vaccination," the report found. And in March, the Wall Street Journal reported that four websites with links to Russian intelligence services were actively working to undermine confidence in the Pfizer vaccine.
Paladin generally invests in cybersecurity companies, particularly those that might help protect critical infrastructure, and Steed made the point that disinformation enabled by nation-state hacking is now an issue that needs capital investment to spur a collaborative national security response from the public and private sectors. So we decided to discuss: What's the best way to encourage investment in disinformation-fighting tools? Steed is here, along with Nazo Moosa of Energy Impact Partners, a Paladin venture partner, and Duke professors David Hoffman and Ken Rogerson.
(If you are new to In Conversation, I am a visiting scholar at Duke University this year studying technology and ethics issues. These email dialogs are part of my research and are sponsored by the Duke University Sanford School of Public Policy and the Kenan Institute for Ethics at Duke University. See all the In Conversation dialogs at this link.)
FROM: Bob Sullivan
TO: Chris Steed, David Hoffman, Ken Rogerson, Nazo Moosa
As a journalist, I can tell you from experience that truth-telling often isn’t a very profitable business. In an age where traditional journalism is fading and consensus on even the most basic facts seems elusive, how can technology help solve this problem?
FROM: Nazo Moosa
TO: Chris Steed, David Hoffman, Ken Rogerson, Bob Sullivan
The large platforms have all developed datasets of fake videos and images to help train AI models; here I am thinking of Google’s FaceForensics. The challenge with most AI-based approaches to detecting deepfakes is that they are computationally intensive, especially when scouring large troves of social media content.
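Here is a minimal sketch of what that approach looks like in practice: fine-tuning an off-the-shelf image classifier on frames labeled real or fake. It is an illustration rather than any platform's actual system; the folder layout, model choice, and hyperparameters below are assumptions.

```python
# Minimal sketch (not any platform's production system): fine-tune an
# off-the-shelf classifier to label face frames as "real" or "fake".
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumes frames were extracted beforehand into frames/real/ and frames/fake/
dataset = datasets.ImageFolder("frames", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: real vs. fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                   # one pass, for illustration
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

The computational cost follows directly from this design: every frame of every uploaded video needs its own forward pass through a model like this, which is expensive at social media scale.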
Others have used digital hashes to ‘fingerprint’ images, so that a reader can detect when an image has been tampered with. For example, one of EIP’s investors, Microsoft, as part of its Defending Democracy Program, has collaborated with the BBC and the New York Times to do just that. Along with Adobe and others, they have created C2PA, a standards-setting body that will develop an end-to-end open standard and technical specifications for content provenance. It aims to link and authenticate content with a news source. Underlying this process is a distributed ledger.
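As a toy illustration of the fingerprint-and-verify idea (not the C2PA specification itself): record a cryptographic hash of an image when it is published, then recompute it later to detect tampering. The file names below are placeholders, and a plain return value stands in for the signed provenance record.

```python
# Toy illustration (not the C2PA spec) of fingerprinting an image at
# publication time and checking a later copy against that record.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file's bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def is_untampered(copy_path: str, recorded_digest: str) -> bool:
    """Check a downloaded copy against the digest recorded at publication."""
    return fingerprint(copy_path) == recorded_digest

if __name__ == "__main__":
    # At publication, the outlet records the digest alongside the image; in
    # C2PA this binding lives in signed provenance metadata, and a distributed
    # ledger can anchor the record, as noted above.
    recorded = fingerprint("published_photo.jpg")
    # Later, anyone holding a copy can check whether it still matches.
    print(is_untampered("downloaded_copy.jpg", recorded))
```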
FROM: Ken Rogerson
TO: Chris Steed, David Hoffman, Bob Sullivan, Nazo Moosa
Another way to think about this is the expansion of third-party verification organizations, like fact-checkers. The hub of these activities is Duke University’s Reporters’ Lab, which monitors global fact-checking organizations. In addition, the lab is constantly working with researchers to develop programs that online information distribution services can use to flag suspicious and concerning content.
The tension here, of course, is whether they will come if you build it. Will companies do enough on their own? Will they be willing to let others’ work into their infrastructures? I know the lab has been working directly with a few social media platforms that seem open to adopting some of these ideas.
Indulging a small cynical streak, I don’t believe the near future will bring any consensus on how to do this well. As an individual information consumer, I have used some of these tools and I like them. They help me navigate information that is unclear, confusing and, at times, unbelievable. If more people used these tools, I do think it could help. But individual use does not create societal change. I am not even sure that encouraging journalistic use of these tools is enough, but it could have more of an impact than individual use. It would require journalists to add levels of information verification that some might not be doing because of deadline pressure to get the news out.
There probably needs to be some real investment in misinformation management through sustained cooperation between the public and private sectors. While I understand that this might be a proverbial “pipe dream,” each side needs to risk a little: the public sector needs to exercise more authority in federal technology regulation, and the private sector will need to open up a little more about its internal processes. This would benefit both individuals and journalists. But asking for real investment here feels a little bit like shouting into a hurricane. Your original question is spot on: it is not profitable, so it will be more than challenging to implement.
FROM: David Hoffman
TO: Chris Steed, Bob Sullivan, Nazo Moosa, Ken Rogerson
I am a big fan of the Reporters’ Lab and would love to see it scale. I also think that information intermediaries would be wise to get themselves out of the “arbiters of truth” business, as it will inherently result in accusations of partisan political bias and manipulation. However, the result cannot be to say that no one should be an arbiter of truth, or that the notion of determining what is true is too likely to result in undesirable incursions on free expression. Most information intermediaries (I am choosing that term intentionally to describe a category of providers much broader than social media companies) have standards for what they allow on their platforms. Unfortunately, those standards all differ, and each company makes its own determination of how to interpret its requirements.
The end result is bad for users of technology and bad for those companies. For example, if an individual wants to argue that a claim about vaccines is untrue and harmful, they will likely have to present their case to a large number of companies that allow it to be posted, indexed and/or displayed. As Kashmir Hill has noted in some of her recent reporting, this process is incredibly burdensome on the individual, and those individuals never know when the false information will present itself again. At the same time, each of the platforms must invest in duplicative content moderation resources and be concerned that complaints of bias will be lodged no matter what content moderation decision is made. Structures like Facebook’s Oversight Board have not solved this problem: they apply to only one platform, and their degree of independence and the reasoning behind their determinations are not clear. It is difficult to see how the Oversight Board scales to handle the problem and related issues (defamation concerns, right to be forgotten requests, allegations of hate speech, and other violations of acceptable content policies).
I offer a modest proposal. Significant capital investment should be made in content moderation tools. The major information intermediaries should then invest in setting up a central “content objection” non-profit (this may require government action to exempt them from competition policy issues for working together, similar to actions taken to encourage cybersecurity information sharing). The non-profit would need a rigorous governance structure with oversight intended to minimize the impact of partisan bias. The entity could then establish one common set of policies, and the information intermediaries (likely also to include data brokers) would agree to be bound by the recommendations the non-profit makes. The non-profit could charge fees to the participating companies and invest them in automated content moderation technology. If the public becomes concerned that the recommendations are creating partisan bias (or other social problems, such as racially and gender-biased decisions), the participating companies could terminate the relationship and return to making their own individual decisions. This would create a market for technology development, give individuals a single complaint mechanism, and lift the burden of being arbiters of truth from the information intermediaries. And we just might have a better environment for democracy.
FROM: Chris Steed
TO: David Hoffman, Bob Sullivan, Nazo Moosa, Ken Rogerson
Security has a new playing field, and the conflicts of the future will be waged over intelligence, information, and deception. The battleground has shifted to the mind, as threat actors work at the human level to achieve their objectives, using the internet as the vehicle. As in the Cold War era, hot conflicts may not arise, yet there remains a significant impact on security, the economy, and other levers of American and allied power.
In the immediate aftermath of the attacks on 9/11, the concept of “see something, say something” emerged as a whole-of-society approach to re-establish collective trust, challenging the previous defense-in-depth model that was based on siloed methods of intelligence collection and dissemination. Those operating practices were no longer capable of meeting the pressing needs of information awareness in the Cyber Age, and despite countless R&D and investment initiatives over the last two decades, both the public and private sectors still lack a common operating platform that normalizes ground truth against business and mission objectives.
I believe that one of the most effective ways to combat disinformation threats is technology infrastructure that scales “see something, say something” for the Cyber Age. The Reporters’ Lab at Duke is a great example of this effort, and giving a broad swath of individuals, communities, law enforcement, and government equal access to monitoring and management technologies will statistically reduce bias and remove the need for “arbiters of truth.”
This is a key investment thesis for Paladin, and we enjoy our relationships with individuals like Nazo Moosa and firms like EIP that share our approach to solving this problem.