Combating Disinformation in the Digital World

Coordinated campaigns of false information are rampant during the pandemic—but they pose a risk to businesses at any time.


JUST AS SICKNESS, death and uncertainty follow from a COVID-19 outbreak, so do disbelief, disputes and outright falsehoods. One well-traveled theory is that cellular 5G wireless technology is spreading coronavirus. Another is that common household cleaners and homeopathic treatments can be used to inoculate against or cure the virus.

“During this phase of the pandemic where fear is peaking, it is extremely important to stop bad actors who are using the internet to offer misinformation, sell fake tests and cures, or solicit contributions to fake COVID charities,” says Leslie Rutledge, attorney general of Arkansas, where my school is based. At the same time, an April article in The Atlantic warns that “Fighting public suspicion of [pandemic] models is as old as modern epidemiology.”

While false information is hampering the COVID-19 response, coordinated campaigns of lies and fabrications pose an increasingly common hazard for every business in today’s digitally connected world, even when a pandemic is not involved. The groups spreading such lies simply want to sow chaos and hysteria in society, and they will seize any opportunity to do so.

For instance, whenever the Thanksgiving holiday is celebrated in America, a very specific product disinformation campaign is rolled out to target a major retailer that sells turkey, a staple of the holiday table. The campaign is supported in part by documented cases of salmonella infections and hospitalizations, but it largely spreads disinformation about the virulence of salmonella and this retailer’s turkey products.

Campaign Characteristics

I lead a team of about 30 students, from undergrads to postdocs, at the Collaboratorium for Social Media and Online Behavioral Studies (COSMOS), where we track incorrect or misleading information that points to a coordinated campaign. We recently partnered with the Office of the Attorney General for the state of Arkansas to identify and track pandemic-related scam websites and social media disinformation. But, over the years, we have also followed responses to autism awareness campaigns, the Islamic State, elections in Singapore and anti-NATO propaganda from Russian state-sponsored agencies.

Researchers in this field have identified three broad categories. Misinformation is false but not intentionally so and not malicious (for example, rumors and pranks). Disinformation is purposefully false or misleading, intended to sow confusion and radicalize audiences in an effort to discredit a person or group. Malinformation is largely or entirely factual but is used to inflict harm directly on a person or group. (Strong examples include “revenge porn” and “doxxing,” acts of publishing authentic but private or sensitive information with the intent of causing harm.)

Of these three categories, disinformation is the one with the potential to cause the most widespread harm. At COSMOS, we have determined that disinformation campaigns are defined by four characteristics:

Quick-moving. Campaigns emerge seemingly out of nowhere and rapidly gain momentum. Masterminds spark these “flash mobs” by scattering disinformation like bread crumbs and then relying on the social science theory of collective action. Unwitting consumers believe the lies and quickly turn into a mobilized community.

Multi-platform. To be effective, disinformation campaigns have to be reinforced across platforms. Some campaigns begin by referencing a website or blog that appears to have the same veneer of authority as popular news and resource sites. The culprits take bits of “information” from these sites, write about them on blogs to frame a narrative, then use Twitter or Facebook to publicize the articles. These tactics were prominent in the 2016 U.S. presidential election, where several fake news blogs were written and shared.

Evolving. Malicious content creators have migrated to YouTube, where they post sophisticated videos with high production values, then share the links on Reddit, Twitter, and niche platforms such as Telegram, Signal or Gab. They’re aided in their efforts by computational or AI-based software that allows them to create “deep fakes,” video products that appear real but are entirely manufactured. With this technology, creators can put public figures in places they never were and make them appear to be saying things they never said. Malicious content creators are also using such software to generate entire websites, even networks of sites and clusters of completely artificial content.

Seductive. To keep users on their sites, YouTube and other platforms rely on algorithms that determine what users are interested in so they can recommend new content tailored to each individual. Bad actors game those algorithms so that they recommend more and more extremist content. A recent New York Times podcast series, “Rabbit Hole,” noted that “If you choose to watch a vegetarian [cooking demonstration], you’re pushed vegan and extreme veganism. And if you search health, or medicine, you get anti-vaccination content.”

Fighting Back

To uncover disinformation campaigns, COSMOS relies on sophisticated social network analysis capabilities and cyber forensics to track groups and identify their covert connections. We are funded in part by U.S. agencies such as the National Science Foundation, the Army Research Office, the Office of Naval Research, the Air Force Research Lab and the Defense Advanced Research Projects Agency.

Like governmental and military agencies, many Fortune 500 companies see the need to fight back against malicious campaigns. At COSMOS, we offer free resources to help businesses determine if they have been targeted. These resources include two software programs we have created, BlogTrackers and YouTubeTracker, that sweep the internet for keywords, collect account information, and map social media footprints.
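For readers curious what a keyword sweep and footprint map involve at the most basic level, the short Python sketch below illustrates the idea with made-up post records. It is not the implementation of BlogTrackers or YouTubeTracker; the field names, accounts and posts are invented for demonstration.

```python
# Illustrative sketch only: a toy keyword sweep and account footprint map.
# The post records below are fabricated; this is not the data model of
# BlogTrackers or YouTubeTracker.
from collections import defaultdict

posts = [
    {"platform": "twitter", "account": "acct_a", "text": "Miracle cure for COVID, order now!"},
    {"platform": "blog",    "account": "acct_a", "text": "Why 5G towers are making you sick"},
    {"platform": "youtube", "account": "acct_b", "text": "Gardening tips for spring"},
    {"platform": "twitter", "account": "acct_c", "text": "Fake COVID test kits exposed"},
]

keywords = {"covid", "5g", "cure"}

def matches(text, terms):
    """Return True if any tracked keyword appears in the text."""
    lowered = text.lower()
    return any(term in lowered for term in terms)

# Collect the accounts that mention tracked keywords and map which
# platforms each one is active on -- a simple "footprint" of the account.
footprint = defaultdict(set)
for post in posts:
    if matches(post["text"], keywords):
        footprint[post["account"]].add(post["platform"])

for account, platforms in footprint.items():
    print(account, "->", sorted(platforms))
# acct_a -> ['blog', 'twitter']   (active on multiple platforms)
# acct_c -> ['twitter']
```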

Some corporations are hiring experts and creating their own initiatives to counteract disinformation. For instance, Amazon has deployed an active system for monitoring product reviews, because fake Amazon reviews are problematic for both the company and its many vendors. When an army of commentators or reviewers leaves disparaging comments about products or services, a company’s image and sales can suffer serious damage.

Other firms choose to respond with corrective messaging. For instance, after commentators posted online that Coca-Cola’s business operations weren’t environmentally conscious, the company launched a paid and organic social media campaign showcasing its sweeping sustainability commitments to increase use of recycled plastics and recycling infrastructure globally. This helped turn online sentiment around.

Coordinated Campaigns

Another tactic bad actors employ is flooding online discourse with hateful comments. Assessing toxicity in online language might not be especially important when the topic is, say, agriculture, but it is critical when the dialogue concerns politics or international relations. In my Social Computing class, I teach students how to search social media platforms to conduct both sentiment analysis and toxicity analysis. Recently, we studied disinformation and propaganda campaigns in the Asia Pacific region, using toxicity assessment techniques developed by COSMOS to identify aggression in online comments.
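As a rough illustration of what such a classroom exercise can look like, the sketch below scores sample comments with the open-source VADER sentiment analyzer and a tiny hand-built toxicity lexicon. The lexicon and comments are invented stand-ins for the trained toxicity models and collected data a real study would use; this is not the COSMOS toolchain.

```python
# Minimal classroom-style sketch: VADER provides a sentiment score, and a
# tiny hand-built lexicon stands in for a real toxicity classifier.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sentiment = SentimentIntensityAnalyzer()

# Toy toxicity lexicon -- purely illustrative, far too small for real use.
TOXIC_TERMS = {"idiot", "traitor", "scum", "liar"}

def score_comment(text):
    """Return (sentiment, toxicity) for a single comment."""
    compound = sentiment.polarity_scores(text)["compound"]  # -1 (negative) .. +1 (positive)
    tokens = text.lower().split()
    toxic_hits = sum(token.strip(".,!?") in TOXIC_TERMS for token in tokens)
    toxicity = toxic_hits / max(len(tokens), 1)  # share of tokens flagged as toxic
    return compound, toxicity

comments = [
    "This policy actually helped local farmers recover.",
    "Only an idiot or a traitor would support this alliance.",
]
for comment in comments:
    print(score_comment(comment), comment)
```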

The COSMOS team also has applied toxicity analysis to NATO exercises in Eastern Europe. YouTubeTracker uncovered a surge of anti-NATO messengers uploading manipulated or malignant content that falsely accused the organization of being an aggressor and a danger. We discovered that a relatively small number of accounts was generating a heavy volume of interlocking comments. The commenting appeared to be orchestrated to raise the content’s standing within YouTube’s algorithm, and the effort was successful.
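One simple way to surface that kind of pattern is to measure how concentrated commenting activity is across accounts. The sketch below, using invented comment data, computes the share of comments produced by the most active accounts; it illustrates the general idea rather than the specific analysis COSMOS performed.

```python
# Hedged sketch of one simple coordination signal: what share of all
# comments comes from a handful of accounts? The data below is invented.
from collections import Counter

comment_authors = [
    "acct_1", "acct_1", "acct_1", "acct_2", "acct_2",
    "acct_1", "acct_2", "acct_3", "acct_1", "acct_2",
    "acct_4", "acct_1", "acct_2", "acct_1", "acct_5",
]

counts = Counter(comment_authors)
total = sum(counts.values())
top_k = 2  # how many of the most active accounts to examine

top_accounts = counts.most_common(top_k)
top_share = sum(n for _, n in top_accounts) / total

print(f"{len(counts)} accounts produced {total} comments")
print(f"Top {top_k} accounts produced {top_share:.0%} of them:", top_accounts)
# A very high share concentrated in a few accounts is one (weak) signal
# that commenting may be orchestrated rather than organic.
```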

The COSMOS team recommended that strategic NATO communicators take three measures to counteract such campaigns in the future. First, the organization should incorporate social network analysis into the overall communications plan, aligning it with operational effects through information activities. Second, NATO would need to conduct a process analysis early in the targeting cycle to acquire baseline metrics, then continue to analyze comments throughout the operation. Finally, it would need to identify influential actors and conduct counter operations as authorized. Our complete analysis was printed in a naval science and technology magazine.

The B-School Response

Today, my COSMOS team and I are seeing disinformation campaigns that are run like business enterprises and that target both corporations and government entities. For that reason, I think it’s important for business schools to help students understand the dangers of disinformation campaigns.

To do this, schools should add classes to the core curriculum that teach basic skills in digital and social media communications analysis. They should bring in communications experts who can teach students how to craft messages that get ahead of the story and how to extinguish the fires started by disinformation campaigns. They also can have student teams identify instances where online commentary threatened a business’s reputation and investigate how the company responded.

For instance, in 2018, one of the teams in my Social Computing course analyzed the Philadelphia Starbucks incident, in which two black men were arrested by Philadelphia police for sitting in a Starbucks without purchasing anything. The event made national news and sparked weeks of backlash online. Using tools and concepts taught in the course, students tracked the anti-Starbucks campaign on Twitter. They found that negative sentiment toward the company kept rising after the incident until Starbucks closed all of its stores for a day to conduct sensitivity training for all U.S. employees. While that decision was also scrutinized, it proved to be an inflection point for sentiment. The student team presented its analysis of the Starbucks event at the 2nd European Symposium on Societal Challenges in Computational Social Science: Discrimination and Bias.
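A stripped-down version of that kind of time-series exercise might look like the sketch below, which aggregates daily sentiment scores and flags the day a downward trend reverses. The dates and values are fabricated for illustration and do not reproduce the student team’s findings; a real study would score collected tweets (for example, with VADER) and aggregate them by day.

```python
# Illustrative sketch: track daily average sentiment and spot an
# inflection point. All scores and dates below are invented.
daily_sentiment = {
    "2018-04-14": -0.10, "2018-04-15": -0.25, "2018-04-16": -0.40,
    "2018-04-17": -0.55, "2018-04-18": -0.45, "2018-04-19": -0.30,
    "2018-04-20": -0.20,
}

days = sorted(daily_sentiment)            # ISO dates sort chronologically
scores = [daily_sentiment[d] for d in days]

# Naive inflection detection: the first day where the trend flips from
# falling to rising. Real work would smooth the series and test robustness.
inflection = None
for i in range(1, len(scores) - 1):
    if scores[i] < scores[i - 1] and scores[i] < scores[i + 1]:
        inflection = days[i]
        break

print("Lowest point / inflection day:", inflection)  # 2018-04-17 in this toy data
```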

Understanding how to safeguard a company’s reputation online is becoming a critical skill for today’s business graduates. There is a growing body of research investigating how social media analysis allows companies to track their brand reputations and to assess the effectiveness of corrective measures. Today’s students must understand those challenges if they are to become tomorrow’s business leaders.

Nitin Agarwal is the Jerry L. Maulden-Entergy endowed chair and a professor of information science at the University of Arkansas at Little Rock. Visit his website for resources on COVID-19 misinformation and scams.
