New study raises concerns about impact of automated social media advocacy on education coverage.

By Alexander Russo

At this point it’s no secret that there are all sorts of flaws afflicting social media, from fake news produced by Macedonian teenagers to a Facebook algorithm dedicated to engaging you but rarely offering a contrary point of view.

Twitter – many journalists’ favorite social media platform – has not been exempt from problems, including vicious trolling (including abuse of reporters), fake followers (bought for cheap or created from code), and an abundance of misleading tweets used to flood the zone on hot-button topics.

And now a study from the University of Pennsylvania (UPenn) has revealed an unexpectedly large amount of behind-the-scenes manipulation of the debate on Common Core from 2013 to 2016.

As first reported by the Washington Post’s Craig Timberg, the UPenn study found not only a now-familiar pattern of fast-moving disinformation and growing ideological polarization but also an unexpected level of semi-automated advocacy.

In particular, the study discovered that a conservative Florida-based outfit called the Patriot Journalist Network (#PJNET) found a way to use its members’ Twitter accounts to flood the Internet on issues including the Common Core.

The network’s anti-Common Core tweeting began as “a faintly visible presence in the first six months,” according to UPenn’s report, but eventually took over the debate, accounting for nearly 70 percent of Twitter traffic related to the controversial state standards during peak periods. For long periods of time, #PJNET accounted for roughly 25 percent of the Common Core traffic on Twitter – nearly all of it opposed.

That’s right. Within one major social media platform, a relatively small number of people using automated technology and loaned Twitter accounts came to dominate the debate over one of the biggest education initiatives in recent history.

And few people – myself included – seemed to know that the Common Core conversation on Twitter was being manipulated this way.

It’s unclear whether those mass tweets influenced the course of events as various states changed their minds about the curriculum standards. And there’s no concrete evidence #PJNET’s efforts directly affected news coverage of the Common Core debate.

Still, many journalists used Twitter as part of their Common Core coverage during this time period. And, looking to future education debates, the UPenn findings suggest journalists should approach social media with more skepticism and deepen their knowledge of how social media advocacy operates.

The good news is that, with the help of research like this, journalists can learn to be more savvy about what’s going on behind the scenes, become familiar with online tools that can help them ferret out how information is being transmitted, and help their readers understand more about this kind of advocacy.

Sample #PJNET-branded anti-Common Core tweet.

The spread of fake news is widely known at this point – and has occasionally touched on education issues. (The never-happened Obama pledge-of-allegiance ban was one of the most-shared instances of made-up news that went viral during the 2016 campaign, according to a recent BuzzFeed deep dive.)

High-frequency, pre-scheduled Twitter activity is not particularly new at this point, either. A recent Washington Post story reported on a conservative Twitter user who set up schedulers to tweet out roughly a thousand images and text messages each day.

Twitter bots – described in the UPenn report as “unmanned computer programs used to advertise products, articles, companies, and sometimes even ideas” – are another problem, though they are at this point relatively easy to detect.

But the least well-known and most dominant technique during the debate over Common Core may have been the #PJNET botnet – a group of human-created Twitter accounts all yoked to a single set of issues, operating like a highly coordinated, centrally controlled swarm.

Here’s how it works:

Both Twitter and Facebook allow users to authorize outside applications to do things including posting content. Some users are extremely wary of giving these kinds of permissions. Others don’t mind.

Sites like #PJNET get followers to pre-authorize the use of their accounts, essentially allowing the network to generate enormous amounts of traffic at will. As #PJNET states: “Our platform does not rely on good intentions, remembering, or members taking future volitional action. Our application is able to robotically post (re)tweets on behalf of our members – even if they are not online.”
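To make that concrete, here is a minimal sketch of the mechanism, assuming an application that has collected access tokens from members who clicked “authorize.” Python and the tweepy library are used purely for illustration; the tokens, messages, and function below are hypothetical placeholders, not a description of #PJNET’s actual system.

```python
import tweepy

# Access tokens collected when each member authorized the app (placeholders).
MEMBER_TOKENS = [
    {"access_token": "TOKEN_1", "access_token_secret": "SECRET_1"},
    {"access_token": "TOKEN_2", "access_token_secret": "SECRET_2"},
]

# Pre-written campaign messages, queued in advance (placeholders).
CAMPAIGN_TWEETS = [
    "Example scheduled message #1 #SomeHashtag",
    "Example scheduled message #2 #SomeHashtag",
]

def post_on_behalf_of_members(consumer_key: str, consumer_secret: str) -> None:
    """Publish the same queued messages from every pre-authorized account."""
    for tokens in MEMBER_TOKENS:
        # The app's credentials plus a member's stored tokens are enough to
        # post under that member's handle -- no further action is required
        # from the member, whether or not they are online.
        client = tweepy.Client(
            consumer_key=consumer_key,
            consumer_secret=consumer_secret,
            access_token=tokens["access_token"],
            access_token_secret=tokens["access_token_secret"],
        )
        for text in CAMPAIGN_TWEETS:
            client.create_tweet(text=text)
```

Run on a scheduler, a loop like this can produce thousands of identically worded tweets from dozens or hundreds of real accounts – which is what makes the traffic look organic at a glance.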

From an advocacy point of view, the advantages of a botnet are clear. Engaging individual Twitter users and getting them to act in a coordinated manner over time is no easy feat. Unlike strategies such as @YourDailyAction and @ThunderclapIt, which try to coordinate action through reminders and other prompts, or bots created and controlled by a single entity, #PJNET is a hybrid: real people lend their accounts, and automation handles the posting.

“Automated campaign communications are a very real threat to our democracy,” according to an article in WIRED that describes botnets as a contender for “go-to mode for negative campaigning in the age of social media.”

“The bot-human hybrid is the future,” according to The Atlantic.

For journalists and educators, however, these kinds of efforts make it enormously difficult to feel confident about what they’re seeing.

“There’s a lot of savvy stuff going on that we’re not necessarily fully aware of,” warned UPenn study lead Jonathan Supovitz. “We can’t take anything at face value in this environment,” said the professor of education policy.

“I genuinely want to see what people are saying and discussing,” said the Collaborative For Student Success’s digital media director Ashley Inman in an email. “But it is difficult to sift through 200 tweets of memes with kids behind bars and rotting apples from the #PJNET bot to find the actual tweets and thoughts from real people.”

Indeed, teachers and experts “saw their position in the debate largely supplanted by passionate outsiders who had rarely before tweeted about school issues,” noted an article about the study in The 74. The #PJNET effort “intensified the degree of polarization around Common Core,” noted a Huffington Post story about the study.

The study, conducted by the Consortium for Policy Research in Education (CPRE), was funded in part by the Bill and Melinda Gates Foundation, which is a supporter of the Common Core and helps fund The Grade. (The Collaborative is also a funder of The Grade.)

The report’s findings – especially the existence of a botnet campaign against Common Core – came as a surprise to some education observers.

“The thing about the bots is something that people didn’t know,” said USC professor Morgan Polikoff. Like many others, Polikoff saw a substantial amount of anti-Common Core Twitter traffic, but, he said, “I assumed they were just copied and pasted.”

Another academic who followed the Common Core debate did notice some of the signs.

“I was aware of PJNet only because they have an odd format: the tweets all start with MT and are extremely high in volume for many users,” said CU Boulder PhD candidate Ken Libby, who’s writing his dissertation on the Common Core and conspiracy theories.

Of course, this doesn’t necessarily mean that this particular manipulation of Twitter—or many others—influenced public policy.

Patrick (“Eduflack”) Riccards, of the Woodrow Wilson National Fellowship Foundation, hasn’t observed a major impact on Common Core implementation. “The policies haven’t changed,” he said.

Report author Supovitz noted in the Huffington Post that Common Core “won the policy war” despite the efforts of #PJNET and others against it.

But it’s possible that the botnet campaign affected public opinion, which the media in turn covers.

“The increase in activity on Twitter was coincident with both declining public support for the standards and rising partisan opinion,” Supovitz said.

In an article about the UPenn report, The 74 described public support for Common Core falling from 83 percent favorable in 2013 to 50 percent three years later. And references to tweets on the subject were part of the coverage.

“We saw that journalists were connected into the Common Core conversation on Twitter and writing stories about the actors in the popular press,” Supovitz wrote, citing an interview with an unnamed teacher who reported that her tweets against Common Core were quoted in the mainstream press (a U.S. News & World Report story). Said Supovitz: “I believe that this is a way that the insular debates on Twitter get moved into the popular press.”

And it’s clear that policymakers and elected officials are tracking social media pretty closely. “When I meet with policymakers, they are acutely aware of the conversations going on in social media,” said Supovitz. “Many of them (or their aides) are on Twitter and they certainly are attuned to what is the popular sentiment.”

So, while there’s no specific evidence showing that #PJNET’s efforts led to a particular piece of mainstream journalism, it’s easy to imagine that the torrent of social media traffic had at least some cumulative impact.

__

The UPenn study has its critics, including some anti-Common Core advocates from the progressive left.

“The real problem with this ‘report’ is its implication that those using social media were fake parents,” wrote Common Core critic Sandra Stotsky in an email. “By excluding Facebook, Instagram, and all the other ways in which parents have participated as critics of Common Core, it ends up demonizing a ‘Christian technologist.’”

Indeed, the report doesn’t include real-world advocacy or even email, Facebook, or Reddit campaigns. And it starts in 2013, though the debate over Common Core began at least three years earlier.

Anti-Common Core advocates on the right are equally dubious, calling the #PJNET botnet nothing more than a “new boogeyman” for Common Core advocates.

Regardless, these issues are likely to come up again no matter which education issue next receives prolonged or intense consideration – ESSA implementation, federal choice initiatives or something entirely different.

Network relationships among the top 20 Twitter accounts on the topic of the Every Student Succeeds Act, according to Hoaxy.

In the future, these botnets will likely be employed by a broader collection of stakeholders, both liberal and conservative. Perhaps that’s already happening and we’re just not yet aware.

With that in mind, there are several lessons for those of us who cover education issues, according to the report author and others.

One key takeaway, according to Supovitz, is just how much false information is being spread on social media. “One thing that we found a lot of is fake news,” he said in a telephone interview last week. “Education is not immune to this phenomenon that we see in the national press.”

The second lesson for journalists is to be extremely cautious about interpreting the surges of tweets and other social media posts they come across: the volume and sentiment of tweets and Facebook posts flying by might not represent public opinion, or even real people expressing views they actually hold.

“We assume since we’re human that everything we see on Twitter is human,” said Indiana University PhD candidate Clayton A. Davis, who studies how information spreads via social networks and how that information affects people’s behavior.

Report co-author Alan Daly recommends a more general approach: that reporters spend more time understanding the social media space, not just dipping in and out of it, in order to understand better “how it can be manipulated and used to direct and misdirect the narrative.”

The UC San Diego education professor also reminds us to find ways to get out of our bubbles – the sources, platforms, and perspectives that we may use too much, too often. In “How to Escape Your Political Bubble,” the NYT lists apps and extensions that can help, including PolitEcho, FlipFeed, Read Across the Aisle, Outside Your Bubble, Escape Your Bubble, Right Richter, and Today in Conservative Media.

For journalists who use Twitter, Davis suggests some tools:

Launched in 2014, BotOrNot predicts the likelihood that a Twitter account is operated by a human or a “social bot.” In 2016, IU launched a suite of tools called Observatory on Social Media, or “OSoMe” (pronounced “awesome”), which displays trends, networks, animated visualizations, and geographic maps.
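For reporters comfortable with a little code, the same Indiana University group also publishes a Python client for its bot-scoring service (distributed as the botometer package). Here is a hedged sketch of a programmatic check, assuming you have Twitter API and RapidAPI credentials; the exact parameters and response fields may vary by version, so treat this as an outline rather than current documentation.

```python
import botometer

# Placeholder credentials -- consult the OSoMe/Botometer documentation
# for current authentication requirements.
twitter_app_auth = {
    "consumer_key": "YOUR_TWITTER_KEY",
    "consumer_secret": "YOUR_TWITTER_SECRET",
    "access_token": "YOUR_ACCESS_TOKEN",
    "access_token_secret": "YOUR_ACCESS_TOKEN_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key="YOUR_RAPIDAPI_KEY",
    **twitter_app_auth,
)

# Score a single account; the result includes bot-likelihood scores.
print(bom.check_account("@example_account"))

# Screen a batch of accounts seen tweeting on a hashtag of interest.
accounts = ["@example_one", "@example_two", "@example_three"]
for screen_name, result in bom.check_accounts_in(accounts):
    print(screen_name, result)
```

Even a rough pass like this can flag which accounts in a hashtag surge deserve a closer, human look before their tweets are quoted as public sentiment.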

Last but not least is Hoaxy, currently in beta, an interactive site that allows journalists to see what claims and fact checks have been applied to a certain topic (Common Core, ESSA) on Twitter and provides a network visualization showing who’s been most active, pro and con, and how they’re connected.

ABOUT THE AUTHOR

Alexander Russo

Alexander Russo is founder and editor of The Grade, an award-winning effort to help improve media coverage of education issues. He’s also a Spencer Education Journalism Fellowship winner and a book author. You can reach him at @alexanderrusso.

Visit their website at: https://the-grade.org/