What are you suggesting, perking? That they might not be entirely trustworthy or honorable?
Twitter deleted 200,000 Russian troll tweets. Read them here.
Twitter doesn't make it easy to track Russian propaganda efforts — this database can help
www.nbcnews.com/tech/social-media/now-available-more-200-000-deleted-russian-troll-tweets-n844731?cid=sm_npd_nn_tw_ma
Also
Molly McKew
@MollyMcKew
This article on the impact, or not, of "fake news" appeared in @nytimes a couple of days ago. While I respect the author's point re needing evidence/data & not speculation -- the data provided doesn't answer questions on the impact of fake news. This is important /1
www.nytimes.com/2018/02/13/upshot/fake-news-and-bots-may-be-worrisome-but-their-political-power-is-overblown.html
There are three broad characterizations that miss the mark in this analysis:
1 -- that the content in question is political advertising or comparable to political advertising. It is not. The content in question was often not branded political advertising. /2
The content was video, visual, memetic, & text elements contributing to narrative themes, conspiracies, character attacks. It wasn't sponsored by a candidate or PAC, so it lacked the label that lets people accept or reject its source easily. This difference matters /3
2 -- it uses the idea of "persuadability" as the metric of concern. It's not. The metric of concern is activation. This is why targeting a hardened 10% is more effective than trying to persuade people to change their minds. /4
In strict voter turnout terms, a radicalized base is more powerful than most other factors /5
This is mirrored, for example, in the Manafort strategies to win Ukraine for Yanukovych and the US for Trump. They were never going to win the swing voters. They just needed an activated core. /6
However, I would also note that persuasion via social media is very effective -- especially when matched with data-driven psychological profiling/targeting that sends one of 100,000 unique pieces of content to your eyeballs, each crafted specifically to convince you /7
This kind of content targeting cannot be compared to "advertising" or "political advertising." Most of the time it isn't showing up as an ad, but native content posted by someone you know, or some account an algorithm thinks you might like. Algorithms can be gamed /8
"Bots" and other influencer accounts, for example, can be designed to influence specific groups and thus game specific factors to trick the algorithms into amplifying them. /9
So, while persuadability is the wrong metric -- persuasion via social media also can't be estimated in simplistic ways; it requires looking at network effects. It is about the impact of a complex media environment with many layers and inputs, and its impact over time. /10
3 -- the idea that it's hard to know how many people were exposed to disinfo, or actually saw it. Coming to a concrete number is, of course, not realistic. But again -- the issue is the network effect /11
The question is not "how many people looked at X misinformation website." It is "what idea/narrative from X website made it into mainstream media, influencers, verified accts, etc." Not a numeric evaluation, but a "mainstreaming" one -- /12
did specific disinformation or conspiracies bleed into news sources or amplifiers who legitimized it? /13
There aren't good tools to evaluate the impact of shadow campaigns.
Anyone trying to tell you there was little impact on political views from these tools doesn't know. Because none of us know. No one has looked. Social media companies don't want us to, and obfuscate. /14
What we do know: social media is pretty good at radicalizing people.
There isn't much evidence it is good at deradicalizing people.
Confirmation bias is powerful, and commonly used in these kinds of operations. /15
Information warfare is not "fake news" and "bots". The holistic information environment and the narrative it constructs via specific storytelling vehicles to achieve subversive goals and activation is what must be evaluated. /16