This research discusses the disruption of online social communities and highlights some repeated patterns of behaviour. I recommend the whole paper.
A research project involving Cornell and Stanford looked at ways to automate the classification of particular categories of poster: Antisocial Behavior in Online Discussion Communities
cs.stanford.edu/people/jure/pubs/trolls-icwsm15.pdf
There's a section that Germaine Bunbury might find relevant at present: it's almost as if there can be 'sleepers' who post for a while, matching the tone of other posts, and then try to move that tone towards a more hostile and aggressive one. From the faux-naïf to, "You're all transphobic bigots and no wonder people hate you". These people are looking for the screenshot moment, perhaps.
Antisocial behavior, which includes trolling, flaming, and griefing, has been widely discussed in past literature. For instance, a troll has been defined as a person that engages in "negatively marked online behavior" (Hardaker 2010), or a user who initially pretends to be a legitimate participant but later attempts to disrupt the community (Donath 1999).
Empirically, we find that many of these banned users exhibit such behavior. Apart from insults and profanity, these include repeated attempts to bait users ("Ouch, ya got me. What's Google?"), provoke arguments ("Liberalism truly breeds violence...this is evidence of that FACT"), or derail discussions ("All I want to know is...was there a broom involved in any shape or form?").
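For anyone curious what automated classification of posts like these might even look like, here's a toy sketch. It is emphatically not the paper's model (the authors combine text, activity, and community-response signals); this is just a hypothetical keyword-and-shouting score with a made-up wordlist and threshold, to make the idea concrete.

```python
# Toy sketch only: hypothetical wordlist, weights, and threshold --
# NOT the classifier from the Cheng et al. paper.

PROVOCATION_MARKERS = {"bigot", "truly", "fact", "ouch"}  # hypothetical list

def provocation_score(post: str) -> float:
    """Crude score: fraction of words on a provocation wordlist,
    plus a small bonus per all-caps (shouting) word."""
    words = post.lower().split()
    if not words:
        return 0.0
    keyword_hits = sum(w.strip('.,!?"') in PROVOCATION_MARKERS for w in words)
    caps_words = sum(w.isupper() and len(w) > 2 for w in post.split())
    return keyword_hits / len(words) + 0.1 * caps_words

def flag_post(post: str, threshold: float = 0.1) -> bool:
    """Flag a post whose score crosses an arbitrary threshold."""
    return provocation_score(post) >= threshold

print(flag_post("Liberalism truly breeds violence...this is evidence of that FACT"))  # True
print(flag_post("Thanks for the link, reading it now."))  # False
```

A real system would learn weights from labelled data rather than hard-code them, which is exactly where the paper's contribution lies.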