New Cornell University-led research finds that social media platforms and the metrics that reward content creators for revealing their innermost selves to fans open creators up to identity-based harassment.
“Creators share deeply personal – often vulnerable – elements of their lives with followers and the wider public,” said Brooke Erin Duffy, associate professor of communication. “Such disclosures are a key way that influencers build intimacy with audiences and form communities. There’s a pervasive sense that internet users clamor for less polished, less idealized, more relatable moments – especially since the pandemic.”
Duffy is the lead author of “Influencers, Platforms, and the Politics of Vulnerability” published in the European Journal of Cultural Studies.
The research team conducted in-depth interviews with content creators to get a sense of how they experience the demands to make their content – and often themselves – visible to audiences, sponsors and the platforms.
Among their findings:
- The value of vulnerability for platform-based influencers cannot be overstated – authenticity sells, and that means projecting intimacies, insecurities and even secrets;
- These authentic revelations are often tied to one’s identities, which can open a person up to attacks based on gender, race, sexuality and other perceived traits;
- Personal and social vulnerabilities were often compounded by the vulnerabilities of platform-dependent labor: Not only did participants identify the failures of their platforms to protect them from harm (treating them as "contractors" instead of "employees"), but many also felt these companies incentivize networked antagonism.
“Influencers and creators have relatively few formal sources of support or protection,” Duffy said. “In contrast to those legally employed by Meta, Twitch and TikTok, creators are independent contractors. They’re left wanting for a lot of the workplace protections traditionally afforded to employees.”
The researchers examined informal strategies – both anticipatory and reactive – that creators deploy to manage their vulnerabilities. The former included the use of platform filtering systems to sift out abusive, profane or hurtful language. The latter strategies ranged from simply not reading the comments to employing the platform’s tools to minimize the impact of what, for many, felt like an inevitable onslaught of critique.
The authors acknowledge the difficulties of resolving endemic issues of internet hate and harassment. “‘Getting off the internet’ is hardly a viable option for participants in the put-yourself-out-there neoliberal job economy,” they wrote – and offer a warning to those wishing to join the creator economy.
"It is something of a truism that 'everyone gets the same platform,'" they wrote. "We would caution, however, that the politics of visibility – and hence, the politics of vulnerability – are far less egalitarian than platforms lead us to believe."