Elon Musk’s acquisition of Twitter has thrust content moderation back into the spotlight. From charging $8 for blue ticks and reinstating the accounts of controversial figures to promising a ‘moderation council’ of diverse voices, the self-proclaimed ‘free speech absolutist’ wants to shake up the platform. There are concerns about what this means for bots, abuse and the spread of misinformation.
The UK has its own attempt to respond to these concerns: the Online Safety Bill (OSB). But recent changes to the proposed legislation have prompted accusations that it’s been watered down. The changes mean that platforms will be expected to give users greater control over what content they see online – such as filtering out some “legal but harmful” content or switching on warning labels.
But by focusing on these discussions we risk seeing only half the picture. Musk’s decisions represent the ‘supply side’ of online content: what gets posted, what is allowed to stay up and who is running the accounts. The OSB also focuses on the supply side of information. But what about the ‘demand side’?
We need to also ask: why is there such an appetite for this harmful content in the first place?
Take misinformation as an example. At CASM, the think tank Demos’s digital policy research hub, we recently held a conference in conjunction with the University of Warwick bringing together leading academics, campaigners, policymakers and platforms themselves to talk about the thorny issues of vaccine misinformation.
Across the sessions the message was clear. The prevailing view in policy discussions is that anti-vaxxers and conspiracy theorists are bizarre, irrational, dangerous people who ‘normal citizens’ need to be protected from. Although some figures do deliberately disseminate and profit from health misinformation, all too often people sincerely hold these beliefs as a result of worsening relationships with the state and with their fellow citizens.
We need to ask why the quality of “in real life” relationships has declined, prompting the kind of disenfranchisement that fuels bad cultures online.
When the already disenfranchised have bad experiences in the real world – GPs who don’t take them seriously, isolation in their communities, MPs who don’t seem to reflect their concerns – they can turn to online spaces. Indeed, online spaces can be places of genuine, empowering support that can be hard to find elsewhere. But they can also be places designed to exploit people’s vulnerabilities.
In these spaces they can find like-minded, disenfranchised people waiting to reaffirm their wider suspicions; suspicions that traditional places of support are unable to address. The demonisation of people who are against vaccines compounds this isolation, while anti-vaxx groups are able to give their members a strong sense of identity and belonging. Recommendation algorithms that trap users in these spaces intensify the groups’ appeal, but when needs for identity and belonging aren’t satisfied elsewhere, the desire for misinformation should be seen as symptomatic of something greater.
If we’re to make significant and lasting progress in tackling the spread of misinformation, and harmful content more generally, we can’t afford to focus solely on the decisions made by platforms. We need to ask questions not typically associated with technology policy: how can people have better relationships with their GPs? How can communities be made stronger? How do people get a bigger stake in democracy?
These are notoriously difficult questions for policymakers. One answer we have put forward at Demos is to make stronger relationships a key outcome for policy – whether that is having a consistent GP who gets to know your specific circumstances, or more places for communities to come together so that people have a wider support network than family who might live miles away.
There are difficult questions for each of us, as individuals, too. Polarisation and demonisation entrench anti-vaxx beliefs. Collectively, we need to work towards environments where it is okay for people to change their minds. That means being willing to extend a hand of friendship to people whose views are bewildering at best, and painful and offensive at worst.
Even the best content moderation processes in the world can’t begin to answer those questions. That is why, when we think about what we want the digital spaces we inhabit to be like, we also have to look at the non-technical. We have to remember that there isn’t a distinct ‘online’ and ‘offline’ world; the two are blurred and constantly feed into each other. For content moderation efforts to be worthwhile, we need to start looking beyond our screens, too.