In the 1950s it was television – particularly commercial television. In the 1980s it was ‘video nasties’ and later violent computer games. And now it’s social media.
Moral panics about new communication technologies and their impacts on young people’s health, wellbeing, behaviour and ‘character’ are nothing new. Faced with a new technology they do not understand or engage with in the same ways as their children, parents often fear that those technologies will do harm: an invisible force damaging their children’s minds and morals.
Such fears are understandable in relation to concerns about social media and their effects on mental health in particular. Young people’s use of social media is, by its nature, often invisible to their parents. Unlike older forms of communication, it can happen 24/7, it is not limited by geography, it cannot be overheard, and it is a more dispersed form of communication in which we are all both the producers as well as consumers of content. No wonder, then, that largely middle-aged policymakers and (traditional) media influencers instinctively want to find ways to curtail social media use: to impel parents to ‘keep an eye’ on their children’s social media activity, to set screen time limits and to regulate what happens in cyberspace.
The rights and wrongs of these ideas are widely discussed and hotly debated. The risks should not be taken lightly: the sharing of harmful images is a cause for rightful concern, and research has found some links between heavy social media use and poorer health, most notably difficulties with sleep. It is important that policymakers and companies alike seek to protect children where and when they need to, and that researchers seek to understand these emerging concerns more clearly.
But are we, in the process, missing an important opportunity? Can social media, and digital technology more generally, be an ally in helping to improve and support children and young people’s mental health? In obsessing about minimising harm, are we missing chances to maximise help? And if so, what might this look like?
A duty of care?
Recent public statements from campaigners and UK government ministers have suggested that social media companies should be required to accept a ‘duty of care’ towards children and young people’s safety and wellbeing online. This concept draws on many of the ideas that currently underpin the regulation of broadcast media, enforced through the regulator, Ofcom. The aim is for companies that provide social media infrastructure to take more responsibility for the content that is shared through their platforms and to take a precautionary approach: that if something has the potential to do harm, we should be protected from it, even if it cannot yet be proven to be damaging.
A duty of care could, however, also be regarded in a more positive light. Broadcasters are often expected to take positive steps, for example to reflect the diversity of the communities they serve, to give reliable information and to offer political balance. A duty of care for social media companies could incorporate positive as well as negative responsibilities: for example to offer useful information about mental health and to develop tools that promote help-seeking while also protecting their users from dangerous content.
Balancing safety and autonomy
Much of the recent debate about social media and mental health has focused on the presence of harmful images and messages that encourage self-harm, dangerous eating patterns or suicide. There is now a broad consensus that effective action to police such content is necessary to protect children and young people.
At the same time, children and young people are more interested in talking about their mental health than ever before. Indeed, this is one of the biggest social changes (for the better) in recent years. So while removing dangerous content is essential, we risk in the process taking away the opportunity for young people to help each other and to create spaces where they can talk openly and honestly about how they are feeling. And by regulating more ‘mainstream’ platforms, are we simply pushing young people to other spaces, free from the adult gaze, to carry on as before?
If we accept that young people will invariably seek out spaces of their own, a balance is going to need to be struck between rooting out harm and allowing autonomous expression and discussion. A lot of this may be uncomfortable and worrying for parents and policymakers, but seeking to over-regulate the ways young people communicate with each other will, in the end, be counter-productive and merely shift the problem to somewhere else.
Digital technology could provide important new opportunities to boost young people’s mental health. Social media companies could be encouraged both to prevent harm and promote wellbeing. They could enable self-help and mutual aid as well as providing reliable information and directing young people and parents alike to helpful resources.
Given their considerable expertise in communication and their technological know-how, social media companies could revolutionise the help that’s available online. They could provide teachers, health workers and others who work with families and children with the advice and information they need to understand and support young people in navigating the online world safely and positively. And they could be encouraged and supported in maintaining the balance between free expression and safety, a balance that will always be fraught with difficulty.