Thirty years ago, in 1989, the UN Convention on the Rights of the Child (UNCRC) cast a global spotlight on what societies should do to make rights a reality for all children. The Convention clarifies that human rights (to life and liberty, identity, freedom of expression and assembly, protection, non-discrimination, privacy, education and more) apply to children. It also emphasizes specific child rights – to development to their fullest potential, to play, to support according to their evolving capacity and best interests and to be heard by decision-makers in matters that affect them.
However, just as the vision and task of implementing the UNCRC mobilized child welfare organizations and rights advocates around the world, much else was changing too; 1989 was an eventful year, and among other things, it saw the invention of the World Wide Web, radically reconfiguring the conditions of children’s lives (Livingstone and Bulger 2014).
Initial enthusiasm about the World Wide Web – information at our fingertips, everyone connected, unlimited opportunities for expression – seemed to include and even celebrate children as so-called ‘digital natives’.Footnote 1 But increasingly, these two developments – an authoritative assertion of children’s rights, and children’s participation in what has fast become a digital world – seem set on a collision course. This is salient in the array of online risks of harm to children – commercial exploitation, cyberbullying, exposure to extreme pornography, hate or self-harm materials, and the image sharing and live streaming of child sexual abuse (Livingstone 2019a; UNICEF 2017).
Also problematic, though attracting less attention, are the missed opportunities for children to learn, create and participate in an increasingly digital world. For even if children have access to the internet, this does not automatically translate into gaining the benefits of the digital world. Digital literacy, imaginative content, parental support and regulation to manage the risks are all vital (Livingstone et al. 2019). Research shows that both minimizing the risk of harm and maximizing opportunities to benefit depend on school provision (digital literacy curricula and teacher training), on public funding (if positive content is to be provided without heavy marketing, undue risk or data exploitation), on confident (rather than fearful, and thus overly restrictive) parents, and on effective regulation by states. The absence of these conditions results in barriers to meaningful digital inclusion, leading to inequalities in both children’s online opportunities and risks (UNICEF 2017).
Why should the rights of children in a digital world merit particular attention? Obvious arguments are, first, that children are especially vulnerable and, second, that children have the most at stake in a digital future; as is often said, ‘they are the future’. Both arguments help to draw policy makers’ attention. However, a focus on children’s vulnerability risks underplaying their agency and voice, and it re-inscribes the myth of adult invulnerability, a myth that is peculiarly potent in relation to the internet (even though the majority of internet users can, in some ways, be considered ‘vulnerable’ – consider the treatment of women, ethnic, sexual or religious minorities, elderly people and those with mental health difficulties).Footnote 2 Meanwhile, talk of children as ‘the future’ seems more positive and rights respecting, but, in practice, this argument allows politicians to concentrate on winning the next election and to postpone giving attention to children, some fear indefinitely.
A child rights approach starts instead from the positive assertion that children are rights holders here and now. This view underpins the Convention, itself ratified by every country in the world bar the USA. In 2014, the UN Committee on the Rights of the Child clarified that the full range of child rights apply online as they do offline (OHCHR 2014). However, it is turning out to be a considerable challenge to realize child rights in the context of a fast-innovating, highly commercialized and globalized digital environment. One reason is that, although children are fully one in three of the world’s internet users (Livingstone et al. 2015), both individually and collectively they are invisible in the digital environment.
Invisible Children
Imagine a child trying to enter a sex shop. The shop assistant easily identifies them as a child – and has a trusted and legally valid procedure for checking age if in doubt. The town planner licensing the shop may have rules not to situate it next to a school. Passers-by are likely to notice who goes in and may even intervene if they see a child entering, perhaps contacting their parents or reporting the shop to the authorities. Online, this is difficult if not impossible. Children routinely occupy digital spaces in which both general and specifically ‘adult’ activities take place – sex, gambling, hate, aggression, self-harm, sale of inappropriate products. These spaces are often household names – Google, Instagram, Amazon, Flickr, eBay, Twitter, etc. But the platforms claim that they cannot tell who is a child (since many users are anonymous online or disguise their identity) and, thus, cannot treat them according to their evolving capacity or best interests. It is unclear to what extent (or under which conditions) this claim is itself valid rather than commercially-motivated, though clearly the potential for digital identification is growing.
A series of high-profile scandals concerning suicides, hacked elections, sensitive data breaches, controversies over facial recognition, fears of discriminatory artificial intelligence and more has made it evident that the state is struggling to regulate the platforms – which are often extremely powerful transnational organizations. When it comes to children, the challenge is compounded because no effective way has yet been found to focus regulatory actions just on children – for they are seemingly invisible online. In practice, it seems, platforms must either treat all users (including children) as if they are adults (the current norm) or they must treat everyone as if they were children (a series of failed regulatory efforts testifies to the problems with this approach, the fear being that content regulation that protects children will be used to censor adults; Livingstone 2011). While this regulatory conundrum is taxing many clever minds (most recently resulting in the UK in the promising introduction of a legally-binding Age-Appropriate Design Code by the Information Commissioner’s Office), also worrying are the changing social norms that lead some to shrug their shoulders at a supposedly lost cause – the genie is out of the bottle, they say.
To pursue the analogy, adult passers-by in the digital environment find themselves bystanders to vast amounts of behaviour which, accepted civilized standards dictate, is not appropriate for children. But they, too, cannot identify a child online (let alone his or her parent) and they may be afraid to get involved. So they begin to take their own inaction for granted and to consider the incivility of the online environment ‘the new normal’.
Now imagine a child worried about putting on weight. They may be bullied at school, they may lose confidence, and they may even begin to self-harm. This is not a new problem, but as a society, we try to hold teachers accountable for what happens at school, we regulate the media messages that reach children via television, cinema or the press, and if things go awry, we invest in mental health clinics and other support services. Again, online things are different. The problem is not only that children are left to their own devices in a space conceived of as for adults, and tough ones at that, but also that the digital environment is designed to amplify certain actions in accordance not with the best interests of the child but with commercial interests.
So, online networks enable the permanent recording and widespread sharing of online behaviour – including taunting and aggression – among anonymous audiences. Algorithms are optimized to recommend ever more extreme contents – whether stereotyped visions of perfect faces and perfect lives or, if you are deemed to be interested, images of self-harm, violence or hate. Online, too, social norms lag behind technological developments. Mental health clinics rarely ask their child clients whether or how their difficulties manifest online. Teachers are overwhelmed by society’s expectation that they should deal with online risks along with everything else. Parents are becoming aware that they don’t know much about their child’s online life, but they don’t know how to find out more.
States Need Guidance
In this article my focus is not on whether children’s access to pornography is harmful, nor on whether bullying is on the rise or children’s mental health is getting worse.Footnote 3 Rather, my interest is in the fact that the very prominence of these issues on the public and policy agenda makes it clear that society is at a loss as to how to act. To promote the realization of children’s rights online and to prevent their violation, companies point to parents, parents point to government, and government points to companies to take responsibility. The result is a lot of multi-stakeholder discussion, a lot of hand wringing, a series of politically motivated efforts to claim ‘quick wins’ and yet few effective initiatives to show for it all.
Meanwhile, public trust in platforms is plummeting, as is public trust in states to bring them into line (LSE Truth, Trust and Technology Commission 2018). It’s a moot point rarely tested in the courts whether regulation carries any weight with Facebook, Google and the rest. Nor is it agreed what regulation is needed. Online behaviour is protected as ‘speech’, platforms have ‘intermediary liability’ rather than any duty of care, and many human rights advocates warn that even to talk of children’s vulnerability online is to build a Trojan horse for efforts to usher in a surveillant state which will undermine the rights not just of children but of everyone. Leave it to the parents, they say, and keep the state out of it.
In response to the rising tide of anxiety and distrust replacing once-exciting predictions of a digital future, it is commonly retorted that there should be no problem because children don’t distinguish any more between offline and online. Therefore, the rights of the child (and the laws and institutions which underpin them) apply in the digital world just as they did in the pre-digital world; so we do not need new ones. Both these statements are true up to a point, and they help contradict any lingering assumptions that the digital is only ‘virtual’ and thus immaterial in its consequences. But it would be better to say that there are many different and distinct ways in which the online and offline are becoming entwined.
We should, precisely, investigate rather than underestimate the emerging and complex interdependencies among social, regulatory and institutional practices on the one hand, and digital technologies (including platforms, networks, services and contents) on the other (Plantin and Punathambekar 2018). After all, the digital – itself not easy to define since innovation is continual – converges not simply the online and offline realms but also the once-distinct phenomena of mass media and computing and information systems, resulting in a mix of networked media, user-generated content, smart devices and environments, data analytics, artificial intelligence, virtual reality and more (Lievrouw and Livingstone 2009).
Imagine, now, children playing in the street, as they have always done. Even though today’s risk-averse culture makes even this difficult, public play still exists. What’s striking is that such play includes children of mixed ages, the older ones looking out for the younger ones – or putting them at risk. Adults drive by slowly or moderate their language when they see children playing, without feeling their rights infringed. Or they don’t, and then neighbours or passers-by might intervene, drawing the children inside or chastising the badly behaved adult. Online, as I have already said, no one knows who is a child in, say, a multiplayer game, and the norms may or may not facilitate civility. As I’ve further argued, my point is not that multiplayer games are problematic but, rather, that society neither knows whether there’s a problem nor how to intervene if there is – no friendly (or unfriendly) copper can be hailed, and even if parents look over their child’s shoulder they cannot grasp what’s going on.
From Invisibility to Hypervisibility
But now there’s another problem. That multiplayer game is likely owned by a multinational corporation which does not make itself accountable to any particular local authority, and which provides no access to information about who is playing, or what happens to the children it hosts, or what action is taken when a problem occurs. If society does judge that the game play is a problem – perhaps on the basis of what children say, or from research evidence – how is it to provide an alternative form of play? The old model of play was, in effect, free. But to replace today’s model will take new public funds – to improve road safety, build play streets and parks, train youth workers, and more. And where are such funds to come from?
Online, of course, there’s no such thing as ‘free’. Unless (or even when) the play is paid for by user subscription, the user now pays with their personal data. Shoshana Zuboff likens today’s datafication of our lives – the monetization of our play, actions, emotions and interactions – to the late medieval enclosure of the commons, the privatization of what once belonged to everyone, the sacrifice of the public good to individual gain (Zuboff 2019). We are moving from a period of online invisibility to one of hypervisibility: from a longer past in which children’s lives were largely unobserved by outsiders, because they were lived in private spaces and ignored when in public ones (Cunningham 2006), to a world in which their every move is observed, recorded, tracked, profiled, targeted and nudged – by powerful digital actors (corporate and state, national and transnational, human and artificial) (Lupton and Williamson 2017).
It seems that the problem of not knowing who is a child online is about to be displaced by a yet more challenging problem – the emergence of a digital panopticon, an all-seeing, all-knowing digital environment which knows exactly who is a child, how they live and what they want. Calls to identify children online in order to empower and protect them are giving way to calls from privacy advocates precisely not to identify them, for technological solutions seem likely to be more privacy invasive than the problems they’re designed to solve. Meanwhile, traditional ‘offline’ solutions for children’s problems are also made more difficult, since although children are hypervisible to companies, their play, friendships and problems are newly invisible to anyone actively concerned for their welfare – think of social workers, clinicians, law enforcement and, obviously, parents.
Under conditions of hypervisibility, scholars across many fields are turning their attention to questions of privacy – not only from interpersonal threats, the typical privacy concern when it comes to children, but privacy from institutions (notably the state) and from the private sector. Privacy, it is increasingly recognized, is not only a right in itself but also the vital means by which other rights are realized. Searching for information, communicating with others, building networks and communities – all of these are in jeopardy under conditions of constant surveillance. So, too, is the freedom from persuasion and manipulation – the vital autonomy to make one’s own decisions.
In a recent research project on children’s online data and privacy, we explained to teenagers that, among other things, their search history is retained, shared and monetized. Their response was outrage. Talking of the companies, they cried ‘it’s none of their business!’ (Stoilova et al. 2019). But of course, that’s exactly what it is: their business. Moreover, as we share devices, and as we participate in mutual online networks, children’s data are mixed with those of others – think, for example, of smart (or not so smart) home technologies.
Can UNCRC Article 16 provide sufficient protection for children in an age when everything is recorded?Footnote 4 Can the child’s status as a rights holder be respected when they grow up in a hypervisible ‘smart’ (or not so smart) world? Privacy and autonomy are newly under threat in ‘surveillance capitalism’ (Zuboff 2019). How can we resolve the questions of intergenerational justice that arise when decisions made about children’s data today, with their interests in mind or not, may have consequences far into the future?
The Unfolding Policy Agenda
In the absence, as yet, of a mature, nuanced and trusted regulatory settlement (O’Neill et al. 2013), policymakers are facing some seemingly stark choices.
Should they enable children’s participation in the digital world, along with everyone else? Or should they try to minimize risks by restricting them to child-only or even offline-only spaces?
Should they pay from the public purse for online provision of content and services beneficial to children (as they do offline – think of budgets for parks, schools, libraries, youth clubs, public service media)? Or should they accept the commercialization of children’s lives as inevitable?
Should they insist on the identification of children (and/or adults) online so as to protect them better? Or is this too privacy invasive and risky? This question partly depends on the state of age verification technologies and data protection regulation, both in flux. It also depends on public trust in state and big tech – currently declining, and with good cause.
Should they hold parents responsible for their child’s online well-being? Research is clear that many parents are unequal to the task. More important, those least able to bear the burden are precisely those whose children are most at risk. Moreover, parents’ protective efforts often come at the cost of the child’s privacy and freedom. While policymakers deliberate, the market in child surveillance technologies is burgeoning, fanned by the popular media’s panic over child online safety and rising parental anxiety.
Should they hold industry responsible for the well-being of children who use their services? This is an interesting question, for companies have the money to do a lot more than they do now, and they have the reach – for, necessarily, they already reach all of their users at exactly the moment when those users may need support. But should we trust industry with, say, our children’s digital literacy education? Or with children’s safety?
To advance beyond these over-simple choices, research on the digital world must include and address the circumstances of children’s lives, to guide evidence-based policy and practice. Crucially, it is unknown whether the risk of harm to children has actually increased or, instead, just become more visible because today it has a digital dimension. Nor is it known whether (or when or which) children can sufficiently ignore, evade or resist the persuasive manipulations of big business; but we should find out (Kidron et al. 2018; Norwegian Consumer Council 2018). It is vital that research does not become embroiled in the latest media panics or embrace technological determinism. After all, children still love to kick a ball around the park or hang out with their friends. Still, too, most harm to children is caused by poverty, discrimination and abuse rooted in the offline world, although undoubtedly this can be amplified by digital networks. So while the digital environment is on course to reconfigure many dimensions of children’s lives, it is not (yet) the main factor explaining either their opportunities to benefit or the risks of harm they face.
However, the binary nature of these dilemmas regarding policy and practice related to the digital environment surely points to an immature context for realizing children’s rights. So far, I have argued that, faced with technological innovation, platform power and complex transnational systems, states are struggling to ensure that the digital world is law abiding and rights respecting – for the general public and for children in particular. While states and corporations struggle to balance public and private interests in a digital world, young people are remixing digital and non-digital practices, often ‘under the radar’, as they negotiate workarounds to the often-limited opportunities afforded them. The digital is increasingly both embedded in and constitutive of the infrastructure of modern life, on all scales from local to global. Moreover, the digital world shifts in sometimes unpredictable ways in response to the macro and micro processes it sets in motion. In recent years the network society has become truly global, with most future growth to come in low- and middle-income countries, and with digital developments in any one part of the world affecting the rest (Livingstone et al. 2015).
Thirty years is not many in which to come to terms with the digital revolution, especially as the pace of innovation is hardly slowing. In the offline world, societies have spent decades, centuries even, evolving a mix of design, regulation and social norms. This mix has resulted in a balance of provision and protection which is broadly accepted within each society, even though it varies across them and over time, especially as regards the degree of participation allowed to children. We have only just begun that process in relation to the digital environment (Lievens et al. 2018), and it is not possible to predict where it will take us.
Interestingly, many of these so-called online challenges have their roots in the offline world. In other words, children’s experiences in the digital environment are making it abundantly clear that, in a host of ways, the established settlement was a poor one. Children’s rights have not been met throughout history but now, with the internet, that failure is suddenly visible, hypervisible even.
We’ve ignored the problem of school bullying for years, but now that cyberbullying attracts popular attention, we’re ready to blame technology for peer-to-peer aggression.
We’ve been too embarrassed to address teenagers’ need for honest and explicit sexual information for years, but now that their search exposes them to online pornography, we finally think something should be done.
I would not blame today’s teen mental health crisis solely on the internet, but clearly now that children’s every cry for help is recorded and spread, we must recognize not only the internet’s role in making their misery visible but also suffering’s deeper societal roots.
Perhaps, then, this new visibility can finally bring some attention where it’s needed. As Brighenti (2007, 324) observes, ‘visibility lies at the intersection of the two domains of aesthetics (relations of perception) and politics (relations of power)’ and, significantly, ‘visibility is a double-edged sword: it can be empowering as well as dis-empowering’ (Brighenti 2007, 335, emphasis in original).
Child Rights in the Digital Environment
Looking back over the past 30 years, it is striking how child rights and wellbeing experts have sidestepped the digital, tempted to see it as separable from the realities of children’s lives and low down the pecking order of problems to be addressed, or solutions to be embraced. For 30 years, too, digital providers, designers and internet governance experts have found it convenient to be age-blind, developing their services and policies for the so-called general public and postponing attention to the specific needs and rights of children as problematic, expensive or marginal.
One reason for this troubling lack of mutual understanding and cooperation is that even the nature of the problem is eluding both child rights experts and industry players – hence my effort here to clarify the problem as I see it. But now we need these two sides to join forces. In 2014, the UN Committee on the Rights of the Child held a Day of General Discussion to bring these groups together. They urged that:
States should recognize the importance of access to, and use of, digital media and ICTs for children and their potential to promote all children’s rights, in particular the rights to freedom of expression, access to appropriate information, participation, education, as well as rest, leisure, play, recreational activities, cultural life and the arts. (OHCHR 2014, para. 85)
In many years of sitting in multi-stakeholder meetings – so often a dialogue of the deaf – I have been more impressed by the commitment and actions of the child rights activists than the technologists, for somehow institutional or commercial interests always seem to intervene, explicitly or covertly trumping the interests of children. However, I note that my call for a child rights approach to the digital environment will not only struggle with the specificities of the digital, but that it will also face some difficulties regarding child rights. Human rights instruments are addressed to states, but it is often states that are, precisely, egregious in their violation of rights (Livingstone 2019b). Human rights frameworks are, arguably, too individualistic and so unable to address collective rights and public values; too middle class and so blind to socio-economic inequality and class struggle (Moyn 2018); and too Western in their assertion of ‘universal rights’ to contend with the politics of post-colonialism (Hanson 2014).
Yet those same instruments are often inspiring in their vision, valued for their authoritative appeal to states, and they come closer than any alternative to achieving a moral consensus for action. For this reason, the Council of Europe developed its Recommendation CM/Rec (2018)7 to Member States on Guidelines to respect, protect and fulfil the rights of the child in the digital environment. On a global basis, in prospect is the UN Committee on the Rights of the Child’s production of a General Comment on the UNCRC focused on the digital environment.Footnote 5 This will set out the obligations of states and responsibilities of stakeholders (including business enterprises, welfare and educational bodies, law enforcement and justice, parents and children) in relation to children’s rights and the digital environment. More practically, the International Telecommunication Union, working with UNICEF, has developed a range of practical guidelines and child rights assessment tools for use by different stakeholder groups – including industry, given the UN’s guiding principles on business and human rights (OHCHR 2011).Footnote 6
The language of rights is also embraced by children. When the Committee on the Rights of the Child held a Day of General Discussion in 2014, presaging the development of a General Comment on the digital environment, it conducted a children’s consultation, recognizing Article 12, the child’s right to be heard. What the children said reminds us of their delight and sense of agency in a digital environment where they can pursue their interests, obtain needed health expertise, stay in touch with far-flung relatives or protest against a local or global injustice. Children further explained that they see internet access as ‘a right’ for it mediates their rights to information, expression and participation; and that they are ready to share in the responsibility of managing it, according to their evolving capacity (Third et al. 2014).
What Can Be Done?
In this final section, I suggest some actions particularly appropriate to States since, according to the UNCRC, they have the obligation to realize the rights of children, including by holding other stakeholders – especially business enterprises – to their responsibilities. UNICEF’s Child Friendly Cities Initiative already reflects an evolved mix of design, regulation and social norms, which is proving helpful in realizing children’s rights in those cities and communities that have committed to the initiative (Thivant 2018). This offers a valuable source of inspiration, for many practical actions developed for the physical environment can now be extended to the digital environment. I note, as a further preamble, that states are investing hugely in digital technology to compete economically, so one can hardly say there are no resources. The list that follows is not all that should be done, but these are practical next steps.
(1) Build attention to the digital into all state provision for children, and mainstream child rights considerations in a coordinated manner across government ministries.
(2) Ensure that state actions regarding the digital environment are underpinned by the meaningful participation of children, in all cases where the consequences affect them.
(3) Require due diligence and apply international legal standards in all actions relating to child rights, including through the use of child rights impact assessments before digital innovations are developed, to inform their design and deployment.
(4) Train the children’s workforce (teachers, clinicians, social workers, health visitors, etc.) regarding digital risks and opportunities, including up-to-date strategies for their management.
(5) Keep children’s formal education free from commercial interests, and ensure they can access free and unmonitored spaces for play, autonomous action and development.
(6) Invest in education to teach children and parents/caregivers the critical knowledge and skills they need to operate as agents and rights holders in relation to the digital environment.
(7) Empower young people to take responsibility where they can by training and resourcing young ambassadors and peer mentors to support and help others in digital spaces.
(8) Apply existing child welfare and protection rules to actors operating in digital environments – including providers of games, social media, educational technology, health organizations, etc.
(9) Ensure high standards of data protection are enforced in relation to the digital environment, and scrutinize public and third sector data-sharing partnerships with commercial actors, prioritizing children’s best interests.
(10) Apply laws against discrimination to organizations that use algorithmic decision-making (workplaces, universities, insurance companies, law enforcement etc.) to eliminate bias and ensure accountability and redress.
(11) Embed the principle and practice of child rights assessment, personal data minimization, and child-friendly mechanisms of justice and redress in all actions and institutions supported by public funding.
(12) Ensure that state actions to protect children online are adequately funded, and that, in protecting children, they make every effort not to violate children’s other rights, in particular the rights to freedom of expression, information and participation.
(13) Collect disaggregated data on children’s digital experiences to track inequalities and exclusion, target resources where most needed, and hold industry to account for the consequences and costs of its actions.
(14) Establish independent monitoring mechanisms to evaluate the effectiveness of child rights implementation, share good practice and plan for change.
Research shows that experiences of online risks and opportunities are interlinked, but it is mainly the risk of harm, and the need for child online protection, that arouses much interest from policy makers (O’Neill et al. 2013). However, in determining what actions should be taken, it is vital to consider the full range of child rights. While I am at times hopeful but more often sceptical of efforts to create dedicated legislation to protect children in the digital environment, I see more benefit in learning from a child rights approach, extending it now to the digital environment for the benefit of children, and perhaps of adults too. In this article, I have argued that, although the necessary social norms and effective regulation for child rights in relation to the digital environment will take years to evolve, much can usefully be done now. And to guide us, we can learn from policy and practice regarding the realization of child rights more generally.
Acknowledgement
An earlier summary version of this article was published by The British Academy.
About the Author
Sonia Livingstone DPhil (Oxon), FBA, FBPS, FAcSS, FRSA, OBE is a professor in the Department of Media and Communications at the London School of Economics and Political Science. She has published 20 books including The Class: Living and Learning in the Digital Age. She directs the projects ‘Children’s Data and Privacy Online’, ‘Global Kids Online’ (with UNICEF) and ‘Parenting for a Digital Future’, and she is Deputy Director of the UKRI-funded ‘Nurture Network’. Since founding the 33-country EU Kids Online network, Sonia has advised the UK government, European Commission, European Parliament, Council of Europe, OECD and UNICEF. See www.sonialivingstone.net. Sonia Livingstone is a Member of Academia Europaea.