Australia’s teen social media ban takes effect – Expert Reaction

Under-16s across Australia will not be allowed on social media platforms, starting on Wednesday.

The European Parliament passed a similar resolution last month, and in New Zealand a private member’s bill was entered into the members’ ballot (the “biscuit tin”) in May.

The SMC asked NZ experts in law, youth mental health, and tech to comment.


Dr Cassandra Mudgway, Senior Lecturer Above the Bar, Faculty of Law, University of Canterbury, comments: 

“Australia’s under-16 social media ban is being introduced within a much more developed regulatory system than we have in Aotearoa New Zealand. The ban applies to major platforms like TikTok, Snapchat, Facebook, Reddit, and Instagram, but not to private messaging services such as WhatsApp or Facebook Messenger (even though Messenger now carries some feed-style content like status updates). That means the measure may reduce algorithm-driven harms, but it will not prevent cyberbullying, harassment, or image-based sexual abuse, which often occur through private messaging.

“Importantly, the ban is only one part of a wider package that has been rolled out over the past year or so. Australia is simultaneously strengthening rules against harmful content (including action against “nudifier” apps and websites), increasing eSafety enforcement powers, and investing in education for young people and parents. The ban sits on top of an existing digital safety infrastructure.”

On whether a ban would work in New Zealand

“A stand-alone ban in New Zealand cannot achieve what its proponents hope. We do not have minimum safety standards for platforms, a dedicated regulator with proactive oversight powers, or a modern enforcement system. Without this scaffolding, a ban risks pushing young people into less visible, less regulated spaces online, while creating a false sense of safety. We have already seen this dynamic play out in the UK with adult users: when age-verification requirements were introduced for pornographic sites, Pornhub complied and experienced a dramatic drop in traffic, while non-compliant (and far riskier) sites saw significant increases.

“There are also concerns regarding the human rights of children online. A ban is a blunt tool that restricts children’s rights to participate in public and cultural life (online) and freedom of expression. Moreover, enforcement mechanisms often require intrusive age-verification systems such as facial recognition or ID uploads.

“Instead of focusing on restricting access, New Zealand should prioritise regulating the platforms themselves: establishing legal duties of care, requiring transparency about algorithmic systems, and ensuring credible penalties for non-compliance.”

Conflict of interest statement: “No conflicts to declare.”


Dr Alex Beattie, Senior Lecturer in Media and Communication, Victoria University of Wellington, comments:

“The proposed age-based social media ban reflects a potent mix of genuine concern for children’s wellbeing, nostalgia for a pre-digital childhood, and a rising moral panic about shifting social norms. It’s also been fuelled by alarmist narratives, such as Jonathan Haidt’s book The Anxious Generation, which often oversimplify the complex and still-evolving evidence base around social media harm. Underpinning it all is a persistent misperception: that social media can be regulated like alcohol or tobacco, rather than as the infrastructural technology it has become.

“Few credible voices deny that social media can cause harm. The problem lies not in the technology itself, but in the business model that drives it: algorithmic targeting, addictive design, minimal content moderation, and relentless engagement metrics. These features degrade online discourse and shape young people’s digital experiences in troubling ways.

“But does a ban address this? Not meaningfully. It risks regulating young people more than the platforms themselves. It asks them to change how they socialise, rather than requiring platforms to create safer, more prosocial environments. And it ignores the reality that young people will seek connection through media technologies. If not on mainstream platforms, then where, and under what conditions?

“As someone who has researched digital disconnection for over a decade, I have learnt that unplugging people from social media is not as simple, or as harmless, as it sounds. There are opportunity and social costs to going offline. Social media (again, the technology, not the business model) is now part of the infrastructure of everyday life. It supports communication, identity formation, and community-building, especially for marginalised groups who may not find belonging elsewhere.

“We should be asking: What digital rights do children need? What literacies and communication skills will they miss out on if we remove them from these spaces? And who will be most affected by a ban?

“Banning social media for young people may feel like action, but it’s a surface fix for a deeper problem. The real issue isn’t kids; it’s platforms profiting from addictive design, poor moderation, and exploitative algorithms. A ban risks cutting young people off from vital social and cultural infrastructure, especially for marginalised communities. We should be talking about digital rights, not digital prohibition.”

Conflict of interest statement: “Dr Alex Beattie is a recipient of a Marsden Fast Start from the Royal Society of New Zealand in 2024 for a project called Saving Screen Time: How People with ADHD Disconnect from the Internet.”


Professor Alistair Knott, Centre for Data Science and AI, Victoria University of Wellington, comments:

How will the under-16 social media ban be implemented in Australia?

“Each company is responsible for implementing its own measures. In practice, that involves identifying the age of platform users, and identifying whether those users are located in Australia.

“We don’t know exactly what individual companies will do. But some useful information was provided by the Australian government’s Age Assurance Technology Trial, whose report was released in August. The trial aimed to check whether there are feasible methods for identifying users’ ages without compromising their privacy and security. The report identified three classes of method that can provide ‘age assurance’. The methods have different pros and cons; the report concluded that, by combining them, the ban on under-16s can feasibly be implemented (a sketch of how such a combination might look follows the list below). In brief:

  • Age verification methods identify a date of birth through secure access to identity records (e.g. a passport or driver’s licence) and government databases. These methods are already widely used beyond this particular case, and are the most accurate. They are also the most invasive, so should only be used as a last resort.
  • The least invasive method is age inference. This draws on the user’s behaviour on the platform in question (what they say, what they look at), and other accessible online information, to infer a likely age. Companies already do this routinely, and know quite accurately how old each user is. Age inference is reliable enough to identify most adult users who aren’t subject to the ban, and most adults won’t even be aware it has been applied.
  • An in-between method is age estimation. This is mostly done by asking the user for a photo (they may be asked to move their head, to show it’s a live person). Again, this gives a likely age range rather than something precise.
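
As a rough illustration of how these three classes of method might be combined, here is a minimal Python sketch of a tiered age-assurance flow that tries the least invasive method first and escalates only when a check is inconclusive. All function names, thresholds, and stubbed values below are illustrative assumptions, not details taken from the trial report or from any platform’s actual system.

    # Hypothetical tiered age-assurance flow: least invasive signal first,
    # escalating only when a method cannot give a confident answer.
    # Everything here is an illustrative assumption, not a real platform's logic.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AgeSignal:
        estimated_age: Optional[float]  # None if the method produced no estimate
        confidence: float               # 0.0-1.0: how much the method trusts itself

    def infer_age_from_behaviour(user_id: str) -> AgeSignal:
        """Placeholder for behavioural age inference (posts, viewing history)."""
        return AgeSignal(estimated_age=34.0, confidence=0.9)   # stubbed value

    def estimate_age_from_photo(user_id: str) -> AgeSignal:
        """Placeholder for facial age estimation with a liveness check."""
        return AgeSignal(estimated_age=15.0, confidence=0.7)   # stubbed value

    def verify_age_from_id(user_id: str) -> AgeSignal:
        """Placeholder for document-based verification (passport, licence)."""
        return AgeSignal(estimated_age=17.0, confidence=1.0)   # stubbed value

    MIN_AGE = 16.0
    CONFIDENCE_THRESHOLD = 0.85  # illustrative cut-off for accepting a signal

    def is_permitted(user_id: str) -> bool:
        """Escalate through methods until one is confident enough to decide."""
        for method in (infer_age_from_behaviour,   # least invasive: try first
                       estimate_age_from_photo,    # moderately invasive
                       verify_age_from_id):        # most invasive: last resort
            signal = method(user_id)
            if signal.estimated_age is not None and signal.confidence >= CONFIDENCE_THRESHOLD:
                return signal.estimated_age >= MIN_AGE
        return False  # no confident signal: this sketch fails closed

    print(is_permitted("example-user"))  # True: behavioural inference is confident here

The escalation order mirrors the pros and cons above: behavioural inference comes first because most users never notice it, and document-based verification comes last because it is the most invasive.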

“Platforms also have to identify which users are in Australia, because the ban only applies there. Location can be identified from a range of sources. These include geolocation from Wi-Fi networks or mobile phone numbers, or GPS information, if this is turned on. They also include information from settings in the app store, operating system, or user accounts. Finally, they include location tags on photos. Many users will try setting up VPNs to appear to be elsewhere in the world. Combining the methods above can counter much of this, but spoofing location will always remain a possibility.
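
As a rough illustration of how several location signals might be weighed against a VPN, here is a minimal Python sketch. The signal names, the simple majority vote, and the example values are assumptions made for illustration only, not any platform’s real logic.

    # Hypothetical combination of location signals via a majority vote.
    # Signal names and the decision rule are illustrative assumptions.

    from typing import Dict, Optional

    AUSTRALIA = "AU"

    def likely_in_australia(signals: Dict[str, Optional[str]]) -> bool:
        """Majority vote over the available location signals.

        `signals` maps a source name (e.g. "ip_geolocation", "sim_country",
        "app_store_region", "gps", "photo_geotags") to an ISO country code,
        or to None when that source is unavailable.
        """
        votes = [country for country in signals.values() if country is not None]
        if not votes:
            return False  # no usable signal at all: cannot place the user
        au_votes = sum(1 for country in votes if country == AUSTRALIA)
        return au_votes > len(votes) / 2

    # A VPN can disguise the IP-derived location, but other signals may disagree:
    user_signals = {
        "ip_geolocation": "US",   # VPN exit node makes the IP look American
        "sim_country": "AU",      # +61 mobile number
        "app_store_region": "AU",
        "gps": None,              # location services turned off
        "photo_geotags": "AU",
    }
    print(likely_in_australia(user_signals))  # True: non-IP signals outvote the VPN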

“An interesting reflection on the Age Assurance Technology Trial report: Australia has an active ecosystem of commercial companies that provide the necessary detection methods. As the report states, ‘we found a vibrant, creative and innovative age assurance service sector with both technologically advanced and deployed solutions and a pipeline of new technologies…We found private-sector investment and opportunities for growth within the age assurance services sector.’ Australia is positioning itself as a world leader here, with opportunities to export products to other countries that wish to follow its lead in this area of tech governance. New Zealand would do well to take note: there are commercial and export opportunities in tech regulation, just as there are in every other area of tech.”

Conflict of interest statement: “I used to co-lead the social media governance project at the Global Partnership on AI. And I helped to set up the Social Data Science Alliance. Both of those things are tech governance initiatives.”


Dr Samantha Marsh, Senior Research Fellow, General Practice & Primary Healthcare, University of Auckland, comments:

“Our recent research shows that social media use is almost universal among NZ teens, with 22% meeting criteria for problematic use, a pattern of use that mirrors behavioural addiction. Half felt they gained access too young, and 39% said they wished social media had never been invented. Support for a minimum age of 16 was strong among caregivers (77%) and teens (47%), with only 25% of teens opposed to the policy change. Young people also told us what would help when introducing an age restriction in NZ: safer platforms for younger teens, more offline opportunities, and clearer guidance for parents and schools to support teens through the transition.

“On the Australian policy change: I see this as an important first step. It won’t be perfect, but perfection has never been the standard for public-health action. We regulate other harmful commodities despite knowing compliance will not be 100%, because we know population-level protections reduce harm. We now have sufficient evidence that social media is associated with poorer mental, physical, and relational wellbeing, and this is happening at scale. These platforms have never been shown to be safe for our children, and continuing to give young people unrestricted access to largely unregulated, engagement-maximising systems is no longer tenable.

“Ultimately, this is about shifting social norms, so the healthy choice becomes the easy choice. Right now, the easiest option for many teens is hours alone in their bedrooms on social media.

“In NZ, young people themselves are calling for stronger protections. If we move toward an age restriction, and I believe that is where we are heading, then we need to do it well. That means ensuring the policy is as strong and effective as possible and pairing it with the right supports: parent education, clear communication to teens that avoids a backfire effect, safer regulated online spaces that encourage real-world connection, and school policies that actively shift norms around constant online engagement.

“An age restriction would buy us crucial time for adolescent brains to develop, time to teach young people how to use these platforms safely before they gain access, and time to introduce a more stepped approach. For example, limited daily access at 16 before full access is granted. And while some will argue that we should regulate platforms rather than people, I don’t see this as an either/or. Platform regulation is essential, but it will take years. Age protections are something we can implement now to reduce harm while the broader regulatory system catches up.”

Conflict of interest statement: “Samantha Marsh is a member of Before16, an advocacy group to protect NZ children from the harms of screens. She is the academic advisor on the Board for Smartphone Free Childhood NZ, and she provides parental education around the impact of screens on children and adolescents.”


Associate Professor Cara Swit, Faculty of Health, University of Canterbury, comments:

What does the research say about children/teens’ social media use?

“In Aotearoa, there is growing research, including projects I co-led, that shows social media use among rangatahi is far more complicated than “social media is good” or “social media is bad.” 

“In the recent project I co-led (with Jen Smith, Aaron Hāpuku, Helena Cook), rangatahi told us that social media is part of everyday life in really ordinary ways. It’s how they stay in touch with friends, follow their interests, keep up with schoolwork, and connect with culture, identity and community. For underserved rangatahi, including Māori, Pacific, Rainbow, and Deaf young people, social media spaces gave them access to peers who understand them. They found “people like me” online, even when that support wasn’t available offline.

“They talked about the pressure to be constantly available, the exhaustion of managing attention, and the emotional toll of constant comparison, racism, homophobia or body shaming. They described how platform design (algorithms, endless feeds) can “push” inappropriate content at them without warning. Many rangatahi said they wished they’d been a bit older before getting social media, not because they didn’t want it, but because they didn’t feel prepared for all that came with it. Importantly, many also said that adults don’t really understand what it’s like for young people now. They told us stories of being criticised for “too much screen time,” even while adults around them, including their parents and teachers, were themselves absorbed in phones. 

“We see similar patterns in other NZ survey research (e.g., Ngā taiohi matihiko o Aotearoa – New Zealand Kids Online, on children’s experiences of online risks and their perceptions of harm, and the PHCC briefing on problematic social media use among NZ teens), where teens say they rely on social media for friendship and support, but a sizeable group also report harms like bullying, unwanted content, sleep disruption and feeling like they use it more than they want to. It’s not a simple “social media is ruining kids” story; it’s a “this is where their social lives are, and sometimes it really helps them, and sometimes it really hurts them” story. Importantly, 22% of teens in the PHCC survey met the criteria for “problematic use”, meaning their use showed patterns that mirror addictive behaviours. Also, nearly half said social media disrupted daily activities such as homework, chores, family time or sleep. Many reflected negatively on early exposure: half said they got social media too young, and almost 4 in 10 said they wished social media had never been invented.

“From my own work with younger children (6–9 years old), we also found that children notice and are affected by adult device use. Many described feeling sad or frustrated when they felt overlooked, like when a parent picked up their phone during play, or when they tried to get attention but were competing with a screen. This suggests the issue is not only about teens’ social media use, it’s about the whole family’s digital environment and how media and devices shape attention, relationships, and emotional wellbeing from early childhood. 

“What this body of research highlights is the dual reality many rangatahi live: social media can offer connection, identity, support and belonging, especially for those who might feel marginalised offline. But it can also expose them to harm, anxiety, comparison, and emotional risk. 

“Given that complexity, any policy or public conversation about youth and social media needs to hold those two truths together. It’s not enough to only focus on harms; we also need to acknowledge what social media offers to young people and adults. Similarly, understanding that many young people are still growing and developing — socially, emotionally, cognitively — means that access, support, and design matter, not just prohibition. Most adults haven’t figured digital wellbeing out either. Yet we’re the ones making the rules.”

What are your thoughts on the Australian social media ban?

“I completely understand why governments are worried. There are real risks for young people (ALL people) online. But I’m cautious about blanket bans, because they don’t address the bigger digital environment young people are already living in. 

“One important point that’s often missing from the conversation is that young people spend large parts of their day online because schools require it. Their learning, their homework, their communication with teachers, almost everything is mediated through digital devices. That digital load impacts their wellbeing too: their sleep, their ability to concentrate, their stress levels. If we’re serious about supporting young people’s digital wellbeing, we have to look at the whole digital ecology they’re navigating, not just social media. 

“I’m also concerned that banning or restricting access is, in many ways, an abdication of our responsibility as adults. Prohibition doesn’t teach skills. It doesn’t teach critical thinking, emotional regulation, or how to identify risks and ask for help. Instead of focusing purely on the harms, we could flip the conversation and ask: What does healthy media use actually look like? What does it feel like for a young person? How can we build the skills, confidence, and relationships that help them navigate digital spaces well?

“Young people need opportunities to learn: 

  • how to regulate their use, 
  • how to recognise unsafe content or behaviour, 
  • who their safe adults are, and 
  • how to articulate discomfort or concern. 

“Those life skills can’t be gained if we only rely on bans. 

“We also need to be realistic: young people are incredibly skilled at getting around restrictions. We saw this with school phone bans. Young people had hidden devices, shared accounts, VPNs, and dozens of creative workarounds. Many told us they think adults are naïve for believing a simple rule will solve a complex problem. When policies don’t match their lived reality, it sends a message that we don’t understand their digital worlds and it further widens the understanding gap between adults and teens. 

“Another important point is that turning 16 doesn’t magically equip a young person to handle the pressures of social media. Developmentally, nothing special happens on your 16th birthday that suddenly makes you resilient to algorithms, comparison culture, or online harassment. If we’re going to restrict access until a certain age, we must also think carefully about: 

  • how we prepare young people for the transition onto social media, and 
  • how we support those who have already been on social media for years and are now being told to step back. 

“Right now, that transition plan is missing. 

“We also need to be mindful of equity. Some rangatahi, particularly underserved groups such as Māori, Pacific, Rainbow, and Deaf young people, rely on online spaces for connection and belonging. A blanket ban may disproportionately impact those with the fewest offline supports.

“And finally, age verification itself raises new risks. Strong verification systems often require biometric data, government ID, or large-scale data collection from young people and families. This creates privacy concerns that are not insignificant. This is not my area of expertise, so I can’t comment further on this potential risk. 

“My broader concern is that bans give us the appearance of action without the substance. They address symptoms, not causes. The real harms come from how platforms are designed (algorithmic targeting, addictive engagement loops, lack of safety-by-default) and from the pressures in young people’s offline lives, not just from the fact that a young person has a social media account.”

What are your thoughts on NZ’s moves in this area?

“If there is one idea I’d emphasise after years of talking with young people across Aotearoa for our research projects, it’s this: digital wellbeing is not something we create by controlling young people, it’s something we build by supporting them. Rangatahi don’t need us to remove all risk from their lives. They need us to walk alongside them as they learn to navigate it. That’s how every generation has learned to grow up, with guidance, not gatekeeping. 

“For that reason, if Aotearoa New Zealand chooses to move toward age limits or stricter verification, and realistically, I think it’s less an “if” and more a “when,” then what matters most is how we do it. We have a real opportunity to take a different path from Australia by putting rangatahi voice at the centre of the solution, rather than designing policies about them without them. 

“In every project I’ve been involved in, young people have been clear: 

  • They want support from adults, schools, and government. 
  • They want safer digital environments. 
  • But they want solutions that reflect their lived realities. 
  • They want to be part of the conversation, not the recipients of rules they had no say in. 

“If we keep implementing bans, restrictions, or surveillance measures without their buy-in, we risk repeating the same pattern young people keep telling us about: policies that don’t work because they don’t understand their world. And when young people feel misunderstood or excluded, the divide between them and adults only widens, reducing trust and making it harder for them to come to us when something goes wrong online. 

“The bigger picture is that many of today’s young people will live well past the year 2100. Long after we’re gone, they will be the ones navigating the digital world we are shaping right now. The decisions we make today about design, safety, access, and education will echo through their lifetimes. Technology will keep changing; our responsibility doesn’t. Our job is to shape that world with young people, not for them. So if New Zealand goes down this path, the question shouldn’t be “Should there be an age limit?” 

“The question should be: 

““How do we build a system with rangatahi that genuinely prepares them for life online?” That means:

  • involving young people in policy design; 
  • resourcing educators and whānau to teach digital capability, not just impose restrictions; 
  • requiring platforms to do their part (safety-by-design, algorithmic transparency, default protections); and 
  • recognising that digital wellbeing is a shared responsibility across families, schools, and government.”

Conflict of interest statement: “No conflicts.”


Expert comments from the AusSMC are available on Scimex.