Every night, between 9 pm and 5 am, up to a quarter of children aged 8-14 are still active on Snapchat, TikTok, YouTube, and WhatsApp. Most of their parents think they’re asleep. Meanwhile, online grooming crimes have hit a record high of 7,263 offences across the UK – almost entirely on the same platforms your child uses every day. The ongoing trends send a clear message: child safety cannot be ignored, especially in online spaces.
The risks your child faces online are bigger than most parents realise, and a small number of specific actions dramatically reduce their exposure. This guide covers what the data actually shows, what the genuine threats are, and what you can do about them. It’s part of our broader UK safety guide, which covers digital and physical safety across everyday life in the UK.
What the Data Actually Shows
Ofcom’s 2025 Online Nation report – based on passive device monitoring, not self-reporting – found that children aged 8-14 spend an average of nearly three hours online every day, rising to four hours for 13-14-year-olds. It starts earlier than most parents realise: roughly one in five UK children aged 3-5 already own a smartphone, and the Centre for Social Justice estimates that 814,000 children aged 3-5 are already on social media – platforms built to maximise adult engagement, now running their algorithms on children who haven’t yet learned to read.
For 8-14-year-olds, YouTube and Snapchat dominate – around 48 and 45 minutes of daily screen time, respectively, together accounting for roughly half of total time online. That late-night figure deserves to land properly: 15-24% of children’s time on the main platforms happens between 9 pm and 5 am. Two-thirds of 8-14-year-olds used a device between 11 pm and 5 am at least once in a four-week period. That’s not occasional. That’s structural.
The Real Risks: What Parents Need to Know
Screen time debates tend to dominate the conversation, but they’re often a distraction from the threats that are harder to see. The risks worth understanding aren’t about how many hours your child spends online; they’re about what’s happening during that time, and who they might be talking to.
Online Grooming: A Record High
The NSPCC’s latest figures show 7,263 Sexual Communication with a Child offences recorded across the UK last year – a number that has nearly doubled since the offence was first introduced in 2017. The youngest recorded victim was four years old.
Of the offences where the platform was identified, 40% took place on Snapchat. WhatsApp, Facebook, and Instagram each accounted for around 9%. Perpetrators deliberately target apps with disappearing messages and private messaging – features that make detection harder for both parents and platforms.
The most important thing to understand is how grooming actually works. It doesn’t start with explicit requests. It starts with compliments, shared interests, and private conversations that feel like genuine friendship – the same psychological tactics used in social engineering and online manipulation more broadly. By the time anything inappropriate happens, a child has often already been coached to keep the relationship secret and to distrust the adults in their life. A child who believes they have a special online connection with someone is frequently the one most at risk.

Harmful Content and Cyberbullying
Ofcom found that seven in ten children aged 11-17 had encountered harmful content online in the previous four weeks – from self-harm and eating disorder content to misogynistic and violent material. The recommendation algorithms that keep children scrolling do not distinguish between content that helps and content that harms.
Cyberbullying is persistent in a way that playground bullying rarely is. According to the Anti-Bullying Alliance, around one in five children aged 10-15 in England and Wales has experienced online bullying. Unlike a difficult day at school, it follows children home – into their bedroom, onto their phone, through notifications at 2 am. For a child already struggling, the removal of any safe space is what makes it so damaging.
AI Chatbots: The Risk Most Parents Haven’t Caught Up With
In 2025, ChatGPT received 1.8 billion UK visits – up from 368 million the year before. Half of all 8-17-year-olds now say they’ve used AI tools. The risks are different from social media but equally real: children can form emotional attachments to chatbots that feel endlessly patient and non-judgmental; they can receive advice on mental health, relationships, or risky situations from a system with no duty of care; and their personal data can be processed without appropriate safeguards.
This risk is evolving fast. AI is increasingly being used to create synthetic personas – fake identities that can hold convincing, emotionally responsive conversations at scale. The same technology that powers helpful chatbots can be used to build relationships with children that feel completely real. It’s grooming with automation behind it, and most parents have no idea it exists yet.
In February 2026, the UK data watchdog fined Reddit £14.47 million for processing children’s personal data without a lawful basis – leaving them exposed to inappropriate content with no meaningful age checks in place. The government’s March 2026 national consultation is explicitly considering regulations for AI services accessed by children.
This also connects directly to device security. A child’s phone isn’t just a communication tool – it’s a gateway to their accounts, location, and personal data. The same risks that make a stolen phone dangerous for adults apply to children too: an unlocked device in the wrong hands gives away everything.
Steps to Tackle AI Risks
Three specific things to do about AI and device risk right now:
- Ask your child which AI tools they use and for what. ChatGPT, Snapchat’s My AI, and Character.AI are the most common among under-18s.
- Check whether each service has a minimum age and how it’s actually enforced – most rely on self-declaration, which children routinely bypass.
- Talk explicitly about the difference between AI responses and human advice, particularly around health, relationships, and safety.
What You Can Actually Do
Most online child safety advice amounts to the same short list: set parental controls, talk to your kids, no phones at night. That advice isn’t wrong – but it skips the harder question of why it works and how to actually do it. Here’s what the evidence points to.
Make Your Reaction Safe First
This matters more than any control setting or monitoring tool. Children who are being groomed, bullied, or exposed to harmful content most often don’t tell a parent – not because they don’t want help, but because they’re afraid of the reaction: devices confiscated, interrogation, panic. The most protective thing you can build is a household where coming to you feels like the obvious move rather than a risk.
Ask about their online life the way you’d ask about school. What are they watching? Who do they talk to on Snapchat? What’s everyone doing right now? Make it normal to talk before something goes wrong, so that when it does, you’re not the last to know.
Set Up Parental Controls – Knowing Their Limits
Controls are a meaningful layer of protection, but they’re not comprehensive. Children find workarounds, platforms update their settings, and no filter catches everything. Use them alongside conversation, not instead of it.
Controls can be applied at four levels:
- Home broadband – filters content across all devices on your network
- Mobile networks – manages access when your child is on mobile data away from home
- Individual devices – restricts apps, in-app purchases, and screen time on phones, tablets, and consoles
- Within apps – platforms like YouTube and TikTok have their own restricted modes, though these require regular checking as settings change
Net Aware, the NSPCC and O2’s joint resource, provides parent-and-child written reviews of the most popular apps and games, including specific privacy settings and age-appropriate guidance. Check it before you allow access to a new platform.
Learn the Platforms Before You Try to Manage Them
You can’t have an informed conversation about a platform you’ve never opened. Spend twenty minutes on TikTok. Find out how Snapchat’s location-sharing works – and whether your child’s contacts can see where they are in real time. Understand what a “streak” is and the social pressure children feel to maintain them daily. You don’t need fluency. You need enough working knowledge to spot when something seems unusual and ask a specific question rather than a vague one.

Remove Phones from Bedrooms at Night
Of everything in this guide, this single change has the most impact per unit of effort. It addresses late-night platform use, disrupted sleep, and the hours of greatest vulnerability – all at once. Charge all devices in a central location overnight. Children will resist it. Parents who implement it consistently say it’s the most effective thing they’ve done.
The Law Is Moving – But Not Fast Enough
The Online Safety Act, whose codes of practice came into force in July 2025, now legally requires platforms to protect children. Ofcom can fine companies up to 10% of global turnover for failures. Age verification is mandated. The Reddit fine shows the regulator is prepared to use its powers.
But legislation works on a timescale of years. The March 2026 consultation may produce a minimum social media age – but even if it does, determined children will find workarounds. The law creates accountability for platforms. It cannot replace what parents do at home.
If You’re Worried Right Now
If you think your child has been contacted by a groomer: Contact the police (101, or 999 if there’s an immediate risk) and report to CEOP at ceop.police.uk. Do not delete messages – they are evidence.
If your child is being cyberbullied: Screenshot and save everything. Report through the platform’s own tools. Contact the school if classmates are involved. Childline (0800 1111) is available 24/7 for children who want to talk to someone outside the family.
If you’re concerned about harmful content: The NSPCC helpline (0808 800 5000) offers free advice for parents. If the content involves child sexual abuse material, report it to the Internet Watch Foundation immediately.
If you’re unsure whether something is normal: Talk to your child’s school. Most have a designated safeguarding lead who deals with exactly these questions.

The One Thing That Changes Everything
Every piece of data in this guide points to the same conclusion: the children most at risk online are the ones whose parents feel least equipped to talk about it. The grooming statistics, the late-night platform use, the AI personas – none of it happens in a vacuum. It happens most easily in the space between what a child encounters and what a parent knows to ask about.
You don’t need to be a digital expert. You need to be present, curious, and harder to embarrass than your child expects. Ask the questions that feel awkward. Check the apps you don’t recognise. Sit with them on TikTok for twenty minutes without commenting. The relationship you build doing that is the actual safeguard – the one no algorithm, regulation, or parental control can replicate.
The law is moving in the right direction. But your child is online tonight.
For the broader safety picture, from phone theft and street crime to scams, our UK safety guide covers it all with the same data-led approach. And if you want to understand the mechanics of online threats – phishing, social engineering, digital manipulation – from the ground up, Get Licensed’s Cyber Security eLearning course is CPD and IIRSM approved, built for non-technical learners, and covers exactly the kind of threat awareness that helps you have more informed conversations at home.
Knowing what to look for is always better than finding out too late.