Two major internet news stories this week brought Janet Jackson’s 1986 anthem “Control” to mind.
The timeless classic details the desire to take charge of one’s own life, to call one’s own shots, and to decide how to navigate a complex world. Janet sings about breaking free from outside influence and finding autonomy, a message that feels especially relevant today as we wrestle with control over our digital lives.
Because the internet is both a playground and a battlefield for control. Who really holds the reins? Is it the individual, trying to cultivate healthier habits online? The platforms, designing experiences that either empower or exploit? Or the state, weighing whether regulation can protect us without overreaching?
As in Janet’s song, the question isn’t about surrendering control or letting others dictate the terms. It’s about reclaiming it: creating an environment where personal responsibility is not just possible but supported, where the rules aren’t rigged against us.
According to Ofcom’s Online Nation 2025 report, UK adults are now spending four and a half hours online every day, up again, and a full half-hour more than we managed even during lockdown.
Young adults? They’re clocking over an eye-watering six hours per day.
But even as we spend more time online, we’re liking it less. Only a third of adults now think the internet is “good for society.” The honeymoon phase isn’t just over; we’ve moved on to the “quietly resentful but still scrolling TikTok at 1AM” stage of the relationship.
Young people feel it too. They’re happy online, but they’re also calling the fallout “brain rot”, a very Gen Z way of saying “I hate this, but I literally cannot stop.” And that’s really the heart of this whole debate: if we know our habits are weird, addictive or unhealthy… can we actually fix them ourselves?
Which brings us to the second major news story. Australia, the country that just grabbed the internet safety steering wheel and yanked it sideways.
Australia Slams the Door Shut
Their new law, the Online Safety Amendment (Social Media Minimum Age) Act 2024, bans the estimated 1 million Australian under-16s from social media entirely. No TikTok. No Insta. No YouTube accounts. It’s one of the boldest state-controlled internet interventions we’ve seen, and the rest of the world is watching with popcorn and mild panic.
Supporters say it’s about protecting children. Critics point out that bans don’t make behaviour disappear; they make it invisible. The UK just watched VPN usage double overnight when age checks were introduced on porn sites. If adults bypass rules that easily, imagine a 14-year-old with a YouTube addiction and a spare five minutes.
The risk is clear: close the front doors and people will find alternative access, in this case potentially through obscure online platforms where safety standards barely exist.
Are We Blaming Young People for the Internet Big Tech Built?
This is the part nobody really likes to say out loud: it’s not the under-16s who designed algorithms that reward outrage, insecurity and compulsion. And it’s not adults who decided everything should autoplay, refresh endlessly or recommend more of whatever is slowly draining your will to live.
Everyone, from psychologists to charities to regulators, keeps coming back to the same point: tech companies should be held responsible for the structures they’ve built. Not in a punitive, “delete everything” way, but in a “maybe don’t optimise your entire business around human vulnerability” way.
But so far, the incentives haven’t changed. More time online means more data, more ads, more revenue. No wonder the average adult is burning four to six hours a day online: the platforms are literally designed to make that happen.
So Do We Need Regulation? Yes. But Not Fantasy Regulation.
Here’s the trap lawmakers keep falling into: rules that sound good politically but don’t work practically. Banning all under-16s from social media? Great soundbite. Massive loophole. Heavy-handed legislation risks pushing people further underground and actually increasing harm.
The kind of regulation that does work is the boring, technical stuff: safer product design, limits on harmful recommendation loops, proper age assurance, realistic transparency requirements. None of it fits on a campaign poster, but it’s the only way to change the underlying incentives.
And that leads neatly to the part we keep avoiding…
Is the Internet Safe?
Short answer? It depends which part you’re swimming in.
Longer answer? The modern internet isn’t built around safety; it’s built around engagement. That’s why even well-adjusted adults feel fried, why kids feel “brain rot,” and why so many people are starting to wonder whether being online for half the day is … normal and, more importantly, healthy.
Safety isn’t about policing behaviour. It’s about redesigning the environment so the default isn’t damaging.
So, Who’s Actually Responsible?
Honestly? Everyone. Not in the sense that everyone is equally to blame, but in the sense that everyone has a different job to do.
Individuals can build healthier habits. Platforms can redesign the systems that fuel the problem. Governments can push tech companies to stop acting like ‘attention extraction’ is the only business model in town.
It’s not ideal, but it’s workable. The alternatives are extremes – the Internet Wild West or the Internet Nanny State – and nobody wants either.
The Future of Internet Governance: What Comes Next?
If you zoom out, it’s obvious we’re in the middle of a global course correction. For years the internet grew faster than any government could regulate and faster than most people could understand. Now we’re trying to retrofit governance onto a system built with almost no guardrails.
Here’s where it’s likely heading:
1. Safety-by-design becomes non-negotiable.
Governments will stop trying to police every bad post and instead force platforms to redesign the infrastructure that produces harm.
2. Age assurance moves from clunky to seamless.
Privacy-preserving, accurate age checks will become normal, and the VPN arms race will calm down once the systems get smarter.
3. Platform responsibility grows teeth.
Companies will be held accountable not just for illegal content, but for predictable harms created by their algorithms.
4. Users get real control – not just illusion-of-choice settings.
Expect tools that genuinely restrict feeds, customise recommendation systems, and let people opt out of engagement-driven design.
5. Countries experiment differently, and the best ideas spread.
Australia may be first with bans, but the winning model will be the one that balances freedom, safety and actual feasibility. In the end, the future isn’t a choice between personal responsibility and state control. It’s about creating an online environment where personal responsibility is genuinely possible because the system itself isn’t working against users.
You can regulate the risks, but if the wider environment remains unhealthy, safety will always be limited.
As Janet put it, we want to be in control. To make that real online, the internet itself must change – so control isn’t just a hope, but something we can actually hold.
—————————————————————————————————————–
Want to take full control of your internet access? Reach out to the experts at vXtream – we’d love to hear from you!
Cropped Image of Janet Jackson on her Unbreakable World Tour, California, October 14, 2015 by Rich Esteban
And don’t forget to sign up to our newsletter for up-to-date industry news and insight delivered straight to your mailbox.
Enjoyed this? You may be interested in our previous article: Are the Days of a Free Internet Numbered?