In 2025, artificial intelligence crossed a threshold. Not a technical one, a societal one.
This was the year AI moved from experimentation into infrastructure. It began shaping how people work, receive care, express themselves, and increasingly, how they are influenced, harmed or excluded. For business leaders, that shift matters. Because once a technology becomes infrastructure, its risks stop being theoretical and start becoming systemic.
As we look toward 2026, the central question is no longer whether AI will transform organisations. That is already settled. The real question is whether businesses will help shape that transformation responsibly or simply absorb the consequences after the fact.
From productivity tool to emotional presence
One of the clearest signals of this shift was how personal the use of AI became in 2025.
Research cited by the UK’s AI Security Institute shows a significant proportion of the public now uses general-purpose AI assistants for emotional support, companionship and social interaction. These systems were not designed as mental health tools, yet they are increasingly filling that role, often in the absence of accessible human support.
This matters because emotional reliance fundamentally changes the risk profile of technology. When AI is treated as a productivity tool, failures are inefficiencies. When it becomes a source of reassurance or advice, failures can carry psychological and social consequences.
For business leaders, the lesson is not that AI should be withdrawn from sensitive contexts, but that use cases evolve faster than policy, design intent and safeguards. Organisations deploying AI at scale must plan for emergent behaviour, not just intended outcomes.
Capability is accelerating. Oversight is catching up
At the same time, government-backed evidence confirms that AI capability is advancing at extraordinary speed.
The AI Security Institute’s Frontier AI Trends Report, based on two years of direct testing of advanced systems, shows that models now complete apprentice-level cyber security tasks around 50% of the time, up from single-digit success rates just two years ago. In 2025, a system completed an expert-level cyber task requiring up to a decade of human experience.
In software engineering, AI can now complete hour-long tasks more than 40% of the time. In biology and chemistry, models are outperforming PhD-level researchers on knowledge tests and enabling non-experts to succeed in laboratory work that was previously out of reach.
Safeguards are improving. The same report shows that “universal jailbreaks”, methods to bypass AI safety controls, now take hours rather than minutes to discover, a roughly forty-fold improvement between model generations. But the Institute is explicit: no system is fully secure, and autonomy is increasing.
For business, this changes the risk equation. AI risk is no longer primarily about accuracy. It is about autonomy, speed, escalation, accountability and whether governance structures can keep pace with systems that learn and act dynamically.
Healthcare: the test case for AI trust
Healthcare has emerged as the most visible test case for how society manages that balance.
In December 2025, the Medicines and Healthcare products Regulatory Agency (MHRA) launched a nationwide Call for Evidence to inform the work of the newly established National Commission on the Regulation of AI in Healthcare. The language used is telling: regulators openly acknowledge this as a “pivotal moment”.
The Commission’s focus goes beyond the technology itself. It examines how AI is used in real clinical settings, how responsibility is distributed between developers, healthcare providers and clinicians, and how patient safety is protected as systems evolve after deployment.
This approach signals a broader shift. Regulation is moving away from static, one-off approval models toward continuous oversight of adaptive systems. Healthcare is likely to become the template for AI governance across other high-stakes sectors, from finance to critical infrastructure.
AI as an amplifier of harm
While much attention has focused on AI’s benefits, 2025 also provided stark evidence of its capacity to amplify harm.
Research commissioned by UN Women and based on a global survey of women journalists, activists and human rights defenders across 119 countries shows that nearly one in four respondents who experienced online violence identified abuse that was generated or amplified using AI tools. These include deepfake imagery, gendered disinformation and coordinated harassment campaigns.
The findings underline a critical point for leaders: AI does not create social harms in isolation, but it dramatically lowers the cost, and increases the scale and speed, at which existing harms can be inflicted. In many cases, online abuse escalates into offline intimidation, stalking or physical threats, creating a direct risk to democratic participation and freedom of expression.
For organisations developing or deploying AI systems, claims of neutrality are increasingly untenable. Design choices shape power dynamics, and those dynamics have real-world consequences.
Creativity, control and consolidation
High-profile commercial partnerships in 2025 have spotlighted the complex intersection of creativity, control, and commerce in AI’s evolution. The licensing agreement between Disney and OpenAI, which allows AI tools like ChatGPT and the Sora video platform to generate content featuring over 200 of Disney’s iconic characters, is a prime example.
While this deal promises exciting new avenues for fan engagement and content creation, it also underscores the realities of tightly controlled creative ecosystems.
Framed as a democratisation of creativity, such models instead concentrate ownership and control over what content is permissible, who can produce it, and who ultimately benefits economically. This partnership illustrates a critical strategic consideration for businesses: AI does more than automate routine tasks; it actively reshapes creative industries and labour markets. Understanding the balance between harnessing AI for growth and the potential for labour displacement will be essential for future-proofing business strategies.
The environmental bill arrives
Perhaps the most under-accounted consequence of AI’s rapid expansion in 2025 is its environmental impact.
Research published this year estimates that AI-related activity could be responsible for up to 80 million tonnes of CO₂ annually, with water usage exceeding global bottled water demand. The International Energy Agency has warned that AI-focused datacentres already consume electricity on a scale comparable to aluminium smelters, with global datacentre energy demand expected to more than double by 2030.
In the UK, the scale is becoming tangible. The hyperscale datacentre planned at the former coal power station site in Blyth, Northumberland, is expected to emit over 180,000 tonnes of CO₂ per year at full operation, equivalent to the annual emissions of more than 24,000 homes.
These are not abstract sustainability metrics. Energy availability, grid resilience and water access are becoming operational constraints. In 2026, environmental performance will increasingly determine where and how AI can be deployed at scale.
What this means for business in 2026
Taken together, the story of 2025 is not one of uncontrolled technology, but of misaligned systems. Capability surged ahead; governance, accountability and cost allocation are now racing to catch up.
The organisations best positioned for 2026 will be those that treat AI as a socio-technical system, not a standalone tool. They will embed oversight into workflows, not bolt it on after failures. They will measure social and environmental impact alongside financial return. And they will invest in transparency, knowing that trust is now a strategic asset.
The question leaders must answer
The defining question for 2026 is no longer what AI can do.
It is whether our institutions – businesses included – are prepared to live with what it can do, to shape its deployment deliberately, and to take responsibility when it goes wrong.
Those that answer that question early will not just reduce risk. They will define what credible, responsible digital leadership looks like in the next phase of the AI era.
If any of these insights resonate with your business challenges or ambitions, we are here to help you navigate the evolving AI landscape.
Don’t hesitate to get in touch; we’d love to explore how AI can drive value safely and responsibly for your organisation.
Image: Vinícius Vieira ft, Pexels
Enjoyed reading this? Then don’t forget to sign up to our newsletter for up-to-date industry news and insight delivered straight to your mailbox.
You may be interested in our previous article: Browser Wars 2.0: From Search to Smarts

