Enterprises Face AI Risks as Only 2% Meet Safety Standards

The Rise of Agentic AI: Promise and Peril

Emerging technologies are often a double-edged sword, offering innovation and risk in equal measure. Agentic AI, a rapidly evolving class of artificial intelligence capable of autonomous decision-making and action, is no exception. According to a recent Thoughtworks study, “Facing Up to the Risks of Agentic AI,” a growing gap separates organizations’ enthusiasm for AI’s transformative potential from their ability to mitigate the risks it presents.

The study found that 86% of global enterprise tech leaders anticipate higher risk exposure as they adopt autonomous AI systems. Yet only 2% of organizations meet recognized benchmarks for responsible AI governance. This underlines a troubling reality: as enterprises race to unlock AI’s benefits, most are ill-prepared to manage its unintended consequences.

What is Agentic AI?

Agentic AI refers to artificial intelligence systems designed to operate with a degree of autonomy, executing complex tasks and making decisions with limited human oversight. Unlike traditional AI that follows scripted instructions, agentic systems demonstrate initiative, can pursue goals, and continuously learn from their environment.

The potential applications of agentic AI are vast, from self-optimizing logistics systems to dynamic customer service agents. But this autonomy also presents new challenges in unpredictability, accountability, and risk management.

Key Capabilities of Agentic AI Include:

  • Autonomous decision-making based on context and dynamic inputs
  • Continuous learning to improve performance over time
  • Proactive goal pursuit without direct human involvement
  • Scalable reasoning across multiple domains and data types

While these capabilities enable powerful efficiencies, they also increase the difficulty of oversight and ethical governance.
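To make these capabilities concrete, here is a minimal, hypothetical sketch of an agentic loop in Python: the agent observes its environment, chooses an action in pursuit of a goal, and updates an internal estimate from feedback, all without step-by-step human instruction. Every name and number is illustrative and is not drawn from the Thoughtworks study.

```python
import random

class SimpleAgent:
    """A toy agent that pursues a numeric goal autonomously."""

    def __init__(self, goal: float):
        self.goal = goal        # target the agent proactively pursues
        self.estimate = 0.0     # internal state learned from feedback

    def decide(self, observation: float) -> str:
        # Autonomous decision-making: act on context, not a fixed script.
        return "increase" if observation < self.goal else "hold"

    def learn(self, observation: float) -> None:
        # Continuous learning: fold each new observation into the estimate.
        self.estimate = 0.9 * self.estimate + 0.1 * observation

def run_episode(agent: SimpleAgent, steps: int = 5) -> None:
    state = 0.0
    for step in range(steps):
        action = agent.decide(state)
        # The environment responds to the action with some noise.
        state += (1.0 if action == "increase" else 0.0) + random.uniform(-0.2, 0.2)
        agent.learn(state)
        print(f"step={step} action={action} state={state:.2f}")

run_episode(SimpleAgent(goal=3.0))
```

Even in this toy loop, behavior depends on noisy feedback rather than a script, which is exactly what makes larger agentic systems hard to predict and audit.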

Widening Risk Exposure

As enterprises implement AI-driven systems across business functions, they are becoming increasingly aware of the risks. According to the Thoughtworks survey conducted across 12 countries:

  • 61% of enterprises are currently using or exploring agentic AI technologies
  • 86% expect AI to increase reputational, legal, or operational risks
  • Only 2% of organizations meet global benchmarks for responsible AI practices

These statistics spotlight a glaring disconnect between ambition and accountability. Businesses are enthusiastic about AI, but most lack the structures and protocols required to address ethical blind spots, hallucinated outputs, data privacy exposures, and discriminatory outcomes.

The Responsible AI Gap

The low percentage of companies meeting responsible AI “gold standards” is a call to action. Responsible AI refers to the ethical deployment of artificial intelligence systems, ensuring fairness, accountability, transparency, and safety in their design and application.

Key pillars of responsible AI include:

  • Transparency: Clear explanations of how AI systems make decisions
  • Fairness: Avoidance of algorithmic bias and discriminatory impacts
  • Accountability: Defined responsibility for AI behavior and its consequences
  • Safety: Mechanisms to prevent AI from causing harm or malfunctioning

Unfortunately, many enterprises lack the internal processes or cross-functional expertise to uphold these standards. Moreover, regulatory uncertainty and the pace of AI development make compliance even more challenging.

Why Organizations Are Falling Short

There are several reasons why most enterprises have yet to meet recognized AI safety and ethics standards:

1. Lack of Expertise

Many organizations lack in-house AI specialists or ethical frameworks to evaluate machine learning models critically. There’s often a limited understanding of how these models operate, leading to blind trust in outputs.

2. Speed Over Safety

In the race to adopt AI for competitive advantage, safety often becomes an afterthought. Teams prioritize time-to-market and cost savings without fully assessing long-term consequences.

3. Fragmented Governance

Without unified policies, many companies have inconsistent AI practices across departments. This fragmentation increases the likelihood of risks going undetected.

4. Insufficient Tools

Despite the availability of AI auditing and monitoring tools, many organizations don’t fully adopt them due to cost, complexity, or lack of awareness.

Steps Toward Responsible AI Adoption

To close the gap between AI excitement and preparedness, enterprises must take immediate and strategic actions. Here are key steps to implement a responsible AI framework:

1. Establish Ethical Oversight Bodies

Create internal AI ethics councils composed of data scientists, legal experts, ethicists, and senior leadership to vet projects and monitor AI deployments.

2. Invest in Explainable AI (XAI)

Adopt tools that offer model interpretability, enabling stakeholders to understand and trust AI outcomes.
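As one illustrative option rather than a tool named in the report, permutation importance from scikit-learn is a quick, model-agnostic interpretability check: shuffle one feature at a time and measure how much held-out accuracy drops.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Public demo dataset; in practice, substitute your own model and data.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in test accuracy: a large
# drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: -pair[1])[:5]:
    print(f"{name}: {score:.3f}")
```

Importance scores of this kind do not fully explain a model, but they give stakeholders a first, auditable answer to the question of what the system is actually paying attention to.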

3. Conduct AI Risk Assessments Early

Evaluate risks not just in deployment but throughout the entire development lifecycle — including use-case selection and data procurement.
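One lightweight way to operationalize this is a structured risk register that accompanies the project from use-case selection onward. The sketch below is a hypothetical Python example; the fields, scoring scheme, and sample entry are assumptions for illustration, not a standard from the study.

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskEntry:
    """One hypothetical row in a lifecycle-wide AI risk register."""
    system: str
    lifecycle_stage: str      # e.g. "use-case selection", "data procurement"
    risk: str
    likelihood: int           # 1 (rare) to 5 (almost certain)
    impact: int               # 1 (minor) to 5 (severe)
    owner: str
    mitigations: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        # A simple likelihood-times-impact score for triage.
        return self.likelihood * self.impact

register = [
    AIRiskEntry("support-chatbot", "data procurement",
                "training data may contain customer PII", 4, 5,
                "data-governance lead", ["PII scrubbing", "access controls"]),
]

for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"[{entry.score}] {entry.system}: {entry.risk} (owner: {entry.owner})")
```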

4. Embed Fairness Testing

Ensure algorithms are tested for bias against various demographic groups. Regularly retrain models with diverse and representative datasets.
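As a minimal sketch of what such a test can look like, the snippet below compares approval rates across two hypothetical demographic groups and applies the widely used "four-fifths" (80%) rule of thumb. The data and threshold are illustrative assumptions, not the study's methodology.

```python
import pandas as pd

# Toy predictions: group labels and binary model decisions (illustrative).
predictions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

# Demographic parity: compare positive-decision rates per group.
rates = predictions.groupby("group")["approved"].mean()
print(rates)

# Disparate-impact ratio; the "four-fifths rule" flags ratios below 0.8.
ratio = rates.min() / rates.max()
print(f"disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias: review the model and its training data.")
```

Checks like this belong in the regular test suite, so a biased model update fails the build just as a broken feature would.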

5. Align with Emerging Regulations

Stay ahead of global and regional regulations such as the EU AI Act or the U.S. Blueprint for an AI Bill of Rights. Align company practices with these frameworks to future-proof AI operations.

A New Imperative for Leadership

Enterprises now stand at a crossroads. The push to harness AI’s benefits must be matched with an equally strategic push toward ethical responsibility. Boards and executive teams must make AI governance a top-level priority.

This is not just a technical issue; it is a reputational, financial, operational, and moral one. The brand damage from a misbehaving AI system can be severe, and regulatory penalties for noncompliance are climbing fast.

Leadership Accountability Must Cover:

  • Funding for AI risk management programs
  • Mandates for cross-departmental audits
  • Personnel training in responsible AI deployment
  • Public transparency on AI usage and effectiveness

Conclusion: Responsible AI is Not Optional

As agentic AI continues to transform industries from finance to healthcare, ensuring the responsible use of this technology has never been more urgent. The findings from Thoughtworks are a stark reminder: while most organizations recognize the risks, very few are structurally ready to manage them.

To bridge this critical gap, enterprises must invest not only in AI capabilities but in AI conscience. Governance, transparency, and accountability should be woven into every layer of development and deployment.

Companies that rise to meet this challenge won’t just avoid disaster — they’ll lead the way toward a future where AI serves not just business goals, but society at large.

Only 2% of enterprises are currently setting the gold standard. For the rest, the time to act is now.
