The 5 Biggest Responsible AI Failures

Andries Verschelden
Co-founder & CEO

Andries has had a variety of consulting and management roles throughout his career. He has worked with fast-scaling clients across three continents. Prior to founding Good.Lab, Andries led the blockchain practice at Armanino, a top 20 public accounting firm, was CEO at The Brenner Group, a boutique Silicon Valley financial services firm, and was a partner at Moore Stephens in Shanghai. He started his career at PricewaterhouseCoopers.

Andries holds his B.S. in International Politics from Ghent University in Belgium, an MBA from Binghamton University and founded and participated in the Moore Comprehensive Executive Leadership Program at Harvard Business School.

Why the real AI risk has nothing to do with AGI

Most conversations about AI risk focus on the future: superintelligence, mass unemployment, existential threats.

That’s understandable and important, but it’s also a distraction.

The most damaging AI failures companies face today have nothing to do with AGI. They are far more mundane, far more likely, and preventable. They stem from how organizations deploy, manage, and govern AI as it moves from experimentation into everyday operational adoption.

Many of these failures are described as ethical AI problems: issues of trust, transparency, accountability, or fairness. In practice, they are usually not caused by bad intent or unethical design choices. They emerge when ethical expectations are not translated into clear governance, ownership, and decision-making processes.

In other words, ethical AI sets the values. Responsible AI is how those values are operationalized.

Below are the five most common Responsible AI failures we see in practice, and why they matter more than speculative futures.

1. Confusing Efficiency with Trust

Late last year, a Deloitte Australia report was found to contain information fabricated by an AI agent. The report included citations that didn’t exist and a court quote that was never spoken. When the fabrications were discovered, Deloitte took a hit to its reputation and was forced to return some of the $290,000 it had been paid.

One of the most common mistakes companies make with AI is assuming that because a system produces polished, plausible-looking output, that output can be trusted.

Even as models have improved, these “hallucinations” have persisted, and in some cases worsened. Human oversight and good governance are essential to ensure mistakes are caught.

I am sure Deloitte, and every other service provider that drafts long reports for clients, has tightened its approach in the wake of these early mistakes.
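
For teams that want to make that oversight concrete, a useful first step is a pre-publication gate that refuses to sign off until every citation in an AI-drafted document resolves to a real, reachable source. The sketch below is a minimal illustration, assuming drafts sit in a plain-text file and that citations include URLs (both my assumptions, not a description of anyone’s actual workflow); it checks only that cited sources exist, not that they support the claims made. That judgment stays with a human reviewer.

```python
import re
import urllib.request

def extract_urls(draft_text: str) -> list[str]:
    """Pull every http(s) URL cited in an AI-drafted report."""
    return re.findall(r"https?://[^\s)\]>\"']+", draft_text)

def citation_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True only if the cited source actually responds."""
    try:
        request = urllib.request.Request(
            url, method="HEAD", headers={"User-Agent": "citation-check"}
        )
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except Exception:
        return False

def review_gate(draft_text: str) -> list[str]:
    """Return citations a human reviewer must chase down before publication.

    An empty list does not mean the draft is correct; it only means the
    cited sources exist. Substantive review stays with a person.
    """
    return [url for url in extract_urls(draft_text) if not citation_resolves(url)]

if __name__ == "__main__":
    # "draft_report.txt" is a placeholder file name for this illustration.
    draft = open("draft_report.txt", encoding="utf-8").read()
    unresolved = review_gate(draft)
    if unresolved:
        print("Hold publication. Unverified citations:")
        for url in unresolved:
            print("  -", url)
```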

2. Treating AI as a Tool Instead of a Decision-Influencer

Another failure is framing AI as “just a tool,” when in practice it influences decisions, behavior, and outcomes.

AI systems are already being used to draft customer communications, summarize research, recommend actions, prioritize work, and shape critical business decisions involving capex and strategy. Once a company builds enough trust in its agents, even when humans remain part of the decision matrix, the AI often influences what they do.

Organizations that fail to recognize this influence end up trusting the AI over their own judgment. Then, when something goes wrong, who is responsible: the model, the vendor, the team, the prompt, the user? Too often, no one clearly owns the outcome.

Responsible AI starts with acknowledging a simple truth: if AI meaningfully influences decisions, there needs to be an accountable governance structure to assign responsibility and learn from mistakes.

3. Underestimating Reputational Risk Because the Use Case Feels Small

Many AI incidents don’t come from large, top-down systems. They come from pilots, experiments, or “temporary” tools adopted across different functions that quietly become embedded in workflows.

A chatbot confidently repeats claims from company content that are later called out as inaccurate. An intern pastes company secrets or embargoed information into an LLM, where it can be absorbed into training data and surface in other users’ responses – as much as 20% of files uploaded to LLMs contain sensitive company data.

These use cases are considered low-risk, but they can quickly become material risks.

Reputational risk is not proportional to the technical complexity or level of integration of AI systems. It is proportional to oversight, trust, and context. Small AI failures can have outsized consequences, especially when customers feel misled, deceived, or like their data is unsafe.

The mistake here is assuming risk only appears at scale. In reality, risk appears wherever AI is used.
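
One lightweight guardrail against the “intern pastes secrets into an LLM” scenario above is a screen that runs before any prompt or file leaves the company. The patterns below are illustrative assumptions on my part, not a complete data-loss-prevention policy; they only show the shape of the check.

```python
import re

# Illustrative patterns only; a real policy would be owned by security and legal.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk|key)[-_][A-Za-z0-9]{16,}\b"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_label": re.compile(r"\b(confidential|embargoed|internal only)\b", re.I),
}

def screen_prompt(text: str) -> list[str]:
    """Name the sensitive patterns found in text about to be sent to an LLM."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

def safe_to_send(text: str) -> bool:
    """Block the upload if anything sensitive is detected; route it to a human instead."""
    findings = screen_prompt(text)
    if findings:
        print("Blocked: prompt appears to contain", ", ".join(findings))
        return False
    return True

if __name__ == "__main__":
    example = "Q3 numbers are embargoed until Friday; summarize them for the board."
    print(safe_to_send(example))  # False: the word "embargoed" trips the screen
```

The point is not the regexes; it is that the check runs before the data leaves, and that a block routes the request to a person rather than silently failing.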

4. Letting AI Spend Grow Without Knowing What It’s Delivering

AI cost risk tends to grow slowly, not because any single platform is expensive, but because tools are fragmented and use-case-specific.

Different teams experiment with different models, vendors, consultants, and tools. Usage grows. Costs accumulate across multiple budgets. Meanwhile, most studies indicate that more than 90% of companies fail to achieve any measurable return on their AI investments.

When budgets tighten or scrutiny increases, leadership is left asking uncomfortable questions: What are we paying for? What’s the ROI on it? What could we stop tomorrow?

This is a Responsible AI failure because cost opacity and poor attribution of returns erode confidence. If you can’t explain why an AI system is worth keeping, it becomes harder to defend its use, which can slow adoption.
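
A practical first step toward answering those questions is simply tagging every AI-related expense with a use case and an owner, then rolling the fragmented spend up in one place. The field names and figures in this sketch are invented for illustration; the point is the shape of the roll-up, not the numbers.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class AISpendItem:
    team: str           # budget the cost lands in
    use_case: str       # what the spend is supposed to deliver
    vendor: str
    monthly_cost: float
    owner: str          # person accountable for the outcome

# Invented example entries; in practice these come from finance and procurement data.
ledger = [
    AISpendItem("marketing", "content drafting", "VendorA", 1200.0, "j.doe"),
    AISpendItem("support", "ticket triage", "VendorB", 3400.0, "k.lee"),
    AISpendItem("marketing", "content drafting", "VendorC", 800.0, "j.doe"),
]

def spend_by_use_case(items: list[AISpendItem]) -> dict[str, float]:
    """Roll fragmented tool spend up to the use case it is meant to serve."""
    totals: dict[str, float] = defaultdict(float)
    for item in items:
        totals[item.use_case] += item.monthly_cost
    return dict(totals)

if __name__ == "__main__":
    for use_case, total in spend_by_use_case(ledger).items():
        print(f"{use_case}: ${total:,.0f}/month")
```

Once spend is visible at the use-case level, the uncomfortable questions become answerable: what we are paying for, what it returns, and what we could stop tomorrow.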

5. Believing Governance Means Slowing Innovation

Perhaps the most damaging failure is treating AI governance as something that comes later: after experimentation, after scale, once the business value is clearer.

In practice, the absence of lightweight guardrails today often slows organizations down. That is the paradox: teams duplicate work, risks surface only after the damage is done, and the same mistakes are repeated.

Innovation stalls not because of governance, but because of uncertainty and risks becoming realities.

Responsible AI governance, done well, doesn’t block experimentation. It defines what’s allowed, what needs review, and where risk actually matters.

The biggest risk in AI governance is not actually having any.

The Good.Lab Point of View

At Good.Lab, we believe Responsible AI is a management discipline and the practical way organizations bring ethical AI commitments to life.

Ethical AI defines what an organization believes is acceptable: how it treats customers, employees, and data. Responsible AI is how those beliefs are embedded into governance, decision-making, and day-to-day operations. This is why we see Responsible AI as a natural extension of broader ESG and governance programs, not a separate technical initiative.

When you run a company, it’s tough not to jump on the next shiny thing that promises big efficiency gains. However, a disciplined, measured AI adoption strategy will ensure the upside is realized and the risks are minimized.

Most organizations we speak with are not reckless with their AI adoption. They are curious, thoughtful, and experimenting in good faith. What they lack is visibility into where AI is already being used, into the risks that exist today, and into what “acceptable use” looks like. This is the same governance pattern we see across sustainability more broadly: companies often have strong values and public commitments, but struggle to translate them into consistent internal practices.

With AI, that gap creates a tension. Leaders sense there is risk, but can’t quantify or even quite name it. Teams want to move faster, but don’t know where the guardrails are or should be. As a result, AI adoption either accelerates without confidence or stalls unnecessarily.

We believe that Responsible AI is the key to helping companies embrace the AI revolution while avoiding failures like these.

How to Manage This Risk?

Focus on clarity before adoption scales. Here’s how you can do that:

  • Understand where AI is already influencing decisions, customers, and operations,
  • Identify reputational, performance, and cost risks that exist today,
  • Define a small number of guardrails that materially reduce downside risk without slowing experimentation.

Don’t start with complex frameworks or rigid approval processes. Instead, start by turning ethical expectations into practical guidance teams can actually use. Focus on:

  • ownership,
  • expectations,
  • visibility,
  • and judgment.

In practice, this means helping teams answer simple but powerful questions:

  • Where is AI being used today?
  • What do we expect from it?
  • Who owns the outcomes?
  • What would tell us it’s no longer “good enough”?
  • Is it delivering enough value to justify its cost?

When you can answer these questions, Responsible AI stops being abstract. It becomes practical. And AI adoption becomes easier, more scalable, and more beneficial.
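
One lightweight way to make it practical is a simple AI use-case register that records the answers. Everything in the sketch below, from the field names to the example thresholds, is illustrative rather than a prescribed framework.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One row in a lightweight AI use-case register; fields mirror the questions above."""
    name: str
    where_used: str          # process or team the AI influences today
    expectation: str         # what "good" output looks like
    outcome_owner: str       # person accountable when it goes wrong
    failure_signal: str      # what would tell us it is no longer good enough
    monthly_cost: float
    value_estimate: float    # rough monthly value delivered, however estimated

    def worth_keeping(self) -> bool:
        # Crude check: is it delivering enough value to justify its cost?
        return self.value_estimate >= self.monthly_cost

register = [
    AIUseCase(
        name="support-ticket triage",
        where_used="customer support queue",
        expectation="correct routing, no customer-facing reply sent unreviewed",
        outcome_owner="head of support",
        failure_signal="misrouting above 5% or any unreviewed reply sent",
        monthly_cost=3400.0,
        value_estimate=9000.0,
    ),
]

for use_case in register:
    status = "keep" if use_case.worth_keeping() else "review"
    print(f"{use_case.name}: owner={use_case.outcome_owner}, status={status}")
```

Even a register this crude forces the conversation about ownership and value; the tooling matters far less than the habit of keeping it current.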

The real AI risks are boring. They’re not existential and they’re not abstract ethical dilemmas. They’re failures of judgment, accountability, expectations, and visibility. 

Disclaimer: Good.Lab does not provide tax, legal, or accounting advice through this website. Our goal is to provide timely, research-informed material prepared by subject-matter experts; it is for informational purposes only. All external references are linked directly in the text to trusted third-party sources.
