EY: Organizations advancing responsible AI governance linked to better business outcomes
- Companies with real-time monitoring and oversight committees report measurable gains in revenue and cost savings
- All surveyed Singapore companies report financial losses, with widespread impact from biased outputs, hallucinations and legal liability
- Gaps in governance, visibility and workforce preparation highlight the challenges of managing AI agents
The EY organization released findings from the second wave of its Responsible AI Pulse survey, which indicate that organizations implementing more advanced responsible AI (RAI) measures are reaping positive business outcomes.
As adoption of AI technologies continues to accelerate, those furthest along experience the most benefit. While 90% of Singapore respondents (global 79%) said their organizations have achieved efficiency and productivity gains, and 83% (global 81%) have improved innovation, fewer reported gains in revenue growth (Singapore 37%, global 54%) and cost savings (Singapore 47%, global 48%).
The survey suggests organizations that embed responsible AI through clear principles, robust execution and strong governance are pulling ahead in areas where AI gains have been the hardest to capture. Globally, organizations with real-time monitoring are 34% more likely to see improvements in revenue growth and 65% more likely to see improved cost savings. This is particularly relevant for Singapore, where AI deployment is accelerating.
Manik Bhandari, EY Asean Data and Artificial Intelligence Leader, says:
“As organizations in Singapore explore the full potential of AI, from real-world applications to intelligent agents, grounding innovation in responsible principles is critical. With transparent, well-governed AI systems, organizations can scale AI safely in more products, markets and customer segments. This contributes to sustained growth and creates new revenue streams. AI systems that follow responsible principles also require less remediation for security gaps, bias correction and regulatory non-compliance, leading to improved bottom-line efficiency.”
This research is the second in a series evaluating how enterprises perceive and integrate responsible AI practices into their business models, decision-making processes and innovation strategies, following initial findings released in June. The insights were gathered in July and August 2025 from 975 C-suite leaders across 21 countries, including 30 from Singapore.
Other key findings include:
Inadequate controls for AI risks lead to negative impacts
All Singapore organizations surveyed reported financial losses from AI-related risks, with nearly two-thirds (Singapore 63%, global 64%) suffering losses of more than US$1 million. Globally, the average financial loss to companies that have experienced such risks is conservatively estimated at US$4.4 million.
The most common AI risks are biased outputs (Singapore 67%, global 53%), hallucinations or misinformation in AI-generated content (Singapore 63%, global 53%) and legal liability in AI use (Singapore 63%, global 48%).
Rise of AI agents and citizen developers highlights governance gaps
The rise of “citizen developers”, i.e., employees independently developing or deploying AI agents, is a growing focus for organizations seeking to maintain responsible AI practices. This is largely driven by the wider availability of user-friendly AI tools and platforms, which democratize AI use by allowing even non-technical staff to create and deploy solutions. In Singapore, 70% of organizations (global 67%) allow this activity in some form. These organizations are also more likely to have begun developing a strategy for managing a hybrid human-AI workforce (Singapore 33%, global 50%). However, 57% (global 50%) report that they do not have a high level of visibility into employees’ use of AI agents.
As organizations increasingly experiment with AI agents, the challenge is to ensure that people, technology and policies evolve in step. In Singapore, 71% of organizations (global 60%) have formal, organization-wide policies or frameworks to ensure AI agents are deployed in line with responsible AI principles. Most respondents also have incident escalation procedures in place in case an AI agent behaves unexpectedly (Singapore 87%, global 80%), as well as policies that ringfence what AI agents are allowed to do (Singapore 83%, global 87%).
C-suite knowledge gaps in identifying appropriate controls
On average, when asked to identify the appropriate controls against five AI-related risks, only 7% of C-suite respondents in Singapore (global 12%) answered correctly.
Bhandari adds:
“Most leaders recognize the importance of responsible AI, yet many are still navigating how to put it into practice. As AI capabilities accelerate faster than the governance tools and safeguards to manage them, organizations are under pressure to innovate responsibly. Embedding transparency, fairness and privacy from the start is essential. In Southeast Asia, responsible AI will define how organizations innovate and drive progress that benefits both business and society.”
The full survey findings can be found here.
Methodology
In July and August 2025, the global EY organization conducted research to better understand C-suite views around responsible AI – for the current and next wave of AI technologies. To underpin this research, we conducted an anonymous online survey of 975 C-suite leaders across ten roles. All respondents had some level of responsibility for AI within their organization. Respondents represented organizations with over US$1 billion in annual revenue across all major sectors and 21 countries in the Americas, Asia-Pacific, Europe, the Middle East, India and Africa.