4 in 5 execs call for clearer AI leadership to close responsibility gap

More than 80% of executives acknowledge that leadership, governance, and workforce readiness are failing to keep pace with AI advancements—putting investment, security, and public trust at risk, according to NTT Data.

NTT Data commissioned Jigsaw Research to conduct primary research during late September and early October 2024. The team surveyed 2,307 leaders from organisations in 34 countries across North America, Europe, Asia Pacific, Latin America, and the Middle East and Africa.

“The enthusiasm for AI is undeniable, but our findings show that innovation without responsibility is a risk multiplier,” said Abhijit Dubey, CEO of NTT Data. 

“Organisations need leadership-driven AI governance strategies to close this gap—before progress stalls and trust erodes,” he said.

Results show that the C-suite is divided: one-third of executives believe responsibility matters more than innovation, another third prioritises innovation over responsibility, and the remaining third rates them equally.

More than 80% of leaders say unclear government regulations hinder AI investment and implementation, leading to delayed adoption.

Also, 89% of C-suite leaders worry about AI security risks, yet only 24% of CISOs believe their organisations have a strong framework to balance AI risk and value creation.

Two-thirds (67%) of executives say their employees lack the skills to work effectively with AI, while 72% admit they do not have an AI policy in place to guide responsible use.

Three-quarters (75%) of leaders say AI ambitions conflict with corporate sustainability goals, forcing organisations to rethink energy-intensive AI solutions.

According to NTT Data, without decisive action, organisations risk a future where AI advancements outstrip the governance needed to ensure ethical, secure, and effective AI adoption. 

The company said that AI, including generative AI, must be built responsibly from the ground up and end-to-end, integrating security, compliance, and transparency into development from day one.

Also, leaders must go beyond legal requirements and meet ethical and social standards for AI using a systematic approach.

Further, organisations must upskill employees to work alongside AI and ensure teams understand AI’s risks and opportunities.

“AI’s trajectory is clear—its impact will only grow. But without decisive leadership, we risk a future where innovation outpaces responsibility, creating security gaps, ethical blind spots, and missed opportunities,” said Dubey.