AI projects rarely struggle to attract funding, until they fail to scale. While the potential use cases for AI seem limitless, many organisations are overlooking fundamental issues.
Jason Hardy, Chief Technology Officer for AI at Hitachi Vantara, discussed common misconceptions about scaling AI projects and what measures can be taken to address them.
Blind spots
According to Hardy, one of the most common blind spots in scaling AI infrastructure is the assumption that investing in more compute or storage will automatically improve outcomes.
“In reality, if the data pipeline is fragmented or poorly integrated, scaling infrastructure alone won’t lead to meaningful results. Many organisations underestimate the importance of data availability, quality, and orchestration, particularly as AI projects span across departments or regions,” he remarked.
Another issue is the mismatch between infrastructure and the specific needs of AI workloads.
“Training large models, deploying inference at the edge, and performing real-time analytics each have distinct requirements. Without a flexible foundation, teams often overinvest in the wrong areas or encounter performance bottlenecks,” Hardy noted.
He added that many AI projects begin without a clearly defined business objective, which contributes to a high failure rate.
“Teams often focus on what AI is technically capable of, rather than what it should achieve in a specific business context. This can lead to misaligned expectations and vague success metrics,” he said.
Referencing a Hitachi Vantara study, Hardy pointed out that only 34% of organisations have data available when needed, and just 30% of data is structured. As a result, AI models are frequently trained on inconsistent or incomplete datasets, which undermines their performance regardless of how advanced the model may be.
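A practical first step toward closing that gap is to measure data readiness before training begins. The sketch below is a minimal, hypothetical example in Python using pandas; the column names and the 5% missing-data threshold are illustrative assumptions, not figures from the study.

```python
# A minimal, hypothetical data-readiness check: before training, measure how
# complete a tabular dataset actually is, rather than assuming it.
# Column names and the threshold are illustrative, not taken from the study.
import pandas as pd

def readiness_report(df: pd.DataFrame, max_missing: float = 0.05) -> pd.DataFrame:
    """Flag columns whose missing-value rate exceeds the tolerated threshold."""
    report = pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "missing_rate": df.isna().mean().round(3),
    })
    report["ready"] = report["missing_rate"] <= max_missing
    return report.sort_values("missing_rate", ascending=False)

if __name__ == "__main__":
    sample = pd.DataFrame({
        "customer_id": [1, 2, 3, 4],
        "region": ["EU", None, "APAC", "EU"],
        "spend": [120.0, None, None, 90.0],
    })
    print(readiness_report(sample))
```

A report like this does not fix fragmented pipelines on its own, but it makes the gaps visible before they quietly degrade a model.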
“To recalibrate, companies should adopt a product mindset. This includes aligning stakeholders around clear goals, defining measurable outcomes, and building systems that support ongoing experimentation and iteration. AI success depends on continuous development and alignment with evolving business needs,” he explained.
Balancing governance
Striking the right balance between governance and innovation is a recurring challenge in AI implementation. Some teams, Hardy observed, move quickly by building and deploying models with minimal oversight. While this can speed up delivery, it also introduces risks such as bias, lack of accountability, and potential breaches of privacy regulations. These issues are often difficult and expensive to correct once systems are already in production.
Conversely, excessive governance can slow progress.
“When organisations create multiple layers of approvals and red tape, innovation slows to a crawl. This often happens when governance is treated as a standalone process rather than integrated into the AI development lifecycle,” Hardy said.
A more effective approach, Hardy said, is to embed governance within the infrastructure from the outset. This involves implementing tools and practices that support transparency, traceability, and access control as part of the standard development workflow.
“When governance is built in, not bolted on, it enables responsible innovation without unnecessary delays,” he said.
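What "built in, not bolted on" looks like in code will vary, but a minimal sketch makes the idea concrete: below, every inference call automatically emits an audit record with the model version, the caller, and a hash of the inputs. The model, names, and logging approach are illustrative assumptions, not a description of any specific vendor's tooling.

```python
# A minimal sketch of governance built into the inference path: every prediction
# call automatically produces a traceable audit record.
# All names here are illustrative assumptions, not a specific vendor's tooling.
import hashlib
import json
import logging
from datetime import datetime, timezone
from functools import wraps

audit_log = logging.getLogger("model_audit")
logging.basicConfig(level=logging.INFO)

def audited(model_name: str, model_version: str):
    """Wrap an inference function so each call emits an audit record."""
    def decorator(predict_fn):
        @wraps(predict_fn)
        def wrapper(features: dict, *, caller: str):
            result = predict_fn(features)
            record = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "model": model_name,
                "version": model_version,
                "caller": caller,                      # who requested the prediction
                "input_hash": hashlib.sha256(          # trace inputs without storing raw data
                    json.dumps(features, sort_keys=True).encode()
                ).hexdigest(),
                "output": result,
            }
            audit_log.info(json.dumps(record))
            return result
        return wrapper
    return decorator

@audited(model_name="churn_model", model_version="1.4.2")
def predict(features: dict) -> float:
    # Placeholder scoring logic standing in for a real model.
    return 0.8 if features.get("late_payments", 0) > 2 else 0.2

if __name__ == "__main__":
    print(predict({"late_payments": 3}, caller="billing-service"))
```

Because the record is produced by the same wrapper that serves predictions, traceability does not depend on individual teams remembering to log anything manually.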
Other emerging concerns
Hardy also flagged a common misconception: that more automation inherently leads to better outcomes. According to him, this assumption can backfire.
“In fact, complexity often increases when automation is applied to workflows that are poorly understood or not yet optimised. If a process is inefficient to begin with, automating it can simply magnify the problem at scale. This is particularly relevant in areas like customer service, risk modelling, or logistics, where AI systems require clear logic, clean data, and reliable fallback mechanisms. Without these elements, automation can create ambiguity, erode trust, and lead to unintended consequences, especially when sensitive decisions are involved,” he elaborated.
In his view, automation should be treated as a layered capability rather than the default strategy.
“It’s best to begin with well-defined, measurable processes, and then scale with intention. Human oversight, observability, and the ability to intervene should always be built into the system to ensure responsible use,” Hardy said.
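One common way to layer automation in this spirit, sketched below as an assumption rather than Hardy's prescription, is a confidence-gated decision path: clear-cut cases are handled automatically, uncertain ones fall back to a human queue, and simple counters keep the split observable.

```python
# A minimal sketch of automation as a layered capability: act automatically only
# above a confidence threshold, route uncertain cases to human review, and keep
# counters so the behaviour stays observable. Thresholds and names are illustrative.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Decision:
    action: str        # "auto_approve", "auto_reject", or "human_review"
    confidence: float

metrics = Counter()

def decide(confidence: float, auto_threshold: float = 0.9) -> Decision:
    """Automate only clear-cut cases; everything else falls back to a person."""
    if confidence >= auto_threshold:
        decision = Decision("auto_approve", confidence)
    elif confidence <= 1 - auto_threshold:
        decision = Decision("auto_reject", confidence)
    else:
        decision = Decision("human_review", confidence)   # fallback: human in the loop
    metrics[decision.action] += 1                          # observability
    return decision

if __name__ == "__main__":
    for score in (0.97, 0.55, 0.04, 0.72):
        print(decide(score))
    print(dict(metrics))   # how often automation deferred to humans
```

Watching how often the system defers to humans is itself a useful signal: a rising review rate can flag data drift or an unclear process before automation amplifies it.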
Another pressing issue is model explainability — the ability to demonstrate how an AI system reached a particular decision. As explainability becomes a regulatory requirement in more markets, most enterprises still lack the infrastructure and tools to support it, particularly when using complex models or third-party data sources.
“We’re already seeing this in the EU and parts of Asia, where companies are expected to show that their models are fair, auditable, and justifiable. Without this capability, organisations face increased legal risk, operational disruptions, and in some cases, mandated shutdowns of their AI systems,” Hardy said.
He emphasised that explainability and traceability should be embedded in the AI development process from the beginning, rather than added later.
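As a concrete, hypothetical illustration of what such an artefact can look like, the sketch below uses scikit-learn's permutation importance to record which features drive a classifier's decisions; the public dataset and random-forest model are stand-ins for a production system.

```python
# A minimal sketch of an explainability artefact: for a trained classifier, record
# which features actually drive its decisions. Uses scikit-learn's permutation
# importance on a public dataset; model and data are stand-ins, not a real system.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Measure how much shuffling each feature degrades accuracy on held-out data.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Rank features so the report can be versioned alongside the model for auditors.
ranking = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for feature, importance in ranking[:5]:
    print(f"{feature}: {importance:.4f}")
```

Generating a report like this at training time, and versioning it with the model, gives auditors something to inspect long after the original team has moved on.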
“Companies that prioritise auditability early will be in a stronger position to scale AI responsibly and meet compliance standards as they evolve,” he concluded.