Scaling AI across teams: Lessons from WorkJam

When every gear moves in sync, the whole system accelerates, much like teams aligned around a shared AI strategy. Image courtesy of Laura Ockel.

Most companies deploy AI to enhance customer experience first, but WorkJam did the opposite: It began by retooling its own teams, proving that scaling AI starts with internal alignment, not external polish.

Steven Kramer, Chief Executive Officer and Co-Founder of WorkJam, shared with Frontier Enterprise how that inside-out strategy is shaping the company’s approach to experimentation, scalability, and integration.

Birthing pains

According to Kramer, one of the complexities they encountered involved integrating multiple large language models and cloud services in a modular, scalable way. This was especially challenging as WorkJam’s platform serves customers in highly regulated, multilingual, and diverse environments.

“Internally, we had to account for the same variables — geographic dispersion, varied use cases, and different team workflows. We addressed this by investing early in a flexible cloud-native architecture and creating an internal AI governance board to review every implementation before it goes live,” he said.

Beyond technical integration, the company also had to balance the needs of its internal workforce with the demands of its customers.

“Internally, we’re often focused on foundational capabilities: things like automation, personalisation, and how AI can help us reduce operational friction or break down silos between teams. At the same time, customers are coming to us with highly specific needs shaped by their own business environments, whether it’s regional compliance, frontline training, or language accessibility,” Kramer explained.

To support both priorities, WorkJam designed its platform to be modular and extensible. That flexibility, Kramer said, allowed them to experiment internally while delivering high-impact features externally. 

“In some cases, we’ve shifted our internal roadmap to prioritise a customer request that could unlock broader value, and those moments often spark breakthroughs that benefit the whole platform,” he recalled.

Lessons on the go

Instead of chasing perfection, WorkJam adopted a “learn fast, refine fast” feedback loop for AI. This meant running controlled pilots, collecting frontline feedback, and continuously optimising the solution.

Steven Kramer, Chief Executive Officer and Co-Founder of WorkJam. Image courtesy of WorkJam.

Across product, engineering, marketing, and customer-facing roles, WorkJam’s teams are encouraged to use a range of AI tools in their daily work, guided by an in-house framework for evaluating impact.

“We start by looking at output metrics: Are employees producing more, faster? Are we shortening timelines for things like code reviews, content creation, or data analysis? Do people’s jobs become easier? These gains are often measurable quickly. But we also go further by surveying our teams, asking how much time AI tools are saving them, where they’re seeing the most value, and where any friction remains,” Kramer said.

According to him, the feedback has been consistent: AI is reducing time spent on repetitive work and freeing people up to focus on higher-value tasks. This combination of quantitative output and qualitative time-saved insights gives WorkJam a strong indication of whether a capability is genuinely improving efficiency.

This same principle guides the company’s platform development. Every new AI feature is evaluated on whether it simplifies processes, reduces manual steps, or improves access to information. If a feature fails to make the experience more intuitive, it is reworked.

WorkJam also collaborates across functions to collect qualitative input. Product and customer-facing teams provide critical feedback on how features are likely to perform in real-world use, ensuring that every AI capability is shaped by both operational insight and user impact, Kramer said.

“In short, we test AI not in isolation, but in context, always asking: Does this make work easier, faster, and more meaningful?” he emphasised.

Internal discoveries

Using a variety of tools (including Google’s Coding Assistant, Agentspace, and JetBrains, alongside task-specific ChatGPT models) has enabled WorkJam’s teams to work faster and more independently. Yet, what has been most transformative, Kramer said, is how these tools have helped break down operational silos.

“One of the biggest learnings is that the real value of AI doesn’t just come from individual tools; it comes from how they connect. We’ve built internal frameworks that allow us to link role-based and task-based agents across teams. These agents support everything from organising requirements to building and testing code, and they’ve enabled our teams to move faster while staying aligned. Without these frameworks, you end up with disconnected efforts. With them, you unlock enterprise-wide productivity,” he noted.
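WorkJam has not published the internals of these frameworks, but the general pattern Kramer describes — a shared layer that lets task-based agents be registered once and invoked across team boundaries — can be sketched in a few lines. Everything below (the `AgentRegistry` name, the task types, the toy handlers) is illustrative, not WorkJam’s actual implementation.

```python
# Minimal sketch of a task-based agent registry, assuming agents are
# plain callables keyed by task type. Names here are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class AgentRegistry:
    # Maps a task type (e.g. "summarise", "qa") to its handling agent.
    handlers: Dict[str, Callable[[str], str]] = field(default_factory=dict)

    def register(self, task_type: str, handler: Callable[[str], str]) -> None:
        self.handlers[task_type] = handler

    def dispatch(self, task_type: str, payload: str) -> str:
        # Any team can route work here without knowing which agent serves it.
        if task_type not in self.handlers:
            raise ValueError(f"No agent registered for {task_type!r}")
        return self.handlers[task_type](payload)


registry = AgentRegistry()
registry.register("summarise", lambda text: text.split(".")[0] + ".")
registry.register("qa", lambda code: f"review queued for: {code}")

print(registry.dispatch("summarise", "Requirements gathered. Next steps pending."))
```

The point of the shared registry is exactly the one Kramer makes: without a common dispatch layer, each team wires up its own agents and the efforts stay disconnected.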

The company also found that voice and text-based AI play different roles across functions. Some teams use AI for code generation or QA workflows, while others rely on natural language queries to search documents, summarise information, or generate content. In every case, the goal is to remove friction and make work feel more intuitive.

“If there’s one key takeaway, it’s that AI works best when it’s embedded into the flow of work, and when it brings people together, rather than creating yet another layer of complexity,” Kramer said.

Unlearning the old ways

Contrary to popular belief, automation does not solve everything. For WorkJam, the challenge was resisting the temptation to over-automate.

“Just because you can automate something doesn’t always mean you should. We’ve learned to focus automation on areas that eliminate friction, not judgment. That’s especially important in frontline environments, where human context and empathy still play a big role,” Kramer said.

From the outset, WorkJam aimed to build technology that acts as a force multiplier, not a replacement.

“Automate where it adds value, empower where human skill makes the difference; that balance is key to scaling effectively and responsibly,” he remarked.

Finally, Kramer emphasised that a one-size-fits-all approach to AI does not exist, making flexibility essential.

“Some models perform better at translation, others at search or summarisation, so we’ve built flexibility into our architecture to apply the right technology for the job. We have to account for latency, uptime, and regional infrastructure differences, while also staying ahead of evolving data privacy and residency regulations. That’s particularly important in regions like Asia-Pacific, where legal frameworks can vary widely. In some cases, we’ve had to adapt deployment models to meet specific sovereignty requirements, without compromising experience or speed,” he concluded.
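Routing each task to the best-suited model while honouring regional residency rules, as Kramer describes, amounts to a lookup with fallbacks. The sketch below is a hypothetical illustration of that idea; the model names, task types, and region keys are placeholders, not WorkJam’s configuration.

```python
# Illustrative per-task, per-region model routing. A region-specific
# deployment (e.g. for data-residency rules) is preferred; otherwise
# the route falls back to a global model for that task.
from typing import Dict, Tuple

# (task, region) -> model deployment name. All entries are placeholders.
MODEL_ROUTES: Dict[Tuple[str, str], str] = {
    ("translation", "apac"): "translator-apac-resident",
    ("translation", "default"): "translator-global",
    ("summarisation", "default"): "summariser-global",
}


def pick_model(task: str, region: str) -> str:
    # Try the regional route first, then the task's global default,
    # then a catch-all general-purpose model.
    return (
        MODEL_ROUTES.get((task, region))
        or MODEL_ROUTES.get((task, "default"))
        or "general-purpose"
    )
```

Keeping the routing table as data rather than code is one way to adapt deployment models to new sovereignty requirements without touching application logic.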