RPA vs AI agents: When to stick and when to switch

Agentic AI enables adaptive decision-making, automating complex workflows while maintaining flexibility and human oversight. Image created by DALL·E 3.

Generative AI has an undeniable pull for modern enterprises: tasks that once required days or weeks can now be completed in hours, or even minutes. With so many organisations embarking on large language model (LLM) projects, being left out can seem like a bad business decision.

However, deploying tools like AI agents isn’t necessarily for everyone and must be approached with caution. There are scenarios where robotic process automation (RPA) remains the more suitable option, according to Raj Shukla, Chief Technology Officer of enterprise AI company SymphonyAI.

“It’s important to be selective about use cases where companies decide to implement generative AI. The general philosophy is to select the simplest solution that will solve the problem. In cases where the workflows are completely fixed and repetitive, there is little benefit to introducing an LLM-based AI model, which, in exchange for its higher intelligence and adaptability, incurs higher costs, computational complexity, latency, and stochasticity,” he said.

Fit for use

Enterprises have traditionally leveraged RPA to simplify repetitive tasks and enhance operational efficiency. Unlike AI agents, which can perform tasks with a degree of autonomy, RPA relies on explicit instructions for every step.

AI agents, on the other hand, can make independent decisions and tackle complex tasks by interpreting data and applying learned patterns, Shukla noted. These agents can adapt to changing environments and refine their actions based on feedback, making them invaluable in dynamic, multifaceted settings.

“Increasingly, we see adoption of copilot agents — and subsequently, autonomous agents — to take on many tasks where RPA traditionally operated. Already, copilots are often used in overlay environments, and they can be quickly and easily implemented and adopted by users. Copilots fit naturally into workflows, and their natural-language interface makes them simple to adopt and easy to work with,” Shukla said.

Since copilots aren’t bound by heuristics, they can draw relevant information and recommendations from multiple sources, offering more targeted insights and enabling rapid what-if scenario planning.

For repetitive, heuristics-based processes and low-complexity tasks with structured data inputs, RPA remains the preferred choice, according to Shukla, for these reasons (a minimal sketch of such a workflow follows the list):

  • RPA uses structured inputs and defined logic to automate processes such as data entry, file transfers, and form filling.
  • RPA can operate alongside people and is often employed for attended automation.
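
To make the contrast concrete, here is a minimal Python sketch of the kind of fixed, rule-based workflow RPA handles well: structured input, explicit steps, and no model in the loop. The invoice CSV and the submit_form() function are hypothetical stand-ins, not any particular RPA product’s API.

```python
# A minimal sketch of a fixed, rule-based workflow: structured input,
# explicit steps, no model in the loop. Columns and submit_form() are
# hypothetical stand-ins for a real back-office system.
import csv
import io

SAMPLE_INVOICES = """invoice_id,vendor,amount
1001,Acme Ltd,250.00
1002,Globex,980.50
"""

def submit_form(record: dict) -> None:
    """Stand-in for the RPA action that fills and submits a form."""
    print(f"Submitted invoice {record['invoice_id']} for {record['vendor']}")

def run_workflow(raw_csv: str) -> None:
    # Step 1: parse the structured input.
    rows = csv.DictReader(io.StringIO(raw_csv))
    for row in rows:
        # Step 2: apply fixed, explicit validation logic.
        if float(row["amount"]) <= 0:
            continue  # defined rule: skip non-positive amounts
        # Step 3: perform the predefined action. Nothing here adapts or "decides".
        submit_form(row)

if __name__ == "__main__":
    run_workflow(SAMPLE_INVOICES)
```

Every branch in this flow is spelled out in advance; if the input format changed, the script would have to be rewritten, which is exactly the rigidity that agents are meant to relax.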

In such fixed, repetitive workflows, Shukla reiterated, an LLM-based model brings higher costs, computational complexity, latency, and stochasticity in exchange for intelligence and adaptability the task simply does not need.

Deploying guardrails

According to Shukla, software companies like SymphonyAI are close to realising agentic AI, or autonomous agents, as viable products. However, certain issues must be addressed to ensure these agents work as intended, particularly the risk of agents going rogue.

Raj Shukla, Chief Technology Officer, SymphonyAI. Image courtesy of SymphonyAI.

“They can potentially make decisions or actions that have negative consequences. That’s why it’s important to design agentic AI with built-in capabilities to set guardrails and trigger human oversight if an agent’s recommendations fall outside defined parameters,” he advised.

In addition, agentic (LLM-based) planning can drift into suboptimal steps or even infinite loops, necessitating careful monitoring. To address this, Shukla stressed the importance of grounding AI agents in enterprise policies and constraining their actions with guardrails, sometimes even at the cost of limiting the LLM’s creativity.
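
A minimal sketch of that guardrail pattern might look like the following, assuming a hypothetical propose_next_action() wrapper around the LLM-based planner. Recommendations outside a defined threshold are escalated to a human, and a hard step limit keeps the planner from drifting into endless loops; this is an illustration of the pattern, not SymphonyAI’s actual implementation.

```python
# Guardrail pattern: bound the planning loop and escalate out-of-bounds
# actions to a human. propose_next_action() is a hypothetical stand-in
# for an LLM-based planner.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    name: str
    amount: float  # e.g. the value an agent wants to transact or adjust

MAX_STEPS = 10                 # guardrail: hard bound on planning iterations
APPROVAL_THRESHOLD = 1_000.0   # guardrail: defined parameter for autonomy

def propose_next_action(step: int) -> Optional[Action]:
    """Hypothetical stand-in for the agent's LLM-based planner."""
    demo = [Action("reconcile_ledger", 120.0), Action("issue_refund", 4_500.0)]
    return demo[step] if step < len(demo) else None

def request_human_review(action: Action) -> bool:
    """Stand-in for a human-in-the-loop approval step."""
    print(f"Escalated for review: {action.name} ({action.amount})")
    return False  # in this sketch, unreviewed actions are rejected by default

def execute(action: Action) -> None:
    print(f"Executed: {action.name} ({action.amount})")

def run_agent() -> None:
    for step in range(MAX_STEPS):  # never loop forever
        action = propose_next_action(step)
        if action is None:
            break
        if action.amount > APPROVAL_THRESHOLD:
            if not request_human_review(action):
                continue  # blocked by the guardrail
        execute(action)

if __name__ == "__main__":
    run_agent()
```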

“User interface, traceability, and ability to audit the steps behind an agent’s execution path are critical, but frameworks to support these are still immature. SymphonyAI has invested significant time and resources into this area,” he added.

Shukla also noted that SymphonyAI leverages both LLMs and small language models (SLMs) in their agentic workflows to manage costs and reduce latencies.
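
One common way to combine the two model sizes is a simple router that sends short, routine requests to an SLM and reserves the larger, slower, costlier model for complex ones. The sketch below uses stubbed model calls and a crude keyword-and-length heuristic purely for illustration; Shukla does not describe SymphonyAI’s routing logic.

```python
# Cost/latency-aware routing between a small and a large model.
# The model calls are stubs; a real router might use a classifier,
# token counts, or per-task policies instead of this heuristic.
def call_slm(prompt: str) -> str:
    return f"[SLM] quick answer to: {prompt}"

def call_llm(prompt: str) -> str:
    return f"[LLM] detailed answer to: {prompt}"

COMPLEX_HINTS = ("explain", "compare", "plan", "investigate")

def route(prompt: str) -> str:
    # Crude complexity check: long prompts or analytical verbs go to the LLM.
    is_complex = len(prompt.split()) > 30 or any(h in prompt.lower() for h in COMPLEX_HINTS)
    return call_llm(prompt) if is_complex else call_slm(prompt)

if __name__ == "__main__":
    print(route("What is the status of invoice 1001?"))
    print(route("Compare last quarter's exception rates and plan remediation steps."))
```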

Deeper dive

Differentiating AI copilot agents from autonomous agents, Shukla explained that copilots are suited to quick, interactive Q&A. They are prompt-driven, guided by instructions, and designed for human interaction, with memory limited to the current session. Autonomous agents, in contrast, are long-running and task-driven, employing event listeners and anomaly detection models to determine when to act. These agents are policy-driven rather than instruction-based and are intended to persist over the long term.
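
That distinction can be sketched in a few lines of Python: a copilot object waits for prompts and keeps only short-term session memory, while an autonomous agent reacts to an event stream according to a policy. The class names, policy table, and event types below are illustrative assumptions, not part of any vendor’s framework.

```python
# Copilot vs autonomous agent, reduced to their interaction patterns.
from collections import deque

class Copilot:
    """Prompt-driven, human-facing, short-term session memory only."""

    def __init__(self) -> None:
        self.session_memory: deque = deque(maxlen=10)  # forgets beyond the session

    def ask(self, prompt: str) -> str:
        self.session_memory.append(prompt)
        return f"Answer (given {len(self.session_memory)} turns of context): {prompt}"

class AutonomousAgent:
    """Long-running and task-driven: reacts to events, governed by a policy."""

    POLICY = {"suspicious_transfer": "open_case", "routine_payment": "ignore"}

    def on_event(self, event_type: str) -> str:
        # Policy-driven rather than instruction-based; unknown events escalate.
        action = self.POLICY.get(event_type, "escalate_to_human")
        return f"{event_type} -> {action}"

if __name__ == "__main__":
    copilot = Copilot()
    print(copilot.ask("Summarise this alert for me"))

    agent = AutonomousAgent()
    for event in ["routine_payment", "suspicious_transfer", "unknown_pattern"]:
        print(agent.on_event(event))
```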

Shukla outlined five key consideration areas for deploying AI agents:

  1. Data quality, quantity, and integration — AI models require substantial volumes of high-quality data to function effectively. Incomplete, inconsistent, or biased data can significantly impair model performance, especially in enterprises with siloed data. To ensure AI models have access to rich, meaningful data, enterprises should establish robust data collection and preprocessing pipelines to address quality issues and mitigate bias through careful selection and augmentation. They should also employ extract, transform, and load (ETL) processes to integrate diverse data sets into a unified lakehouse (a minimal ETL sketch follows this list).
  2. Infrastructure and scalability — Running large AI models, particularly generative ones, demands significant computational resources. Scalability challenges often arise when deploying models in production. Using a combination of LLMs and SLMs can help mitigate these issues.
  3. Model interpretability and explainability — Many AI models, particularly deep learning ones, are viewed as “black boxes,” making their decision-making processes difficult to understand. To address this, enterprises must ensure full transparency by providing insight into the sources accessed by the models so users can contextualise recommendations. Audit trails with alerting mechanisms should be integrated at every level to enable comprehensive oversight.
  4. Security and privacy — Handling sensitive data in AI models introduces risks related to privacy and security. Therefore, role-based access controls should be implemented across all levels, alongside strong data governance policies, encryption, and anonymisation techniques. Data should only be sourced from known, trusted, and verifiable entities. Authentication protections must be built into data and API layers. Additional safeguards should ensure no data leaves compliance boundaries, and LLMs must be deployed within compliant sandboxes.
  5. Alignment with business goals — For enterprise AI adoption to succeed, solutions must address specific use cases within industries and deliver measurable improvements in productivity and accuracy. Regular involvement from business stakeholders during the AI assessment and selection process is vital to ensure alignment with goals and clear ROI. Products should be tailored to processes and workflows that demonstrably enhance outcomes for defined use cases and verticals.
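
For the first point, a minimal ETL sketch might look like this, using two hypothetical source feeds and an in-memory SQLite table as a stand-in for the unified lakehouse: extract from each source, transform (normalise and de-duplicate), then load.

```python
# Minimal extract-transform-load sketch. The feeds are hypothetical, and
# an in-memory SQLite table stands in for a lakehouse table.
import sqlite3

CRM_FEED = [{"id": "C-1", "email": "ANA@EXAMPLE.COM"}, {"id": "C-2", "email": "bo@example.com"}]
BILLING_FEED = [{"customer": "C-2", "email": "bo@example.com"}, {"customer": "C-3", "email": " cy@example.com "}]

def extract() -> list:
    # Pull records from each siloed source into a common shape.
    rows = [{"id": r["id"], "email": r["email"]} for r in CRM_FEED]
    rows += [{"id": r["customer"], "email": r["email"]} for r in BILLING_FEED]
    return rows

def transform(rows: list) -> list:
    # Normalise values and drop duplicates across sources.
    seen, out = set(), []
    for r in rows:
        record = (r["id"], r["email"].strip().lower())
        if record not in seen:
            seen.add(record)
            out.append(record)
    return out

def load(rows: list) -> None:
    # Load the cleaned records into the unified store.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id TEXT, email TEXT)")
    conn.executemany("INSERT INTO customers VALUES (?, ?)", rows)
    print(conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0], "rows loaded")
    conn.close()

if __name__ == "__main__":
    load(transform(extract()))
```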

Risk/Reward

Shukla believes that the rewards of deploying AI agents far outweigh the associated risks.

“We are currently enhancing the complexity and autonomy of AI agents within our applications, and the feedback from customers has been extremely positive. We believe this marks the next exciting evolution in enterprise AI. Predictive and generative AI have advanced to a level where they can automate workflows that were once deemed too complex for traditional software,” he remarked.

To illustrate, Shukla cited anti-money laundering investigations as an example. Investigators often face overwhelming workloads, spending much of their time on labour-intensive tasks such as data retrieval and compiling research from various sources.
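
As a rough illustration of what that retrieval-and-compilation work looks like when automated, the sketch below pulls records from several stubbed, hypothetical sources and assembles them into a single case file for a human investigator; real AML tooling, data sources, and case formats would of course differ.

```python
# Compiling an investigation case file from multiple (stubbed) sources.
def fetch_transactions(customer_id: str) -> list:
    return [f"{customer_id}: wire transfer 9,800 USD", f"{customer_id}: cash deposit 9,500 USD"]

def fetch_watchlist_hits(customer_id: str) -> list:
    return []  # no hits in this illustrative stub

def fetch_adverse_media(customer_id: str) -> list:
    return [f"News mention of {customer_id} in a fraud investigation (unverified)"]

def compile_case(customer_id: str) -> str:
    # Gather each evidence category, then assemble a single summary document.
    sections = {
        "Transactions": fetch_transactions(customer_id),
        "Watchlist hits": fetch_watchlist_hits(customer_id),
        "Adverse media": fetch_adverse_media(customer_id),
    }
    lines = [f"Case file for {customer_id}"]
    for title, items in sections.items():
        lines.append(f"## {title}")
        lines.extend(items or ["None found"])
    return "\n".join(lines)

if __name__ == "__main__":
    print(compile_case("CUST-042"))
```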

“Agentic AI excels in handling these tasks, leading to transformative productivity gains and enabling human resources to focus on more strategic activities,” he concluded.