Accountants worldwide need to maintain awareness of evolving trends in artificial intelligence, particularly as 51% of them are unaware of explainable AI (XAI), according to the Association of Chartered Certified Accountants (ACCA).
“It is in the public interest to improve understanding of XAI, which helps to balance the protection of the consumer with innovation in the marketplace,” said Narayanan Vaidyanathan, head of business insights at ACCA.
Explainable AI (XAI) emphasises not just the output an algorithm produces, but also how the algorithm works and how it reaches its conclusion. The idea is for this information to be available to the user in a human-readable form, rather than being hidden within code.
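To make the idea concrete, here is a minimal sketch of what a human-readable explanation can look like, assuming a toy credit-approval score with hand-picked feature names, weights and threshold (none of which come from the ACCA report). Alongside its decision, the model reports how much each factor contributed:

```python
# A minimal sketch of the idea behind explainable AI. The feature names,
# weights, and threshold below are illustrative assumptions, not figures
# from the ACCA report.

FEATURES = {
    "years_trading": 0.4,    # longer trading history raises the score
    "debt_to_equity": -0.9,  # higher leverage lowers the score
    "on_time_filings": 0.6,  # timely filings raise the score
}
THRESHOLD = 1.0  # scores above this are approved


def score_with_explanation(applicant: dict) -> tuple[bool, list[str]]:
    """Return a decision plus a human-readable reason for each factor."""
    contributions = {
        name: weight * applicant[name] for name, weight in FEATURES.items()
    }
    total = sum(contributions.values())
    # List the factors in order of influence, largest effect first.
    reasons = [
        f"{name} contributed {value:+.2f} to the score"
        for name, value in sorted(
            contributions.items(), key=lambda kv: abs(kv[1]), reverse=True
        )
    ]
    return total > THRESHOLD, reasons


approved, reasons = score_with_explanation(
    {"years_trading": 5, "debt_to_equity": 1.2, "on_time_filings": 1}
)
print("approved" if approved else "declined")
for reason in reasons:
    print(" -", reason)
```

A real model would be far more complex, but the principle is the same: the decision arrives with its reasoning attached, in terms a practitioner can read, rather than as a bare yes or no.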
ACCA’s latest report, “Explainable AI,” addresses explainability from the perspective of accountancy and finance professionals.
The report warns practitioners against oversimplified narratives. In accountancy, AI is neither fully autonomous nor a complete fantasy. The middle path of augmenting, rather than replacing, the human works best when the human understands what the AI is doing, and that requires explainability.
Further, practitioners need to consider the level of explainability required, and how it can support model performance, ethical use and legal compliance.
Policy makers also need to understand that improved explainability reduces the deep asymmetry between experts who understand AI and the wider public.
For regulators, explainability can help reduce systemic risk through a better understanding of the factors influencing algorithms that are increasingly being deployed across the marketplace.
Finally, an environment that balances innovation and regulation can be achieved by supporting industry as it continues, and indeed redoubles, its efforts to make explainability a core feature of product development.