As AI is having an outsized and unprecedented impact on livelihoods, industries and nations, corporate management of AI has risen to the same level of importance as managing a company’s financial health.
Businesses trialling and deploying AI at scale can ill afford to overlook the risks. From ensuring AI behaves ethically to managing the profound changes it brings to workflows, people, structures, and systems, AI now requires the same type of oversight that governs fiscal operations.
The complexity of managing AI warrants a methodical approach
Managing AI presents several unique challenges for which most enterprises are ill-equipped. Unlike other technologies, AI has cognitive capabilities and is constantly learning from us and evolving. It uses both human and machine-generated — or synthetic — data, making it susceptible to biases and errors.
There is also a significant risk of dependency lock-in. Consider a recent advertisement by a major technology company in which AI is called upon to write a professional worker’s emails, right down to the tone and language. Given typical human nature, this is highly likely to become commonplace, and the implications are huge: if employees become overly reliant on AI systems, essential skills and critical thinking will erode.
Furthermore, there are no universally accepted methods to measure AI performance, and its decision-making processes are often inscrutable and difficult to explain. This, coupled with the autonomy we are starting to hand over to AI systems, and the fact that multiple contributors are involved at every stage of developing them, complicates accountability. Who is to blame, and who is culpable, if an AI system makes a bad decision with harmful consequences?
Strong leadership oversight
Just as organisations’ fiscal actions and balance sheets are subject to intense and rigorous scrutiny from leadership, managing AI requires a similar level of attention from the board and C-suite. This includes adopting a governance framework akin to that of financial audits, and implementing policies that address accountability, transparency, and ethical dilemmas.
At a board level, committees should be formed to handle the organisation’s overall AI approach and ensure that responsible AI is baked into the strategy before the fiscal year starts. At a transactional level, individual leads such as CFOs and CIOs are responsible for enabling the right AI set-up within their functions and teams.
In fact, many forward-thinking businesses are creating roles like Chief AI Officers (CAIOs) to ensure AI is managed with the same precision as corporate finances. The CAIO, much like a CFO, is tasked with ensuring AI policies align with company values while maximising its potential. Like the financial health of a company, AI’s performance needs continuous monitoring.
When AI oversight is diffused throughout the organisation, decision-making power is not concentrated solely in the hands of the CIO/CTO or a single team, and there is greater accountability and a fail-safe mechanism in place to mitigate the fallout and risks.
Regular audits
Just as businesses rely on external audits to catch financial irregularities, the complexities of AI could benefit from third-party guidance to navigate ethical grey areas. Ethical considerations in AI are rarely simple right-or-wrong choices. To address this, businesses increasingly require unbiased assistance to evaluate AI decisions by mapping out affected parties and identifying risks that might not be apparent internally.
This raises a question: should audits be self-governed or enforced by outside agencies and regulatory bodies, as financial audits are conducted by certified accounting firms? While self-governance allows flexibility, it may lack transparency. External audits could provide an objective assessment, ensuring AI systems adhere to a framework of ethical principles.
When evaluating the financial health and long-term sustainability of a company, several financial metrics are considered in tandem. Similarly, a framework for ethical AI development should comprise several core principles that are evaluated not in isolation but holistically.
Frameworks for ethical AI development are often built on foundational pillars such as diversity and inclusion, privacy and security, accountability and reliability, explainability, transparency, and environmental and social impact. These principles ensure that AI systems operate in compliance with technical standards while adhering to ethical guidelines across various areas.
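To make the idea of holistic rather than standalone evaluation concrete, here is a purely illustrative sketch. The pillar names, scores, and threshold are assumptions for the example, not any published standard; the point is that one failing principle fails the whole assessment, so a strong average cannot mask a weak pillar:

```python
# Hypothetical sketch: holistic evaluation of ethical-AI pillars.
# Pillar names and the 0.7 threshold are illustrative assumptions.

PILLARS = [
    "diversity_and_inclusion",
    "privacy_and_security",
    "accountability_and_reliability",
    "explainability",
    "transparency",
    "environmental_and_social_impact",
]

def evaluate(scores: dict, minimum: float = 0.7) -> bool:
    """Pass only if every pillar clears the minimum bar.

    A high average cannot compensate for one failing pillar,
    which is what evaluating holistically (not standalone) implies.
    """
    return all(scores.get(p, 0.0) >= minimum for p in PILLARS)

audit = {p: 0.9 for p in PILLARS}
audit["explainability"] = 0.5  # one weak pillar fails the whole audit
print(evaluate(audit))  # False
```

In practice, an auditor would weigh qualitative evidence rather than numeric scores, but the design choice is the same: the principles form a composite check, not a menu.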
In addition to board-level oversight and external audits, public disclosures could further support ethical AI. These disclosures might appear in annual reports or be required at the point of AI usage, giving stakeholders insight into AI’s role, impacts, and adherence to ethical standards. Ultimately, establishing an auditing standard will help build public trust in AI.
With the right oversight, AI’s transformative force can be a game-changer. However, without the necessary guidelines, it can become a significant risk. Companies can navigate AI’s complexities while unlocking its full potential by treating AI governance with the same importance as fiscal health.