Responsible AI deployment as the key to long-term success

For decades, the media depicted AI as either a miraculous force that would transform the world or a dystopian threat. With generative AI now the talk of the town, discussion centres on its groundbreaking benefits and potential applications.

For example, generative AI could enable financial institutions to tailor investment strategies to individual client preferences. In healthcare, it might automate the transcription of clinician-patient interactions, while manufacturers could use it for predictive maintenance on assembly lines. The technology can process vast amounts of data, analyse it to derive insights, interpret natural language with human-like accuracy, and replicate and automate human behaviours.

However, the development of AI also presents challenges, both real and potential, as it tests the limits of existing regulatory frameworks. Neglecting these challenges could result in reputational damage, diminished public trust in AI, and potentially onerous government regulations.

The responsibility for ethical AI usage lies with businesses as much as with governments. Companies should proactively integrate principles of ethical AI into their generative AI projects from the outset.

The danger of biases in generative AI models

One concern highlighted in Singapore’s National AI Strategy (NAIS 2.0) is the inadequate transparency around large language models (LLMs). This is crucial given the potential biases embedded in the models, which could impact the validity, credibility, and legality of their outputs.

Like children who emulate their parents, AI models can inadvertently adopt patterns from the data they are trained on. The risk is compounded when data scientists are unaware of the historical and societal biases that may be present in that data.

This oversight can have far-reaching consequences across various sectors. In healthcare, biases in data or algorithms could adversely affect patient care and resource allocation. In human resources, they could influence recruitment, evaluation, and decision-making processes.
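
To illustrate, one first line of defence is to screen historical data for obvious disparities before a model is trained on it. The sketch below is a minimal, hypothetical example of a "four-fifths rule" check on recruitment records; the column names, toy data, and 0.8 threshold are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch: screening a hypothetical hiring dataset for
# adverse impact. All names, data, and thresholds are illustrative.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes (e.g. 'hired') for each demographic group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group selection rate divided by the highest. Ratios below
    roughly 0.8 (the 'four-fifths rule') are a common red flag."""
    return float(rates.min() / rates.max())

# Toy records standing in for historical recruitment data.
records = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "hired": [1,   1,   0,   1,   0,   0,   0,   0],
})

rates = selection_rates(records, "group", "hired")
print(rates.to_dict())                                # approx. {'A': 0.67, 'B': 0.2}
print(f"Ratio: {disparate_impact_ratio(rates):.2f}")  # 0.30 -> flag for review
```

A check this simple will not catch subtler proxy biases, but it surfaces the question early, before a model bakes the pattern in.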

It is essential to address these biases and actively work towards more inclusive AI systems. Industry best practice rests on a set of commonly adopted principles: human-centricity, transparency, inclusivity, privacy and security, robustness, and accountability. These principles guide responsible innovation, including the use of AI.

Accountability and transparency as building blocks of responsible AI

To foster accountability in AI use, organisations must recognise that it is a shared responsibility of all people and entities involved in an AI system. Encouraging accountability can be achieved by implementing clear decision workflows that assign ownership and enhance transparency. These workflows enable users to create, approve, annotate, deploy, and audit decisioning processes, while maintaining a record of involvement.
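
What such a workflow looks like in code varies by platform, but at its heart it is an append-only record that ties every action to a named owner. Below is a minimal in-memory sketch; the roles, actions, and names are hypothetical, and a real system would persist the log and integrate with identity management.

```python
# Minimal sketch of an audit trail for a decision workflow.
# Everything here is hypothetical and in-memory for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    actor: str       # who performed the action (ownership)
    action: str      # e.g. "create", "approve", "annotate", "deploy"
    note: str = ""
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class DecisionFlow:
    name: str
    history: list[AuditEvent] = field(default_factory=list)

    def record(self, actor: str, action: str, note: str = "") -> None:
        self.history.append(AuditEvent(actor, action, note))

flow = DecisionFlow("credit-limit-v2")
flow.record("j.tan", "create")
flow.record("r.lim", "annotate", "added income verification step")
flow.record("a.ng", "approve")
flow.record("j.tan", "deploy")

for event in flow.history:   # the auditable record of involvement
    print(event.timestamp.isoformat(), event.actor, event.action, event.note)
```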

An AI system designed with accountability in mind should also incorporate mechanisms for customer feedback, error remediation, and correction. Monitoring and auditing AI operations helps organisations identify issues quickly and resolve concerns before they escalate. Feedback loops enable AI systems to learn and adjust their behaviour based on user input.

To effectively monitor various aspects of AI system performance, such as data drifts, concept drifts, and out-of-bounds values, organisations need platform capabilities like bias detection, explainability, decision auditability, model monitoring, and other governance measures.
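
As a rough illustration, the sketch below implements two of these checks in Python: a Population Stability Index (PSI) comparison for data drift on a single feature, and a simple out-of-bounds scan. The synthetic data and rule-of-thumb thresholds are assumptions for illustration, not the behaviour of any particular platform.

```python
# Hedged sketch of two routine monitoring checks. Bucket counts,
# thresholds, and data are illustrative assumptions.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, buckets: int = 10) -> float:
    """Population Stability Index between a training-time feature
    distribution and live data. A common rule of thumb: < 0.1 stable,
    0.1-0.25 moderate drift, > 0.25 significant drift."""
    edges = np.percentile(expected, np.linspace(0, 100, buckets + 1))
    edges[0], edges[-1] = -np.inf, np.inf    # catch values beyond the range
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)     # avoid division by zero
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

def out_of_bounds(values: np.ndarray, low: float, high: float) -> int:
    """Count of live values outside the expected valid range."""
    return int(np.sum((values < low) | (values > high)))

rng = np.random.default_rng(0)
train = rng.normal(50, 10, 10_000)   # feature distribution seen in training
live = rng.normal(58, 12, 1_000)     # shifted distribution in live traffic
print(f"PSI: {psi(train, live):.3f}")            # > 0.25 warrants review
print(f"Out of bounds: {out_of_bounds(live, 0, 100)}")
```

Concept drift, where the relationship between inputs and outcomes changes, requires comparing predictions against eventual ground truth and is typically handled by the model-monitoring layer rather than a single statistic.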

Strengthening responsible AI through inclusivity

Organisations should involve more non-technical roles in discussions about AI. It is not enough for the AI agenda to be determined solely by technologists, given the implications for justice, well-being, and equity. Non-technical domain experts are better positioned to consider these implications and identify risks and opportunities.

Incorporating a diverse range of perspectives puts organisations in the best position to recognise and address ethical risks at the point where AI-driven decisions are made. This inclusivity ensures that potential ethical issues are handled appropriately and enables a more comprehensive approach to managing AI risk.

Future-proofing investments in generative AI

While many organisations already benefit from generative AI, it is crucial to ensure its deployment is responsible, ethical, and safe. Proactively addressing the potential negative impacts of unethical AI use is vital for the long-term success and reputation of an organisation. By prioritising responsible deployment and usage, organisations can mitigate risks and ensure that the benefits of generative AI are maximised while minimising harm.