AI ethics: The key to unlocking AI’s full potential

In today’s hypercompetitive landscape, organisations everywhere are investing heavily in AI to gain a competitive edge. Even as innovation accelerates, companies recognise the crucial role of ethics and regulation in AI development: 88% of the 100 C-level executives surveyed by Deloitte reported that their organisations are taking measures to communicate the ethical use of AI to their workforces.

But why do ethics and regulation matter so much in the race to bring AI innovations to market?

New AI innovations introduce new ethical concerns

Advances in AI mean we have moved from building systems that make decisions based on human-defined rules to complex models, trained on large data sets, that define rules, create content, and make decisions automatically. An unconstrained AI system will optimise strictly for its defined objectives, often disregarding broader societal impacts or ethical considerations, and that single-mindedness can erode public trust.

Despite these advancements, AI today still suffers from issues such as bias and hallucinations, which have produced some controversial outcomes. In April 2021, six drivers in the Netherlands were reportedly unfairly terminated by “algorithmic means”, prompting an investigation under the European Union’s General Data Protection Regulation (GDPR) into dismissals made without human review. Two years later, several food delivery workers in the UK were reportedly dismissed with little or no explanation over alleged overpayments inferred from location data.

Similar controversies have emerged worldwide, from loan decisions skewed by gender discrimination to privacy-infringing facial recognition technologies used to process insurance claims.

Many of these failures can be traced to a lack of explainability. AI systems, especially deep learning models, do not follow the straightforward rules humans use. They are often described as “black boxes” because of the complex layers of calculations through which they arrive at decisions, and even experts find it challenging to understand how they reach their conclusions. Without appropriate human supervision and understanding, biased decisions can spiral into harmful outcomes like those above.
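
To make this concrete, one widely used way to probe a black-box model is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. Below is a minimal sketch using scikit-learn; the dataset and model are illustrative assumptions, not specifics from this article.

```python
# Minimal sketch: probing a black-box model with permutation importance.
# The dataset and model are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record how much accuracy degrades;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {idx}: importance {result.importances_mean[idx]:.3f}")
```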

Keeping the focus on ethics has never been more important, especially as new generative AI innovations, like OpenAI’s Sora video generator, promise to accelerate productivity in the workplace and sharpen organisations’ competitive edge. For all their potential, these generative tools can introduce issues like copyright infringement and, worse still, open the door to misuse and misinformation.

Public-private collaboration for ethical AI regulation

While many generative AI tools, like ChatGPT, include safeguards to prevent abuse, users have found ways to circumvent them. Cybercriminals have even built their own generative pre-trained transformers (GPTs) to write malware and produce highly convincing phishing emails at scale.

Few tools or laws today can effectively detect and deter the production of such harmful outputs. The public and private sectors therefore need to collaborate more closely to regulate AI, reduce the risks of misuse, and ensure that models are built with ethics in mind.

Ethical AI involves integrating core ethical principles, such as accountability, transparency, explainability, and good governance, into AI models. Improving explainability and strengthening ethics in models can help organisations address AI’s shortcomings today, and it can also greatly improve the accuracy and effectiveness of decision-making.

Many public and private sector entities are working together to advance ethical AI. For example, Australia invested AU$17 million to create an AI Adopt Program, which assists small-to-medium businesses in making informed decisions on leveraging AI for business enhancement. In 2023, the Singapore government worked with private sector leaders to launch the AI Verify Foundation to address the risks brought about by AI. This year, the foundation launched a new framework for generative AI to address emerging issues — like misuse of intellectual property — while facilitating continued innovation.

As regulations and initiatives continue to roll out, organisations can play their part in advancing ethical AI by ensuring the data they use is trusted.

Trusted data: the foundation of ethical enterprise AI

Building AI systems that people trust requires organisations to have trusted information sources. With accurate, consistent, clean, bias-free, and reliable data as the foundation, an ethically designed enterprise AI system can be relied on to consistently produce fair and unbiased results. Organisations can then readily identify issues, close gaps in logic, refine outputs, and assess whether their innovations comply with regulations.

Here are some tips for organisations looking to build more ethical AI systems:

  • Focus on intent: An AI system trained on data has no context outside of that data. There is no moral compass, no frame of reference for what is fair, unless we define one. Designers therefore need to explicitly and carefully construct a representation of the intent motivating the system’s design. This involves identifying, quantifying, and measuring ethical considerations while balancing them with performance objectives (a scoring sketch follows this list).
  • Consider model design: Well-designed AI systems are built with bias, causality, and uncertainty in mind. Organisations should remember that, apart from data, model design can also be a source of bias. Models should be audited regularly for drift, the gradual loss of accuracy that occurs as the data a model was trained on becomes outdated (a simple drift check appears after this list). Businesses should also extensively model the cause and effect of system changes to understand whether they will lead to negative consequences down the line.
  • Ensure human oversight: AI systems can reliably make good decisions when trained on high-quality data, but they lack emotional intelligence and cannot handle exceptional circumstances. The most effective systems intelligently combine human judgment with AI. Organisations must always ensure human oversight, especially where AI models produce low-confidence outputs (see the routing sketch after this list).
  • Enforce security and compliance: Developing ethical AI systems centred on security and compliance will strengthen trust in the system and facilitate adoption across the enterprise — while ensuring adherence to local and regional regulations.
  • Harness modern data platforms: Leveraging advanced tools, like data platforms that support modern data architectures, can greatly boost organisations’ ability to manage and analyse data across the entire data and AI model lifecycle. Ideally, the platform should have built-in security and governance controls that allow organisations to maintain transparency and control over AI-driven decisions — even as they deploy data analytics and AI at scale.
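
On the first point, one way to make intent explicit is to score models on performance and a quantified ethical consideration together. The sketch below is a hypothetical helper, not a standard API: it penalises accuracy by a demographic-parity gap, and the weight is where designers record how much accuracy they are willing to trade for fairness.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def intent_aware_score(accuracy, y_pred, group, fairness_weight=0.5):
    # fairness_weight makes the designer's intent explicit and auditable:
    # how much accuracy are we willing to trade for parity across groups?
    return accuracy - fairness_weight * demographic_parity_gap(y_pred, group)
```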
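
For the drift audit, a lightweight starting point is a two-sample Kolmogorov–Smirnov test comparing a feature’s training distribution against recent production data. This sketch assumes SciPy is available; the significance threshold is illustrative.

```python
from scipy.stats import ks_2samp

def feature_has_drifted(train_values, live_values, alpha=0.05):
    """Flag a feature whose live distribution departs from training data."""
    statistic, p_value = ks_2samp(train_values, live_values)
    # A small p-value suggests the two samples come from different
    # distributions: a signal to investigate and possibly retrain.
    return p_value < alpha
```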
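
And for human oversight, confidence-based routing is a common pattern: act automatically only on high-confidence predictions and queue the rest for review. The function and labels below are hypothetical; the sketch assumes a scikit-learn-style classifier exposing predict_proba.

```python
def route_decision(model, features, threshold=0.8):
    """Act automatically only when the model is confident; otherwise escalate."""
    # Top-class probability serves as a simple confidence measure.
    confidence = model.predict_proba([features])[0].max()
    if confidence < threshold:
        return "HUMAN_REVIEW"   # hypothetical queue for manual handling
    return "AUTO_DECISION"
```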