The Singapore government has done a commendable job in recent years deploying Artificial Intelligence (AI) to accelerate the healthcare, manufacturing and logistics industries. With its Safer Cyberspace Masterplan, aimed at strengthening the nation’s cyber defence with AI-powered detection and analytic capabilities, the government has clearly recognised the true value of AI and placed its trust in the technology to support Singapore’s development.
Governments across the world have already begun adopting AI in the public sector. An Accenture survey of public sector leaders found that 83% of global senior public sector leadership teams are both able and willing to adopt intelligent technologies. However, despite the successes witnessed across the globe, some hurdles remain that can make or break adoption of the technology, in both the public and private sectors. First and foremost, AI must be developed and implemented with care and consideration, particularly to avoid misuse and unintended consequences.
While the private sector has already made significant inroads with the use of AI, a great deal can also be gained by furthering its ethical application in the public sector. Here’s how focussing on ethical AI can transform the public sector, and why the government needs to be smart in regulating AI applications.
Regulate its application, not the technology
Singapore has established itself as a premier destination for AI investments, ranking top among larger cities in the 2019 Oliver Wyman Forum’s inaugural Global Cities AI Disruption Index. The Singapore government therefore has a huge responsibility to establish a framework for the deployment and use of AI.
We refer to that responsibility as ethical AI. Legislators and regulators have a key role to play in AI accountability, which involves specifying the applications for which AI can and cannot be used. This is crucial, as the government risks hindering the advancement of the technology should it take the wrong approach.
My firm belief is that AI as a technology should not be regulated. Instead, governmental regulations should be put in place to standardise how the technology can and should be used, and in what context. For example, regulations should indicate that applying AI is acceptable for particular purposes in specific industries, while other laws or rules should make clear which applications of AI are not allowed. The Singapore government has done a commendable job in this respect, with a framework ensuring that AI decision-making processes are explainable, transparent and fair, while remaining human-centric.
Certain regulations and codes of conduct do, however, need to be revised for the AI era. Take the Nolan Principles, for example, which set the ethical standards by which public service should be conducted, rules that must be upheld by those working in public service to instil confidence in the public. The Nolan Principles are a solid framework for the public sector, but they need to be put into context as AI goes mainstream. Key principles such as honesty, integrity and openness need to be applied in the context of AI, meaning every public body that uses AI must use it responsibly.
Accountability and traceability are key
The effectiveness of an AI application is determined by its ability to analyse and interpret the different forms of data it encounters. A cognitive data digitisation platform that processes both structured and unstructured data can make better, more informed decisions based on a holistic view of the available information. It also gives the organisation greater clarity on what is actually happening, supporting a balanced and ethical interpretation of its datasets.
For ethical AI to work in the long term, the government and the private sector must collaborate to ensure that AI solutions allow for auditability and traceability. Basic traceability is already available to technology users today, even on everyday devices such as laptops, which let users view their browsing history and records of when files were downloaded and where they were saved. However, not all AI algorithms and engines are programmed to work that way; some immediately sweep up these identifying footprints, erasing the record of what occurred.
That makes it difficult to learn from mistakes, police possible infractions and identify those who do not follow organisational or regulatory rules. Governments using AI need to treat auditability and traceability as critical to their enforcement and compliance efforts; only then will AI be able to deliver transparent services to the public.
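To make the idea of an auditable AI decision concrete, here is a minimal, hypothetical sketch in Python. A toy eligibility rule stands in for a real model, and every decision is appended to a trail with a timestamp, the model version and a hash of the inputs, so an auditor can later trace what was decided and on what basis. All names here (`AuditTrail`, `approve_loan`, `loan_model_v1`) are illustrative inventions, not part of any particular framework.

```python
import hashlib
import json
import time


class AuditTrail:
    """Append-only record of model decisions, kept for later review."""

    def __init__(self):
        self.records = []

    def log(self, model_name, inputs, output):
        # Hash a canonical JSON form of the inputs so the trail proves
        # which data drove the decision without storing raw personal data.
        input_hash = hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode("utf-8")
        ).hexdigest()
        record = {
            "timestamp": time.time(),
            "model": model_name,
            "input_hash": input_hash,
            "output": output,
        }
        self.records.append(record)
        return record


def approve_loan(income, debt, trail):
    # Toy stand-in for a real model: approve when income exceeds twice the debt.
    decision = "approve" if income > 2 * debt else "reject"
    trail.log("loan_model_v1", {"income": income, "debt": debt}, decision)
    return decision


trail = AuditTrail()
print(approve_loan(90000, 30000, trail))  # approve
print(len(trail.records))                 # 1
```

In a production setting the trail would be written to append-only, access-controlled storage rather than an in-memory list, but the principle is the same: every automated decision leaves a footprint that cannot be quietly swept away.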
While it is encouraging to see the Singapore government taking a proactive stance in regulating the use of AI, new challenges are bound to surface as adoption of the technology spreads across businesses. It is vital that the government and the organisations it works with do not lose sight of the ethical aspects of AI when confronting these hurdles, but instead develop frameworks and regulations that treat accountability and traceability as imperatives. Without these ethical considerations, the nation’s AI initiatives and plans will be prone to internal and external threats that would prevent the public sector from realising the true benefits of the technology and, worse still, expose the public at large to damaging misuse.