The promise, possibilities, and potential of AI
The harnessing of AI has brought, and will continue to bring, tremendous benefits to society, such as making supply chains more efficient, advancing intelligent automation and robotics to support our everyday living, and offering seamless clearances at borders.
AI technology has advanced to the point where it approaches — and in some instances exceeds — human intellect. Just as human intelligence needs to be harnessed and orchestrated appropriately in human societies with good ethics and governance, we need to do the same with AI.
The dark side of AI
There are some considerations to bear in mind. If you have heard about GPT-4chan, you will also have heard how problematic it was. Developed by a YouTuber in the Artificial Intelligence (AI) community, the GPT-4chan model was trained on posts from the /pol/ ("politically incorrect") board of 4chan, a controversial internet forum. The result? An AI that spewed hate speech.
As troubling as that may seem, the way AI is developed can inadvertently make it easy to use the technology for nefarious purposes. That is starkly evident as the community increasingly embraces open-source development, which no longer restricts the development of AI applications to a small number of privileged companies but opens it up for all to use — including bad actors.
Should we be surprised? The likes of Bill Gates, Elon Musk, and Jeff Bezos have all expressed concerns and issued warnings about the potential dangers of AI, especially around its use within weapons systems and its role in job displacement. Yet much of what we experience today already involves AI to some extent, from the ads that target us specifically on social media to our favourite streaming platforms recommending the latest content based on our previous choices and habits. If we truly want to transform AI's potential into reality, then there are concerns that must first be addressed.
Growing concerns
The use of AI systems has grown exponentially in recent years. This growth has spawned numerous benefits, but it is not insulated from drawbacks that have sparked concerns, especially around compliance, ethics, and governance.
- Bias – An AI system trained on unrepresentative data sets will deliver skewed insights and recommendations. Eliminating bias, so that systems reflect society with greater precision, requires identifying all the potential sources of bias and calibrating AI solutions to address them.
- Loss of control – With the increasing use of AI, machines have become more capable of making important decisions. However, it is still necessary to have human involvement in any decision-making that may affect humanity in any way. AI is still unable to properly account for emotion, apply empathy to situations that call for it, pass moral judgement, or derive creative outcomes.
- Technology is not foolproof – Innovation is a constant work in progress, and there is always the risk of potentially grave errors if decision-making were completely entrusted to an AI system with little to no oversight or calibration. Technology, after all, is not perfect.
- Privacy – Privacy has long been a major ethical concern associated with AI. For instance, your smart devices are constantly picking up cues from their environment, such as speech, which can then be mined for insights and recommendations. AI-based toys that can collect data from children are also a genuine concern.
- Erosion of trust – The indefinite collection and storage of sensitive data such as biometrics raises questions around trust. Could the people in charge be doing anything else with our data "off the books"?
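The bias concern above can be made concrete with a simple fairness metric. The sketch below computes a demographic parity difference — the gap in favourable-outcome rates between two groups — on hypothetical decision data. The group labels and decisions here are illustrative assumptions, not real figures or a prescribed auditing standard.

```python
def positive_rate(outcomes):
    """Fraction of favourable (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in favourable-outcome rates between two groups.
    A value of 0.0 means both groups fare identically on this metric;
    larger values suggest the system may be treating groups unevenly."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical loan-approval decisions (1 = approved) for two groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6 of 8 approved
group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # 3 of 8 approved

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")
```

A metric like this is only a starting point: a non-zero gap flags a pattern worth investigating, while deciding whether that pattern constitutes unfair bias still requires the human judgement the article calls for.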
Compliance is not enough
Addressing these issues requires us to look beyond mere legal compliance. Instead, several factors such as privacy, human rights, and social acceptability must be considered. This type of problem solving should not be limited to firms involved in the development and marketing of AI. AI-related issues must be dealt with across the entire supply chain — including by individuals and organisations that provide AI-based services. These include:
- Complying with relevant laws and regulations around the globe, such as the European Union's AI Act, the United States' NIST AI Risk Management Framework, China's Internet Information Service Algorithmic Recommendation Management Provisions, and Singapore's AI Ethics and Governance Body of Knowledge.
- Developing principles to guide and recognise respect for human rights as the highest priority in all stages of business operations in relation to AI utilisation.
- Addressing the social issues generated by future AI use by leveraging technology.
Technologists are not the only stakeholders when it comes to addressing AI issues. Policymakers will also play an important part in helping us address the potential risks of AI application. They’ll be responsible for:
- Strengthening existing AI regulatory and governance frameworks.
- Championing trial and error, repeated testing, and sandboxes to fine-tune and calibrate AI models and systems, reducing the risk of bias.
- Working on sector-specific regulation that is tailored to the varied applications of AI across various industries.
Lastly, investments must also be made to ensure that the ecosystem remains viable, and that there is a deep and robust talent pool to continually staff AI-related roles.
Consulting firm Korn Ferry estimates that Asia Pacific's TMT (technology, media, and telecommunications) sector could face a talent shortfall of 2 million workers – including AI professionals – at an annual opportunity cost of more than US$151.6 billion by 2030.
Singapore, for instance, has committed SG$180 million to accelerate AI research and launched programmes to upskill its workforce in AI. While those might address "hard" skills such as AI engineering and development, "soft" skills are important too. Singapore's Nanyang Technological University and the Singapore Computer Society have also launched a course in AI ethics and governance, aiming to recognise and certify professionals in those areas.
Great promise, but more progress is needed
Ultimately, there needs to be a deep and open discussion on what AI can and cannot do. Organisations and governments alike need to make sure that AI will be used to benefit as many people as possible. Measures must be adopted to ensure that AI:
- Does not omit — by design or by accident — certain subsets of society.
- Addresses privacy and trust concerns.
- Incorporates safeguards that preserve a degree of human control.
The increasing use of AI in daily life will undoubtedly continue to raise questions around ethics, compliance, and governance. Are humanity's AI goals ambitious, or simply dangerous? That is a question that all stakeholders involved — governments, regulators, innovators, tech firms, and consumers — need to work collectively towards answering.