APAC and the EU’s AI Act: Potential impacts explained

Image created by DALL·E 3.

A new regulation in the European Union, aimed squarely at artificial intelligence, is transforming the regulatory landscape. The AI Act, recognised as the world’s first comprehensive law for this technology, underscores the 27-nation bloc’s commitment to the responsible use of AI.

While AI has undoubtedly unlocked countless possibilities for businesses, it has been, and still is, used to steal money, misinform the public, and cripple organisations. In Hong Kong, deepfake technology was leveraged to steal millions of dollars from a multinational.

In Asia-Pacific, separate early initiatives from governments hold promise for safer use of AI. But could similar all-encompassing legislation be looming on the horizon? And what does the new law mean for APAC firms and the regional market?

Early strides

The closest regional initiative to the EU’s AI Act is probably the ASEAN Guide on AI Governance and Ethics, which is intended as a reference for organisations in “designing, developing, and deploying traditional AI technologies in commercial and non-military or dual use applications.”

However, since the document merely provides recommendations, it lacks the authority to enforce rules against the misuse of the technology.

In Singapore, the establishment of a “National AI Strategy” provided much-needed direction for organisations developing and leveraging the technology, given that the country is leading the region in AI innovation.

Andy Ng, VP and Managing Director for Asia South and Pacific Region, Veritas Technologies. Image courtesy of Veritas Technologies.

However, a recent study also found that while 95% of office workers in the country recognise the importance of AI usage policies and guidelines, only 43% of employers currently impose mandatory AI usage rules.

“Few can deny the benefits of generative AI, but critical questions associated with its use, such as ethical and cybersecurity concerns, remain to be addressed,” shared Andy Ng, VP and Managing Director for Asia South and Pacific Region, Veritas Technologies.

Additionally, 36% of Singapore employees said they had used customer details, employee information, and company financials on generative AI platforms.

Meanwhile, in Japan, G7-member countries established the “Hiroshima AI Process Comprehensive Policy Framework,” which includes both the “Hiroshima Process International Guiding Principles for All AI Actors” and the “Hiroshima Process International Code of Conduct for Organisations Developing Advanced AI Systems.”

Japan has also inaugurated its AI Safety Institute, tasked with researching AI safety evaluation methods, among other duties, and is planning to build the Tokyo Center of the Global Partnership on AI (GPAI), a public-private partnership dedicated to research and analysis on generative AI.

EU policy

Under the EU’s AI Act, artificial intelligence systems are classified into four levels of risk:

  • Unacceptable risk: Such AI practices are prohibited outright, including manipulative AI and social scoring systems.
  • High-risk systems: Subject to strict obligations before and after being placed on the market.
  • Limited-risk systems: Subject to lighter transparency obligations.
  • Minimal-risk systems: Includes AI-enabled video games and spam filters, which are left unregulated.
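
The tiered scheme above can be pictured as a simple lookup from system category to risk tier. The sketch below is purely illustrative: the four tiers come from the Act, but the example systems and the `is_prohibited` helper are our own assumptions, not anything defined by the legislation.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict obligations apply
    LIMITED = "limited"            # lighter transparency duties
    MINIMAL = "minimal"            # effectively unregulated

# Illustrative mapping of example systems to tiers; the tier
# assignments mirror the examples given in the text.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def is_prohibited(system: str) -> bool:
    """Return True if the example system falls in the banned tier."""
    return EXAMPLES.get(system) is RiskTier.UNACCEPTABLE
```

The key design point of the Act is exactly this shape: obligations attach to the tier, not to the individual technology, so classifying a system correctly is the first compliance step.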

Additionally, the AI Act prohibits the following:

  • Using subliminal techniques that impair an individual’s or a group’s ability to make informed decisions.
  • Exploiting vulnerabilities due to age, disability, or specific social or economic situations to influence behaviour.
  • Biometric categorization systems that classify individuals based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.
  • Implementing social scoring systems.
  • Assessing the risk of individuals committing criminal offences based solely on profiling or on their personality traits and characteristics.
  • Creating facial recognition databases by indiscriminately scraping facial images from the internet or CCTV footage.
  • Inferring the emotions of individuals in workplaces and educational settings, except for valid medical or safety reasons.
  • Utilising real-time remote biometric identification systems in publicly accessible spaces for law enforcement, with specific exceptions for: 
  1. Targeted searches for victims of serious crimes like abduction, human trafficking, and sexual exploitation, as well as missing persons.
  2. Preventing a specific, substantial and imminent threat to life or physical safety or a genuine and present or foreseeable threat of a terrorist attack.
  3. Locating or identifying individuals suspected of criminal offences for the purposes of conducting criminal investigations, prosecutions, or executing criminal penalties for offences.

Furthermore, the AI Act classifies the following as high-risk AI systems:

  • Biometrics
  • Critical infrastructure
  • Education and vocational training
  • Employment, workers management, and access to self-employment
  • Access to and enjoyment of essential private and public services and benefits
  • Law enforcement
  • Migration, asylum, and border control management
  • Administration of justice and democratic processes

Christina Montgomery, Vice President and Chief Privacy & Trust Officer, IBM. Image courtesy of IBM.

Brando Benifei, Internal Market Committee co-rapporteur from Italy, highlighted the significance of this legislation: “We finally have the world’s first binding law on AI, aimed at reducing risks, creating opportunities, combating discrimination, and ensuring transparency. Thanks to Parliament, unacceptable AI practices will now be banned in Europe, safeguarding the rights of workers and citizens. An AI Office will be established to assist companies in complying with the rules before they become mandatory. This ensures that human beings and European values are central to the development of AI.”

The AI Act will take effect 20 days following its publication in the Official Journal and will be fully applicable 24 months later. Exceptions include bans on prohibited practices, which will apply six months after the Act comes into force; codes of practice (nine months after enactment); rules for general-purpose AI including governance (12 months after enactment); and obligations for high-risk systems (36 months after enactment).
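
The staggered timeline above is straightforward date arithmetic, sketched below under an assumed publication date (the actual date depends on the Official Journal; the offsets are the ones stated in the text, and the day-clamping in `add_months` is a simplification for illustration).

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Add calendar months to a date (day clamped to 28 for simplicity)."""
    y, m = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + y, month=m + 1, day=min(d.day, 28))

# Hypothetical publication date in the Official Journal.
published = date(2024, 7, 12)
in_force = published + timedelta(days=20)  # Act takes effect 20 days later

# Milestone offsets, in months after entry into force, per the article.
milestones = {
    "bans on prohibited practices": add_months(in_force, 6),
    "codes of practice": add_months(in_force, 9),
    "general-purpose AI rules": add_months(in_force, 12),
    "fully applicable": add_months(in_force, 24),
    "high-risk system obligations": add_months(in_force, 36),
}

for name, when in sorted(milestones.items(), key=lambda kv: kv[1]):
    print(f"{when}: {name}")
```

For APAC firms selling into the EU, the practical takeaway is that prohibited-practice bans bite first, while high-risk obligations leave the longest runway.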

Industry support

The adoption of AI, ranging from chatbots to anti-phishing tools, is widespread across industries. Consequently, the tech sector has welcomed the EU’s AI Act as a major stepping stone towards responsible use of the technology.

IBM, for its part, highlighted transparency, accountability, and human oversight in developing and deploying AI technologies as key qualities that will be upheld by the new law moving forward.

Shama Patari, Executive Director of Government Relations, Lenovo. Image courtesy of Lenovo.

Christina Montgomery, Vice President and Chief Privacy & Trust Officer at IBM, said, “I commend the EU for its leadership in passing comprehensive, smart AI legislation. We believe this risk-based approach will contribute to the creation of open and trustworthy AI ecosystems. IBM is ready to assist our clients and other stakeholders in complying with the EU AI Act and upcoming legislation worldwide, emphasising the collective effort needed to realise the potential of responsible AI.”

Lenovo also supports the EU’s AI Act, which it expects to foster trustworthiness and address the potential risks of AI.

Shama Patari, Executive Director of Government Relations at Lenovo, said, “With any evolving technology such as AI, it’s critical to balance advancements in innovation together with transparency and regulation. The EU’s AI Act is a welcome first step in bringing clarity to a complex and fast-moving landscape and ensuring that the industry at large is focused on its responsibilities. The legislation underscores the importance of accessibility, responsibility, and ethics in AI development, promoting a framework where advancements are balanced with the necessary checks and balances.”

For Salesforce, which is also heavily invested in AI, particularly through its Einstein solution, the new law provides an important guideline for both the private and public sectors concerning AI usage.

Eric Loeb, EVP of Global Government Affairs, Salesforce. Image courtesy of Salesforce.

Eric Loeb, EVP of Global Government Affairs at Salesforce, remarked, “We view it as an appropriate and constructive role for governments to consult with other stakeholders and take definitive, coordinated action. This approach builds trust in AI and advances responsible, safe, risk-based, and globally interoperable AI policy frameworks. By creating such frameworks like the EU AI Act, and pushing for commitments to ethical and trustworthy AI through convening multi-stakeholder groups, regulators can make a substantial positive impact. We applaud EU institutions for their leadership in this domain.”

Cybersecurity implications

With AI increasingly being used by threat actors to launch attacks, regulations such as the EU’s AI Act serve as a north star for cybersecurity firms developing their solutions, as well as for enterprises looking to fortify their defences.

Peter Sandkuijl, VP of EMEA Engineering and Evangelist at Check Point Software, notes that while the initial focus may be on the hefty fines, the legislation has broader implications for the cybersecurity landscape:

Peter Sandkuijl, VP of EMEA Engineering and Evangelist, Check Point Software.

  • Stricter Development and Deployment Guidelines: AI developers and deployers are required to follow stringent guidelines, developing AI systems with security as a foundational element. This includes integrating cybersecurity measures from the ground up, adopting secure coding practices, and ensuring AI systems are resilient against attacks.
  • Increased Transparency: The Act mandates clear disclosures about AI operations, particularly for high-risk applications. This may involve detailed information on training data, AI decision-making processes, and privacy and security measures, aiding in vulnerability identification and threat mitigation.
  • Enhanced Data Protection: With AI systems relying heavily on large datasets, the Act emphasises data governance, necessitating stronger data protection protocols to safeguard personal data’s integrity and confidentiality, which are core aspects of cybersecurity.
  • Accountability for AI Security Incidents: The Act’s provisions are set to enforce organisational accountability for security breaches involving AI systems. This encompasses the implementation of more stringent incident response protocols and the requirement for AI systems to be equipped with advanced mechanisms for detecting and responding to cybersecurity incidents effectively.
  • Mitigation of Bias and Discrimination: The Act tackles the risks associated with bias and discrimination within AI systems, indirectly bolstering cybersecurity. Systems that are fair and unbiased are less likely to be exploited through their vulnerabilities. Ensuring AI systems are trained on diverse, representative datasets can reduce the risk of attacks that exploit biased decision-making processes.
  • Certification and Compliance Audits: AI systems identified as high-risk will be subjected to thorough testing and certification processes to verify compliance with the EU’s safety standards, including those related to cybersecurity. Regular compliance audits are mandated to ensure that these AI systems consistently meet the required standards over their operational lifespan.
  • Prevention of Malicious AI Use: The legislation intends to curb the use of AI for harmful purposes, including the production of deepfakes and the automation of cyberattacks. Through the regulation of specific AI applications, the Act forms part of a comprehensive cybersecurity strategy designed to reduce the threat of AI being leveraged in cyberwarfare and criminal activities.
  • Research and Collaboration: The Act could spur research and collaboration in the fields of AI and cybersecurity. This initiative seeks to foster the creation of new technologies and strategies for safeguarding AI systems against emerging threats.

“The rapid speed of AI adoption demonstrates that legislation alone cannot keep pace and the technology is so powerful that it can and may gravely affect industries, economies, and governments. My hope for the EU AI law is that it will serve as a catalyst for broader societal discussions, prompting stakeholders to consider not only what the technology can achieve but also what the effects may be,” he concluded.