Why authenticity is the next big thing in managing threats

As AI-generated images, chatbots, and augmented and virtual realities continue to populate our digital world, there has been growing concern about bad actors and the damage they can do to brand reputations. This is especially pressing given that 79% of APAC consumers (per Accenture’s Meet Me in the Metaverse study) expect to interact more with AI and AI-generated content over the next three years.

The next big cybersecurity threat 

The advantages of AI-generated content are huge. On a consumer level, AI now enables individuals to move seamlessly between the physical and virtual worlds. From a business perspective, synthetic data gives organisations a larger and more diverse pool of data, countering the bias commonly found in data collected from the real world.

Despite this, AI brings its fair share of controversy. There is a deep sense of mistrust: deepfakes and phishing attacks continue to rise. An Accenture study found an average of 270 attacks per company in 2021, a 31% increase over 2020.


These attacks hit both the top and bottom lines of companies, particularly as 65% of global consumers report lacking confidence in their ability to identify deepfake videos or synthetic content (also per the metaverse study). From privacy and safety to security concerns, companies are left wondering how they can leverage AI in a way that is authentic for both their customers and stakeholders.

Managing the risks with human authenticity 

There is now an urgency to manage bad actors, particularly as trust in the technology sector is at an all-time low in 17 out of 27 countries, including Singapore, Australia, and Japan.

Authenticity should be the main framework for unlocking new attitudes. More concretely, using AI in an authentic manner means taking heed of provenance, policy, people, and purpose within the business to build and regain trust.

To do so, leaders need to consider the following: 

  1. Is your organisation prepared?
    Understanding how AI-generated images, content, and videos can boost and create new avenues of customer engagement is the first step in deciding whether your organisation is ready to adopt these technologies. Organisations must define the purpose behind the use of synthetic content, its advantage over non-synthetic content, and the key metrics that can attest to it.

    Deeper exploration of synthetic data can also help here. For instance, deploying a basic customer service bot purely for cost savings does little to serve its intended purpose of helping customers. But using synthetic data in a model to counter bias is an authentic use of generative AI.
  2. How is your organisation protecting itself against adversarial forces?
    It is important to identify emerging risks before they become systemic. Before embarking on these technologies, organisations must examine the provenance of the information flowing into and out of the organisation, watching for potential scams and disinformation. One way to combat this is with distributed ledger technologies (DLTs). For instance, Project Origin — led by Microsoft, BBC, CBC, and The New York Times — is tackling the spread of disinformation by using DLTs to establish provenance from publishing to presentation.

    Malicious impersonation and the use of deepfakes can threaten the business. By incorporating verifiable identity markers throughout their platforms and content, businesses can differentiate themselves and build greater trust with customers.
  3. How will your organisation choose to shape the future of an increasingly AI-focused world?
    To be truly authentic, organisations need to stay informed of the regulations emerging in this new territory and work to close the gaps between those policies and the ones established internally.

Being careless or negligent does more harm than good and could affect how your stakeholders embrace and trust these technologies at large. One way to prevent this is to engage in the standards-making process and empower people not just to ask the tough questions but to find solutions to them. It bodes well when organisations have a consistent approach to decision-making on big issues such as security, privacy, safety, and ethical conduct.
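To make the first point concrete: one common way synthetic data can counter bias is by generating extra records for an under-represented group, in the spirit of SMOTE-style interpolation. The sketch below is illustrative only; the feature values, jitter, and function names are assumptions, not part of any specific product mentioned above.

```python
import random

def synthesize_minority(samples, n_new, jitter=0.05, seed=0):
    """Create synthetic minority-class records by interpolating
    between randomly chosen real samples (a SMOTE-like idea)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(samples, 2)
        t = rng.random()
        # Interpolate feature-wise, then add a little noise so
        # synthetic points are not exact copies of real ones.
        point = [x + t * (y - x) + rng.uniform(-jitter, jitter)
                 for x, y in zip(a, b)]
        synthetic.append(point)
    return synthetic

# Illustrative: a minority group with only 3 records in the training set.
minority = [[0.9, 0.1], [0.8, 0.2], [0.85, 0.15]]
augmented = minority + synthesize_minority(minority, n_new=7)
print(len(augmented))  # 10 records instead of 3
```

Rebalancing the training pool this way is what makes the bot example above "authentic": the synthetic data serves the model's fairness, not just cost savings.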
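The provenance and identity-marker ideas in points 2 and 3 can be sketched together as a tamper-evident chain of records, where each step commits to the previous step's hash and carries an authentication code acting as an identity marker. This is a minimal stdlib-only sketch, not how Project Origin actually works (real systems use public-key signatures and distributed ledgers; the HMAC key and record fields here are assumptions for illustration).

```python
import hashlib
import hmac
import json

def add_provenance_step(chain, actor, action, content, key):
    """Append a record that commits to the previous record's hash
    and carries an HMAC 'identity marker' over its own payload."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    record = {
        "actor": actor,
        "action": action,
        "content_hash": hashlib.sha256(content).hexdigest(),
        "prev_hash": prev_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    record["mac"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    chain.append(record)
    return chain

def verify_chain(chain, key):
    """Recompute every hash and MAC; any edit breaks the chain."""
    prev = "genesis"
    for rec in chain:
        body = {k: rec[k] for k in ("actor", "action", "content_hash", "prev_hash")}
        if body["prev_hash"] != prev:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != rec["hash"]:
            return False
        if not hmac.compare_digest(
                hmac.new(key, payload, hashlib.sha256).hexdigest(), rec["mac"]):
            return False
        prev = rec["hash"]
    return True

key = b"org-signing-key"  # illustrative secret, not a real credential
chain = add_provenance_step([], "publisher", "publish", b"article v1", key)
chain = add_provenance_step(chain, "platform", "present", b"article v1", key)
print(verify_chain(chain, key))  # True; any edit or wrong key yields False
```

The point of the sketch is the property, not the mechanism: anyone who edits an earlier record, or who lacks the key, breaks verification, which is what lets provenance be established "from publishing to presentation".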

A world that is still a work in progress  

How we use AI and other foundational technologies such as edge and 5G determines their value. We can use them in ways that help improve the world, or we can fall victim to malicious actors.

As organisations continue to deep-dive into the capabilities of these technologies, they will see implications across several business functions, including security, R&D, marketing, and beyond. Elevating authenticity within the business becomes imperative if organisations are to tap the potential of these technologies and drive greater positive change.
