In an era of data overload, artificial intelligence (AI) gives us a tool to cut through the noise. Today’s AI models, especially the large language models behind generative AI, can process vast quantities of data. If we are careful and smart with this technology, it can and should be used to elevate the lives of almost everyone.
AI holds significant economic potential for the Asia-Pacific region, with the market expected to reach US$136 billion by 2025, growing at an average of 47.71% per year. In businesses, AI can create insights and outputs that would take humans much longer to generate. It can act as an amplifier of our human business intelligence, benefiting all involved.
However, AI is a serious tool that needs a careful operator and human guidance. Knowing how to provide and oversee that critical guidance is perhaps the most important new business skill that every executive needs to learn.
Why AI needs us
Technology has no moral compass. It is critical that when we use extraordinary tools, we take extraordinary measures to ensure we are protecting the people these tools touch.
This has to go beyond just a notion and a talking point. Everyone who uses AI to aid decision-making must cast a critical eye on the answers they obtain and, more essentially, on the data those answers are derived from. It really is more of a cultural issue than a technological one. Responsible parties must oversee these powerful tools out of respect for the people we aim to serve with them.
Controlling for bias
The trust journey needs to start with AI bias, which can creep into company strategies when analysis comes from an AI system that was trained on a slanted collection of data.
For example, an AI-based job recommendation system could inadvertently exclude women from certain job opportunities due to historical biases in the hiring data. Similarly, financial institutions that use AI systems for credit risk assessments can perpetuate socio-economic biases against the unbanked or underbanked that are present in historical data.
Without knowing what data your AI models rely on, you cannot guarantee that your company is not reinforcing AI bias.
Finding clean, reliable data
It’s critical that AI users understand the lineage of the data behind the AI systems they use, especially now that we are in an AI-driven explosion of data. By 2025, there will be 120 zettabytes of data created, captured, or used – we are drowning in data. However, this ocean of data is mostly unusable: unrefined, duplicate, or inaccessible.
Finding the valuable information in the sea of raw data means filtering through a lot of unusable or polluted content. While technology can do some of this, we still need human discernment to find the rivers of clean and reliable data that should inform AI training models.
I believe the most important job people will have as we use AI more is being critical thinkers. We cannot be blinded by information just because we see it presented as analysis or read it on a screen. Anyone who has read a newspaper account of an event they were personally involved in knows that little errors creep into any retelling. AI and algorithms can amplify these little errors at machine speed. Sometimes, only a human gut check can stop small data flaws from spreading and becoming institutional data bias.
Explainability and trust
Skepticism towards AI results is especially critical for 'black box' systems that don't reveal the reasons behind their outputs. Fortunately, many tools now allow you to query the lineage of AI reasoning. Every model that generates results affecting people or businesses should be explainable: the answers you get should come with the reasons for those answers.
Explainable AI (XAI) allows humans to correct AI model bias and accelerates the pace of trust and the adoption of AI tools. It’s also a core component of keeping AI in compliance with emerging regulations and risk management programs.
Knowing how you’re using customer data is essential to building trust in your brand, which ultimately is the most important thing your company has to manage. When users are given clear options and authority over their data, they are willing to share it with businesses they trust. However, according to research from Think with Google, only 30% of marketers in the Asia-Pacific region have developed a specific strategy for engaging with consumers regarding data privacy.
You earn trust in little pieces, but you can lose it in a heartbeat. Transparency builds trust, and trust is currency. Tell your customers how you plan to use the data they give you, and be sure to live up to what you say. When you do this, over time, you can earn the right to ask for more information, benefiting both parties.
With great power…
For communicators of all kinds, modern AI is the most powerful tool we have seen in several generations. It can bring you closer to your customers and your partners, and them closer to you.
However, there remains a baseline responsibility to your stakeholders, including your customers and their families, to wield this power ethically and responsibly. The emerging regulations are a good start, but ultimately, using AI well requires you to know how the tools work, what resources they are using, and how you can grow to apply them in the best possible ways.
Originally posted in Fast Company.