There has been much ado about whether generative AI like ChatGPT will be the death of journalism and creativity, and the arguments are not completely unfounded.
In 2014, The Guardian dubbed AI “journalists who never sleep” and predicted that “robot writers may soon be commonplace.”
Meanwhile, concerns have been raised about ChatGPT’s ability to write both short-form and long-form content, an ability that poses a considerable threat to writers everywhere.
According to JV Rufino, former director for mobile and social at Inquirer Group and former editor-in-chief of Inquirer.net, ChatGPT is a useful tool for writers, despite its limitations.
During the virtual panel “The AI Revolution in Media and Communication: Content Creation, News Production, and Social Media” hosted by the Philippine Communications Society, Rufino shared his experience using ChatGPT to write editorial columns. He noted that ChatGPT produces competent but generic copy, which makes it well suited to corporate press releases, where plain, conventional language is the norm.
“As a writer, my problem is I hate the blank page, so talking to ChatGPT provides that first draft,” Rufino added.
The limits of AI
Despite the power of AI to generate content quickly, Rufino maintained that a human element is necessary, especially for editorial roles.
“Sometimes, ChatGPT will produce nonsensical output, and you don’t want to publish nonsensical content,” said Rufino. “You also have to guide ChatGPT, similar to how you enter search terms in a search engine. There’s actually a job called ‘prompt engineer’ whose function is to write the prompts that guide the AI to produce the desired outcome. At a high level, this is actually programming, because it involves telling a machine in great detail what you want it to do. With AI, prompt engineers type in normal English, continuously nudging the system to get the desired output.”
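To make the prompt-engineering idea concrete, here is a minimal sketch of guiding a model programmatically, assuming OpenAI’s Python client; the model name and prompts are illustrative and not drawn from the panel:

```python
# A minimal prompt-engineering sketch, assuming OpenAI's Python client
# (pip install openai) and an OPENAI_API_KEY set in the environment.
# The model name and prompts are illustrative, not from the article.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # A system prompt nudges the model toward the desired register.
        {"role": "system",
         "content": "You are an editor who writes concise, neutral prose."},
        # The task itself is spelled out in plain English, as Rufino describes.
        {"role": "user",
         "content": "Draft a three-sentence press release announcing a "
                    "new community library. Avoid superlatives."},
    ],
)

print(response.choices[0].message.content)
```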
Meanwhile, for radio and streaming services, AI is taking on additional roles.
Earlier this year, Spotify rolled out its new DJ feature, which not only personalises playlists based on the user’s preferences, but also makes short commentaries in between songs, mimicking a real DJ.
For Christian dela Cruz, Director of Digital Media at Manila Broadcasting Company (MBC), generative AI is something to prepare for, rather than fear.
“We are treading in uncharted territory right now. We are all navigators here. This technology right now is forcing every industry, not just the broadcast industry – it’s forcing us to somehow understand and see the impact of these technologies in terms of our business operations, and how it can impact our audience,” he said.
Dela Cruz added that while AI can certainly accomplish many things, listeners still look for a human touch to complement their auditory experience.
“When it comes to radio, it’s about personal connection and creating real-time updates for listeners, as well as establishing an emotional connection with them. DJs play a crucial role in achieving this, which is why they’re still relevant. Although ChatGPT can be helpful in providing news updates and real-time weather reports, radio is not just about delivering information; it’s also about human interaction, and we need to continue leveraging this as an industry,” he added.
However, there is also the question of the kinds of data being fed into the AI. For Gemma Bagayaua Mendoza, Digital Strategy Head at Rappler, this issue requires a deeper conversation.
“The technology can present challenges when you start talking about what exactly you are feeding the AI. Not all data and information are created equal. There’s vetted information, true information, and false information. Ultimately, it’s ‘garbage in, garbage out,’ and without proper processing, that is the problem with AI. It has been fed a lot of information, including unvetted information from sources like Wikipedia,” she said.
AI as a disinformation tool
Aside from fears that AI may lead to job displacement, technology experts are also wary about its potential to be used for spreading disinformation.
It is important to remember, then, that technologies like AI are still influenced by human culture, noted Rappler’s Mendoza.
“The problem with solely relying on tools like ChatGPT and AI is that you lose the source and provenance of information. With the rise of meme warfare and memetic relationships on the internet, pieces of truth can be circulated without proper context, which can cause societal divisions,” she said.
“At the end of the day, if you rely on a disembodied chatbot for information, you’ll have problems with sources of information and veracity,” Mendoza continued.
Beat Fluri, CTO of Adnovum, also acknowledged the limitations of AI language models like ChatGPT.
“As an AI language model, ChatGPT can create vast amounts of material – words, pictures, sounds, and videos – in split seconds. However, it is limited to the data it has been trained on, and while it is designed to provide accurate and truthful information to the best of its abilities, it will occasionally generate responses that contain inaccuracies or misinformation,” he said.
To avoid falling prey to disinformation attempts, the cybersecurity expert recommended three steps individuals and organisations can take:
- Cross-check information. Verify the information against other reliable sources, such as official government websites, peer-reviewed articles, and reputable news outlets known for accuracy.
- Check for consistency. Self-contradiction can crop up in ChatGPT’s responses; if an answer contradicts previously known facts or reliable sources, it may be inaccurate. A simple check is sketched after this list.
- Evaluate the source of data. ChatGPT is trained on vast amounts of data from sources that may or may not be trustworthy or credible, so it is essential to evaluate the credibility of the source the information is coming from.
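As a small illustration of the consistency check, the same question can be posed in two phrasings and the answers compared; the sketch below uses only Python’s standard library, with canned strings standing in for real chatbot responses:

```python
# A toy consistency check: compare two answers to the same question and
# flag divergence. The strings below are stand-ins for model output.
from difflib import SequenceMatcher

answer_a = "The Treaty of Paris was signed in 1783, ending the war."
answer_b = "The war ended with the Treaty of Paris, signed in 1784."

similarity = SequenceMatcher(None, answer_a.lower(), answer_b.lower()).ratio()
print(f"similarity: {similarity:.2f}")

# A low ratio (or a direct factual clash, like the differing years above)
# is a cue to verify the claim against a reliable outside source.
if similarity < 0.8:
    print("Answers diverge - cross-check before trusting either one.")
```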
Cybercriminals’ new ally
The ability of tools like ChatGPT to generate human-like text at great speed makes them a potential threat when it comes to creating phishing emails, social engineering attacks, and other types of malicious content, cautioned Adnovum’s Beat Fluri.
“In the past, threat actors writing phishing messages by hand sometimes revealed themselves with mistakes like spelling and grammatical errors that could alert a target that the sender is not a native English speaker. With ChatGPT, it’s easier for cybercriminals to impersonate an organisation or individual. Emails can be tailored according to the right prompts, making it harder for organisations to distinguish them from genuine ones,” he explained.
Aside from this, ChatGPT’s capabilities extend to fabricating fake websites and identities, which can be used to enable fraudulent behaviours.
“ChatGPT can also create realistic-sounding speeches for phone-based social engineering attacks, tricking recipients into believing that they are interacting with a real person and revealing sensitive information to a cybercriminal using an AI bot for malicious purposes,” the cybersecurity expert continued.
What’s more alarming, Fluri noted, is that employees and individuals may unintentionally feed private and confidential information to ChatGPT.
“For instance, employees may forget to censor a document they have asked ChatGPT to proofread, or request content to be generated around confidential information such as financial figures or performance. Any reports uploaded may also contain information on customers, vendors, and partners, which will be recorded and stored,” he said.
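As a rough illustration of the precaution Fluri describes, a script can scrub obvious identifiers from text before it is pasted into a chatbot; the patterns below are illustrative only, not a complete data-loss-prevention solution:

```python
# A minimal redaction sketch: mask obvious identifiers before sending text
# to a chatbot. The patterns are illustrative and far from exhaustive.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),     # phone numbers
    (re.compile(r"[$€£]\s?\d[\d,.]*"), "[AMOUNT]"),        # currency figures
]

def scrub(text: str) -> str:
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

memo = "Contact jane.doe@example.com or +63 912 345 6789 re: the $2,400,000 deal."
print(scrub(memo))
# Contact [EMAIL] or [PHONE] re: the [AMOUNT] deal.
```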
Fighting back
Although AI might seem like an all-powerful tool that will one day take over from the humans who created it, experts have recommended safeguards to ensure that the technology remains an ally rather than an opponent.
“Vulnerable organisations see generative AI and ChatGPT as entirely new forms of cyberthreats when in reality, ChatGPT just magnifies the age-old attack methods. AI enables many of the same attacks, but on an unprecedented scale. However, none of that matters if you don’t have the right fundamentals in place,” noted Pierre Samson, Chief Revenue Officer at Hackuity.
He added that generative AI is accelerating vulnerability exploits, but the fundamentals haven’t changed: complete cyber hygiene means aggregating data from the siloed security tools most organisations already have in place, prioritising common vulnerabilities and exposures (CVEs) according to the organisation’s specific attack surface, and automating the team’s remediation efforts.
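A minimal sketch of what such prioritisation might look like, using hypothetical findings rather than real scanner output (the weighting scheme is an assumption for illustration):

```python
# Hypothetical risk-based vulnerability prioritisation: weight raw CVSS
# severity by exposure and exploitability. IDs, data, and weights are
# illustrative, not from any real scanner or from Hackuity.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float            # base severity score, 0-10
    internet_facing: bool  # is the affected asset exposed?
    exploit_available: bool

def priority(f: Finding) -> float:
    score = f.cvss
    if f.internet_facing:
        score *= 1.5
    if f.exploit_available:
        score *= 1.5
    return score

findings = [
    Finding("CVE-0000-0001", cvss=9.8, internet_facing=False, exploit_available=False),
    Finding("CVE-0000-0002", cvss=7.5, internet_facing=True, exploit_available=True),
]

for f in sorted(findings, key=priority, reverse=True):
    print(f.cve_id, round(priority(f), 1))
# CVE-0000-0002 ranks first (16.9) despite its lower raw CVSS score,
# because exposure and exploitability matter for this attack surface.
```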
“AI is replacing a lot of things – just not the cybersecurity basics,” Samson added.
Meanwhile, the zero-trust framework, which has served many organisations well in countering past threats, may not be enough to deal with AI-based threats.
“Zero trust can protect organisations to some extent against the dangers posed by generative AI. The concept assumes that breaches will occur and allows organisations to implement the right controls to minimise damage. However, it’s important to understand that cybercriminals are targeting both organisations and employees – people like you and me. While zero trust provides a solid foundation for securing an organisation’s network and data, it may not be specifically tailored to guard against the risks presented by generative AI,” Adnovum’s Beat Fluri said.
To this end, people should regularly be given cybersecurity awareness training, he added.
Moreover, organisations can harness the power of the Internet of Behaviour (IoB) to synthesise users’ online activity data from a behavioural perspective using machine learning and data analytics.
Fluri noted that the data on individual behaviours collected through IoB can assist security teams in identifying unauthorised access and suspicious activities by hackers, enabling them to activate security protocols at the earliest point of entry.
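One common way to operationalise this kind of behavioural flagging is unsupervised anomaly detection; the sketch below uses scikit-learn’s IsolationForest on synthetic session features (the features, data, and threshold are illustrative, not details from the panel):

```python
# A behavioural-anomaly sketch: train on typical sessions, flag outliers.
# Features and data are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Each row is one session: [login hour, MB downloaded, failed logins].
normal_sessions = np.column_stack([
    rng.normal(10, 2, 500),   # logins cluster around mid-morning
    rng.normal(50, 15, 500),  # typical download volume
    rng.poisson(0.2, 500),    # failed logins are rare
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_sessions)

# A 3 a.m. session with a huge download and repeated failed logins.
suspicious = np.array([[3, 900, 6]])
print(model.predict(suspicious))  # -1 is scikit-learn's label for an anomaly
```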
Regarding concerns that AI may replace human jobs and functions, Dr Fernando Paragas, Dean of the College of Mass Communication at the University of the Philippines Diliman, suggests that the key is to explore areas for collaboration with AI rather than constantly fearing the unknown.
“The recent panic, however, is founded on the idea that, seemingly all too suddenly, the robot is doing what only we as sentient human beings should be able to do – write poems and essays, synthesise literature, and create art, among other very human activities,” he said.
Paragas added that the greatest concern for communication and media practitioners is how to uphold long-standing values, including humanity, independence, accuracy, fairness, impartiality, accountability, and transparency while also embracing the values of digital culture, such as collaboration, data-driven decision making, innovation, inclusivity, and flexibility.
To illustrate how the AI revolution is changing the paradigm of learning, the academician used term papers and examinations as an example, noting that educators can treat ChatGPT as a collaborative tool rather than trying to police every single student.
“We’re going to fight a losing battle of checking against cheating if we use another app to check for cheating. Instead, we could provide the available tools and ask students to share what they’re using and how they’re building on it. Then, we can interact in that way,” he concluded.