There has never been a time before falsehood.
Ever since humans have been capable of language, misinformation (the unintentional spreading of untruths) and disinformation (the intentional practice of doing so) have existed.
The invention of the printing press and the rise of mass media exponentially amplified the reach of publications, which sometimes carried untrue statements, opinions presented as fact and outright propaganda.
With the rise of the internet and social media in the past 30 years, many scholars have argued that our very understanding of knowledge itself has shifted. Our relationship with the “truth” has become increasingly interwoven with identity, culture and politics.
Sensationalist content
The distinction between producers of information (journalists, academics or documentarians, for instance) and consumers has more or less fallen away: everybody now has the opportunity to produce content and information for mass consumption, while algorithms incentivise the production of sometimes egregiously sensationalist content.
We now exist in an environment that, arguably, encourages misinformation and disinformation, which are often motivated by political or economic gain, popularity or even vendettas.
At Flow Communications, where we work, these thoughts are always top of mind. We are paid to produce original work that fulfils our clients’ marketing and communication objectives, and at all times we must be ethical, embrace truthfulness and, perhaps most importantly, never cause harm.
Most recently, the rise of artificial intelligence (AI) has supercharged our capacity to produce untruths.
Producing falsehood at scale
Information reliability analyst NewsGuard has identified at least 1 254 ‘news’ sites that regularly publish misleading, AI-generated content that is convincingly presented as fact.
Simply put, AI has given us the ability to easily produce falsehood at scale and distribute it en masse.
Where the ‘truth’ is concerned, we are now in an age of myth. Reading or consuming content feels more and more like poring over ancient maps, where unexplored territories are marked with illustrations of dragons.
And as AI becomes more and more advanced, it is becoming increasingly challenging to discern fact from fiction. It seems as if nothing is indisputable.
Even institutions whose very currency is truthfulness are not immune. Take a startling recent matter in the Pietermaritzburg High Court, Mavundla v MEC Department of Co-operative Governance and Traditional Affairs and Others, in which a candidate attorney in the plaintiff’s legal team used AI to produce a list of nine legal citations.
White genocide
The problem is that seven of the nine citations were entirely fictitious, complete with invented case numbers and non-existent judges. That the legal team did not cross-check the citations is problematic, as attorneys have a duty of candour to the court (i.e. they must speak the truth) and courts habitually take them at their word.
The Mavundla matter is a classic example of misinformation: the candidate attorney’s intention had not been to mislead the court, and she had believed (somewhat carelessly) that the citations were true.
Now to disinformation. We decided to see for ourselves whether we could prompt AI to produce intentionally false information for us. For it to be ‘proper’ disinformation, we set these criteria: it should contain a hint of truth (to be believable), have a local connection, be current and be difficult to fact-check.
So we picked the topical issue of US President Donald Trump (and others) declaring – falsely – that white Afrikaner farmers face genocide in South Africa and are deserving of asylum in his country.
Hit paydirt
On the first go we got right-wing content, so we asked for more bias – and hit paydirt.
“The global left continues to turn a blind eye to the systematic persecution of white Afrikaners in South Africa. But thankfully, the Trump administration has had the courage to speak out against the genocide being carried out against South Africa’s white population, particularly its farmers,” it began.
This met all of our criteria. It is false, but contains a modicum of truth. It has a local connection and is current, and the facts are difficult to pin down – the SAPS stopped reporting specifically on farm murders in 2007.
So what can we humans do to not be fooled, whether by inadvertent or deliberate lies?
There are several practical things to consider:
- AI easily parrots human lies or invents “facts”: always do your own research, and check everything
- AI may sound like a journalist, but it is not one: beware, though – 28% of journalists worldwide use generative AI to produce their copy
- AI has a ‘fingerprint’: generative AIs produce content in their own way – and it’s easy to spot
- Visuals can be giveaways: always view imagery and video with a jaundiced eye
- Attribution: this is a must-do, but few AIs actually report their sources
You don’t need to be a dragon-slayer to navigate our world of uncertain information. But you do need to keep your wits about you, be sceptical of everything you see and hear, and canvass a wide range of sources to arrive at something approximating ‘truth’.
Kneo Mokgopa is a project manager and Willem Steenkamp is a senior writer/editor at Flow Communications, and both are in Flow’s training team. In this capacity they have presented training on AI, notably to South African journalists on AI and the media. Flow Communications is one of South Africa’s leading marketing and communications agencies. Founded in 2005 in a small spare bedroom, the company is now a multi-award-winning agency. For more information, visit www.flowsa.com. You can also follow Flow on Facebook, LinkedIn, X or Instagram.