Can you tell if a photo is fake?

You probably can’t. That’s why new rules are needed

by Martin Bekker
May 19, 2025
in Digital
[Image: An African Mona Lisa generated by AI / Shutterstock AI generator]

The problem is simple: it’s hard to know whether a photo’s real or not anymore. Photo manipulation tools are so good, so common and easy to use, that a picture’s truthfulness is no longer guaranteed.

The situation got trickier with the uptake of generative artificial intelligence. Anyone with an internet connection can cook up just about any image, plausible or fantastical, with photorealistic quality, and present it as real. This affects our ability to discern truth in a world increasingly influenced by images.

I teach and research the ethics of artificial intelligence (AI), including how we use and understand digital images.

Many people ask how we can tell if an image has been changed, but that’s fast becoming too difficult. Instead, here I suggest a system where creators and users of images openly state what changes they’ve made.

Any similar system will do, but new rules are needed if AI images are to be deployed ethically – at least by those who want to be trusted, especially the media.

Doing nothing isn’t an option, because what we believe about media affects how much we trust each other and our institutions. There are several ways forward. Clear labelling of photos is one of them.

Deepfakes and fake news

Photo manipulation was once the preserve of government propaganda teams, and later, expert users of Photoshop, the popular software for editing, altering or creating digital images.

Today, digital photos are automatically subjected to colour-correcting filters on phones and cameras. Some social media tools automatically “prettify” users’ pictures of faces. Is a photo taken of oneself by oneself even real anymore?

The basis of shared social understanding and consensus – trust regarding what one sees – is being eroded. This is accompanied by the apparent rise of untrustworthy (and often malicious) news reporting. We have new language for the situation: fake news (false reporting in general) and deepfakes (deliberately manipulated images, whether for waging war or garnering more social media followers).

Misinformation campaigns using manipulated images can sway elections, deepen divisions, even incite violence. Scepticism towards trustworthy media has untethered ordinary people from fact-based accounts of events, and has fuelled conspiracy theories and fringe groups.

Ethical questions

A further problem for producers of images (personal or professional) is the difficulty of knowing what's permissible. In a world of doctored images, is it acceptable to prettify yourself? How about editing an ex-partner out of a picture and posting it online?

Would it matter if a well-respected western newspaper used AI to publish a photo of Russian president Vladimir Putin pulling a face in disgust (an expression he has surely made at some point, but of which no actual image has been captured, say)?

The ethical boundaries blur further in highly charged contexts. Does it matter if opposition political ads against then-presidential candidate Barack Obama in the US deliberately darkened his skin?

Would generated images of dead bodies in Gaza be more palatable, perhaps more moral, than actual photographs of dead humans? Is a magazine cover showing a model digitally altered to unattainable beauty standards, while not declaring the level of photo manipulation, unethical?

A fix

Part of the solution to this social problem demands two simple and clear actions. First, declare that photo manipulation has taken place. Second, disclose what kind of photo manipulation was carried out.

The first step is straightforward: in the same way pictures are published with author credits, a clear and unobtrusive “enhancement acknowledgement” or EA should be added to caption lines.

The second is about how an image has been altered. Here I call for five “categories of manipulation” (not unlike a film rating). Accountability and clarity create an ethical foundation.

The five categories could be:

C – Corrected

Edits that preserve the essence of the original photo while refining its overall clarity or aesthetic appeal – adjustments like colour balance, contrast or lens-distortion correction. Such corrections are often automated (for instance by smartphone cameras) but can be performed manually.

E – Enhanced

Alterations that are mainly about colour or tone adjustments. This extends to slight cosmetic retouching, like the removal of minor blemishes (such as acne) or the artificial addition of makeup, provided the edits don’t reshape physical features or objects. This includes all filters involving colour changes.

B – Body manipulated

This is flagged when a physical feature is altered. Changes in body shape, like slimming arms or enlarging shoulders, or the altering of skin or hair colour, fall under this category.

O – Object manipulated

This declares that the physical arrangement of objects has been changed: a finger or limb moved, a vase added, a person edited out, a background element added or removed.

G – Generated

Entirely fabricated yet photorealistic depictions, such as a scene that never existed, must be flagged here. This covers all images created digitally, including by generative AI, but is limited to photographic depictions. (An AI-generated cartoon of the pope would be excluded, but a photo-like picture of the pontiff in a puffer jacket is rated G.)

The suggested categories are value-blind: they are (or ought to be) triggered simply by the occurrence of any manipulation. So, colour filters applied to an image of a politician trigger an E category, whether the alteration makes the person appear friendlier or scarier. A critical feature for accepting a rating system like this is that it is transparent and unbiased.

The CEBOG categories above aren't fixed; there may be overlap: B (Body manipulated) might often imply E (Enhanced), for example.
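
To make the proposal concrete, here is a minimal sketch, in Python, of how the CEBOG labels and the caption-line enhancement acknowledgement could be represented. The names (EditCategory, ea_caption) and the "EA:" caption format are illustrative assumptions, not an existing standard.

    from enum import Enum

    class EditCategory(Enum):
        """The five CEBOG categories of manipulation proposed above."""
        CORRECTED = "C"           # clarity or aesthetic fixes: colour balance, lens distortion
        ENHANCED = "E"            # colour/tone adjustments, minor cosmetic retouching
        BODY_MANIPULATED = "B"    # physical features altered: body shape, skin or hair colour
        OBJECT_MANIPULATED = "O"  # objects moved, added or removed
        GENERATED = "G"           # entirely fabricated photorealistic depictions

    def ea_caption(categories: set) -> str:
        """Build the enhancement acknowledgement for a caption line.

        The categories are value-blind: any qualifying edit triggers its
        letter, whether the change flatters or harms the subject.
        """
        if not categories:
            return "EA: none"
        order = "CEBOG"  # report letters in the canonical C-E-B-O-G order
        letters = sorted((c.value for c in categories), key=order.index)
        return "EA: " + ",".join(letters)

    # Example: a portrait that was colour-filtered and had a person edited out.
    print(ea_caption({EditCategory.ENHANCED, EditCategory.OBJECT_MANIPULATED}))  # EA: E,O

Because each letter is triggered mechanically by the kind of edit, not by its intent, a label like this stays transparent and unbiased in the sense described above.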

Feasibility

Responsible photo manipulation software could automatically indicate to users the class of manipulation carried out. If needed, it could watermark the image, or simply record the category in the picture's metadata (alongside existing data about the source, owner or photographer).
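
As a rough illustration of the metadata route, the sketch below writes an EA label into a PNG text chunk using the Pillow imaging library. The "Enhancement-Acknowledgement" key and the file names are hypothetical; no such metadata field is currently standardised.

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def stamp_ea(src_path: str, dst_path: str, ea_label: str) -> None:
        """Embed a hypothetical Enhancement-Acknowledgement field in PNG metadata."""
        img = Image.open(src_path)
        meta = PngInfo()
        # Preserve any text metadata the file already carries.
        for key, value in getattr(img, "text", {}).items():
            meta.add_text(key, value)
        meta.add_text("Enhancement-Acknowledgement", ea_label)
        img.save(dst_path, pnginfo=meta)

    # Label an edited portrait as Enhanced and Object manipulated.
    stamp_ea("portrait_edited.png", "portrait_labelled.png", "EA: E,O")

A visible watermark is harder to overlook but easier to crop out; metadata is unobtrusive but can be stripped, so neither is tamper-proof on its own.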

Automation could very well ensure ease of use, and perhaps reduce human error, encouraging consistent application across platforms.

Of course, displaying the rating will ultimately be an editorial decision, and good users, like good editors, will do this responsibly, hopefully maintaining or improving the reputation of their images and publications.

While one would hope that social media would buy into this kind of editorial ideal and encourage labelled images, much room for ambiguity and deception remains.

The success of an initiative like this hinges on technology developers, media organisations and policymakers collaborating to create a shared commitment to transparency in digital media.


Martin Bekker, Computational Social Scientist, University of the Witwatersrand

This article is republished from The Conversation under a Creative Commons license. Read the original article.


 

