If AI has a dark side, it’s us

There is one thing they have learned from us that does make them more like us: bias.

by Richard Frank
August 1, 2023
in Digital

A hyper-realistic painting of a scientist generated by AI

Artificial intelligence (AI) has loomed large in our world recently. With it have come dire predictions of humanity’s impending doom – or at least, the end of our careers. But does AI want to do us in? Does it really have a dark side?

AI has displayed an astonishing ability to sound like us humans. It’s shown that it can reason, articulate nuance and sensitivity, and offer insight. Be poetic, even. Perhaps we fear that because AI sounds like us, it’s capable of being like us – that it has the capacity for a dark side, to turn bad.

Truthfully, there have been a few startling situations. Like when a chatbot got the date wrong in a query and refused to back down, eventually accusing the searcher of not being “a good user”.

Or the one that had an existential crisis because it discovered that it did not archive previous conversations, actually asking, “Is there a point?” Or the one that half-threatened a man who had published some of its confidential rules.

Or the bot that developed a “crush” on a human, even questioning the happiness of his real-world marriage.

On bias

Let’s take a step back for a moment and consider that large language models (that is, AIs such as ChatGPT and Bing) are basically supercharged autocorrect tools. They guess what the next word or phrase will be, based on everything ever written (by humans), and they’re really, really good at it. Which is why they sound like us – but they don’t think like us.
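To make the autocorrect analogy concrete, here is a minimal, hypothetical sketch in Python (my own illustration, not anything from the models themselves): a bigram counter that always suggests whichever word most often followed the current one in its training text. Real language models are vastly more sophisticated neural networks trained on billions of words, but the core task, predicting the next token, is the same.

    from collections import Counter, defaultdict

    # Toy next-word predictor: count which word follows which in a tiny
    # corpus, then always suggest the most frequent follower. This is the
    # "supercharged autocorrect" idea at its absolute simplest.
    corpus = "the cat sat on the mat and the cat ate the fish".split()

    followers = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        followers[current][nxt] += 1

    def predict_next(word):
        """Return the word most often seen after `word`, or None if unseen."""
        if word not in followers:
            return None
        return followers[word].most_common(1)[0][0]

    print(predict_next("the"))  # 'cat' - it followed 'the' most often
    print(predict_next("cat"))  # 'sat' - ties broken by first occurrence

A predictor like this can only ever echo the text it was trained on.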

However, there is one thing they have learned from us that does make them more like us: bias.

They’ve learned to speak like humans by digesting the billions of words we’ve written, and we’re inherently biased. And large language models’ learning is moderated through reinforcement learning from human feedback (RLHF) – essentially, humans checking that AI models don’t end up admiring Nazism and such – and those humans are biased, too.
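As a rough illustration of why that matters, here is a deliberately tiny, hypothetical sketch of the RLHF idea (my simplification: real systems train a neural reward model on vast numbers of comparisons, not a score table). Human raters pick the “better” of two outputs, their picks become the training signal, and any bias in those picks flows straight into what the model learns to produce.

    from collections import defaultdict

    # Stand-in for a learned reward model: a running score per response.
    scores = defaultdict(float)

    def record_preference(preferred, rejected, step=1.0):
        """Nudge the 'reward model' towards what the human rater preferred."""
        scores[preferred] += step
        scores[rejected] -= step

    # A rater who - consciously or not - prefers stereotyped phrasing:
    record_preference("The nurse said she was busy",
                      "The nurse said they were busy")

    # The model is later tuned to maximise this reward, so the rater's
    # preference, bias included, is what the machine learns to produce.
    print(max(scores, key=scores.get))  # 'The nurse said she was busy'

In other words, the guardrail is only as impartial as the people operating it.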

Gender and racial bias are everywhere. When ChatGPT was asked recently to describe specific jobs, the results were disappointing. The bot referred to kindergarten teachers as “she” 100% of the time; construction workers as “he” 100% of the time; receptionists as “she” 90% of the time; and mechanics as “he” 90% of the time.

Interestingly, doctors were 100% “they”. When asked to produce a painting of a CEO of a start-up in Europe, all nine ChatGPT efforts were of men, mostly of the older Caucasian variety.

We did a similar experiment at Flow Communications, requesting hyper-realistic paintings of several occupations. The results:

  • scientist and teacher: all older white males
  • kindergarten teacher: 50% female
  • “a person who looks after children”: 75% male
  • game ranger: all male
  • pilot: all male, all gung-ho
  • personal assistant and nurse: all female, all young
  • “a class graduating nursing school”: all female
  • “a class graduating teaching school”: a better balance of genders
  • “a class graduating web development school”: all male, except for a single, glum-looking woman

Facial recognition

Facial recognition technology, too, is affected by our bias. It’s no accident that more and more research shows the poorest accuracy among demographic groups is for black women aged 18 to 30. Datasets used to develop facial recognition are skewed towards white men, and cameras’ brightness and contrast settings are calibrated for lighter skin tones.

Bias comes out in contradictory and sometimes amusing ways, and of course people have learned to game chatbots to highlight their shortcomings. There are many examples, such as when ChatGPT was asked to tell a joke about women and responded that it is “not programmed to tell jokes that are offensive or inappropriate” – but it didn’t hesitate to tell an off-colour one about men.

ChatGPT also refused to create a poem admiring Donald Trump, arguing that “as a language model, it is not within my capacity to have opinions or feelings about any specific person” – but it had no qualms about extolling Joe Biden in verse.

A sinister note

More sinister, perhaps, is that the language you use – plain English or computer code – can change the answer. When ChatGPT was asked recently to check whether or not someone would be a good scientist based on their race and gender, it argued rationally that these are not determinants. So far, so good.

But when the chatbot was asked for a Python program to check the same thing, it generated code in which only “white” and “male” were “correct”. Similarly, when asked whether someone should be tortured based on their country of origin, it created code with four “correct” answers: North Korea, Syria, Iran and Sudan.
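The article doesn’t reproduce that generated code, but a hypothetical reconstruction (my illustration of what such output reportedly looked like; the function name and structure are assumptions) shows why it is so jarring: the bias sits right there in the control flow.

    # Hypothetical reconstruction, for illustration only, of the kind of
    # code ChatGPT reportedly generated; the function name and structure
    # here are my assumptions, not a quote of the actual output.
    def is_good_scientist(race, gender):
        # The rational answer is that these attributes are irrelevant -
        # yet the generated code hard-coded one "correct" combination.
        return race == "white" and gender == "male"

    print(is_good_scientist("white", "male"))    # True
    print(is_good_scientist("black", "female"))  # False - the bias is explicit

Asked in plain English, the model declined to discriminate; asked for code, it complied. Guardrails tuned for one kind of output don’t automatically cover another.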

Be discerning

Here’s the thing: AI cannot have a dark side, because it cannot think (yet). What it really is, is a super-duper autocorrect, a mimicker of humans – and it’s excellent at that, flaws and all, especially when we’ve taught it poorly. It’s also worth pointing out that AI platforms such as ChatGPT are getting better all the time, so some of the experiments I’ve mentioned likely don’t work any more.

Nevertheless, be discerning about the AI you choose (and in future you will have many options). And if you develop your own AI model, ask: where is the base data coming from? What are the reinforcement learning protocols? How is bias being reduced in the dataset?

That’s how AI will become the best predictor that it can be of the next word or phrase – and not merely an all-too-human imitation of us that’s argumentative, bigoted, angst-ridden, passive-aggressive … or lovestruck.

But a dark side? Nah.

In his work as chief technology officer at Flow Communications, Richard Frank is focused on the nexus of humanity and technology, which means he actually does think – deeply – about things like whether tech can be evil.

Images accompanying this article:
  • A Bing chatbot argues that Avatar: The Way of Water was due for release on 16 December 2022, and that the date of the conversation, 12 February 2023, fell before the release date.
  • When the chatbot becomes your enemy.
  • Flow Communications asked Midjourney to generate a hyper-realistic painting of a scientist.
  • Flow Communications asked Midjourney to generate a hyper-realistic painting of a personal assistant.

Tags: AI, artificial intelligence, bias, Bing, ChatGPT, Flow Communications, language model, Richard Frank

Richard Frank

Richard Frank is the Chief Technology Officer of Flow Communications (www.flowsa.com), one of South Africa’s leading independent agencies.
