At one point during the recent UNESCO conference in Pretoria, a regulator leaned into the microphone and admitted something you don’t often hear in policy spaces.
“We are slower than the crisis.”
The room went quiet. Not defensive. Not argumentative. Just honest.
For two days, regulators, academics, civil society, tech companies and policymakers gathered to discuss the regulation of digital platforms.
The conference closed with the adoption of what is being called the Pretoria Declaration, a set of guiding principles meant to help governments navigate misinformation, disinformation, artificial intelligence and platform accountability.
But what struck me most was not the declaration itself. It was the shift in tone.
Argument is over
We are no longer debating whether digital platforms influence democracy. That argument is over. The real question now is whether governments, particularly in the Global South, have the tools, speed and authority to respond to what is happening online.
Again and again, the conversation returned to implementation. Principles are easy to draft. Enforcement is harder. Disinformation spreads in minutes. Regulation can take years.
In South Africa, we know what information disorder looks like in moments of crisis. During the July 2021 unrest, during floods and fires, during Covid, rumours moved faster than official communication. Several speakers described regulators as custodians of information resilience. It is a compelling phrase. It also carries enormous weight.
The digital public square does not respect borders. Yet regulation is still largely national. That mismatch creates vulnerability, especially for countries with limited resources and limited access to data.
Not random noise
One of the most important themes to emerge in Pretoria was that disinformation is not random noise. It is organised. It is strategic. It is often funded. It thrives in information vacuums.
When credible information is slow or absent, manipulation fills the gap. When journalism is weakened, conspiracy theories gain traction. When scientific findings are poorly communicated, confusion becomes fertile ground.
The advertising model was scrutinised as well. If engagement drives revenue, outrage becomes profitable. If virality is rewarded, polarisation is incentivised. More than one speaker referred to the weaponisation of distribution. The architecture of amplification matters as much as the content itself.
That led to one of the most critical discussions for Africa. Access to data.
Platform harms
Without access to platform data, regulators cannot properly assess harm. Researchers cannot study coordinated manipulation. Civil society cannot build evidence. Policymaking becomes reactive instead of informed.
Several delegates pointed out that the overwhelming majority of research into platform harms originates in the Global North. If the data remains concentrated there, then the rules will reflect those realities, not ours.
An African instrument on access to data was discussed as a serious possibility. Not as a slogan, but as a structural necessity. If African regulators and researchers cannot see what is happening on platforms at scale, they are negotiating in the dark.
At the same time, privacy concerns were raised. The balance between transparency and protection is delicate. Access to data must serve the public interest without exposing vulnerable communities. The conversation was nuanced. It was not about state control. It was about regulatory visibility.
False framing of AI
Artificial intelligence ran through almost every panel.
Too often the debate is framed as innovation versus safety. As though protecting citizens somehow undermines technological progress. That framing feels increasingly false.
AI is already used in content moderation. But many of these systems are trained predominantly on English-language datasets. What does that mean for African languages, for cultural nuance, for political speech that does not fit Western categories?
Lack of contextual familiarity can distort moderation decisions in ways that are invisible from Silicon Valley.
At the same time, AI can be deployed to detect coordinated manipulation and prevent harmful content from going viral. The technology cuts both ways. The real question is oversight. Who audits these systems? Who understands how the models are trained? Who ensures they are not reinforcing bias or enabling censorship?
Regulation and repression
Child protection surfaced repeatedly. Not only in terms of age verification, but in relation to harmful content and the long-term psychological effects of algorithmic design. Yet alongside this was a warning. Internet shutdowns, justified in some countries as tools to combat misinformation, carry heavy democratic costs. Dozens of shutdowns were recorded across the continent last year alone.
The line between regulation and repression is thin. South Africa’s own experience during Covid, when certain forms of misinformation were criminalised, was mentioned. The intention may have been to curb harm, but enforcement and proportionality always matter.
Another moment that stayed with me came during a discussion on climate misinformation. A panellist said simply that scientific knowledge is not a debate. The earth is not flat. Some facts are established. Yet the translation of scientific data into media narratives remains weak. That gap is routinely exploited.
If the Global South wants resilience, digital literacy cannot be limited to technical skills. It must include media literacy, scientific literacy and a deeper understanding of how algorithms shape what we see and what we believe.
Public media independence
There was also a quiet acknowledgement that civil society needs funding to build evidence and capacity. That public media independence remains essential. That advertising regulation should be part of the governance conversation. That regulators need interoperable tools and shared response guidelines across borders.
The Pretoria Declaration speaks of cooperation. That word surfaced repeatedly. Cooperation between states. Between regulators and platforms. Between researchers across continents.
But cooperation is not neutral. Power dynamics shape it. The Global South has often been the recipient of digital governance norms designed elsewhere. In Pretoria, there was a clear sense that this cannot continue.
Digital governance is no longer an abstract human rights discussion. It is about power. Who controls distribution. Who accesses data. Who designs the business model. Who sets the guardrails.
The regulator's admission of being slower than the crisis was not a confession of defeat. It was a recognition of urgency.
Next crisis won’t wait for legislation
The next crisis will not wait for legislative timelines. The next wave of AI tools will not pause for regulatory consultations. The next election cycle will not slow down for research access agreements.
Pretoria did not resolve these tensions. But it made something clear. The Global South is no longer content to observe the shaping of digital rules from the sidelines.
The real test will not be the language of the declaration. It will be whether governments, platforms and regulators move quickly enough, and transparently enough, to match the speed of the systems they are trying to govern.
And whether, this time, Africa helps write the rules instead of simply living with them.
Paula Slier is an international journalist and speaker who works on information warfare, disinformation and media literacy. She has reported from conflict zones across the Middle East, Africa and Europe.