A combined group photo of the Leave No One Behind sounding board and the UN Resident Coordinator's Office on the final day of a retreat at Maanzoni Lodge, Machakos County on 17 September 2025. Photo Credit: Abel Ludeki | UN Kenya
As the befuddling September afternoon heat wafted through a conference room packed with eager young activists and perpetually overdressed UN staff, a vague throwaway line caught my attention.
Contained in the pages of the tantalisingly named UN Kenya Mid-Year Results Dashboard 2025, amidst slides showing a billion dollar shortfall and a woeful disability scorecard, was this:
“National guidelines on harmful content, disinformation and hate speech are being developed.”
As any creature of the internet would, I asked the most obvious question imaginable: what plans are there to identify and counter synthetic disinformation?
None. I was told there are none.
Dismayed, I spent that evening emailing our hosts at the UN Resident Coordinator’s Office to offer any help I could in my capacity as an advisor.
Being unprepared for synthetic content with an election around the corner is not just courting disaster; we are actively stalking catastrophe one rabidly thirsty Like at a time.
In 2022, technology researcher Odanga Madung industriously made hay hither and yon exposing Kenya’s thriving cottage industry of bad information, and how platforms remained uninterested in remedying the situation.
In the short time since, corporations have, for profit, released to the masses powerful technology capable of spreading bad information rapidly and at scale.
It doesn’t take a particularly active imagination to see paid hashtag peddlers being completely replaced by vast bot farms spraying slop up and down our timelines in 2027 and beyond.
Government officials and NGOs will likely make reassuring noises pointing to various partnerships with the platforms on which this slop lives, but this would be nonsense. These platforms are enthusiastically dismantling the teams and tools tracking bad information. And NGOs carrying out fact-checking have been so comprehensively captured by platform funding that any cuts threaten to render them utterly toothless.
To summarise:
New technology can help disinformation spread faster.
Platforms are not actively stopping disinformation.
Fact-checking NGOs have been neutered.
There is no plan for synthetic disinformation.
What are we going to do?
I spent a few months in 2022 idly exploring how young Kenyans consumed news on TikTok. This project, conducted in meandering fashion during my free time, helped me understand that there is a real hunger for accurate, insightful information tailored to the spaces young people occupy.
Accessible information in formats that work for these spaces, coupled with an aggressive grassroots digital literacy campaign, is a start. It certainly beats doing nothing, or going cap-in-hand to platforms that remain disincentivised to collaborate.
Instead of hosting endless workshops for uninterested corporate media, old-school government dinosaurs, and participation certificate aficionados, institutional partners like the UN – with its dwindling budget in tow – should also assiduously support independent media and grassroots organisations that meet people where they are: community stations, indigenous writers, young creators, local broadcasters.
I’ve seen considerable UN resources poured into well-intentioned but dead-end programmes run by the usual revolving door of entrenched INGOs which inevitably engineer dubious impact reports to explain away insipid results before dutifully queueing at the trough during the next funding cycle, fanciful proposals at the ready.
This has to change.
One can only hope that less financial flexibility brings with it a razor-sharp focus, out-of-the-box thinking, and drastically new priorities.
Mark Renja is a facilitator for Able Union and has spent 14 years working in digital media.
These views do not necessarily reflect the views of Able Union’s partners and supporters.
Contact: markrenja@proton.me