Two podcasts where I discuss ethical issues around data, digital and technology with peers in charities and non-profit organisations.
Back in December, I took part in a podcast recording for Charity Digital News. The theme was ethics in charity digital teams.
On the panel with me were Giselle Cory, Executive Director of Datakind UK, and Chris Martin, Chief Executive of The Mix. The podcasts were hosted by Jonathan Chevallier, Chief Executive of Charity Digital.
I’ve included some notes around things we discussed and references to anything I mentioned.
Part 1: Designing digital ethics for your charity

What do we mean by digital ethics?
You could fill the podcast with this question itself (and many ethicists would try) but that would be dull.
Google had the strapline “Don’t be evil” until everyone found out about Project Maven. So, I suggested a definition along the lines of technology that is not exploitative and does no harm.
What’s the issue with technology in charities that bothers you the most?
I’m deeply troubled by the surveillance capitalist business model of Google, Facebook and Amazon (as well as many others) and the ever-growing reliance that all charities have on these companies to reach beneficiaries and the wider public and raise money. Big tech companies place us under constant surveillance and dragnet all the data they can about us without our consent.
The excellent Surveillance Giants report from Amnesty International turns the spotlight of one of the world’s leading human rights organisations onto this problem.
Amnesty International report
“…the surveillance-based business model of Facebook and Google is inherently incompatible with the right to privacy and poses a systemic threat to a range of other rights including freedom of opinion and expression, freedom of thought, and the right to equality and non-discrimination.”
This is a serious problem, not just for charity activity but also for staff and beneficiaries. The way these companies operate is incompatible with most charities’ stated aims and values, but it’s not quite as simple as turning them off, because of the hold their monopolies have over society. Charities are completely dependent on big tech just to function.
This is such a contradiction that it’s even difficult to get any sort of conversation going on this subject with peers in the sector.
Part 2: Ethical data use

Responsible data sharing in charities
There are many ways in which charities could make better use of data.
Almost every charity uses Google Analytics, which collects visitor behaviour information for our websites while also firing off data to Google’s servers, tracking us as we move around the internet. Similarly, we interact with our beneficiaries on platforms like Facebook, which use that data to build profiles about people so they can manipulate their behaviour.
Charities that have built up public trust over decades, and sometimes centuries, gift their brands to platforms which host disinformation and hate speech. Charities put in all the effort to build up their Facebook or Instagram accounts, but the majority of the benefit goes to the platform, not the charity.
Are big tech companies doing enough?
I was asked about the steps that big tech companies are taking to moderate fake news and hate speech. Considerable time has been invested in employing an ever-increasing number of human moderators and in improved algorithms to help find the bad stuff. After all, the likes of Facebook don’t like what bad press does to their reputation, do they?
Whilst it’s true that steps have been taken and money has been spent, it’s nowhere near enough, and these issues have never been taken sufficiently seriously.
We know that the most effective content on social media, the stuff that keeps eyeballs glued to the screen, is often designed to provoke a strong emotional reaction. So extreme content, the sort that appals, angers, terrifies or saddens, works best. Thriller, not vanilla.
Is it such a stretch to think that growth-obsessed surveillance capitalists might be a bit slow or lackadaisical at removing harmful content because it’s making them loads of dosh?
Christchurch Call
After the Christchurch massacre, there was a conference in Paris – the Christchurch Call, which has achieved… er… what did it achieve? Macron and Jacinda Ardern waggled their fingers at the likes of Nick Clegg from Facebook and told them they should do better. They forgot to ask Clegg if ‘Facebook Live’ needs to be live. Does it? Has anyone ever asked this question?
If it’s not possible to stop people like that mass murderer from broadcasting live, then why can’t the operation be scaled back to a level where it can be kept safe? Why is scaling back or stopping never even on the table? Nor is there any honesty that the scale of the problem might be beyond them entirely.
An article by Bloomberg showed that YouTube was prepared to ignore warnings about controversial and potentially dangerous content to meet an internal target of 1 billion viewing hours per day.
Humble correction
YouTube moderators can only suggest content for removal; they cannot remove the content themselves. I wrongly referred to Facebook on this point, and I apologise profusely if anyone thought my slip besmirched their immaculate reputation.