International Human Rights Day 2025
Published on December 10, 2025
On International Human Rights Day we’re reminded that human rights are not abstract ideals. They’re everyday essentials:
safe work, fair pay, clean air and water, a liveable climate, and the ability to speak up without fear.
Those essentials are under the greatest pressure in global supply chains. That’s where human rights abuses and environmental harm most often collide, and it’s exactly where TISCreport focuses its work: using data and transparency to make corporate behaviour visible.
This year, the conversation has shifted again. AI has arrived in our boardrooms, our communications work and our supply chains. The question is no longer “Will we use AI?” but “How do we use it without harming people or the planet – and ideally, to protect both?”
Recently I chaired a session on AI, the environment and ethics at Communicate, the UK's conference for environmental communicators, produced by The Natural History Consortium – a charitable collaboration of major environmental organisations led by the awesome Chief Executive Savita Willmott.
I was joined by:
You can watch the full discussion here:
AI, Environment and Ethics – Natural History Consortium session
We looked at what AI really costs – not just in energy, but in human impact – and what questions we should be asking before we hit “deploy”. Those conversations sit right at the heart of TISCreport’s social mission: using transparency and open data to support human rights and environmental justice.
I chaired the session wearing my digital human rights and environmental justice hat, as a geeky user of AI and open data. For the last 11 years I've watched companies react (and sometimes overreact) to emerging human rights and environmental due diligence regulations, and I've used open data to try to influence their behaviour and push back against greenwash.
I’m a big believer that the Hippocratic oath applies to technologists and digital leaders just as much as it does to doctors:
First, do no harm.
That sounds simple. But inside complex digital supply chains, it isn’t.
TISCreport grew out of that tension. We are, first and foremost, a social mission project: a civic tech platform that uses open data, shared data and public accountability to help expose where corporate behaviour supports – or undermines – human rights and environmental commitments.
We are not trying to get organisations to buy more technology. We’re trying to ensure that the technology they already use is held to account.
Our first speaker, Professor Chris Preist, grounded the discussion in a very everyday question:
If a data centre uses energy to serve a typical AI prompt, how many cups of tea could we boil with that energy instead?
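As a rough back-of-envelope (illustrative figures only – these are my assumed numbers, not the session's, and real per-prompt energy varies widely by model, hardware and data centre), the comparison can be sketched like this:

```python
# Back-of-envelope: how many AI prompts equal one cup of tea?
# Assumptions (not the session's exact figures):
#   - one 250 ml mug heated from 20°C to 100°C
#   - a single AI text prompt uses roughly 0.3 Wh, a commonly cited
#     ballpark estimate that varies widely in practice

SPECIFIC_HEAT_WATER = 4186   # J per kg per °C
CUP_MASS_KG = 0.25           # 250 ml of water
DELTA_T = 80                 # 20°C -> 100°C

# energy = mass * specific heat * temperature rise
joules_per_cup = CUP_MASS_KG * SPECIFIC_HEAT_WATER * DELTA_T
wh_per_cup = joules_per_cup / 3600   # joules -> watt-hours, roughly 23 Wh

WH_PER_PROMPT = 0.3                  # assumed per-prompt energy

prompts_per_cup = wh_per_cup / WH_PER_PROMPT
print(f"One cup of tea is roughly {prompts_per_cup:.0f} prompts' worth of energy")
```

On those assumptions, boiling a single mug costs on the order of dozens of prompts – which is exactly why an individual query looks tiny, and why the scale argument that follows matters more than any one number.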
The answer surprised most people in the room:
So an individual AI query is tiny. The problem is scale and speed:
And then the really uncomfortable truth: **we often stare at the shiny data centres and ignore the supply chains that make them possible** (thank you Bruce Lee, with your finger and the heavenly glory!).
The biggest environmental and human harms linked to AI are still in the familiar places:
In other words, the AI boom sits on top of the same opaque global supply chains we already know are risky.
TISCreport’s role here is not to sell AI or condemn it outright. Our role is to track and link corporate data so that these connections can’t be conveniently forgotten.
From a transparency point of view, the AI conversation is not separate from modern slavery, living wage, gender equality or climate commitments. It is part of the same system of decisions.
If we are not careful, we risk using AI to automate greenwash and rights-wash – faster, cheaper and at larger scale.
This is why, inside TISCreport, we treat AI as a tool in service of transparency, not as a replacement for human judgement.
Examples of how we use AI for public good rather than profit include:
The key principle is simple: AI can help us see, but humans must still decide what is acceptable and what must change.
Our second speaker, Martin O’Leary, described AI as a “charismatic technology” – something that makes people project their hopes and fears onto it, well beyond what the tools actually do today.
We’re already seeing several overlapping waves:
There is also a psychological pull: the temptation to hit “generate” just one more time, especially in energy-intensive areas like video, where each new attempt has a real, if invisible, footprint.
And then there’s the question of truth. Video, in particular, has long carried an aura of authenticity – “I saw it, so it must be real.” Generative video and synthetic voices make that assumption deeply unsafe.
In that landscape, the only honest response is transparency:
For TISCreport, that mirrors our wider mission: if you want ethical use of technology, you start by telling the truth about how it’s used and who is affected.
We didn’t have time in the session to answer every question about AI policy, but a few principles emerged from our panel that align closely with our social mission at TISCreport.
AI is not a mystical force. It’s software and hardware, run by companies, embedded in supply chains.
Most organisations already have:
Those should apply to AI too.
Questions to ask of any AI tool you adopt:
TISCreport’s contribution here is to link public information about companies – from modern slavery statements to payment practices and climate pledges – so that these questions can be asked using evidence, not just marketing claims.
As Chris put it in the session, the key question is often not “how many prompts?” but “which companies are we choosing to support?”
That means:
The aim is not perfection. It is to shift demand gradually but deliberately towards companies that take human rights and environmental responsibilities seriously.
AI should not drift in quietly through side doors.
At a minimum:
External transparency platforms like TISCreport can provide a mirror from the outside, showing how your public disclosures and supply-chain footprint sit alongside your growing use of AI.
Some decisions should not be fully automated, however impressive the tools become:
AI can help to surface patterns and documents, but people must remain accountable for the outcomes.
If we want AI to support human rights and environmental justice rather than erode them, we have to anchor it in transparency, accountability and an honest look at power.
On this Human Rights Day, some practical steps you can take:
So on International Human Rights Day, I call on corporate activists to track the companies that put AI into their supply chAIns. And to get you started, here is a dashboard tracking the current big players. Just as with the ethical prawns dilemma (eat them, but hold suppliers to account), the solution is not to stop using AI, but to feed the companies that are doing the right thing by people and planet. Vote with your money.
If you think your company or supplier should be on this list, PLEASE get in touch – it's the starter, not the mAIn course. We'll be adding more metrics as they emerge, to help keep the sector and its market honest.
Human Rights Day lasts 24 hours.
Human rights and climate risk are with us every day.
AI will not change that. What it can change is how quickly we spot harms, how honestly we report them, and whether we choose to reward those who take responsibility.
At TISCreport, our role is simple: use technology and data for good, by making corporate behaviour visible, so that people, planet and transparency win out over hype.
P.S. Two teaspoons of tea were used in the structuring and proofing of this article. The insights were from my learned panel members; the commentary, cheesy jokes and cultural references are all me.