International Human Rights Day 2025

International Human Rights Day: AI, Environment and Human Rights All Come Back to Transparency in Supply ChAIns

On International Human Rights Day we’re reminded that human rights are not abstract ideals. They’re everyday essentials:
safe work, fair pay, clean air and water, a liveable climate, and the ability to speak up without fear.

Those essentials are under the greatest pressure in global supply chains. That’s where human rights abuses and environmental harm most often collide, and it’s exactly where TISCreport focuses its work: using data and transparency to make corporate behaviour visible.

This year, the conversation has shifted again. AI has arrived in our boardrooms, our communications work and our supply chains. The question is no longer “Will we use AI?” but “How do we use it without harming people or the planet – and ideally, to protect both?”

Recently I chaired a session on AI, the environment and ethics at Communicate, the UK’s conference for environmental communicators, produced by The Natural History Consortium – a charitable collaboration of major environmental organisations led by the awesome Chief Executive Savita Willmott.

[Image: “For AI technologists, the Hippocratic oath still stands. First, do no harm.” Quote on AI ethics over a river with data streams, with the TISCreport logo.]

I was joined by:

  • Professor Chris Preist, Professor of Sustainability and Computer Systems & Academic Director of Sustainability, University of Bristol (LinkedIn)
  • Martin O’Leary, Head of Studio at Pervasive Media Studio, Watershed, and artist/creative technologist (LinkedIn)

You can watch the full discussion here:
AI, Environment and Ethics – Natural History Consortium session

We looked at what AI really costs – not just in energy, but in human impact – and what questions we should be asking before we hit “deploy”. Those conversations sit right at the heart of TISCreport’s social mission: using transparency and open data to support human rights and environmental justice.

“First, do no harm”: a Hippocratic oath for technologists

I chaired the session as a geeky user of AI and open data, working in digital human rights and environmental justice. For the last 11 years I’ve watched companies react (and sometimes overreact) to emerging human rights and environmental due diligence regulations, and I’ve used open data to try to influence their behaviour and push back against greenwash.

I’m a big believer that the Hippocratic oath applies to technologists and digital leaders just as much as it does to doctors:

First, do no harm.

That sounds simple. But inside complex digital supply chains, it isn’t.

TISCreport grew out of that tension. We are, first and foremost, a social mission project: a civic tech platform that uses open data, shared data and public accountability to help expose where corporate behaviour supports – or undermines – human rights and environmental commitments.

We are not trying to get organisations to buy more technology. We’re trying to ensure that the technology they already use is held to account.

The “cup of tea” question – and what it hides

Our first speaker, Professor Chris Preist, grounded the discussion in a very everyday question:

If a data centre uses energy to serve a typical AI prompt, how many cups of tea could we boil with that energy instead?

The answer surprised most people in the room (a rough back-of-envelope check follows the list below):

  • For most prompts, it’s around one hundredth of a cup of tea.
  • For more complex prompts, it might reach one tenth of a cup.
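Here’s that back-of-envelope check in Python. The per-prompt energy figures are my own illustrative assumptions (roughly 0.3 Wh for a typical prompt, 2.5 Wh for a complex one), not numbers from the session; the tea side is just the physics of heating water.

```python
# Back-of-envelope: how much tea could one AI prompt boil?
# ASSUMED figures, for illustration only: ~0.3 Wh for a typical prompt,
# ~2.5 Wh for a complex one. Not measurements from the session.

CUP_ML = 250           # a typical cup of tea
SPECIFIC_HEAT = 4186   # joules per kg per kelvin, for water
DELTA_T = 80           # heating from ~20C to ~100C

joules_per_cup = (CUP_ML / 1000) * SPECIFIC_HEAT * DELTA_T
wh_per_cup = joules_per_cup / 3600   # ~23 Wh to boil one cup

for label, prompt_wh in [("typical prompt", 0.3), ("complex prompt", 2.5)]:
    cups = prompt_wh / wh_per_cup
    print(f"{label}: ~{cups:.3f} of a cup "
          f"({prompt_wh} Wh vs {wh_per_cup:.1f} Wh per cup)")

# typical prompt: ~0.013 of a cup -> roughly one hundredth
# complex prompt: ~0.107 of a cup -> roughly one tenth
```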

So an individual AI query is tiny. The problem is scale and speed:

  • Billions of prompts a day.
  • Rapid rollout across the world, including the Global South.
  • Countless parallel experiments as companies and individuals train and fine-tune their own models.

And then the really uncomfortable truth: **we often stare at the shiny data centres and ignore the supply chains that make them possible** (thank you Bruce Lee, with your finger and the heavenly glory!).

The biggest environmental and human harms linked to AI are still in the familiar places:

  • Minerals mined in conflict-affected regions.
  • Hardware manufactured in factories with poor labour standards.
  • Infrastructure built and powered in ways that displace people and damage ecosystems.

In other words, the AI boom sits on top of the same opaque global supply chains we already know are risky.

TISCreport’s role here is not to sell AI or condemn it outright. Our role is to track and link corporate data so that these connections can’t be conveniently forgotten.

Where AI meets human rights and environmental justice

From a transparency point of view, the AI conversation is not separate from modern slavery, living wage, gender equality or climate commitments. It is part of the same system of decisions.

  • AI models run on devices and servers made by workers in real factories, in real communities.
  • Data is processed in centres that change local energy, water and land use.
  • AI systems are already being used to write HR policies, draft supply-chain reports, summarise “due diligence”, and generate communications content at scale.

If we are not careful, we risk using AI to automate greenwash and rights-wash – faster, cheaper and at larger scale.

This is why, inside TISCreport, we treat AI as a tool in service of transparency, not as a replacement for human judgement.

Examples of how we use AI for public good rather than profit include:

  • Using AI to review modern slavery statements against public guidance, highlighting where information is missing or inaccessible.
  • Supporting checks on whether key disclosures are actually reachable and machine-readable, rather than hidden in broken links or images (see the sketch after this list).
  • Connecting public data on labour, equality and climate with corporate identifiers so that patterns of behaviour can be seen by regulators, researchers, activists and the public.
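As a flavour of what such a reachability check involves, here is a minimal sketch, assuming a plain HTTP fetch and a content-type test. It’s a hypothetical illustration, not TISCreport’s actual pipeline:

```python
# Minimal sketch of a disclosure reachability check.
# Hypothetical code, not TISCreport's production pipeline.
import requests

MACHINE_READABLE = ("text/html", "application/pdf", "application/json")

def check_disclosure(url: str) -> dict:
    """Is the statement reachable, and is it in a readable format?"""
    try:
        resp = requests.get(url, timeout=10, allow_redirects=True)
    except requests.RequestException as exc:
        return {"url": url, "reachable": False, "issue": str(exc)}
    content_type = resp.headers.get("Content-Type", "").split(";")[0].strip()
    return {
        "url": url,
        "reachable": resp.status_code == 200,
        "status": resp.status_code,
        "content_type": content_type,
        # Statements served as scanned images defeat machine reading
        "machine_readable": content_type in MACHINE_READABLE,
    }

# Example (hypothetical URL):
# print(check_disclosure("https://example.com/modern-slavery-statement"))
```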

The key principle is simple: AI can help us see, but humans must still decide what is acceptable and what must change.

The charismatic lure of AI – and why honesty matters

Our second speaker, Martin O’Leary, described AI as a “charismatic technology” – something that makes people project their hopes and fears onto it, well beyond what the tools actually do today.

We’re already seeing several overlapping waves:

  1. Excitement and experimentation
    Artists, technologists and activists using AI to prototype, play and explore new ideas about intelligence and creativity.
  2. Anxiety and loss in the creative professions
    Freelancers – illustrators, voiceover artists, composers – discovering that many of the small, everyday jobs that used to pay the bills are quietly shifting to AI-generated content.
  3. Creep into everyday business practices
    AI note-takers showing up unannounced in meetings. New AI buttons sneaking into familiar software. “We’ll just use AI for a draft” quietly becoming “we’ll just use AI”.

There is also a psychological pull: the temptation to hit “generate” just one more time, especially in energy-intensive areas like video, where each new attempt has a real, if invisible, footprint.

And then there’s the question of truth. Video, in particular, has long carried an aura of authenticity – “I saw it, so it must be real.” Generative video and synthetic voices make that assumption deeply unsafe.

In that landscape, the only honest response is transparency:

  • Say clearly where AI has been used in your communications.
  • Do not present machine-generated content as lived experience.
  • Make it explicit when people are being recorded, transcribed or analysed by AI.

For TISCreport, that mirrors our wider mission: if you want ethical use of technology, you start by telling the truth about how it’s used and who is affected.

What should organisations actually do?

We didn’t have time in the session to answer every question about AI policy, but a few principles emerged from our panel that align closely with our social mission at TISCreport.

1. Treat AI as infrastructure – and apply the rules you already have

AI is not a mystical force. It’s software and hardware, run by companies, embedded in supply chains.

Most organisations already have:

  • data protection and retention policies
  • supplier due diligence processes
  • environmental and human rights commitments
  • rules about recording and monitoring people

Those should apply to AI too.

Questions to ask of any AI tool you adopt:

  • Who builds and runs this? What is their track record on labour rights, human rights and environment?
  • Where does our data go? Who can access it, and for how long?
  • Whose jobs and livelihoods are affected? Are we displacing decent work or creating better work?
  • Does this match what we say publicly about climate, equality, modern slavery and human rights?

TISCreport’s contribution here is to link public information about companies – from modern slavery statements to payment practices and climate pledges – so that these questions can be asked using evidence, not just marketing claims.
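As a toy illustration of that linking step (the column names and data here are hypothetical, and the real datasets are far messier), joining on a registered company number is what lets different public claims be compared side by side:

```python
# Sketch of linking public datasets on a shared corporate identifier.
# Column names and rows are invented for illustration.
import pandas as pd

statements = pd.DataFrame({
    "company_number": ["01234567", "07654321"],
    "msa_statement_found": [True, False],
})
climate = pd.DataFrame({
    "company_number": ["01234567", "07654321"],
    "net_zero_pledge": [True, True],
})

# Join on the registered company number so claims can be compared
linked = statements.merge(climate, on="company_number", how="outer")

# Companies pledging net zero but publishing no modern slavery statement
print(linked[linked["net_zero_pledge"] & ~linked["msa_statement_found"]])
```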

2. Look at who you are funding with your AI spend

As Chris put it in the session, the key question is often not “how many prompts?” but “which companies are we choosing to support?”

That means:

  • Paying attention to the corporate behaviour of AI providers and cloud hosts.
  • Favouring organisations that publish information about their supply chains, labour standards and environmental impacts.
  • Being prepared to walk away from vendors whose business models or governance clearly conflict with your values.

The aim is not perfection. It is to shift demand gradually but deliberately towards companies that take human rights and environmental responsibilities seriously.

3. Make AI visible inside your organisation

AI should not drift in quietly through side doors.

At a minimum:

  • Name it: in meetings, ask “Are there any AIs in the room?” before someone activates an AI note-taker or recorder.
  • Ask for consent: people have a right to know when and how their words and images are being captured and processed.
  • Map your uses of AI across HR, recruitment, procurement, communications and operations, and check them against your existing human rights and sustainability commitments.

External transparency platforms like TISCreport can provide a mirror from the outside, showing how your public disclosures and supply-chain footprint sit alongside your growing use of AI.

4. Keep humans accountable where rights are at stake

Some decisions should not be fully automated, however impressive the tools become:

  • hiring and firing
  • risk assessments on suppliers or communities
  • decisions about remedy, escalation and disengagement
  • situations where people’s safety or basic rights are on the line

AI can help to surface patterns and documents, but people must remain accountable for the outcomes.

What you can do today

If we want AI to support human rights and environmental justice rather than erode them, we have to anchor it in transparency, accountability and an honest look at power.

On this Human Rights Day, some practical steps you can take:

  1. Ask where AI is already present in your supply chains – from cloud providers to HR tools and comms platforms.
  2. Review the public record of the major AI and cloud companies you rely on, using open data on labour, human rights and climate where it is available.
  3. Bring AI into your human rights due diligence, rather than treating it as something separate or “too technical”.
  4. Be open with your audiences and colleagues about when AI has been used, and why.
  5. Support efforts to make AI and digital supply chains more transparent, whether through regulation, procurement standards or civic tech.

In summation

So on International Human Rights Day, I call on corporate activists to track the companies that put AI into their supply chAIns. And to get you started, here is a dashboard tracking the current big players. Just as with the ethical prawns dilemma (eat them, but hold suppliers to account), the solution is not to stop using AI, but to feed the companies that are doing the right thing by people and planet. Vote with your money.

AI ACCOUNTABILITY DASHBOARD

If you think your company or supplier should be on this list, PLEASE get in touch – it's the starter, not the mAIn course. We'll be adding more metrics as they emerge, to help keep the sector and its market honest.

Human Rights Day lasts 24 hours.
Human rights and climate risk are with us every day.

AI will not change that. What it can change is how quickly we spot harms, how honestly we report them, and whether we choose to reward those who take responsibility.

At TISCreport, our role is simple: use technology and data for good, by making corporate behaviour visible, so that people, planet and transparency win out over hype.

P.S. Two teaspoons of tea were used in the structuring and proofing of this article. The insights were from my learned panel members; the commentary, cheesy jokes and cultural references are all me.