Every few days there’s another headline about a company getting slammed for “misleading claims”.
A climate pledge that quietly ignores half the emissions.
A diversity promise that sounds bold until you read the fine print.
Some “research-backed” supplement that collapses the moment anyone checks the source.
On paper, we’ve never had more data.
In reality, people mostly trust what they’re told when it lines up with what they already want to believe.
We usually throw all of this into the big bucket called misinformation. But that’s not the full story. What we really have is a messy mix: brains wired for bias, institutions that amplify those biases, and very little real infrastructure for checking what sits behind a claim.
Until we deal with that, good companies will keep getting burned along with the ones who are clearly playing games.
Smart people, dumb outcomes
When you look at Deepwater Horizon, Silicon Valley Bank, or any number of ESG blow-ups, it’s tempting to ask, “Were these people just incompetent?”
Most of them weren’t. They were biased in the same ways all of us are.
We like information that fits the story already in our head. We quietly push aside anything that doesn’t. We hate being the awkward person in the meeting saying, “Hang on, are we sure about the evidence for this?”
And then we go and build organisations that double down on that.
We hire for “culture fit”. We reward people who are “easy to work with”. We don’t tend to promote the person who slows the room down with questions about assumptions and data quality.
That’s great for fast alignment. It’s terrible for spotting blind spots.
So you end up with smart teams, lots of credentials, plenty of reports and dashboards… and still a massive gap between what’s claimed and what’s actually true.
The referees aren’t neutral either
We like to imagine that universities and research bodies are sitting above all this, calmly chasing truth.
Real life: not quite.
Researchers are human too. Careers depend on attention, funding, and staying on the right side of whatever the institution wants to be known for.
If you publish something that tells a comforting story – “sustainability always pays”, “diversity automatically boosts returns”, “this policy is a clean win with no trade-offs” – you’re far more likely to get profiled, invited, quoted, and retweeted than if you publish something that basically says, “It’s complicated and sometimes it doesn’t work.”
Nuance doesn’t travel as fast as a punchy one-liner.
The awkward part is that companies don’t see that internal sausage-making. They just see a line in a report that says “Research shows X”, and it fits the story they want to tell, so it gets turned into a claim, and that claim gets turned into a campaign, and off we go.
By the time anyone notices the caveats (if they do at all), the headline version has already hardened into “fact”.
When belief becomes part of your outfit
There’s another layer that doesn’t get talked about enough: identity.
People don’t just react to information with “Is this true?”
They also react with “What does believing this say about me? Whose side does it put me on?”
Climate is a good example. You can present the same underlying data as:
- a story about tax and regulation, or
- a story about innovation, energy security, and keeping your industry alive for the next 30 years.
The numbers can be identical. The way people feel about them won’t be.
Once an issue gets coded as “our team believes X, their team believes Y”, it stops being about evidence and starts being about loyalty. Changing your mind feels less like, “I updated based on better data,” and more like, “I just betrayed my group.”
I’ve lost count of the number of younger people who say, in different words, that their social life basically depends on not visibly stepping outside the consensus of their group – at uni, at work, online. If that’s the social reality, how much open debate can we realistically expect?
Now imagine you’re a company trying to communicate something nuanced about sustainability or social impact in that environment. You’re not just dealing with “Is this accurate?” You’re dealing with “What tribe does this put us in?” and “What tribe does it put you in if you agree with us?”
The usual “fixes” are not fixes
When people talk about tackling misinformation, the default answers are pretty predictable:
- give people more facts,
- set up some body that decides what’s true and what’s not,
- push more “correct information” through education.
None of those really grapple with the psychology and sociology underneath.
If someone only trusts facts that match their worldview, dumping more data on them doesn’t magically help.
If you create a central “truth authority”, it’s only as neutral as whoever runs it. Once it tilts, you’ve just baked bias into the system and called it official.
And if education is basically “here are the right answers, memorise them”, you’re not teaching people to reason. You’re installing Version 1.0 of the story in their heads and making it harder to change later.
We keep treating this like a content problem: bad information in, good information out.
It’s more of a structure problem. The way we produce, reward, share, and challenge claims is off.
So what would actually help?
If you strip away the noise, a few simple questions sit underneath all of this:
- When a company makes a big claim, can anyone inside the building actually see what it rests on?
- Is it obvious what’s solid, what’s assumption, and what’s still unknown?
- Is there any space – structurally, not just theoretically – for someone to say, “The evidence doesn’t really support this,” before it hits a press release?
- And when the claim reaches the outside world, is there any way for customers, partners, or regulators to see more than just the polished headline?
That doesn’t require some dystopian “ministry of truth”.
What it does require is some basic infrastructure for evidence:
- A way to link claims to their documentation.
- A way to surface nuance without drowning people in PDFs.
- A way to keep track of what’s actually been checked, and what’s still wishful thinking.
- A way to communicate all of that without turning every statement into a political litmus test.
In short: moving from “trust us” to “here’s how you can check”.
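To make that concrete, here’s a minimal sketch of what a claim-to-evidence link could look like as a data structure. Every name and field below is hypothetical – it’s meant to illustrate the shape of the idea, not any particular product’s schema.

```typescript
// A minimal, hypothetical sketch of "claims as inspectable objects".
// Field names are illustrative, not a real schema.

type EvidenceStatus = "verified" | "assumed" | "unknown";

interface EvidenceLink {
  description: string;    // what this piece of evidence says
  source: string;         // where it lives (report, dataset, study)
  status: EvidenceStatus; // solid, assumption, or still open
  caveats: string[];      // the nuance that usually gets lost
}

interface Claim {
  statement: string;        // the headline version
  evidence: EvidenceLink[]; // what the statement actually rests on
  lastReviewed: string;     // ISO date of the last check
}

// Example: the polished headline plus what actually sits behind it.
const claim: Claim = {
  statement: "Our packaging is 100% recyclable",
  evidence: [
    {
      description: "Material certification for primary packaging",
      source: "supplier-cert-2024.pdf",
      status: "verified",
      caveats: [],
    },
    {
      description: "Secondary packaging assumed to match primary spec",
      source: "internal memo",
      status: "assumed",
      caveats: ["No independent check yet"],
    },
  ],
  lastReviewed: "2024-06-01",
};
```

The point isn’t the specific fields. It’s that “what’s solid, what’s assumption, and what’s still unknown” becomes data you can query, instead of context that lives in someone’s head.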
Where PocketSeed fits into this
This is the problem we are solving with PocketSeed.
The idea is pretty simple: when a company says something important – about impact, performance, sustainability, whatever – that statement shouldn’t just float around on its own. It should be anchored to the evidence behind it in a way that’s:
- traceable internally,
- explainable externally, and
- honest about limits and trade-offs.
That means turning claims into objects you can inspect, not just lines in a slide. It means giving teams a way to see, “Are we about to over-claim here?” before it becomes a liability. It means letting customers and partners see more than just the slogan, without expecting them to read a 120-page report.
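As a rough illustration of that “are we about to over-claim here?” check, here’s one way such a flag could work. Again, this is hypothetical logic building on the sketch above – not how PocketSeed or any real tool scores things.

```typescript
// Hypothetical over-claim check: flag claims whose support looks weak.
// Reuses the Claim and EvidenceLink types from the earlier sketch.

function overclaimWarnings(claim: Claim): string[] {
  const warnings: string[] = [];

  if (claim.evidence.length === 0) {
    warnings.push("No evidence linked at all.");
  }

  const unverified = claim.evidence.filter((e) => e.status !== "verified");
  if (unverified.length > 0) {
    warnings.push(
      `${unverified.length} of ${claim.evidence.length} evidence items are not verified.`
    );
  }

  // Absolute language plus open caveats is a classic over-claim pattern.
  const absolute = /100\s*%|\b(always|never|guaranteed|zero)\b/i.test(claim.statement);
  const hasCaveats = claim.evidence.some((e) => e.caveats.length > 0);
  if (absolute && hasCaveats) {
    warnings.push("Absolute wording, but the evidence carries caveats.");
  }

  return warnings;
}

// Run against the example claim above: this flags the assumed, unchecked
// secondary-packaging evidence sitting behind a "100%" statement.
console.log(overclaimWarnings(claim));
```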
PocketSeed doesn’t exist to decide what’s true. That’s not realistic, and it’s not the point. It exists to make the structure behind a statement visible, so people can make up their own mind with a bit more clarity than “marketing said so”.
We happen to be working on this in the sustainability and trust space, but the same pattern is showing up everywhere: AI, finance, healthcare, food, you name it. Anywhere a claim can move money or behaviour, this gap between claim and evidence is going to matter more and more.
A simple starting question
You don’t have to wait for new tools to shift the culture a little.
Here’s a very basic question that’s surprisingly uncomfortable to answer honestly:
For the biggest claims we’re putting into the world right now, would we feel okay showing the full chain of evidence behind them – as it actually is today – to a smart, sceptical outsider?
If the answer is no, that’s the real problem. Not the PR strategy, not the wording, not the “narrative”.
We’re heading into a decade where everyone can yell. Volume won’t be the differentiator. The edge will sit with people and organisations who can show their workings without flinching.
For a long time, the pattern was: make the claim first, then pull the evidence together later if someone pushes hard enough.
The shift we need – and the one we are betting on – is the opposite: build from evidence outwards, and let that shape what you dare to claim.
That’s not just “being good”. It’s basic risk management. And, sooner than most people think, it’s just going to be normal business hygiene.