Denmark just had an election.

The day after, I published a tool to read the data.

The correlations were correct.
The code was correct.
The numbers were correct.

The language was not.

I wrote "significantly more."
I wrote "the pattern holds across all 98 municipalities."
I wrote "no meaningful link."

None of those phrases are in the data.
They are interpretations of the data.
Small ones. Easy to miss. The kind that sound like precision but are not.

"Significantly" is not a threshold.
"Holds across all" is not what a Pearson correlation measures.
"Meaningful" is not defined anywhere in the system.

The tool was built to remove that kind of language from journalism.
And then it used that kind of language itself.

What I changed:

The no-pattern threshold moved from r ≥ 0.20 to abs(r) ≥ 0.30.
The strength scale tightened: "Strong" now requires abs(r) ≥ 0.70, not 0.65.
"Significantly more" became "tend to vote more."
"The pattern holds across all municipalities" became "based on data from 98 municipalities."
"No meaningful link" became "no consistent relationship."

The full list is in the changelog on GitHub.

The lesson is not that the first version was broken.
The lesson is that correct data and honest language are two separate things.
You can have one without the other.

Most tools do not notice the difference.
Most journalism does not notice the difference.
That is exactly the problem the tool was built to address.

So it had to address the problem in its own output first.

— Dennis Hedegreen, updated publicly