The automated factchecking project is split into two parts: scanning text for claims, and checking their validity. When I started it a few weeks ago, I intended to spend equal amounts of time on each. However, having started the first week with some very rudimentary programming tools, it became clear that it would be much more worthwhile to explore new avenues and come up with new ideas, in order to produce work that will be useful in the long term. And the long term is important, because I am kicking off something that will be built on in the coming years.
So I got stuck into the first part, learning about natural language processing and understanding how to tease out the important information from sentences. It’s interesting to spend your time thinking about how language is formulated, and when you see how complicated it is, it makes you wonder how the tech giants have built intelligent personal assistants like Siri and Cortana.
Having found some much more useful analytical tools, I came up with ways to decide what data is required to check the claim that is fed in. There are some immense difficulties and limitations. For instance, how can software tell that “this government has reduced spending on new housing” is a factual claim, but “this government might well reduce its infrastructure investment” is just speculation, and that only the first should be factchecked?
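To make the distinction concrete, here is a minimal sketch of one naive approach: treating sentences that contain modal verbs as speculation rather than checkable claims. The word list and the function are illustrative assumptions, not the project's actual method, and a real system would need far more linguistic analysis than this.

```python
import re

# Modal verbs that often mark speculation rather than a checkable claim.
# This word list is an illustrative assumption, not the project's real approach.
SPECULATION_MARKERS = re.compile(
    r"\b(might|may|could|would|should)\b", re.IGNORECASE
)

def looks_checkable(sentence: str) -> bool:
    """Return True when a sentence contains no obvious speculation marker."""
    return SPECULATION_MARKERS.search(sentence) is None

print(looks_checkable("this government has reduced spending on new housing"))
# True: no speculation marker, so it is a candidate for factchecking
print(looks_checkable("this government might well reduce its infrastructure investment"))
# False: "might" flags it as speculation
```

Of course, modal verbs are only one signal; tense, hedging phrases, and reported speech all complicate the picture, which is part of why the problem is so difficult.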
I saw an opportunity to take a diversion from these thoughts last week, when Full Fact's Director, Will Moy, appeared on BBC Radio 4's Moral Maze. Giles Fraser, one of the panellists, explored the conflict between technology and humanity in the context of automated factchecking, saying that once a computer algorithm decides what is right and wrong, “the truth” has been dehumanised.
This gives me an opportunity to get further into what automated factchecking really aims to do. It does not ask computer software to make a moral judgement like a human can. Full Fact provides people with the tools they need to check things for themselves and come to an informed decision. The factchecks don’t just give a yes or no answer; they also point out the shades of grey. In the same way, my code will not tell you what to believe or what is right, but will allow people to decide confidently for themselves.
Importantly, rather than replacing the factchecker, the software’s role is to make their work easier. Every time a simple phrase like “unemployment stands at 5%” appears, a person should not have to take up their time retrieving statistics from the ONS website, when a computer could do that instead.