I’m sharing a thoughtful piece of writing from Adrian Weckler, Ireland’s top technology journalist. He argues that we need to do something about the fact that more than 20pc of people in Ireland believe conspiracy theories they encounter online.
According to last week’s Red C poll for the Electoral Commission, 22pc of Irish people now believe the Government is replacing white people with “imported” non-white people.
And 21pc believe that “viruses and/or diseases have been deliberately disseminated to infect certain populations”, while 22pc think the Government conducts experiments with drugs and technology on the public “without their knowledge or consent”.
If there ever was a time to snigger at such things being a “tiny” or “irrelevant” cohort, it has long passed by.
Any reckoning with where such beliefs spread probably starts with X.
Elon Musk, owner of the most-used news-related platform, continually pushes the “great replacement” conspiracy theory – the notion that US and European governments are trying to import as many non-white immigrants as they can to replace indigenous populations.
Indeed, one of the overarching themes in the posts to his 190m followers is that white culture is in danger of fading out in western countries.
During the Dublin riots in November, as anti-immigrant sentiment set trams alight, he even said that then Taoiseach Leo Varadkar “hates the Irish people”.
X, then, is a top-down engine for boosting distrust in public authority, to the point of enthusiastically helping to spread conspiracy theories. It barely needs an algorithm to do it.
What about other social media?
YouTube is far, far bigger than X. It doesn’t actively encourage conspiracy content, but its scale means it often fails to curtail it.
Earlier this year, it was called out for approving ads that promoted voter suppression and incited violence around India’s election. And it remains the go-to place for people trying to build audiences around conspiracy content.
In April, a DCU study found that within 23 minutes of setting up a new YouTube account, teenage boys were served up videos of “misogynist” Andrew Tate, who is currently awaiting trial for rape and human trafficking.
The study, led by Debbie Ging, Catherine Baker and Maja Andreasen, found that YouTube’s algorithm simply assumes that all young males will be interested in “manfluencer” content – even containing individuals who have been de-platformed, such as Tate.
Once in the ecosystem of fringe videos, more content can follow.
TikTok was also named in the DCU study, though its recommendation algorithms did not lead to conspiracy or fringe content as much or as quickly as YouTube’s.
Facebook, which has long faced accusations of being gamed for disinformation, has gone furthest in implementing tighter rules and moderation to combat conspiracy content.
On the other hand, its user base is so vast that even with all of its AI technology and thousands of human moderators, it still has a disinformation problem – something the European Commission is currently probing.
Instagram, also owned by Facebook parent Meta, is the least likely of the social media sites to contain conspiracies and disinformation tropes.
So if platforms are a problem, is anyone in charge monitoring it?
Coimisiún na Meán – the Irish media regulator that also has oversight over online video platforms based in Ireland, such as YouTube, TikTok, Facebook and X – has a more hands-off approach than many people realise.
Although its role includes combatting online misinformation and disinformation, and getting the big platforms to enforce their own rules, its engagement is quite limited.
“We do not act as a content moderator and we do not resolve disputes about whether particular items of content are illegal or represent misinformation,” said the body’s chairman, Jeremy Godfrey, in an Oireachtas hearing on disinformation in March.
“We are not a censor. We do not consider complaints about individuals, nor do we take action against them.”
It seems clear, so, that the “great replacement” conspiracy theory isn’t something that our regulator is especially bothered about acting on. It did, however, take proactive action in contacting the social media platforms to guard against disinformation on the night of, and subsequent to, the Dublin riots.
But mainly, the battle against tech platforms who promote conspiracy theories lies with the European Commission.
It is currently taking direct action against X for “deceiving” users by presenting “verified” blue-tick accounts as trustworthy, when in fact “there is evidence of motivated malicious actors” using them to deceive users.
Elon Musk is very likely to get a substantial fine out of this direct regulatory action.
Statistically, at least one fifth of those reading this column will think I’m completely on the wrong track, that I just haven’t yet “opened my eyes” or that I am in “privileged denial”, sitting in a “Dublin media bubble” or some such removed-from-reality condition.
The question now is whether that 20pc could become 30pc or 40pc anytime soon.
And if it does, how physically violent will it get, outside the doom-scrolling of our phone screens?