"I am concerned that this is being used as leverage to get us to agree to outlawing private and encrypted communications and to allow the State to snoop on us - using child protection to scare us into relinquishing our rights to privacy."
They have effectively done that via the Online Safety Bill, which has been passed and is on its way to receiving Royal Assent. The only things standing in their way are:
a) companies that use encrypted communication, like WhatsApp and Signal, are threatening to withdraw their services from the UK, not least because enabling a "backdoor" for the UK Police would compromise the security of users worldwide, and
b) the technical means to break encryption do not exist, as admitted by Lord Parkinson of Whitley Bay.
"UK Online Safety Bill to become law – and encryption busting clause is still there"
"Admits it's 'not technically feasible' ... but with no promise not to invoke it"
The Register, 20 Sept 2023
https://www.theregister.com/2023/09/20/uk_online_safety_bill_passes/
The Times explicitly links the article in the OP to its coverage of the Online Safety Bill. From the article:
"The study involving 1,500 British men was carried out by Edinburgh University’s Childlight unit, a research centre that investigates the risk of sexual exploitation of children and young people, and the University of New South Wales, Australia."
"Related Articles" below that one include this one written by Paul Stanfield, chief executive of Childlight:
"Only powerful regulation can end the silent pandemic of online child abuse"
22 Sept 2023
https://www.thetimes.co.uk/article/only-powerful-regulation-can-end-the-silent-pandemic-of-online-child-abuse-8h3zp05xw
Archive: https://archive.ph/NK1q7
and this one:
"Online Safety Bill approved by House of Lords
20 Sept 2023
The legislation has completed its passage through parliament and is expected to receive royal assent next month
https://www.thetimes.co.uk/article/online-safety-bill-uk-parliament-law-30m06jdt3
Archive: https://archive.ph/PYB1x
From the Free Speech Union newsletter to members (my bolding):
"UK Parliament finally passes censorial Online Safety Bill"
The UK’s controversial and long-awaited Online Safety Bill, which contains sweeping new surveillance and censorship measures, concluded its final Parliamentary debate on Tuesday and will soon achieve Royal Assent, meaning it will then become law (BBC, Reuters, Times).
The FSU is disappointed that the legislation has been passed. But thanks to our lobbying and campaigning, as well as our members who contacted their MP to express their misgivings about the Bill, the version reaching the statute books is at least an improvement on earlier versions.
In particular, the obligation on social media companies to “address” so-called ‘legal but harmful’ content, as set out in Clause 13 of the original legislation, has now been removed. Our objection to this clause was essentially that the phrase “address” risked becoming a euphemism for “remove”. If the Government had published a list of legal content it considered harmful to adults and imposed an obligation on social media companies to say how they intended to “address” it, that would have nudged them to remove it. But thanks to the lobbying of the FSU, that obligation has now been dropped.
The new Harmful Communications Offence, which was to replace s127 of the Communications Act, as well as the Malicious Communications Act, and could have seen people jailed for two years for sending or posting a message with the intention of causing “psychological harm amounting to at least serious distress”, has also been scrapped. In other words, the legislation won’t now criminalise saying something, whether online or offline, that causes ‘hurty feelings’, and the FSU deserves most of the credit for that.
In addition, the original duty imposed on social media companies to “have regard” for freedom of expression – which is so undemanding, it’s virtually pointless – has now been bumped up to “have particular regard”, thanks to us.
Finally, we were ahead of the curve in promoting user empowerment as an alternative to the de facto removal of ‘legal but harmful’ content – in the final iteration of the legislation, the locus of responsibility for online safety has shifted from paternalist providers to empowered users. The result is that instead of saying how they intend to “address” harmful content in their terms of service, companies like Facebook and X (formerly Twitter) will now have to say what tools they’re going to make available to their users so they can act as their own content moderators.
It’s important to note that the passage of the Bill onto the statute book is not the end of the story as far as the FSU’s campaigning work goes.
Ofcom’s Chief Executive, Dame Melanie Dawes, this week confirmed that the UK’s new regulator for online safety will shortly be setting out the first set of standards that it expects tech firms to meet when it comes to offering users the option to filter out ‘harmful’ content that they don’t want to see.
The FSU will be watching closely to see how those ‘standards’ end up influencing the user empowerment tools that each social media platform subsequently offers its users.
There is, for instance, a significant risk that the big providers will establish a ‘safe’ mode as their default setting, so that if adult users want to see ‘lawful but awful’ content they will have to ‘opt in’.
That may mean perfectly lawful yet politically contentious views – e.g., an article in the Spectator by a gender critical feminist – will be blocked by default since woke victim groups will argue that any such views constitute incitement to hatred based on their protected characteristics.
Of course, users will have the option of adjusting their settings so they can see that content, but some won’t want to in case, say, a colleague sees something ‘hateful’ over their shoulder and reports them to HR for ‘harassment’.
And what about those who won’t even be aware they have a choice, or are aware but don’t know how to do anything about it?
We know that that’s likely to be a lot of people thanks to research carried out by behavioural scientists on what’s known as ‘choice architecture’ – i.e., the way in which customers are presented with choices will influence their subsequent decision-making. As countless studies in this area have now demonstrated, one of the most powerful tools available to organisations wishing to ‘nudge’ consumers down certain behavioural pathways is the humble ‘default setting’. That’s because consumers tend to be too lazy to revise those settings or don’t know how.
The devil will be in the detail of what each social media platform’s version of ‘safe browsing’ looks like. If a platform’s ‘safe mode’ becomes the default setting, how easy will it be to switch it off? As one recent study put it, “If defaults have an effect because consumers are not aware that they have choices… [they] impinge on liberty.”