It is time to think again

Volume 33, Number 3, September 2022

At an Institute for Public Policy Research (IPPR) conference in June this year, one of the main architects of the Online Safety Bill, Dr Lorna Woods, admitted that its core principle started on the back of a Pret a Manger napkin. To be fair to Dr Woods, neither she nor her conceptual partner from the Carnegie Trust, Will Perrin, will have anticipated the sprawling 213-page, 12-part Bill that was put on hold in July following Boris Johnson’s defenestration. Our newly installed prime minister will have some difficult decisions to make on how – and even whether – to take it forward. As it stands, the Bill is not fit for purpose.

Most are agreed that there is little wrong with that original core principle: to impose a duty of care on tech platforms to protect the more vulnerable – and particularly children – from some of the most harmful effects of the largest social media and search platforms, Facebook, Twitter, Google and their like. There has been enough evidence of damaging consequences from online bullying, proliferating self-harm and eating disorder sites, violent and misogynistic pornography, and profoundly disturbing homophobic, Islamophobic and antisemitic abuse to concern all of us.

On two of the three main conceptual components of the Bill, there is broad agreement: tech platforms and search services must, on pain of sanctions for non-compliance, have clear and effective processes in place for taking down illegal material and material that is harmful to children. There are further provisions to deal with online scams and for combating the kind of online anonymity so beloved of trolls – although how tech companies are supposed to do this without compromising genuine whistleblowers is one of many unknowns.

It is the third component, however, that has created an unbridgeable gap between the internet-safety advocates and the free speech champions: what about material that is legal but harmful (known colloquially as “awful but lawful”)? Here the water becomes not so much muddy as dangerous quicksand. “Harmful” is defined as material which could cause “physical or psychological harm”, and in the Bill’s first iteration it was up to tech platforms to determine what met that definition. Worried by the prospect of giving Messrs Zuckerberg et al too much power to censor speech, the government changed this to allow ministers, with Parliament’s approval, to decide what content meets the threshold. Ministers will thus have the power to define or add new categories of “harmful” content – which must be addressed by the policies of platforms and search services – quickly, through secondary legislation, with little notice or scrutiny.

It is hardly surprising that this was condemned by the former Conservative Brexit secretary David Davis as “dangerous”, “authoritarian” and “potentially the biggest accidental curtailment of free speech in modern history”. His opposition has been echoed by campaigning groups such as Index on Censorship and the Open Rights Group. The Bill’s cause was undoubtedly not helped by having at its helm a secretary of state, in Nadine Dorries, who is simply not trusted, and by terminology that in many places is either incomprehensible or open to several subjective interpretations to be determined at a later date by tech companies, or Ofcom, or the government, or Parliament, or any permutation of the above.

Even without Dorries, there are two fundamental flaws at the heart of the Bill, both worthy but ultimately futile attempts to square the circle of protecting journalism while curbing “harmful” online content.

First are the so-called democratic importance and journalistic content duties. The draft Bill places an obligation on the largest tech platforms to take into account the importance of democratic free expression, diversity of political opinion, and free and open journalism when deciding what content should be taken down. Content of democratic importance is defined as material that “is or appears to be specifically intended to contribute to democratic political debate”.

Ofcom will issue guidance on definitions, but ultimately it will be up to Twitter, Facebook and Google to make those determinations. On what basis? Should the kinds of potentially dangerous misinformation that circulated around Covid be protected as contributing to democratic debate? Some of the conspiracy theories were bonkers, but when even our own prime minister starts talking about the “deep state”, what should be classed as dubious, or even impermissible? What about the “unprecedented levels of trolling” experienced by TV weather forecasters during this summer’s record UK temperatures, which virtually every climate scientist linked to climate change? (“Nanny state again having another fake emergency so you will do as your [sic] told” was one of the more polite variants.) When does conspiracy-based climate denial become unacceptable abuse?

Even vaguer is the duty to protect journalistic content, which is defined in a tautological loop as content “generated for the purposes of journalism” and appears to make no distinction between professionally produced journalism and citizen journalism. In a world of Substack, WordPress, microbloggers and hyperlocal websites run by a single individual who may or may not have “professional” experience, this lack of distinction is probably appropriate. But again, how on earth are platforms supposed to distinguish between harmful content with a journalistic purpose and harmful content that consists of one individual’s private ravings?

The ‘recognised news publisher’ get-out section

As Facebook stated in its evidence to the Bill committee, “we are concerned that the government is putting obligations on private companies to make extremely complex and real-time assessments about what constitutes journalistic content which could be impossible to implement consistently. We would also question whether it is appropriate for platforms to be defining what counts as journalistic content”.

The second fundamental flaw, even more egregious because it is the result of intense lobbying by private media conglomerates, is the news publisher exemption. This essentially excuses any organisation that meets the definition of a “recognised news publisher” from the duties to address harmful content that apply to social media and search platforms. In fact, following a government amendment added just before recess, tech platforms will be obliged to carry news publisher content that they might otherwise have deemed to be harmful (apart from plainly illegal material). As currently drafted, this “must carry” amendment will require a platform to consult a news publisher before it can address any harmful content, which at best will incur unnecessary delays before removal. Since there will be no obligation to remove the content, in practice platforms are more likely to avoid any hassle by simply not bothering.

Given the sweeping nature of this exemption, one might think that the news publisher definition would be tightly drawn to ensure that only the most responsible would qualify. But section 50 of the draft Bill requires only that a recognised news publisher:

  • Publishes news-related material which is created by different persons and is subject to editorial control;
  • Publishes in the course of a business;
  • Is subject to a standards code;
  • Has policies and procedures for handling complaints;
  • Has a registered UK office or business address;
  • Is legally responsible for publication in the UK;
  • Publishes its address and the person with legal responsibility.

These are scarcely onerous requirements, and they can easily be satisfied by bad actors seeking to ensure their material is left intact. Existing white nationalist extremist publications based in the UK will have little difficulty in qualifying. Nor will the conspiracy theory websites that proliferate in the US. For Infowars to guarantee that its dangerously misleading material must be left on all tech platforms, it will simply have to register a UK business address, publish its standards code (“all our content is meticulously fact-checked by Mr Alex Jones…”) and its bespoke complaints process (“please write to our official head of complaints Mr Alex Jones…”), and every one of its tweets and Facebook posts will have to stay up, pending an onerous appeal process. RT, formerly Russia Today, would qualify without any such workaround: it already ticks the requisite boxes. As the Labour MP Kim Leadbeater said during one of the parliamentary debates: “International publishers spreading hate, disinformation or other forms of online harm could easily set up offices in the UK to qualify for this exemption and instantly make the UK the harm capital of the world.”

Free speech advocates will, of course, applaud these exemptions as protecting the right to free expression. But the whole point of the Bill is that it is supposed to restrain content deemed to be harmful – a position endorsed, unsurprisingly but hypocritically, by the newspapers whose umbrella body, the News Media Association, has lobbied furiously both for regulation of speech on tech platforms and for exemptions for their own titles.

There are good theoretical reasons for exempting the kinds of journalism essential to democracy: factual information that enables citizens to make informed choices; investigative journalism that holds power to account; the exposure of corruption, incompetence or dishonesty. We need to ensure that revelations of unlawful state surveillance – such as Edward Snowden’s – make it past the Google algorithms that might otherwise remove them from search results. So yes, there is a fundamental need to safeguard vital investigative and informative journalism originating from sources that are committed to professional standards of news gathering and fact-checking.

Unfortunately, the very press groups that have fought for this exemption themselves have a distinctly dodgy track record of promoting the kinds of harmful content being targeted by this Bill. There are multiple examples of those publications riding roughshod over their standards codes to produce material every bit as damaging as some of the most egregious social media content, while their ineffectual complaints-handler, IPSO, looks the other way.

Mosque shooting video took 12 hours to remove

A relatively recent example is the two Christchurch mosque shootings in 2019, in which the perpetrator murdered 51 people attending their local mosques and streamed one of the attacks online. Facebook worked round the clock to ensure that the horrendous video was removed wherever it was being posted. Meanwhile – despite an explicit plea from New Zealand police – the Daily Mail, The Sun and the Daily Mirror all hosted edited versions of that video (plus the terrorist’s manifesto, in the case of the Mail) for at least 12 hours. In any similar scenario under the Bill’s provisions, no platform would initially be able to take down material that had appeared on a UK publisher website unless it was manifestly illegal. That terrorist manifesto would have to stay up.

Meanwhile, objectionable material that would clearly offend the users of particular sites could not be removed from those sites pending an appeal to the original publisher. Platforms aimed at women, such as Mumsnet, could be required to carry misogynistic content about, for example, the Angela Rayner “leg crossing” story if it was originally published in the Daily Mail. Jewish websites could be required to carry Holocaust denialist stories reposted from qualifying publishers. And so on. The bottom-line policy question is this: is it appropriate to require platforms to host content deemed to be harmful if posted by private individuals?

To add further insult, the comment sections of these same qualifying news publishers will also be exempt from any of the regulatory obligations, despite quite clearly mimicking the “user-to-user” experience of social media platforms. The net result will be that the same misogynistic, antisemitic or homophobic content that might be deemed sufficiently “harmful” to be removed from Twitter or Facebook can be posted as a comment on the freely available Mail Online, Mirror or Guardian websites – and indeed reposted elsewhere as protected content. The rationale that most comment sections are moderated and therefore regulated by IPSO or IMPRESS won’t wash. Those comment sections that aren’t moderated can (and do) host some profoundly offensive and damaging material. And experience shows that any complaint to IPSO for those that are moderated will take six months to resolve – by which time the damage is long done.

There are therefore four good reasons for challenging these exemptions, however well-intentioned they may be as a route to mitigating the Bill’s threat to responsible journalism. First, they could easily be exploited to allow through harmful and dangerous material from extremist or conspiracy publications, or those propagandising for hostile foreign states. Second, there is no effective scrutiny of mainstream publishers whose content could easily meet the vague definition of “legal but harmful”. Third, journalistic content is already explicitly protected on the face of the Bill, which makes a separate publisher exemption redundant.

Finally, perhaps the most important reason of all: it creates a regulatory double standard between the potentially harmful posts of private citizens and the equally harmful content appearing on the websites of poorly regulated (or entirely unregulated) news publishers. Put simply, news publishers would have greater free expression rights than private citizens. The new prime minister – and whoever is appointed as culture secretary – must deal with that fundamental paradox at the heart of the Bill.

STEVEN BARNETT

Steve Barnett is professor of communications at the University of Westminster and is an editorial board member of the BJR.
@stevenjbarnett
