US tech giants need to admit they are media companies – and accept the inherent responsibilities, argues Stewart Purvis
Imagine that a broadcaster reaching over 1 billion people a day is making billions of pounds of profits every year, partly by distributing news coverage that includes numerous mistakes.
Imagine, too, that when the broadcaster is called to account, its first proposed solution to the problem is to send viewers a message entitled “tips for spotting false news”. The first of the 10 tips is: “Be sceptical of headlines”.
The chances are that the broadcaster would be told that its so-called “new educational tool against misinformation” was hardly a satisfactory remedy.
All of the above accurately describes Facebook’s current status and policies – apart, of course, from the fact that it isn’t a broadcaster by any traditional definition, even though it is a platform for multiple video streams.
Imagine, too, a broadcaster having to admit to advertisers that it didn’t know during which programmes their ads were transmitted. That’s the equivalent of Google’s position over its video platform, YouTube.
Brands as varied as L’Oréal and the UK Government stopped advertising on Google after realising that their ads had appeared next to extremist content.
This row prompted Sir Martin Sorrell, whose WPP advertising agencies spent about $5bn on Google advertisements last year, to announce: “We have always said Google, Facebook and others are media companies and have the same responsibilities as any other media company. They cannot masquerade as technology companies, particularly when they place advertisements.”
Ever since the 15th-century printing press of Johannes Gutenberg, media companies have been shaped by the technologies of their times. So how did the 21st-century Gutenbergs manage to distance themselves from the responsibilities that traditionally come with terms such as “publishers”, “editors” or “broadcasters”?
I got an early insight into this at the turn of the millennium, when I visited AOL’s headquarters outside Washington. AOL was the big, new kid on the digital block and had, in effect, taken over the “old-media” company Time Warner.
I was directed to the AOL “newsroom” and arrived to find nobody there, just computers. We’d heard about paperless newsrooms but here was a human-less newsroom and, perish the thought, potentially a world without editors.
AOL and its counterparts emphasised that they were, in the jargon of the time, “mere conduits for others’ communications”. In other words, no more than a modern version of the stagecoach, where the mail was carried under the driver’s seat and the driver never opened and read the letters.
Thus, the early “tech” companies avoided being seen as editors or publishers. Much of what they carried had already been edited or published by people like us broadcasters, so what was the problem?
But the second wave of tech companies provided carriage to very different kinds of content – “social media”, “user-generated content”, “citizen journalism”.
The new business model created an engagement currency of clicks, likes and shares that fed off emotional responses that put a premium on strong views and strong reactions.
Getting noticed was a way of getting paid, for the content creator and the carrier. And that’s when the “mere conduit” argument started to become unconvincing.
For years, the companies avoided many of the debates about content oversight and regulation.
As the Ofcom member of the first UK Council for Child Internet Safety, I saw how the US-based businesses initially frustrated action at a UK national level by saying that they only worked with Brussels on EU-wide initiatives. Specifically, they pushed back against a proposal on child protection from CEOP, the police-led, child-protection body later absorbed into the National Crime Agency.
But as the lobby for child protection became bigger in the UK, things started to change. Even more effective was the lobby against digital piracy by creative rights-holders, especially the Hollywood studios, which brought about the Digital Economy Act 2010. There has also been action over “hate crimes” in social media.
In the courts, Twitter is now regarded as mainstream media. See Mr Justice Warby’s recent judgment in the Jack Monroe vs Katie Hopkins libel action, where he dismissed the attempt by Hopkins’s counsel to portray Twitter as “the ‘Wild West’ of social media, and not as authoritative as (for instance) the Sun or the Daily Mail”.
So now, with these precedents set, comes a natural corollary. As Damian Collins MP, outgoing Chair of the Culture, Media and Sport Committee, says: “Facebook and Google already accept that they have a social obligation to address pirated content online and illicit material. I think they also have a social obligation, as well, to act against the sources of fake news.
“Mark Zuckerberg of Facebook, who initially dismissed the impact of fake news on the US presidential election, has had to announce forthcoming projects on the detection of fake news, among the first of which is the ‘10 tips’ guide.”
At an RTS event earlier this year, Patrick Walker of Facebook in Europe said: “We don’t see ourselves as editors.”
I would suggest that the more credible position now is: “We at Facebook originally didn’t see ourselves as editors but that’s where we are ending up.”
Or would admitting that leave Facebook open to the same exposure for libel that mainstream editors and publishers have always faced?
Perhaps such an admission would also worry the regulatory institutions, which prefer to keep a safe distance from the “Wild West”. Witness Ofcom’s resistance to regulating something as comparatively wholesome as the BBC website. The regulator convinced the Government that it would set an awkward precedent because it might be asked to regulate other websites.
As a result, Ofcom will have the final say on whether, for example, the BBC is impartial in its TV and radio coverage of the forthcoming Brexit negotiations, but not on whether its online coverage is. That will be a matter for the BBC Board.
Looking across the wider spectrum of issues, the regulatory institutions and the tech companies share a common fear that this beast has simply got too big to control. Sorting it out would require resources that the regulators aren’t able to deploy. Meanwhile, despite their enormous wealth, the companies themselves often aren’t willing to deploy them.
When the heat gets really bad, they have shown that they can solve some problems. Last year, Google said it was “thinking deeply” about the way that users searching the word “holocaust” were often taken first to Holocaust-denial sites.
That seems to have been solved. But so many other problems remain.
In an article headlined “Why does Facebook still seem so helpless against ‘fake news’?”, Jacob Brogan of Slate said: “Facebook still gives the impression of a hiker flapping his arms before a bear, struggling to scare off a monster many times its size.”
He argued that Facebook’s latest strategies, including attempts to disrupt the monetisation of fake news, were little more than a stopgap.
Furthermore, its top 10 tips page “isn’t so much a tool as it is a cry for help, a desperate attempt to leverage the source of its power in pursuit of a war that it’s currently losing”.
Stewart Purvis is a former Chief Executive of ITN and a former Ofcom regulator. He is a non-executive director of Channel 4 and writes here in a personal capacity.