What should social media do about fake news and online abuse?

Tuesday, 3rd January 2017
From left: Patrick Walker, Facebook; Stephen Nuttall, YouTube; Dara Nasr, Twitter; and Kate Bulkley, chair (Credit: Paul Hampartsoumian)
Much has been written and broadcast in the traditional media about the boost President-elect Donald Trump received from fake news stories during his successful election campaign.

At an RTS event about social media and television, Facebook’s Patrick Walker addressed the charge that his company had done little to stop these stories spreading.

‘We are a platform – we see ourselves first and foremost as a technology company. The mission we have is to connect people and make the world more connected, which is about sharing information,’ he said.

On the problem of online abuse, Walker argued that it wasn’t ‘easy to balance’ a desire for ‘openness and connectivity’ with keeping ‘the place safe’.

He added: ‘One person’s freedom of expression can be another person’s hate speech.’

Walker said that Facebook had created more community guidance to outlaw hate speech and other abuse, but admitted mistakes had been made.

These included the censoring of an award-winning photo from the Vietnam war showing children running from a napalm attack.

But social media, argued YouTube’s Stephen Nuttall, had brought many stories, such as the Arab Spring, the coup in Turkey and the horror of Aleppo, to the attention of the world: ‘Online has enabled those stories to be told to a huge audience.’

Pressed by Channel 4 communications chief Dan Brooke, who was in the audience, on the increasing threat to democracy posed by fake news, Walker said: ‘The internet allows for the instantaneous, global transmission of ideas, which we enable, and sometimes these ideas come from sources that might be a bit dubious.’

He added: ‘We’re spending a lot of time trying to figure out how to improve [the situation].’ Measures could include finding ‘much more obvious ways for people to flag content that is potentially false’.

‘The thing that we can’t do is become the judge of what is true and what is not true – that’s an impossible task and shouldn’t be our responsibility.’

‘Well, that happens in television,’ pointed out Brooke. ‘There is a judge in television – the regulator.’

Nuttall said that his parent company, Google, tried to stop anyone gaining financially from fake news: ‘If there is video that is misrepresentative or inaccurate, then it won’t monetise – that takes away a huge incentive to put the content there in the first place.’

He continued: ‘It is important, though, that the internet is a place for free speech. It wouldn’t be for us to become a regulator or censor of the internet. I also think that audiences are smart enough to work out where to go for their trusted sources of news.’

Walker noted that ‘people tend to think that the status quo from before was some golden age of truth. But I [could] read something in the paper, [on the] front page – misinformation, potentially intentionally so – [then] there’s a retraction three days later on page 42.’

When a reader came across fake news online, Walker said, he or she had ‘the power as an individual to call it out, to flag it so that people know. That power to respond is something very new.’