PICA #027

A bashed-up Facebook thumb.

What Facebook should do next.

Balancing the right to speak and the need to control divisive content is a bigger issue for Facebook than the current data scandal; resolving it will require an ambitious approach.

Last week, a contrite, solemn Mark Zuckerberg faced the Senate committees like a guilty schoolboy before a headmaster. The world smirked, because we all love to see the arrogant and mighty humbled. I found the human spectacle and the data privacy drama mildly entertaining and moderately enlightening, but the indignation and investigation slightly misdirected, because data use (and abuse) is here to stay.
The Cambridge Analytica scandal has popularised concerns about the collection and use of personal data. Responsible use of personal data is something Facebook (and the ad-tech industry in general) needs to improve on urgently. A new EU regulation, the General Data Protection Regulation (GDPR), will provide a useful test case for broad data privacy regulation, but we live in a data economy and it would be foolish to believe that any regulatory framework will ever comprehensively protect individuals and communities.
The Facebook News Feed accounts for more than $30 billion, or approximately 75%, of the site’s total annual advertising revenue. That revenue relies on user data collected to target, deliver, and analyse advertising investments. The Facebook business model (“Senator, we run ads.”) dictates how Facebook treats personal data.
Nearly 20 years ago, Scott McNealy (then CEO of Sun Microsystems) told Wired: “You have zero privacy anyway. Get over it.” In 2010, Mark Zuckerberg predicted that as social media grows, privacy will no longer be a “social norm.” If the 21st century really is the data century, then it can’t also be the privacy century.
Unless there is a mass exodus from social media, the quantity and quality of data Facebook collects will grow. In 2017 Facebook passed the 2 billion monthly users milestone but, of course, it also owns Instagram (800 million monthly users) and WhatsApp (1.5 billion monthly users). Based on past actions, it seems likely that Facebook will acquire other platforms to protect and consolidate its position as an indispensable, data-rich tool for advertisers.

Mass influencers of belief and behaviour.
An advertiser is any entity that pays to display a message with the intent of informing and influencing. Most of the advertisers on Facebook are brands trying to sell something. Some, however, are organisations with sociopolitical aims. The term ‘divisive material’ is used to describe some of the sociopolitical content that has been advertised and shared on Facebook. Zuckerberg has explicitly said that Facebook is responsible for the content on its platform; in doing so he implicitly assumes liability for the material it distributes. This is a much bigger issue than data privacy.
Facebook knows this; a quote from WikiTribune reveals the official solution: “Nearly every time Zuckerberg was asked about Facebook becoming a platform of divisive material, he pointed to the promising possibilities of Artificial Intelligence (AI).” Currently Facebook employs human moderators to review flagged material. The future Zuckerberg sees for AI is replacing those human moderators: humans are fallible, unreliable, slow and, in the long run, more expensive than efficient, compliant and unquestioning AI. But moderators who review flagged material are censors - whether human or AI - and censorship is dangerous for democracy and for progress. Whoever decides what billions of people see, or do not see, holds enormous power and responsibility.

Ideas that are wrong and offensive.
Steven Pinker, Johnstone Family Professor of Psychology at Harvard University and author, says it well: “Everything we know about the world—the age of our civilization, species, planet, and universe; the stuff we’re made of; the laws that govern matter and energy; the workings of the body and brain—came as insults to the sacred dogma of the day.”
Consider, for example, Ignaz Semmelweis, the Hungarian physician and pioneer of antiseptic procedures. Today, the need for sterilisation in medical procedures is not a divisive subject. In 1848, however, Semmelweis’s observations conflicted with established scientific and medical opinion; his ideas were rejected, and many of his colleagues were offended by his suggestion that they should wash their hands. Semmelweis died in 1865 in a mental institution, 139 years before the existence of Facebook. In an improbable parallel universe where Facebook pre-dated Semmelweis, and given the indignant reaction of the medical world of the time, his ideas would probably have been considered divisive.
It wasn’t very long ago that gender was widely considered binary, yet today, in liberal societies, there is a growing awareness and acceptance that gender is more complex than male or female. This change in mainstream consciousness and attitude is the result of sexual diversity being discussed in mainstream and social media. Abortion has been a divisive issue for centuries. The theory of evolution was, and in some circles still is, divisive. The list of divisive subjects is endless.
Who decides the parameters of censorship - the scientific community of the 1850s, who didn’t like the idea of washing their hands? Which side of the pro-life or pro-choice barricade does censorship land on? Does it simply eliminate all reference to abortion? What if the true clients of Facebook, the big spenders in advertising, used their influence on censorship decisions?
Progress is the result of discussion and the confrontation of diverse views. Would society be better without ‘divisive material’? I think not, just as I believe that some of today’s divisive subject matter will, through a process of exposure, discussion and assimilation, become part of tomorrow’s accepted norms.

Mr Zuckerberg, teach the world to debate.
The declared Facebook mission is “bringing the world closer together.” The world is very, very, very diverse. There isn’t much that 2 billion people all agree on. Diversity of culture and belief is at the very heart of divisive content. Ideas are dangerous and divisive, but without them we are dust. Through human interaction, ideas grow and morph, or fail and die. By understanding opposing ideas, we can better comprehend others.
The solution isn’t to block the dissemination of certain ideas; it is to propagate tools that promote respectful co-existence. If Facebook were to develop a framework for human interaction that promoted respect for opposing views, acceptance of diversity and the ability to defend a belief without insulting opposing ideas, then it could harness AI to moderate the tone and manner of debate, not censor opinions. Raising the debating ability of the average Facebook user would improve the quality and sustainability of the platform.
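To make the distinction concrete, here is a minimal sketch in Python of what “moderate the tone, not the opinion” could mean. It is an illustration, not a description of anything Facebook has built: the word list stands in for a trained civility model, and all names are invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

# Toy stand-in for a trained civility model. A real system would use a
# learned classifier; the word list only keeps the sketch runnable.
INSULT_WORDS = {"idiot", "moron", "liar", "scum"}

@dataclass
class Verdict:
    publish: bool            # True: the post goes live as written
    feedback: Optional[str]  # prompt shown to the author, if any

def moderate_tone(post: str) -> Verdict:
    """Moderate HOW something is said, never WHAT is said.

    The function has no notion of topic or stance: an uncivil post on
    any subject is returned for rephrasing, while a civil post on any
    subject, however divisive, is published untouched.
    """
    words = {w.strip(".,;:!?\"'").lower() for w in post.split()}
    insults = sorted(words & INSULT_WORDS)
    if insults:
        return Verdict(
            publish=False,
            feedback=f"Your point can stand without {insults}. "
                     "Please rephrase and resubmit.",
        )
    return Verdict(publish=True, feedback=None)

if __name__ == "__main__":
    # Divisive opinion, civil tone: published without intervention.
    print(moderate_tone("Vaccines do more harm than good."))
    # Mild topic, uncivil tone: sent back for rephrasing.
    print(moderate_tone("Anyone who disagrees with me is an idiot."))
```

The point of the sketch is the asymmetry: the verdict depends only on manner, never on subject, so the divisive opinion passes untouched while the insult is returned to its author.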
I realise this is a monstrously ambitious enterprise, but Facebook, if it were interested, is in a unique position to attempt it. The Facebook audience is larger than the population of any nation. Its technological and financial strength is commensurate with the task. The influence and addictiveness of the platform are conducive to learning and behaviour change. Nudge theory suggests that if Facebook applied itself to the task of improving the quality of debate, it could do so effectively, day after day and user after user. If Facebook doesn’t attempt something big, visible and worthy like this, it will almost certainly be subject to restrictive regulation, devised by politicians who don’t properly comprehend the thing they intend to regulate.

Advertisers will like it.
Teaching people acceptable ways of expressing what they think is not the same as telling them what they should think. Informing those who are interested about divisive issues - collecting, condensing and presenting the different points of view in a neutral way - is an inclusive and unbiased service. If Facebook follows the slippery path of censoring discussion of certain issues, it will inevitably, through its choices of what to censor, become a divisive platform. It would struggle to keep its 2 billion users; forget about bringing the world together.
Another significant reason for Facebook to improve debate quality is that advertisers would appreciate it. Brand safety has recently become a serious concern: situations where ads appear alongside inflammatory content are bad for advertisers, and consequently also for Facebook. The promise of a platform where opinions of all kinds are freely expressed, but in a brand-friendly tone and manner, is compelling. The audience is potentially vast and the channel is brand safe.
The rules of debate quality would, of course, also apply to advertising content. This wouldn’t affect brand advertising, which tends to be self-regulated and for the most part avoids toxic messages, but it would usefully restrict the more inflammatory tactics of certain political advertising.
Helping people around the world understand that the freedom of every individual to believe and discuss what they wish can only be guaranteed by an open society with agreed rules of engagement is a noble cause. For Facebook, conveniently, it also makes business sense.