Facebook’s Menlo Park entrance. Photo: Facebook
Two years ago, Amit Lodha, a portfolio manager at the global investment manager Fidelity International, wrote in a blog post (intended for financial advisers and wholesale investors) that he "would be taking a break from Facebook". This was despite Facebook having a market cap of US$394 billion at the time (March 2017), trading at a price-to-earnings multiple of 30 times, and having generated US$11 billion of cash flow for equity shareholders in 2016.
To be clear, Lodha was not taking a break from using Facebook, but from investing funds under his management into Facebook. The reason? Lodha wrote that Facebook’s algorithms have the ability to influence people’s opinions “and in many cases even our emotions. This is great power, and with great power comes great responsibility… What makes (Facebook) so unique could also be its Achilles heel.”
He made the point that, if history is any guide, market power has been resolved either through new competition (which he argued is unlikely in the case of Facebook) or through the heavy hand of government. And it is this area to which every Facebook investor should be applying more thought. As Lodha pointed out, "Government regulation and its impact is difficult to forecast with accuracy – just look at what happened to the banking sector post the financial crisis or, closer to tech land, Microsoft, which went through a near-lost decade (in terms of stock price performance) post the European Commission antitrust ruling in 2004."
The moat of network effect
Lodha also referenced the 1982 break-up of the largest corporation in American history, AT&T, which the US government split into eight separate entities, effectively dismantling the company’s monopoly over the US telephone system. “Social media and the telephone share a very important characteristic – the moat of network effect. Network effect is the (generally) positive effect that each incremental user has on the value of that product or service to all other users. For example, the more people who own telephones, the more valuable is the telephone to each owner.
“Social media follows the same analogy – the more people on Facebook, Instagram or WhatsApp, the more valuable the platform to all existing and future users. Facebook’s moat actually goes a step further in that not only does it gain from network effects (like AT&T) but it also has unique access to our digital identity and all the actions (photos/articles we like and share) we perform on its various platforms.”
“This is mind-blowing data and gives Facebook unique power”
Lodha recognised the dangers Facebook could face in its immediate future by pointing to research conducted by Michal Kosinski at Cambridge University in 2012. The study indicated that, on the basis of an average of 68 Facebook ‘likes’, researchers could predict a user’s skin colour (with 95 per cent accuracy), sexual orientation (88 per cent accuracy), and affiliation with the US Democratic or Republican party (85 per cent accuracy).
“This is mind-blowing data and gives Facebook unique power. There has already been significant controversy around whether this user data was used to influence either the Brexit vote or the US presidential elections,” wrote Lodha. He then argued that if Facebook’s algorithms have the ability to influence opinions and emotions and Facebook’s management cannot manage the margins of safety in handling this immense power and responsibility “the issue of market concentration in technology in general and Facebook in particular sounds like an area which requires a lot more thought and where the risk/reward is probably not appropriate at this point to spend the fund’s risk capital.”
Lodha isn’t the only professional investor to have ‘taken a break’ from the social media giant. Others include James Thomson of the Rathbone Global Opportunities Fund, who previously held Facebook as one of his biggest positions. He sold all of his Facebook shares on 17 March last year, the day news broke of the consultancy Cambridge Analytica’s use of Facebook data to build profiles of US voters. The same data protection concerns led Mark Hawtin, manager of the £241 million ($320 million) GAM Star Technology fund, to reduce his exposure to Facebook.
While Lodha showed some appreciation for Facebook’s CEO and management (read the full blog here), saying they have “clearly articulated how seriously they take these issues”, modern-day critics accuse Facebook of “a wilful blindness driven by greed, naïveté, and contempt for oversight”, as Evan Osnos put it in his New Yorker article ‘Ghost in the Machine’ in September last year.
Facebook’s Prineville data center. Photo: Facebook
The face of algorithmic malfeasance
Most recently, Facebook’s sins were eloquently summed up on Medium.com by Douglas Rushkoff, author of the upcoming book Team Human, who says the social network “has become the face of algorithmic malfeasance… the poster child for how technology can be turned against human agency. The company employs behavioural finance, privacy invasion, and machine learning to manipulate users in the fashion of Las Vegas slot machines, and then claims either innocence or ignorance when the social impact of these machinations is revealed”.
It is against this backdrop that Facebook now faces worldwide scrutiny. From the US’s Federal Bureau of Investigation, Securities and Exchange Commission, Federal Trade Commission and Department of Justice, to the EU’s European Commission and Ireland’s Data Protection Commission, to watchdogs in the UK and Australia, almost every regulatory body of note has started investigations into violations of consumer protection rules, breaches of data protection rules or the spreading of fake news.
As the Washington Post reports, Facebook appears ready to agree to pay a multi-billion-dollar fine to settle the Federal Trade Commission’s investigation into the platform’s privacy practices in the wake of the Cambridge Analytica scandal. But being able to pay huge fines will not save Facebook from further scrutiny or regulatory intervention.
‘Clear history’ privacy feature
Maybe this is why Facebook finally seems to be making good on its promise of a long-awaited ‘clear history’ privacy feature, apparently ready to launch for all users later this year. First mentioned by Zuckerberg in May last year, in the fallout from the Cambridge Analytica scandal, the ‘clear history’ feature would let users instruct Facebook to delete all records of websites visited and links clicked. Its development has now been confirmed by Facebook CFO Dave Wehner at the recent Morgan Stanley conference in San Francisco.
Facebook is also pulling out all the stops in its efforts to fight fake news. Since launching its US fact-checking programme (received with mixed reviews), it has begun to roll out fact-checking in the UK, with the independent charity Full Fact selected earlier this year as the first British organisation to review and rate the accuracy of content on the social network.
How well these measures bode for the embattled social media platform will depend heavily on the sincerity and success of their rollout. Zuckerberg had the ability to create a monster; now he desperately needs to prove to the world (and its governing bodies) that it doesn’t need to be caged.