Facebook, YouTube, and Twitter grilled on addictive nature of their algorithms

Yesterday, media tech’s triumvirate of online engagement power was called to the Capitol once again, this time for a Senate hearing homing in on algorithms. In a session lasting just over two hours, the Subcommittee on Privacy, Technology, and the Law quizzed representatives from Facebook, YouTube, and Twitter on ‘Algorithms and Amplification: How Social Media Platforms’ Design Choices Shape Our Discourse and Our Minds’.

In a break from recent tradition, Mark Zuckerberg and Jack Dorsey did not headline Tuesday’s event. It was also the first time that YouTube had been directly involved in such a hearing, having previously drawn criticism for managing to evade these proceedings despite being a key player in the spread of disinformation online.

The social media panel was made up of: 

  • Ms. Monika Bickert: Vice President for Content Policy, Facebook
  • Ms. Alexandra Veitch: Director of Government Affairs and Public Policy for The Americas and Emerging Markets, YouTube
  • Ms. Lauren Culbertson: Head of US Public Policy, Twitter

“Whether voluntary, regulatory, or legislative” 

Wasting no time in setting the agenda, US Senator Chris Coons said that there was not necessarily anything inherently wrong with using algorithms or machine learning to enhance the consumer experience, but that in the case of the media tech giants, something had gone astray:

“Many have recently argued this advanced technology is harnessed into algorithms designed to attract our time and attention on social media, and the results can be harmful… to our kids’ attention spans, to the quality of our public discourse, to our public health and even to our democracy itself.”

“What happens when algorithms become so good at amplification? At showing you content that a computer thinks you’ll like so much that you, or your kids, or family members end up spending hours each day staring at the screen? What happens when algorithms become so hyper-tailored to you and your habits and interests that you stop being exposed to ideas that you might find disagreeable or so different from yours as to be offensive? What happens when they amplify content that might be very popular, but is hateful or just plain wrong?”

He added that the purpose of the hearing, and others like it, was to find solutions to improve the role of social media in society, “Whether voluntary, regulatory, or legislative.” 

Testimony Highlights

Ms. Monika Bickert: Vice President for Content Policy, Facebook

“For us, the focus is always on the long term, and I’ll give you an example of that: in January 2018 we put out a post announcing that we were going to be prioritising content from family and friends over, say, news content. It was called ‘Meaningful Social Interactions’. We suspected that it would lead to less time on the service, and it did – it led to people spending tens of millions fewer hours on Facebook every day.”

“But that was something that we did because we thought it was more important for people to see that kind of content, and that in the long term it would make the site more meaningful and make people want to continue to use it. So that is just one example of how we are focussed on maintaining a healthy environment.”

Ms. Alexandra Veitch: Director of Government Affairs and Public Policy for The Americas and Emerging Markets, YouTube

“Misinformation is not in our interests. Our business relies on the trust of our users and also our advertisers, who on our platform advertise on single pieces of content. We want to build these relationships for the long term, and that’s why we bake user choice and user control right into the product, with things like timers, the option to turn autoplay off, and ‘take a break’ reminders, of which we’ve sent half a billion.”

Ms. Lauren Culbertson: Head of US Public Policy, Twitter

“I know that people have concerns and that they think companies like ours should be more transparent. That’s why we have put forth three solutions that we think would go a long way to addressing these concerns. The first is increased transparency. The second is more user control and choice over algorithms. And the third is enhanced due process, so if we do make a mistake then users have the ability to appeal and have the decision reviewed against our terms.”

Section 230

There was somewhat less talk of Section 230 – the piece of US legislation that shields social media companies from liability for the third-party content that appears on their platforms – on this occasion. The push for social media to be regulated in the same way as traditional media is unlikely to go away any time soon, but, revisiting US Senator Chris Coons’ opening remarks, the mood in the room (and across the airwaves) seemed much more focussed on “voluntary” measures this time around. With the CEOs absent and an algorithmic theme, this was a more commercial than legislative consideration.

Towards the end of the session, in a statement highlighted by financial publication MarketWatch, Senator John Kennedy of Louisiana (no relation) eloquently summarised the current issue: “You can still find kindness in America, but you have to go offline to do it.”

You can watch the hearing in full here.

Main Image: Shutterstock/RozenskiP
