Cut Out By The Middle Man: The Free Speech Implications Of Social Network Blocking and Banning In The EU

Author: Patrick Leerssen
Position: Post-graduate student of Information Law at the University of Amsterdam, research assistant at the Institute for Information Law (IvIR)
Pages: 99-119
Cut Out By The Middle Man
The Free Speech Implications Of Social Network Blocking and
Banning In The EU
by Patrick Leerssen, post-graduate student of Information Law at the University of Amsterdam, research
assistant at the Institute for Information Law (IvIR)
© 2015 Patrick Leerssen
Everybody may disseminate this article by electronic means and make it available for download under the terms and conditions of the Digital Peer Publishing Licence (DPPL). A copy of the license text may be obtained at http://nbn-resolving.de/urn:nbn:de:0009-dppl-v3-en8.
Recommended citation: Patrick Leerssen, Cut Out By The Middle Man: The Free Speech Implications Of Social Network Blocking and Banning In The EU, 6 (2015) JIPITEC 99 para 1.
Keywords: Social Media; Banning; Private Censorship; Removal Orders
Abstract: This article examines social network users’ legal defences against content removal under the EU and ECHR frameworks, and their implications for the effective exercise of free speech online. A review of the Terms of Use and content moderation policies of two major social network services, Facebook and Twitter, shows that end users are unlikely to have a contractual defence against content removal. Under the EU and ECHR frameworks, they may demand the observance of free speech principles in state-issued blocking orders and their implementation by intermediaries, but cannot invoke this ‘fair balance’ test against the voluntary removal decisions by the social network service. Drawing on practical examples, this article explores the threat to free speech created by this lack of accountability: Firstly, a shift from legislative regulation and formal injunctions to public-private collaborations allows state authorities to influence these ostensibly voluntary policies, thereby circumventing constitutional safeguards. Secondly, even absent state interference, the commercial incentives of social media cannot be guaranteed to coincide with democratic ideals. In light of the blurring of public and private functions in the regulation of social media expression, this article calls for the increased accountability of the social media services towards end users regarding the observance of free speech principles.
A. Introduction1
1 Social media have taken up a central role in public
discourse, and are often hailed as a boon to free
speech. Social network services (SNS) such as
Facebook and Twitter facilitate civic participation
in numerous ways. Firstly, they can act as soap-
boxes for the ‘average citizen’ to voice his or her
opinion, leading to high-prole expressions of
political sentiment such as with the #jesuischarlie
and #illridewithyou hashtags. Secondly, SNSs
act as gateways for accessing external links and
resources, with the average news website relying on
Facebook and/or Twitter for over 25% of its traffic.2
Thirdly, they have also played an important role
in the organisation of major ‘real world’ political
manifestations such as the Arab Spring protests
and the Occupy movement.3 In comparison to the
linear dissemination models of ‘mass media’ such as
radio, press and television, SNSs have been praised
for creating a more diverse and accessible public
debate.4 And yet, these networked systems also
lead to a (re-)centralisation of power around a new
set of privileged actors: the social network service
providers themselves.
2 While we tend to view social media platforms as
neutral carriers of information, their operators
possess the technical means to remove information
and suspend accounts. As such, they are uniquely
positioned to delimit the topics and set the tone
of public debate. Increasingly, they have shown
themselves prepared to apply these techniques in
order to moderate their users and block undesirable
information.5 SNSs may take on this editorial role
out of their own commercial interest, or as a
matter of compliance with (perceived) legal duties
or government orders. In both cases, this may
lead them to stie potentially legitimate forms of
expression. This raises the question whether end
users can legally contest SNS removal decisions,
and protect themselves from such interference. To
what extent are social network services required
to observe free speech principles under the EU
legal framework when removing end-user content
from their services? Does this level of protection
guarantee the effective exercise of the right to
freedom of expression in practice?
3 This article will start by examining the Terms of Use
and content moderation policies of two major SNSs,
Facebook and Twitter, in order to illustrate their
handling of user-generated content and to examine
whether end users can rely on contractual grounds
to contest content removal. This will be followed by
a review of European Convention on Human Rights
(ECHR) case law regarding positive obligations
to protect free speech and their application to
intermediary content removal. Subsequently, it
will review the EU’s legal framework, focusing on
its e-Commerce regime and the Court of Justice
of the European Union’s Charter-based case law.
Finding that neither framework is likely to provide
a defence against voluntary removal decisions by the
SNS (as opposed to injunction-based measures), this
article will explore the potential for abuse of this
competence. Firstly, it will detail how EU governments
have attempted to inuence SNS content policies in
the context of anti-terrorism efforts, allowing for the
indirect exercise of state power and a ‘privatisation’
of censorship. Secondly, it will be argued that, even
absent state interference, the commercial incentives
of SNSs and their responsibilities towards end users
and third parties do not guarantee the observance
of free speech principles. In light of the blurring of
public and private functions in the regulation of
expression via social networks, this article calls for
the increased accountability of the SNS towards end
users as a means to protect online speech.
4 Depending on one’s definition, the term ‘social
network service’ can apply to a broad range of online
services, from dating websites such as eHarmony to
video-hosting platforms such as YouTube. This article
will focus on Twitter and Facebook as illustrations of
this broader category, due to their unique popularity
and global reach. Twitter currently serves over 250
million users and Facebook over 1 billion.6 These
numbers are rivalled only by the Chinese ‘Weibo’
and the Russian ‘VKontakte’, which, relatively
speaking, do not reach a major audience outside
their country of origin.7 Furthermore, Facebook and
Twitter are not dedicated to one particular format
or topic and often include highly political forms
of discussion (in comparison to, say, eHarmony’s
online dating community or LinkedIn’s professional
networking model). Therefore, as gatekeepers to
online political discourse, these two websites are
especially deserving of scrutiny regarding the level
of free speech protection provided to their users.8
B. SNS Terms of Use
5 Specific rules on content removal can be found in
social network Terms of Use (ToU), which govern
the contractual relationship between end users and
the service provider. Before delving into the general
constraints imposed by fundamental rights and other
public law sources, it is therefore worth examining
the level of protection that has resulted from these
private agreements. Contractual assurances can set
conditions for content removal, and also contribute
to its foreseeability by informing end users of their
rights and responsibilities relating to their content.
This section will examine the Facebook and Twitter
ToU, assessing their level of protection against the
removal of user-generated content (‘blocking’ or
‘removing’) and termination or suspension of service
(‘banning’).
6 Article 5 of the Facebook ToU, titled ‘Protecting
Other People’s Rights’, starts with the following
paragraphs:
• You will not post content or take any action on Facebook that infringes or violates someone else’s rights or otherwise violates the law.
• We can remove any content or information you post on Facebook if we believe that it violates this Statement or our policies.9
7 Removal is thus permitted for those content
categories prohibited by Facebook’s policy, and
for any content that infringes individual rights or
violates the law. All forms of illegal content, such
as criminal hate speech, child pornography or
copyright infringement, are removable, but this
also goes for breaches of Facebook’s terms, which
outline a large number of additional prohibitions.
These include bullying, harassment, intimidation,
and nudity, as well as the use of Facebook ‘to
do anything unlawful, misleading, malicious or
discriminatory’.10 Grounds for removal thus reach
far further than the requirements of the law, and
include some rather vague terms. What constitutes
a misleading or malicious post is highly subjective,
and could vindicate the removal of a broad range
of content. To make matters worse, the Article
5 removal competence also covers cases where
