Guido Noto La Diega
A. Context and scope of the research
1 This work argues that algorithms cannot and should
not replace human beings in decision-making, but it
takes account of the increase in algorithmic decisions
and, accordingly, presents three European legal
routes available to those affected by such decisions.
2 Algorithms have been used in the legal domain for
decades, for instance in order to analyse legislation.1
These processes or sets of rules followed in
calculations or other problem-solving operations
raised limited concerns when they merely made our
lives easier by ensuring that search engines showed
us only relevant results.2 However, nowadays
algorithms can decide if one can get a loan,3 is hired,4
is allowed to cross a border,5 or must go to prison.6
Particularly striking is the episode concerning a
young man sentenced in Wisconsin to six years’
imprisonment for merely attempting to flee a traffic
officer and operating a vehicle without its owner’s
consent. The reason for such a harsh sanction was
that Compas, an algorithmic risk assessment system,
concluded that he was a threat to the community.
The proprietary nature of the algorithm did not
allow the defendant to challenge the Compas report.
The Supreme Court found no violation of the right
to due process.7
* Lecturer in Law (Northumbria University); Director (Ital-
IoT Centre for Multidisciplinary Research on the Internet of
Things); Fellow (Nexa Center for Internet & Society).
1 William Adam Wilson, ‘The Complexity of Statutes’ (1974)
37 Mod L Rev 497.
2 The algorithm used by Google to rank search results is
covered by a trade secret.
3 More generally, on the use of algorithms to determine the
parties’ contractual obligations, see Lauren Henry Scholz,
‘Algorithmic Contracts’ (SSRN, 1 October 2016), <https://
ssrn.com/abstract=2747701> accessed 1 March 2018.
4 On the negative spirals that automated scoring systems
can create, to the point of making people unemployable,
see Danielle Keats Citron and Frank Pasquale, ‘The scored
society: Due process for automated predictions’ (2014) 89(1)
Washington Law Review 1, 33.
5 Jose Sanchez del Rio et al., ‘Automated border control
e-gates and facial recognition systems’ (2016) 62 Computers
& Security 49.
6 As written by Frank Pasquale, ‘Secret algorithms threaten
the rule of law’ (MIT Technology Review, 1 June 2017)
algorithms-threaten-the-rule-of-law/> accessed 1 March
2018, imprisoning people “because of the inexplicable,
unchallengeable judgements of a secret computer program
undermines our legal system”. For a $10 million lawsuit
related to face-matching technology that allegedly ruined
an American man’s life see Allee Manning, ‘A False Facial
Recognition Match Cost This Man Everything’ (Vocativ,
1 May 2017) <http://www.vocativ.com/418052/false-
accessed 1 March 2018.
7 State v Loomis, 881 N.W.2d 749 (Wis. 2016). Cf Adam
Liptak, ‘Sent to Prison by a Software Program’s Secret
3 Artificial intelligence techniques (natural language
processing, machine learning, etc.) and predictive
analytics enable private and public decision-makers
to extract value from big data8 and to build profiles,
which are used to make decisions in an automated
way. The accuracy of the profiles is further enhanced
by the linking capabilities of the Internet of Things.9
These decisions may profoundly affect people’s
lives in terms of, for instance, discrimination, de-
individualisation, and information asymmetries.10
4 In light of the confusion as to the actual role of
algorithms, it is worrying that in “the models of game
theory, decision theory, artificial intelligence, and
military strategy, the algorithmic rules of rationality
replaced the self-critical judgments of reason.”11
5 One paper12 concluded by asking whether and how
algorithms should be regulated. This work attempts
to answer those questions with
a focus on the existing rules on intellectual property,
data protection, and freedom of information. In
particular, it will be critically assessed whether “the
tools currently available to policymakers, legislators,
and courts (which) were developed to oversee
human decision-makers (…) fail when applied to algorithms”.13
First, the paper presents ten arguments why
algorithms cannot and should not replace human
decision-makers. After this, three legal routes are
presented.14 The General Data Protection Regulation
Algorithms’ (New York Times, 1 May 2017), <https://www.
1 March 2018.
8 In analysing the algorithms used by social networks,
Yoan Hermstrüwer, ‘Contracting Around Privacy: The
(Behavioral) Law and Economics of Consent and Big Data’
(2017) 8(1) JIPITEC 12, observes that for these “algorithms to
allow good predictions about personal traits and behaviors,
the network operator needs two things: sound knowledge
about the social graph [describing the social ties between
users] and large amounts of data.”
9 Article 29 Working Party, ‘Guidelines on Automated
individual decision-making and Profiling for the purposes
of Regulation 2016/679’ (2017) 17/EN WP 251.
10 See Bart W. Schermer, ‘The limits of privacy in automated
profiling and data mining’ (2011) 27 Computer Law &
Security Review 45, 52, and Article 29 Working Party (n 9) 5.
11 Lorraine Daston, ‘How Reason Became Rationality’ (Max-
Planck-Institut für Wissenschaftsgeschichte, 2013) <https://
Daston_Reason> accessed 1 March 2018.
12 Solon Barocas et al., ‘Governing Algorithms: A Provocation
Piece’ (SSRN, 4 April 2013) 9 <https://ssrn.com/
abstract=2245322> accessed 1 March 2018.
13 Joshua A. Kroll et al., ‘Accountable Algorithms’ (2017) 165 U
Pa L Rev 633.
14 Other routes may be explored. In the US, Keats Citron (n 4) 33
suggested that the principles of due process may constitute