Against the Dehumanisation of Decision-Making
Algorithmic Decisions at the Crossroads of Intellectual Property, Data Protection, and Freedom of Information

by Guido Noto La Diega*
© 2018 Guido Noto La Diega

Everybody may disseminate this article by electronic means and make it available for download under the terms and conditions of the Digital Peer Publishing Licence (DPPL). A copy of the license text may be obtained at http://nbn-resolving.de/urn:nbn:de:0009-dppl-v3-en8.

Recommended citation: Guido Noto La Diega, Against the Dehumanisation of Decision-Making – Algorithmic Decisions at the Crossroads of Intellectual Property, Data Protection, and Freedom of Information, 9 (2018) JIPITEC 3 para 1.
Abstract: This work presents ten arguments against algorithmic decision-making. These revolve around the concepts of ubiquitous discretionary interpretation, holistic intuition, algorithmic bias, the three black boxes, psychology of conformity, power of sanctions, civilising force of hypocrisy, pluralism, empathy, and technocracy. Nowadays algorithms can decide if one can get a loan, is allowed to cross a border, or must go to prison. Artificial intelligence techniques (natural language processing and machine learning in the first place) enable private and public decision-makers to analyse big data in order to build profiles, which are used to make decisions in an automated way. The lack of transparency of the algorithmic decision-making process does not stem merely from the characteristics of the relevant techniques used, which can make it impossible to access the rationale of the decision. It depends also on the abuse of and overlap between intellectual property rights (the “legal black box”). In the US, nearly half a million patented inventions concern algorithms; more than 67% of the algorithm-related patents were issued over the last ten years and the trend is increasing. To counter the increased monopolisation of algorithms by means of intellectual property rights (with trade secrets leading the way), this paper presents three legal routes that enable citizens to ‘open’ the algorithms. First, copyright and patent exceptions, as well as trade secrets, are discussed. Second, the EU General Data Protection Regulation is critically assessed. In principle, data controllers are not allowed to use algorithms to take decisions that have legal effects on the data subject’s life or similarly significantly affect them. However, when they are allowed to do so, the data subject still has the right to obtain human intervention, to express their point of view, as well as to contest the decision. Additionally, the data controller shall provide meaningful information about the logic involved in the algorithmic decision. Third, this paper critically analyses the first known case of a court using the access right under the freedom of information regime to grant an injunction to release the source code of the computer program that implements an algorithm. Only an integrated approach – which takes into account intellectual property, data protection, and freedom of information – may provide the citizen affected by an algorithmic decision with an effective remedy, as required by the Charter of Fundamental Rights of the EU and the European Convention on Human Rights.

Keywords: Algorithmic decision-making; algorithmic bias; right not to be subject to an algorithmic decision; GDPR; software copyright exceptions; patent infringement defences; freedom of information request; algorithmic transparency; algorithmic accountability; algorithmic governance; Data Protection Act 2018
A. Context and scope of the research
1 This work argues that algorithms cannot and should not replace human beings in decision-making, but it takes account of the increase of algorithmic decisions and, accordingly, it presents three European legal routes available to those affected by such decisions.
2 Algorithms have been used in the legal domain for decades, for instance in order to analyse legislation.1 These processes or sets of rules followed in calculations or other problem-solving operations raised limited concerns when they merely made our lives easier by ensuring that search engines showed us only relevant results.2 However, nowadays algorithms can decide if one can get a loan,3 is hired,4 is allowed to cross a border,5 or must go to prison.6
Particularly striking is the case of a young man sentenced in Wisconsin to six years’ imprisonment for merely attempting to flee a traffic officer and operating a vehicle without its owner’s consent. The reason for such a harsh sanction was that Compas, an algorithmic risk assessment system, concluded that he was a threat to the community. The proprietary nature of the algorithm did not allow the defendant to challenge the Compas report. The Wisconsin Supreme Court found no violation of the right to due process.7
* Lecturer in Law (Northumbria University); Director (Ital-IoT Centre for Multidisciplinary Research on the Internet of Things); Fellow (Nexa Center for Internet & Society).
1 William Adam Wilson, ‘The Complexity of Statutes’ (1974)
37 Mod L Rev 497.
2 The algorithm used by Google to rank search results is
covered by a trade secret.
3 More generally, on the use of algorithms to determine the
parties’ contractual obligations, see Lauren Henry Scholz,
‘Algorithmic Contracts’ (SSRN, 1 October 2016), <https://
ssrn.com/abstract=2747701> accessed 1 March 2018.
4 On the negative spirals that automated scoring systems
can create, to the point of making people unemployable,
see Danielle Keats Citron and Frank Pasquale, ‘The scored
society: Due process for automated predictions’ (2014) 89(1)
Washington Law Review 1, 33.
5 Jose Sanchez del Rio et al., ‘Automated border control e-gates and facial recognition systems’ (2016) 62 Computers & Security 49.
6 As written by Frank Pasquale, ‘Secret algorithms threaten the rule of law’ (MIT Technology Review, 1 June 2017) <https://www.technologyreview.com/s/608011/secret-algorithms-threaten-the-rule-of-law/> accessed 1 March 2018, imprisoning people “because of the inexplicable, unchallengeable judgements of a secret computer program undermines our legal system”. For a $10 million lawsuit related to face-matching technology that allegedly ruined an American man’s life see Allee Manning, ‘A False Facial Recognition Match Cost This Man Everything’ (Vocativ, 1 May 2017) <http://www.vocativ.com/418052/false-facial-recognition-cost-denver-steve-talley-everything/> accessed 1 March 2018.
7 State v Loomis, 881 N.W.2d 749 (Wis. 2016). Cf Adam Liptak, ‘Sent to Prison by a Software Program’s Secret Algorithms’ (New York Times, 1 May 2017) <https://www.nytimes.com/2017/05/01/us/politics/sent-to-prison-by-a-software-programs-secret-algorithms.html?_r=0> accessed 1 March 2018.
3 Articial intelligence techniques (natural language
processing, machine learning, etc.) and predictive
analytics enable private and public decision-makers
to extract value from big data8 and to build proles,
which are used to make decisions in an automated
way. The accuracy of the proles is further enhanced
by the linking capabilities of the Internet of Things.9
These decisions may profoundly affect people’s
lives in terms of, for instance, discrimination, de-
individualisation, information asymmetries, and
social segregation.10
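A minimal sketch may make these mechanics concrete. The following Python fragment is purely illustrative (the dataset, feature names, and 0.5 threshold are hypothetical assumptions, not drawn from any system discussed in this paper): a model trained on historical records produces profile-based automated decisions, and the person affected receives only the outcome, not an inspectable rule.

```python
# Purely illustrative sketch of profile-based automated decision-making;
# the data, features, and threshold are hypothetical assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical records: [income, age, prior_defaults]
# with past outcomes (1 = repaid, 0 = defaulted).
X_history = np.array([
    [52000, 41, 0],
    [18000, 23, 2],
    [75000, 35, 0],
    [21000, 29, 1],
])
y_history = np.array([1, 0, 1, 0])

# The model distils the historical data into a statistical profile of
# "good" and "bad" risk; its internal logic is an ensemble of learned
# decision trees rather than an inspectable legal rule.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_history, y_history)

def automated_loan_decision(applicant):
    """Solely automated decision: the applicant sees only the outcome."""
    p_repay = model.predict_proba([applicant])[0][1]  # estimated P(repay)
    return "approve" if p_repay >= 0.5 else "deny"

print(automated_loan_decision([24000, 27, 1]))  # prints the bare outcome
```

Even the deployer may be unable to state why a given applicant falls on one side of the threshold, because the "logic" is spread across hundreds of learned trees; this technical opacity is distinct from, and compounded by, the legal black box of intellectual property discussed below.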
4 In light of the confusion as to the actual role of algorithms, it is worrying that in “the models of game theory, decision theory, artificial intelligence, and military strategy, the algorithmic rules of rationality replaced the self-critical judgments of reason.”11
5 One paper12 concluded by asking whether and how algorithms should be regulated. This work attempts to answer those questions with a focus on the existing rules on intellectual property, data protection, and freedom of information. In particular, it will be critically assessed whether “the tools currently available to policymakers, legislators, and courts [which] were developed to oversee human decision-makers (…) fail when applied to computers instead.”13
6 First, the paper presents ten arguments why algorithms cannot and should not replace human decision-makers. After this, three legal routes are presented.14 The General Data Protection Regulation (GDPR)15 bans solely automated decisions having legal effects on the data subject’s life “or similarly significantly affects him or her.”16 However, when such decisions are allowed, the data controller shall ensure the transparency of the decision, and give the data subject the rights to obtain human intervention, to express their point of view, as well as to contest the decision. Data protection is the most studied perspective, but invoking it by itself is a strategy that “is no longer viable.”17 Therefore, this paper approaches this issue by integrating data protection, intellectual property, and freedom of information.
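To make the structure of these safeguards concrete, the following minimal Python sketch models them as data and workflow. It is purely illustrative: the class, function names, threshold, and review flow are assumptions made for exposition, not a statement of what the GDPR requires in code.

```python
# Hypothetical sketch of Article 22-style safeguards wrapped around an
# automated decision; all names and workflow are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Decision:
    outcome: str                     # "approve" or "deny"
    logic_summary: str               # meaningful information about the logic
    has_legal_effect: bool
    human_reviewed: bool = False
    contested: bool = False
    subject_comments: list = field(default_factory=list)

def decide(score: float, has_legal_effect: bool) -> Decision:
    decision = Decision(
        outcome="approve" if score >= 0.5 else "deny",
        logic_summary=f"score {score:.2f} compared with a 0.50 threshold",
        has_legal_effect=has_legal_effect,
    )
    # A decision with legal or similarly significant effects should not
    # remain solely automated: route it to a human reviewer.
    if decision.has_legal_effect:
        decision.human_reviewed = True  # placeholder for an actual review
    return decision

def contest(decision: Decision, point_of_view: str) -> Decision:
    # The data subject can express their point of view and contest the
    # outcome, triggering human intervention.
    decision.subject_comments.append(point_of_view)
    decision.contested = True
    decision.human_reviewed = True
    return decision

d = contest(decide(0.42, has_legal_effect=True), "My income data is outdated.")
print(d.outcome, "|", d.logic_summary, "| reviewed:", d.human_reviewed)
```

The design point is structural: the outcome, a human-readable summary of the logic involved, and hooks for human intervention and contestation travel together with the decision record, instead of being reconstructed after a dispute arises.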
8 In analysing the algorithms used by social networks,
Yoan Hermstrüwer, ‘Contracting Around Privacy: The
(Behavioral) Law and Economics of Consent and Big Data’
(2017) 8(1) JIPITEC 12, observes that for these “algorithms to
allow good predictions about personal traits and behaviors,
the network operator needs two things: sound knowledge
about the social graph [describing the social ties between
users] and large amounts of data.”
9 Article 29 Working Party, ‘Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679’ (2017) 17/EN WP 251.
10 See Bart W. Schermer, ‘The limits of privacy in automated profiling and data mining’ (2011) 27 Computer Law & Security Review 45, 52, and Article 29 Working Party (n 9) 5.
11 Lorraine Daston, ‘How Reason Became Rationality’ (Max-
Planck-Institut für Wissenschaftsgeschichte, 2013) <https://
www.mpiwg-berlin.mpg.de/en/research/projects/DeptII_
Daston_Reason> accessed 1 March 2018.
12 Solon Barocas et al., ‘Governing Algorithms: A Provocation
Piece’ (SSRN, 4 April 2013) 9 <https://ssrn.com/
abstract=2245322> accessed 1 March 2018.
13 Joshua A. Kroll et al., ‘Accountable Algorithms’ (2017) 165 U
Pa L Rev. 633.
14 Other routes may be explored. In the US, Keats Citron (n 4) 33 suggested that the principles of due process may constitute a sufficient answer against algorithmic decisions (in particular, against automated scoring systems). The authors recommend that the Federal Trade Commission interrogate scoring systems under their unfairness authority.
7 As to the intellectual property route, some copyright and patent exceptions may allow access to a computer program implementing an algorithm, notwithstanding its proprietary nature.
8 In turn, when it comes to freedom of information, an Italian court stated that an algorithm is a digital administrative act and that, therefore, under the freedom of information regime, citizens have the right to access it.18
9 In terms of method, the main focus is desk-based research on EU laws and their UK and Italian implementations. The paper is both positive and normative. Whilst advocating against algorithmic decision-making, this research adopts a pragmatic approach whereby one should take into account that the replacement of human decision-makers with algorithms is already happening. Therefore, it is important to understand how to solve the relevant legal issues using existing laws. If algorithms are becoming “weapons of math destruction,”19 it is crucial that awareness is raised regarding the pervasiveness of algorithmic decision-making and that light is shed on the existing legal tools, in anticipation of better regulations and more responsible modellers. Without clarity on the nature of the phenomenon and the relevant legal tools, it is unlikely that citizens will trust algorithms.
a sufcient answer against algorithmic decisions (in
particular, against automated scoring systems). The authors
recommend that the Federal Trade Commission interrogate
scoring systems under their unfairness authority.
15 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC [2016] OJ L 119/1.
16 GDPR, art 22.
17 Schermer (n 10) 52.
18 TAR Lazio, chamber III bis, 22 March 2017, No 3769.
19 Cathy O’Neil, Weapons of Math Destruction: How Big Data
Increases Inequality and Threatens Democracy (Crown 2016).
B. Positive and normative arguments against algorithms as a replacement for human decision-makers
10 The first part of this section is dedicated to presenting the main reasons why algorithms cannot replace human decision-makers. The second part discusses the reasons why such a replacement is not desirable. The analysis is carried out with the judge as the model of a decision-maker.
I. The unfeasibility of the replacement
11 The untenability of the replacement is mainly related to the role and characteristics of legal interpretation. Algorithms could replace human decision-makers if interpretation were a straightforward mechanical operation of textual analysis, where the meaning is easily found by putting together the facts and the norms. This model of interpretation, which seems flawed, is accompanied by the conviction that there is a clear distinction, on the one hand, between interpretation and application and, on the other hand, between easy cases and hard cases. However, legal interpretation seems to have the opposite characteristics. Indeed, it is ubiquitous20 and its extreme complexity relates to several factors,21 such as the psychological (and not merely cognitive) nature of the process.22 This highlights
20 Given the features of legal interpretation in practice, the brocard in claris non fit interpretatio should be replaced by in claris fit interpretatio (cf Francesco Galgano, Tutto il rovescio del diritto (Giuffrè 2007) 100, who points out how the attempts to rule out legal interpretation by means of clear statutes (Carlo Ludovico Muratori) or through proposals to expressly prevent judges from interpreting the statutes (Pietro Verri) today would be laughable). Cf Vittorio Villa, Una teoria pragmaticamente orientata dell’interpretazione giuridica (Giappichelli 2012).
21 For instance, due to the intrinsic vagueness of legal language and because of the importance of general principles, one of the main tasks of judicial interpretation is striking a balance between conflicting interests, which shall be done on a case-by-case basis. However, some scholars believe that “balancing works with mathematical rules” (Pier Luigi M. Capotuorto, ‘Arithmetical Rules for the Judicial Balancing of Conflicts between Constitutional Principles: From the ‘Weight Formula’ to the Computer-Aided Judicial Decision’ (2007) 3(2) Rivista di Diritto, Economia e Gestione delle Nuove Tecnologie 171).
22 Richard A Posner, ‘The Role of the Judge in the Twenty-First
Century’ (2006) 86 B U L Rev 1049, 1060, believes that the
psychological component is dominant when it comes to the
sources of ideology, which plays a fundamental role in the
decisions of all judges. Works on the prediction of judicial
decisions usually focus on non-textual elements such as the
nature and the gravity of the crime or the preferred policy
position of each judge. See e.g. Benjamin E Lauderdale and
