Partial average cross-weight evaluation for ABC inventory classification

Giannis Karagiannis
Department of Economics, University of Macedonia, 156 Egnatia Str., Thessaloniki 54006, Greece
E-mail: karagian@uom.edu.gr

International Transactions in Operational Research 28 (2021) 1526–1549
DOI: http://doi.org/10.1111/itor.12594
Received 27 October 2017; received in revised form 27 July 2018; accepted 20 August 2018; published 1 May 2021
Abstract
In this paper, we propose an alternative overall measure, inspired by the notion of average cross-efficiency, which summarizes achievements across different descending ordering schemes regarding the relative importance of the considered indicators in the Ng model. The proposed overall measure is equal to the arithmetic average of the maximum partial averages across all possible descending ordering schemes. It can also be obtained using the average (across ordering schemes) of the estimated multipliers. We apply the proposed measure to the ABC inventory classification problem, and we compare our results with those of four information-theory-based methods that may be used for the same purpose, namely the Shannon entropy, distance-based, weighted least-square dissimilarity, and maximizing deviation methods.
Keywords: composite indicators; descending ordering schemes; overall measure; ABC inventory classification
1. Introduction
In the performance evaluation literature, the performance of individual decision-making units (DMUs) may be evaluated under three appraisal schemes: self-appraisal, peer appraisal, or preference appraisal. Self-appraisal refers to the case in which each evaluated DMU is allowed to choose its own “value system,” by means of the weights attached to each performance indicator, in order to show itself in the best possible light relative to the other DMUs included in the assessment.1 Each evaluated DMU can exaggerate its own advantages and at the same time downplay its own weaknesses in order to obtain the maximal possible evaluation score. Data envelopment analysis (DEA), introduced by Charnes et al. (1978), is the main operations research tool for conducting self-appraisal performance evaluation. On the other hand, peer appraisal gives every DMU the right to have a “say” about the evaluation of the other DMUs. In particular, each DMU takes into account the “value systems” of all evaluated units (including itself) in assessing its own performance. Since each “value system” results in a different evaluation score, performance is gauged by the average of the efficiency scores obtained using all DMUs’ self-appraisal weights (see Sexton et al., 1986; Doyle and Green, 1994), which is called the average cross-efficiency. Lastly, in the case of preference appraisal, a priori information provided by experts, stakeholders, or policy makers is incorporated into the evaluation process by means of a predetermined “value system.” Their preferences about the importance of the considered performance indicators are reflected in a set of restrictions that reduce the weight flexibility of self-appraisal DEA (Dyson et al., 2001; Angulo-Meza and Lins, 2002). These restrictions may either take the form of numerical limits on the weights of the considered performance indicators or provide a ranking of their relative importance, with the latter having the advantage of being simple and intuitive (Joro and Viitala, 2004).

1 In some cases, performance indicators take the form of inputs and outputs, as in productive efficiency analysis, while in other cases they reflect different aspects of performance, as in the construction of composite indicators.
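For reference, the peer-appraisal logic can be written down in the standard notation of the cross-efficiency literature (Sexton et al., 1986; Doyle and Green, 1994); the equations below are offered as a reading aid in our notation, not as formulas reproduced from this paper. With $y_{rj}$ and $x_{ij}$ denoting the outputs and inputs of DMU $j$, and $u_{rd}$, $v_{id}$ the multipliers chosen by DMU $d$ in its own self-appraisal DEA problem, the cross-efficiency of DMU $j$ rated under DMU $d$'s “value system” and the resulting average cross-efficiency are
\[
E_{dj} = \frac{\sum_{r} u_{rd}\, y_{rj}}{\sum_{i} v_{id}\, x_{ij}},
\qquad
\bar{E}_{j} = \frac{1}{n} \sum_{d=1}^{n} E_{dj},
\]
so that $E_{jj}$ is the usual self-appraisal DEA score and $\bar{E}_{j}$ averages DMU $j$'s performance over the “value systems” of all $n$ DMUs.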
Sometimes, however, experts, stakeholders, or policy makers cannot reach a consensus about the relative importance of the considered performance indicators. This becomes an issue insofar as it affects the estimated scores and/or the final ranking of the DMUs. The choice of a particular ordering scheme then becomes difficult and debatable, especially if there is no a priori reason to weight the opinion of one expert, stakeholder, or policy maker more than another's. In such cases, one may either use all acceptable ordering schemes and examine the extent of the changes (if any) in the final ranking of the evaluated DMUs or, alternatively, try to obtain an overall performance measure summarizing achievements across the different ordering schemes. A similar situation may arise on a different occasion: for example, when an analyst conducts a sensitivity analysis on the resulting efficiency scores by examining the entire set of possible ordering schemes regarding the importance of the considered performance indicators. Then, too, it may be useful to end up with a single metric that reflects performance under all the different norms or “value systems.”
The problem of deriving such an overall or synthetic performance measure has so far been handled by information theory methods. In particular, Fu et al. (2015) and Zheng et al. (2017) used Shannon entropy to aggregate a DMU's evaluation scores obtained under alternative ordering schemes regarding the importance of the considered performance indicators, while for the same purpose Fu et al. (2016) and Cao et al. (2016) employed a distance-based method, and Wu et al. (2018) the weighted least-square dissimilarity method of Wang and Wang (2013).2 These are purely data-driven methods that try to exploit the information in the data itself. For example, the entropy method rates higher, for the purposes of overall performance, the preference-appraisal evaluation scores with relatively larger variation across DMUs and consequently assigns them a higher aggregation weight. If an ordering scheme results in evaluation scores with almost no variation across DMUs, its aggregation weight tends to 0. On the other hand, the distance-based method rates higher the preference-appraisal evaluation scores with smaller deviations from the mean and, as a result, assigns them a higher aggregation weight.
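The entropy-based aggregation just described is easy to make concrete. The following is a minimal sketch, assuming the preference-appraisal scores are arranged in an n-by-K matrix (DMUs by ordering schemes) of positive values; the function name and the example data are hypothetical, and the normalization details follow the generic entropy-weight recipe rather than the exact procedure of the cited papers.

```python
import numpy as np

def entropy_aggregation_weights(scores: np.ndarray) -> np.ndarray:
    """Aggregation weights for the columns of an (n_dmus x n_schemes) matrix.

    Sketch of the Shannon-entropy weighting idea: columns whose scores vary
    more across DMUs have lower (normalized) entropy and therefore receive
    a larger aggregation weight.  Assumes strictly positive scores.
    """
    n_dmus, _ = scores.shape
    # Column-wise shares; a small epsilon guards against log(0).
    p = scores / scores.sum(axis=0, keepdims=True)
    eps = 1e-12
    # Normalized Shannon entropy of each ordering scheme's score column.
    e = -(p * np.log(p + eps)).sum(axis=0) / np.log(n_dmus)
    # Degree of diversification 1 - e; rescale so the weights sum to one.
    d = 1.0 - e
    return d / d.sum()

# Hypothetical example: 4 DMUs evaluated under 3 ordering schemes.
scores = np.array([[0.90, 0.75, 0.60],
                   [0.80, 0.74, 0.61],
                   [0.50, 0.76, 0.59],
                   [0.30, 0.75, 0.60]])
w = entropy_aggregation_weights(scores)
overall = scores @ w  # entropy-weighted overall score per DMU
print(w, overall)
```

In this example, the second and third columns are nearly constant across DMUs, so their entropy is close to its maximum and their weights are near 0, which matches the limiting behavior described above.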
The objective of this paper is to propose an alternative overall or synthetic performance measure rooted in performance evaluation rather than in information theory. In particular, the proposed measure is inspired by the notion of average cross-efficiency. The difference, however, is that in the preference-appraisal case we consider all possible ordering schemes as reflecting the different “value systems,” whereas in peer appraisal the different “value systems” are determined by the entire set of DMUs.
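Based on the abstract's description (the full derivation lies beyond this excerpt), the proposed measure can be sketched as follows; the notation is ours, offered as a reading aid rather than the paper's own exposition. Let $y_{ij}$ be the normalized value of indicator $j$ for DMU (inventory item) $i$, and let $\pi$ be a descending ordering scheme over the $J$ indicators. The maximum partial average of the Ng model under $\pi$, and the proposed overall measure obtained by averaging it over the set $\Pi$ of all admissible descending ordering schemes, are
\[
s_{i}^{\pi} = \max_{1 \le k \le J} \frac{1}{k} \sum_{m=1}^{k} y_{i,\pi(m)},
\qquad
\bar{s}_{i} = \frac{1}{|\Pi|} \sum_{\pi \in \Pi} s_{i}^{\pi}.
\]
The abstract further notes that $\bar{s}_{i}$ can equivalently be computed from the average (across ordering schemes) of the estimated multipliers.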
2 The distance-based method has been employed previously by Wu et al. (2012) for aggregating cross-efficiencies. It may be considered as a modification of the Wang and Wang (2013) weighted least-square deviation approach.
