Data Envelopment Analysis
Sunday, 23 June 2013
Performance of any decision-making unit (DMU) largely depends on how efficiently inputs are used in its production, marketing and distribution processes. As the resources at its disposal are limited and have competing uses, they must be applied optimally to enhance productivity, efficiency and profitability. To survive in today’s competitive environment, a DMU has to improve its performance not only relative to its own past performance but also relative to its competitors in the industry. In this context, inter-firm comparison becomes vital for identifying the best practices of efficient firms in resource utilization and applying them to improve the efficiency of relatively less efficient firms.
To identify the extent to which a firm produces output efficiently and cost-effectively, its economic efficiency is estimated. Economic efficiency is the product of two efficiencies: technical efficiency and allocative efficiency.
Technical efficiency refers to ‘the
firm’s ability to produce the maximum possible output from a given combination
of inputs and technology, regardless of market demand and prices’.
Allocative efficiency refers to the firm’s ability to use the inputs in optimal
proportion, given their respective prices. Classical production theory assumes that, given the level of technology, the production function shows the maximum quantity of output that a firm can produce with a given set of inputs. This means that the firm produces output with 100 per cent technical efficiency.
However, in reality, a firm’s realised output may be below the potential
output. Hence, measuring an individual firm’s technical efficiency becomes essential to know the extent to which the firm’s actual output deviates from its potential output. The two most popular approaches to estimating technical efficiency are Data Envelopment Analysis (DEA) and Stochastic Production Frontier (SPF) Analysis. In this lecture, the DEA approach is discussed in detail and the efficiency-estimation procedure is demonstrated using DEA software.
Genesis of DEA
Farrell
(1957) laid the foundation for new approaches to efficiency and productivity
analysis at the micro level, involving new insights on two issues: how to
define efficiency and productivity, and how to calculate the benchmark
technology and the efficiency measures. He showed how to define economic
efficiency and how to decompose it into its technical and allocative
components. He defined technical efficiency as the ratio of observed output to the maximum potential output that can be attained from given inputs. If a firm’s actual output is below the potential output, the shortfall is regarded
as an indicator of inefficiency. Allocative efficiency (AE) of a firm is
defined as the ratio of minimum cost to the actual cost. It refers to the
firm’s ability to use the inputs in optimal proportion, given the prices of
inputs.
Farrell’s paper gave birth to two approaches to efficiency measurement: the deterministic frontier approach and the stochastic frontier approach (SFA). Deterministic frontiers are parametric as well as non-parametric. Aigner and Chu (1968), Afriat (1972), Richmond (1974), and Schmidt (1976) developed parametric deterministic models, while Charnes, Cooper and Rhodes (1978) evolved a non-parametric deterministic approach, popularly known as Data Envelopment Analysis (DEA), which was extended by Banker, Charnes, and Cooper (1984). SFA was developed independently by Aigner, Lovell and Schmidt (1977) and Meeusen and van den Broeck (1977) and later extended by Jondrow, Lovell, Materov, and Schmidt (1982) and Battese and Coelli (1992; 1995). Both DEA and SFA are applied by researchers to measure the technical efficiency of decision-making units (DMUs) using cross-sectional as well as panel data. Earlier, economists usually preferred econometric methods for measuring efficiency. In the 1990s, many of them also started using DEA because of its ability to handle multiple inputs and outputs and its suitability for studying the performance of DMUs in both the manufacturing and service sectors.
STOCHASTIC FRONTIER ANALYSIS
The deterministic frontier approach does not incorporate measurement errors and other noise: all deviations from the frontier are assumed to be the result of technical inefficiency. The stochastic frontier production function (SFPF), by contrast, accommodates exogenous shocks. It specifies the error term as being made up of two components: a symmetric component, which permits random variation of the frontier across firms and captures the effects of measurement error, other statistical noise, and random shocks outside the DMU’s control; and a one-sided component, which captures the effects of inefficiency relative to the stochastic frontier.
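As an illustration, the composed-error specification described above can be written as follows (a standard textbook form; the half-normal distribution for the inefficiency term is one common assumption, not the only choice):

$$\ln y_i = f(x_i;\beta) + v_i - u_i, \qquad v_i \sim N(0,\sigma_v^2), \qquad u_i \sim \left|N(0,\sigma_u^2)\right|,$$

where $v_i$ is the symmetric noise component, $u_i \ge 0$ is the one-sided inefficiency component, and the technical efficiency of firm $i$ is obtained as $TE_i = \exp(-u_i)$.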
DATA ENVELOPMENT ANALYSIS
DEA is a linear programming (LP) based multi-factor productivity analysis model for measuring the relative efficiency of a homogeneous set of DMUs. It optimises on each individual observation with the objective of calculating a discrete piecewise frontier determined by the set of Pareto-efficient DMUs. It does not require any specific assumptions about the functional form. It calculates a maximal performance measure for each DMU relative to all other DMUs in the observed population, with the sole requirement that each DMU lie on or below the extremal frontier. Each DMU not on the frontier is scaled down against a convex combination of the DMUs on the frontier facet closest to it (Charnes, et al. 1994).
There is
an increasing concern with measuring and comparing the efficiency of
organisational units such as local authority departments, schools, hospitals,
shops, bank branches and similar instances where there is a relatively homogeneous
set of units.
The usual
measure of efficiency, i.e.:
Efficiency = Output/Input
is
often inadequate due to the existence of multiple inputs and outputs related to
different resources, activities and environmental factors. DEA methodology was developed to solve this problem. The technique is quite useful for measuring the efficiency of service-sector DMUs, especially government organizations providing public goods.
There are two basic DEA models: the CCR model, developed by Charnes, Cooper and Rhodes in 1978, and the BCC model, developed by Banker, Charnes, and Cooper in 1984. The CCR model generalises the single output/input ratio measure of efficiency for a single DMU through a fractional linear programming (FLP) formulation, transforming the multiple output/input characteristics of each DMU into a single “virtual” output and “virtual” input. The model defines the relative efficiency of any DMU as a weighted sum of outputs divided by a weighted sum of inputs, where all efficiency scores are restricted to lie between zero and one. An efficiency score less than one means that a linear combination of other units from the sample could produce the same vector of outputs using a smaller vector of inputs. The score reflects the radial distance from the estimated production frontier to the DMU under consideration. The variables in the model are the input-output weights, and the LP solution produces the weights most favourable to the unit under reference. In order to calculate efficiency scores, the FLP is converted into an LP by normalising either the numerator or the denominator of the fractional programming objective function. In the output-maximization DEA program, the weighted sum of inputs is constrained to be unity so as to maximize the weighted sum of outputs, while in the input-minimization DEA program, the weighted sum of outputs is constrained to be unity so as to minimize the weighted sum of inputs. The CCR model is based on the constant returns to scale (CRS) assumption. Under this assumption, if the input levels of a feasible input-output correspondence are scaled up or down, then another feasible input-output correspondence is obtained in which the output levels are scaled by the same factor as the input levels (Thanassoulis, 2001).
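In standard notation (a sketch of the output-maximization form described above, for DMU $o$ among $j = 1, \dots, n$ DMUs with inputs $x_{ij}$ and outputs $y_{rj}$), the LP obtained from the fractional program is:

$$\max_{u,v}\; h_o = \sum_{r=1}^{s} u_r y_{ro} \quad \text{subject to} \quad \sum_{i=1}^{m} v_i x_{io} = 1, \qquad \sum_{r=1}^{s} u_r y_{rj} - \sum_{i=1}^{m} v_i x_{ij} \le 0 \;\; (j = 1, \dots, n), \qquad u_r, v_i \ge 0.$$

Here $u_r$ and $v_i$ are the output and input weights chosen most favourably for DMU $o$; the normalisation $\sum_i v_i x_{io} = 1$ is exactly what converts the fractional ratio into a linear objective.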
Another version of DEA was given by Banker, Charnes and Cooper (1984). The primary difference between the BCC and CCR models is the convexity constraint, which represents the returns to scale. The CCR model is based on the assumption that constant returns to scale exist at the efficient frontier, whereas the BCC model assumes a variable returns to scale (VRS) frontier. CCR efficiency is overall technical efficiency (OTE), known as global technical efficiency, whereas BCC efficiency is pure technical efficiency (PTE) net of the scale effect, known as local technical efficiency. If a DMU scores one on both CCR-efficiency and BCC-efficiency, it is operating at the most productive scale size (MPSS). If a DMU has a BCC-efficiency score of one and a CCR-efficiency score less than one, it is operating locally efficiently but not globally efficiently, due to its scale size. Thus, inefficiency in any DMU may be caused by the inefficient operation of the DMU itself (BCC-inefficiency) or by the disadvantageous conditions under which the DMU is operating (scale-inefficiency). Scale efficiency is estimated by dividing a DMU’s CCR-efficiency score by its BCC-efficiency score. Another technique based on DEA is the Malmquist Productivity Index (MPI), proposed by Caves, et al. in 1982. The MPI is defined with distance functions. For panel data, distance functions permit the description of multiple input-output production technologies without behavioural objectives such as profit maximisation or cost minimisation.
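The following is a minimal sketch (not a production implementation) of the input-oriented CCR and BCC envelopment models using scipy.optimize.linprog; the data matrices and the function name dea_efficiency are illustrative assumptions, not part of any DEA package:

```python
# Minimal sketch of input-oriented CCR (CRS) and BCC (VRS) envelopment
# models solved with scipy.optimize.linprog. Data and names are illustrative.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, vrs=False):
    """Radial input-oriented efficiency score for each DMU.

    X: (n_dmus, n_inputs) input matrix, Y: (n_dmus, n_outputs) output
    matrix. vrs=True adds the BCC convexity constraint sum(lambda) = 1.
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.r_[1.0, np.zeros(n)]                 # minimise theta
        # Input constraints: sum_j lambda_j x_ij - theta * x_io <= 0
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])
        # Output constraints: -sum_j lambda_j y_rj <= -y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[o]]
        A_eq, b_eq = None, None
        if vrs:                                     # BCC convexity constraint
            A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)
            b_eq = [1.0]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (n + 1))
        scores[o] = res.fun
    return scores

# Illustrative data: 5 DMUs using 2 inputs to produce 1 unit of output each
X = np.array([[4, 3], [7, 3], [8, 1], [4, 2], [2, 4]], dtype=float)
Y = np.ones((5, 1))

ccr = dea_efficiency(X, Y)             # overall technical efficiency (CRS)
bcc = dea_efficiency(X, Y, vrs=True)   # pure technical efficiency (VRS)
se = ccr / bcc                         # scale efficiency
print(np.round(np.c_[ccr, bcc, se], 3))
```

A DMU with a CCR score of one here lies on the globally efficient frontier; the ratio ccr/bcc isolates the scale component discussed above.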
GROWTH OF DEA APPROACH
Since the publication of the seminal paper of Charnes, et al. (1978), numerous research papers have been written on both the theoretical and applied aspects of the DEA approach. On the theoretical side, a number of DEA models and extensions have been developed. Weight restriction, non-discretionary inputs and outputs, categorical inputs and outputs, sensitivity analysis, input congestion, returns to scale, bad outputs, super-efficiency, target setting, etc. are the major aspects on which extensions of DEA models have been made. In parallel with the theoretical development, a wide range of empirical studies have also been published which evince the inexhaustible potential of DEA for innovative applications.
Originally,
DEA was applied to estimate the relative efficiency of non-profit organizations
such as educational institutions, government hospitals, public utilities, etc.
where market prices are not generally available. However, its ability to use multiple output and input variables without an a priori assumption about the underlying functional form has motivated researchers to extend it to for-profit organizations as well. Some of the areas where DEA has been applied frequently are banks, academic institutions, hospitals, public utilities such as gas, water and electricity supply, police services, transport services, agriculture, and industry. Moreover, the development of the DEA-based MPI for measuring total factor productivity growth and decomposing it into technical efficiency change and technical progress is a significant achievement in the field of productivity analysis.
Terminology of DEA
1. Benchmarking:
It is the process of
comparing the performance of an individual organization against a benchmark, or
ideal level of performance. Benchmarks can be set on the basis of performance
over time or across a sample of similar organizations or some externally set
standard.
2. Best Practices: Best practices refer to the set of management and work practices that result in the highest potential or optimal quantity and combination of outputs for a given quantity and combination of inputs (productivity) for a group of similar organisations.
3. Decision Making Unit (DMU): The term DMU was first used by Charnes, Cooper and Rhodes in 1978 in their seminal paper on DEA. A DMU is an individual production unit producing tangible or intangible output under private, cooperative, government or any other form of ownership. DMUs include manufacturing firms, banking and insurance companies, transport and communication firms, hospitals, schools and universities, other service-providing firms, government organizations, local governments, municipal corporations, etc. For measuring the relative performance of individual DMUs, the set of DMUs should share the same fundamental characteristics in terms of environment and technological constraints. If someone wants to assess the efficiency of educational institutions, the DMUs in the dataset should be homogeneous; for instance, schools cannot be compared with universities.
4. Economies of Scale: It refers to the cost advantage a firm obtains by increasing its size up to the point of minimum cost per unit of output.
5. Inefficiency: The amount by which a firm lies below the estimated frontier can be regarded as a measure of inefficiency. Under the given technology, if the actual output of a firm equals the potential output, the firm has no inefficiency in production.
6. Most Productive Scale Size (MPSS): It is the size at which a DMU obtains 100 per cent pure technical efficiency and scale efficiency. This is possible when a DMU attains an efficiency score of one under the constant returns to scale technology assumption.
7. Pareto Efficiency: A DMU is Pareto-efficient if it is not possible to reduce any one of its input levels without increasing at least one other input level and/or lowering at least one of its output levels.
8.
Peer: A peer is an efficient DMU which acts as a reference point (in terms of
input and output mix) for inefficient DMUs.
9. Productivity: It can be defined as the ratio of a measure of output to a measure of one or more of the inputs used to produce that output. There are two main concepts of productivity: partial (single) factor productivity and total (multiple) factor productivity. Partial factor productivity is a simple ratio of the volume of total output to the quantity of a single input. For instance, labour productivity is measured by dividing the total production of a firm by its total number of workers (or total hours of work). The partial factor productivity concept cannot capture the true performance of a resource. For instance, labour productivity in a firm can be raised either by improving the quality of human resources through training and retraining or simply by retrenching manpower and using a more capital- and technology-intensive production process. Therefore, the total factor productivity (TFP) index is measured to assess the overall productivity of a firm or industry. TFP is the ratio of a weighted sum of outputs to a weighted sum of inputs. A TFP index value greater than one indicates positive growth in productivity, a value less than one means negative growth, and a value equal to one means no growth in productivity. Various methods have been developed to compute TFP. In this study, we apply a non-parametric DEA-based method, known as the MPI, to measure TFP growth in sugar mills.
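For reference, the output-oriented MPI between periods $t$ and $t+1$ can be written in the geometric-mean form popularised by Färe et al. (1994); this is a standard sketch using distance functions $D^t(\cdot)$, not the notation of any particular study:

$$M_o = \left[ \frac{D^{t}\!\left(x^{t+1}, y^{t+1}\right)}{D^{t}\!\left(x^{t}, y^{t}\right)} \cdot \frac{D^{t+1}\!\left(x^{t+1}, y^{t+1}\right)}{D^{t+1}\!\left(x^{t}, y^{t}\right)} \right]^{1/2} = \underbrace{\frac{D^{t+1}\!\left(x^{t+1}, y^{t+1}\right)}{D^{t}\!\left(x^{t}, y^{t}\right)}}_{\text{efficiency change}} \times \underbrace{\left[ \frac{D^{t}\!\left(x^{t+1}, y^{t+1}\right)}{D^{t+1}\!\left(x^{t+1}, y^{t+1}\right)} \cdot \frac{D^{t}\!\left(x^{t}, y^{t}\right)}{D^{t+1}\!\left(x^{t}, y^{t}\right)} \right]^{1/2}}_{\text{technical change}}$$

A value of $M_o$ greater than one indicates TFP growth between the two periods, mirroring the interpretation of the TFP index above.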
10. Production Frontier: The production frontier gives the maximal output that can be achieved with a given amount of inputs.
11. Returns
to Scale: It refers to a measure of
change in output resulting from a change in the scale of a firm’s operation as
determined by its input usage. There are three types of returns to scale: increasing, constant and decreasing. When inputs are doubled and output more than doubles, there are increasing returns to scale.
If the output increases in the same proportion as inputs are increased,
it is constant returns to scale. Decreasing returns to scale exists when output
increases less than the proportional increase in the inputs.
12. Pure Technical Efficiency: It refers to the proportion of technical efficiency that is attributed to the efficient conversion of inputs into output; the effect of plant size on efficiency is neutralised in it. It is also known as managerial efficiency or local efficiency. It is estimated through the BCC DEA model, which is based on the variable returns to scale technology assumption. The pure technical efficiency score lies between zero and one.
13. Technical
Efficiency: Technical
efficiency refers to the firm’s ability to produce the maximum possible output
from a given combination of inputs and technology. In DEA, technical efficiency is determined by comparing the observed ratio of a DMU’s output(s) to input(s) with the ratio achieved by best-practice DMUs. It is, therefore, a relative technical efficiency, not an absolute one. Its value lies between zero and one. If a DMU is on the production frontier
and does not have any input or output slack, its technical efficiency score
will be equal to one. Technical efficiency can be decomposed into scale
efficiency and pure technical efficiency.
14. Scale Efficiency: The extent to which an organization can take advantage of returns to scale by altering its size towards the optimal scale. In DEA analysis, scale efficiency for a DMU is calculated by dividing the CCR efficiency score by the BCC efficiency score. As the BCC score is greater than or equal to the CCR score, the scale efficiency score lies between zero and one.
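A small worked example with illustrative numbers:

$$SE = \frac{TE_{CRS}}{TE_{VRS}} = \frac{0.72}{0.90} = 0.80,$$

indicating that, for a DMU with an overall (CCR) score of 0.72 and a pure technical (BCC) score of 0.90, the gap between the two scores is attributable to operating at a non-optimal scale.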
15. Slacks: Slacks in DEA refer to the extra quantity by which an input (output) can be reduced (increased) to obtain full technical efficiency after all inputs (outputs) have been radially adjusted to reach the production frontier.
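In the input-oriented envelopment form (as in the Python sketch above), the slacks for DMU $o$ at the optimal radial score $\theta^*$ and intensities $\lambda_j^*$ can be written in standard notation as

$$s_i^- = \theta^* x_{io} - \sum_{j=1}^{n} \lambda_j^* x_{ij}, \qquad s_r^+ = \sum_{j=1}^{n} \lambda_j^* y_{rj} - y_{ro},$$

and a DMU is fully (Pareto-Koopmans) efficient only when $\theta^* = 1$ and all slacks are zero.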
Advantages and Limitations of DEA
DEA methodology has several advantages over the traditional regression-based production function approach. A few of them are: it can handle multiple inputs and outputs; it does not require any assumption of a functional form relating inputs to outputs; DMUs are directly compared against a peer or a combination of peers; inputs and outputs in the model can have different units; it sets targets for inefficient DMUs to make them efficient and identifies slacks in inputs and outputs; and it estimates a single efficiency score for each DMU. The approach also has certain advantages over SFA. Apart from not imposing any functional form on production or technology, it makes minimal assumptions about the underlying technology. SFA can use only a single output variable, while DEA can use multiple output variables. In the stochastic frontier approach, the parameter estimates are sensitive to the choice of the probability distributions specified for the disturbance terms (Ray, Seon n.d.), whereas DEA requires no such distributional specification. However, DEA also has several limitations:
- Since DEA is an extreme point technique, noise such as measurement error can cause significant problems.
- In DEA, efficiency is defined relative to the efficiency of other firms under consideration. It is not an absolute measure.
- Since DEA is a nonparametric technique, statistical hypothesis testing is difficult.
- DEA scores are sensitive to input-output specification and the size of sample.
- Since no hypothesis testing is possible, data accuracy must be given priority.
- In order to discriminate sufficiently between DMUs, the sample size should be adequate: at least three times the total number of input and output variables. For instance, with four inputs and three outputs, at least 21 DMUs are needed.
- The most important exercise in DEA is the identification of input and output variables. Regression analysis can be conducted to identify the best-fitting output and input variables. Zero and negative values of any input or output should be avoided. The variables in the model should be as few as possible.
- Data scaling should be done before applying DEA so that input-output variables do not have excessively large values.