\documentclass[11pt,twoside]{article}
\usepackage{amsmath, amsthm, amscd, amsfonts, amssymb, graphicx, color}
\usepackage[bookmarksnumbered, colorlinks]{hyperref} \usepackage{float}
\usepackage{lipsum}
\usepackage{afterpage}
\usepackage[labelfont=bf]{caption}
\usepackage[nottoc,notlof,notlot]{tocbibind} 
%\renewcommand\bibname{References}
\def\bibname{\Large \bf  References}
\usepackage{fancyhdr}
\pagestyle{fancy}
\fancyhf{}
\renewcommand{\headrulewidth}{0pt}
\fancyhead[LE,RO]{\thepage}
\thispagestyle{empty}
%\afterpage{\lhead{new value}}

\fancyhead[CE]{J. Gerami} 
\fancyhead[CO]{Using the outer approximation algorithm for generating all efficient extreme points of DEA}



%\topmargin=-1.6cm
\textheight 17.5cm%
\textwidth  12cm %
\topmargin   8mm  %
\oddsidemargin   20mm   %
\evensidemargin   20mm   %
\footskip=24pt     %

\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{corollary}[theorem]{Corollary}
\theoremstyle{definition}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{xca}[theorem]{Exercise}
%\theoremstyle{remark}
\newtheorem{remark}[theorem]{Remark}
\renewenvironment{proof}{{\bfseries \noindent Proof.}}{~~~~$\square$}
\makeatletter
\def\th@newremark{\th@remark\thm@headfont{\bfseries}}
\makeatother



  
  
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% If you want to insert other packages. Insert them here
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%\long\def\symbolfootnote[#1]#2{\begingroup%
%\def\thefootnote{\fnsymbol{footnote}}\footnote[#1]{#2}\endgroup}



 \def \thesection{\arabic{section}}
 

\begin{document}
%\baselineskip 9mm
%\setcounter{page}{}
\thispagestyle{plain}
{\noindent Journal of Mathematical Extension \\
Vol. XX, No. XX, (2014), pp-pp (Will be inserted by layout editor)}\\
ISSN: 1735-8299\\
URL: http://www.ijmex.com\\
\vspace*{9mm}

\begin{center}

{\Large \bf 
Using the Outer Approximation Algorithm for Generating All Efficient Extreme Points of DEA \\}


\let\thefootnote\relax\footnote{\scriptsize Received: XXXX; Accepted: XXXX (Will be inserted by editor)}


{\bf   J. Gerami$^*$\let\thefootnote\relax\footnote{$^*$Corresponding Author}}\vspace*{-2mm}\\
\vspace{2mm} {\small  Department of Mathematics, Shiraz Branch, Islamic Azad University, Shiraz, Iran.} \vspace{2mm}

\end{center}

\vspace{4mm}


{\footnotesize
\begin{quotation}
{\noindent \bf Abstract.} Identifying the efficient extreme units of a production possibility set is an important problem in data envelopment analysis, since these observed, real units have the best performance. In this paper, we propose a multiple objective linear programming model whose feasible region is the production possibility set under the assumption of variable returns to scale and whose objective functions are the input and output variables. As the dimensions of the problem grow, the set of efficient points grows as well; consequently, applying multiple objective linear programming solution methods in the decision set leads to computational difficulties, and it is much easier to work in the outcome set instead of the decision set. In this research, we show that the efficient points of the outcome set of the proposed multiple objective linear programming problem correspond to the efficient extreme points in data envelopment analysis. An outer approximation algorithm is presented for generating all efficient extreme points in the outcome set. This algorithm also provides the equations of all efficient surfaces, and it requires few calculations to produce all the extreme points of the outcome set. Finally, we demonstrate the presented approach through numerical examples.
\end{quotation}
\begin{quotation}
\noindent{\bf AMS Subject Classification:} MSC code1; MSC code 2; more

\noindent{\bf Keywords and Phrases:} Data envelopment analysis, multiple objective linear programming, outer approximation. 
\end{quotation}}

\section{Introduction}

Data envelopment analysis (DEA), developed by Charnes et al. [12], has become one of the most widely used methods in operations research and management science. One reason for this success is that DEA is a task-oriented approach focused on an important task: evaluating the relative (technical) efficiency of comparable Decision Making Units (DMUs) that essentially perform the same task. Based on information about existing data on the performance of the units and some preliminary assumptions, the purpose of DEA is to empirically characterize the so-called efficient frontier (surface) spanned by the set of available DMUs and to project all DMUs onto this frontier. A DMU lying on the frontier is referred to as efficient; otherwise it is inefficient. Efficiency evaluation is based on the available data without taking the decision maker's (DM) preferences into account, so all efficient DMUs are considered equally ``good''. However, if the efficient units are not equally preferred by the decision makers, it is necessary to incorporate the decision maker's judgments or a priori knowledge into the analysis. A straightforward and widely used method has been to restrict the possible values of the multipliers in the so-called dual DEA models. Another approach is to gather, explicitly or implicitly, direct preference information about the desirable input and output values of DMUs, and to insert that information in one form or another into the analysis. DEA is a technique based on mathematical programming for evaluating the relative efficiency of a set of DMUs: the efficiency of each DMU is determined by the efficient frontier; the units on the frontier are deemed efficient, while the remaining units are considered inefficient. In fact, DEA sets up a production possibility set and takes as the efficient frontier the part of its boundary determined by the non-domination condition; see, for instance, De Witte and Marques [19].\\
For this approach, some ideas can be adopted from research carried out in the field of Multiple Criteria Decision Making (MCDM), especially in Multiple Objective Linear Programming (MOLP). In MCDM/MOLP, one of the key issues is to provide the decision maker with a tool that makes it possible to evaluate points lying on the efficient frontier. It has been shown that MOLP and DEA models have a similar structure; see, for instance, Hosseinzadeh Lotfi et al. [29]. Thus, theory and approaches developed in MOLP for evaluating solutions on the efficient frontier can also be applied in DEA.\\
It is important to note that, because the dimension of the outcome set is at most $m+s$ while the dimension of the decision set is $n+m+s-1$, generating all or a portion of the outcome set is expected, in general, to be less computationally demanding than generating all or portions of the decision set. The identification of DEA-efficient units under various DEA models is equivalent to the identification of the lowest-input and highest-output solutions within the production possibility set for the corresponding multi-objective programming problem: a DEA-efficient DMU corresponds to a Pareto efficient (non-dominated) solution. From this point of view, just as in multi-objective programming, the set of all efficient extreme points of variable returns to scale (VRS) models has significant value in the field of DEA; see, for instance, Benson [7], Rockafellar [40]. In this article, we use the outer approximation algorithm proposed by Benson [8] for generating the set of all efficient extreme points of DEA models with VRS; to do so, we generate all efficient extreme points of the outcome set of the corresponding MOLP problem. 
The organization of this paper is as follows. In Section 2, we present a literature review. In Section 3, we present the MOLP problem and its relation to DEA models with VRS, provide the theoretical foundation of the outer approximation procedure, and summarize some relevant results concerning efficient extreme points of the MOLP problem. Section 4 provides a detailed statement of the algorithm; additionally, a small example problem is solved for illustration purposes. Section 5 provides a computational experiment and statistical analysis. Some concluding remarks are given in the last section.
\label{intro} % It is advised to give each section and subsection a unique label.
\section{Literature review}
In recent years, there have been a number of studies discussing the relationship between DEA and MOLP models. In their article, Doyle and Green [20] showed that DEA is an MCDM method. Alene et al. [1] used MOLP problem-solving methods to incorporate the decision maker's a priori knowledge into DEA problems. Golany [26] presented a data envelopment analysis model with an MOLP structure and used interactive MOLP methods to solve it; the model helps the decision maker (DM) allocate a set of inputs, such as resources, on the efficiency frontier based on the level of outputs. Joro et al. [34] revealed that DEA problems have a structure similar to MOLP models; therefore, DEA models can be solved using the corresponding reference point models in MOLP.\\
Wong et al. [46] proposed an equivalence between DEA and MOLP models and demonstrated how to solve a DEA problem interactively, without any prior judgment, by transforming it into an MOLP formulation. Using interactive MOLP methods, they searched for the most preferred solution (MPS) points on the efficiency frontier, along with resource allocation and target setting according to the DM's a priori knowledge; they then used interactive approaches such as GDF, STEM and STOM to solve the model and, finally, compared the results. Yang et al. [47] demonstrated the use of interactive MOLP methods for target setting in DEA and illustrated the relationship between output-oriented DEA dual models and the formulation of maxmin preferred points in MOLP models; they used the interactive projected gradient approach to identify the efficient units. Malekmohammadi et al. [39] focused on target setting in DEA using MOLP problems; they extended the models presented by Yang et al. [47] to simultaneously reduce the final inputs and increase the final outputs, and showed that instead of solving $n$ models, the targets can be set according to the DM's preferences by solving only one model.\\
Hosseinzadeh et al. [28] evaluated the relationship between output-oriented dual models in DEA and MOLP models. In their study, they showed how a DEA model can be solved interactively by transforming a MOLP formula; in this regard, they used the Z-W approach to apply the DM’s a priori knowledge in the performance process. Ebrahimnejad et al. [21] proposed an interactive MOLP method to identify the target units in DEA models in the presence of undesirable outputs; they extended the relationship between BCC models and the reference point model in MOLP toward a simultaneous and interactive increase in desirable outputs and decrease in undesirable final outputs based on MOLP models.\\
The main purpose of MOLP problems is to find the set of efficient solutions. These are Pareto optimal solutions, at which no objective function can be improved without deteriorating at least one other objective. Among these solutions, the efficient extreme units are the most important ones; they are observed, real units, and their performance determines the performance of the other units in the system.\\
We may likewise search for solutions on the efficient frontier in DEA. Since the outcome set has a much simpler structure and smaller size than the decision set, a number of researchers have in recent years turned their attention to the mathematics and tools for generating all or portions of the efficient outcome set, rather than the efficient decision set, of the MOLP problem. See, for instance, Banker et al. [2], Banker et al. [3], Benson [4], Benson and Sayin [10], Dauer and Liu [17], Dauer and Saleh [18], Dauer and Gallagher [16], Dauer [14, 15].
Various methods have been presented for identification of these units, out of which we can mention the approaches proposed by Chon [13], Evans [22], Goicoechea et al. [25], Luc [38], Sawaragi et al. [41], Steuer [43], Yu [48] and Zeleny [51].
One of these approaches is the vector maximization method; see Kuhn and Tucker [37]. This method can be used to determine all efficient points in the decision set; see Benson [5], Isermann [31], Bitran [11], Villarreal and Karwan [45], Kostreva and Wiecek [36]. The problem with these methods for determining the efficient points in the feasible region (the decision set) is that they require too many calculations, and the proposed approaches fail to converge in many cases; moreover, when the dimensions, variables and constraints of the problem increase, the set of efficient points of the MOLP problem expands, and finding the efficient set becomes a difficult process. Since the outcome set has a simpler form and a smaller dimension than the decision set, it is easier to find the efficient points in the outcome set; see Steuer [44], Dauer and Liu [17], Dauer and Saleh [18], Benson [4, 6], Dauer [14, 15], Gallagher and Saleh [24], Dauer and Gallagher [16], Horst et al. [30].\\
Therefore, instead of directly solving the DEA models, we present a DEA model with an MOLP structure and use the outcome set of the MOLP model to specify the efficient extreme units. To find the set of efficient units in the outcome set, we can employ methods such as the outer approximation algorithm [8] and the weight set decomposition algorithm [9]. In this research, we make use of the outer approximation algorithm, a convergent algorithm that requires few calculations and is based on line searches and linear programming techniques. The following methods have been proposed for finding DEA efficient points using efficient surfaces in MOLP.\\
Jahanshahloo et al. [33] presented a method for finding the piecewise linear frontier of the production function in data envelopment analysis. Korhonen [35] introduced another method to search for the efficiency frontier in DEA. In another study, Jahanshahloo et al. [32] proposed an approach for finding strongly efficient hyperplanes of the production possibility set (PPS) in data envelopment analysis. Sayin [42] presented an algorithm for determining efficient faces in DEA. Hosseinzadeh et al. [27] proposed a new method for finding the set of efficient surfaces in DEA based on MOLP models; in this relation, they introduced a linear programming model that could find the efficient defining hyperplanes of the production possibility set.
The approach proposed in the present paper is new and distinct from previous approaches. Its advantage is that it determines all efficient extreme points of the production possibility set in the outcome set with few calculations.
\label{sec:2}

\section{Structural Similarities between MOLP and DEA}
Assume that we have $n$ observed decision-making units as $DMU_{j}, j=1,\ldots,n$,
where each $DMU$ consumes an $m$-vector input to produce an
$s$-vector output. Suppose that
$X^{j}=(x^{j}_{1},x^{j}_{2},\ldots,x^{j}_{m})^{t}$ and $
Y^{j}=(y^{j}_{1},y^{j}_{2},\ldots,y^{j}_{s})^{t}$ are the vectors
of inputs and outputs, respectively, for $DMU_{j}, j=1,\ldots,n$, in which it
has been assumed that $ X^{j}\geq 0$, $ X^{j}\neq0 $ and $
Y^{j}\geq 0$, $Y^{j}\neq0 $.  $\lambda_{j}$ is the reference weight for $DMU_{j}, j=1,\ldots,n.$  $X=(x_{1},x_{2},\ldots,x_{m})\in E^{m}$ represents the input variable vector.
 $Y=(y_{1},y_{2},\ldots,y_{s})\in E^{s}$  shows the output variable vector. We define the production possibility
set of data envelopment analysis with VRS as follows:
\\
$T_{v}=\{(X,Y)=(x_{1},\ldots,x_{m},y_{1},\ldots,y_{s})\mid\displaystyle\sum_{j=1}^{n}\lambda_{j}y_{r}^{j}\geq
y_{r}, r=1,\ldots,s$,
$\displaystyle\sum_{j=1}^{n}\lambda_{j}x^{j}_{i}\leq x_{i},
i=1,\ldots,m$, $\displaystyle\sum_{j=1}^{n}\lambda_{j}=1$,
$\lambda_{j}\geq 0$ , $j=1,\ldots,n \}$.\\
We must note that the $T_{v}$ set includes all input and output vectors $(X, Y)$ that satisfy the constraints of the set. 
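For illustration, consider hypothetical data with $m=s=1$, $n=2$, $DMU_{1}=(x^{1},y^{1})=(2,4)$ and $DMU_{2}=(x^{2},y^{2})=(4,6)$. Choosing $\lambda_{1}=\lambda_{2}=\frac{1}{2}$ shows that $(3,5)\in T_{v}$, and the inequality constraints (free disposability of inputs and outputs) then give $(x,y)\in T_{v}$ for every $x\geq 3$ and $y\leq 5$.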
\begin{definition}
$DMU_{o}=(X^{o},Y^{o}) \in T_{v}$ is called an efficient point if
and only if there is not an $(X,Y)\in T_{v}$ such that
$(X,-Y)^{t}\leq (X^{o},-Y^{o})^{t}$ and
$(X,-Y)^{t}\neq(X^{o},-Y^{o})^{t}.$
\end{definition}
\begin{definition}
$DMU_{o}=(X^{o},Y^{o}) \in T_{v}$ is called a weak efficient point
if and only if there is not an $(X,Y)\in T_{v}$ such that
$(X,-Y)^{t}<(X^{o},-Y^{o})^{t}$.
\end{definition}
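To illustrate the difference between the two definitions, consider the hypothetical data $m=s=1$, $DMU_{1}=(2,4)$, $DMU_{2}=(4,6)$. The point $(2,3)\in T_{v}$ is not an efficient point, since $(2,4)\in T_{v}$ produces more output from the same input; it is, however, a weak efficient point, because every $(x,y)\in T_{v}$ has $x\geq 2$, so no point of $T_{v}$ is strictly better in both components. The observed points $(2,4)$ and $(4,6)$ are efficient points.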
Consider the following MOLP problem
\begin{equation}\begin{array}{cc}
\hspace{-6cm}\min\quad C^{t}Z\\
\hspace{-1.8cm}s.t.\quad Z\in R=\{Z\mid AZ\leq b ,Z\geq 0 \}.\\
\end{array}
\end{equation}
where $C^{t}$ is a $p\times n$ matrix whose rows $C_{i}^{t}=(c_{i1},c_{i2},\ldots,c_{in})\in E^{n}$, $i=1,\ldots,p$, contain the coefficients of the $i$-th objective function of the MOLP problem, and $E^{n}$ denotes the $n$-dimensional Euclidean space.
$A$ is the technology matrix, containing the coefficients of all variables of problem (1); $A$ is an $m\times n$ matrix with
$n\geq m $ and rank($A$)=$m$, and $b=(b_{1},b_{2},\ldots,b_{m})\in
E^{m}$.  $Z=(z_{1},z_{2},\ldots,z_{n})^{T}\in E^{n}$
represents the vector of decision variables of the MOLP problem, and $R$ is the feasible region of the MOLP problem.
The Pareto solution and weak Pareto solution of (1) are defined as follows:
\begin{definition}
$\bar{Z} \in R$ is called a Pareto solution of $(1)$ if there does
not exist $Z \in R$ such that $C^{T}Z \leq C^{T} \bar{Z}$, $
C^{T}Z \neq C^{T} \bar{Z}$.
\end{definition}
\begin{definition}
$\bar{Z}\in R$ is called a weak Pareto solution of $(1)$ if there
does not exist $Z\in R$ so that $C^{t}Z<C^{t}\bar{Z}$.
\end{definition}
Put $Z=(X,Y,\lambda)=(x_{1},\ldots,x_{m},y_{1},\ldots,y_{s},\lambda_{1},\ldots,\lambda_{n})$,
$X\in E^{m}$,  $Y\in E^{s}$, $\lambda\in E^{n},$ 
$C_{j}^{T}=e_{j}^{T}$, $j=1,\ldots,m$, $C_{j}^{T}=-e_{j}^{T}$,
$j=m+1,\ldots,m+s$, where $e_{j}^{T}\in E^{m+s+n}$ is the vector whose
$j$th element is one and whose other elements are zero,
$C=(C_{1}^{T},C_{2}^{T},\ldots,C_{m+s}^{T})$, and\\
$R=\{(x_{1},\ldots,x_{m},y_{1},\ldots,y_{s},\lambda_{1},\ldots,\lambda_{n})
\mid\displaystyle\sum_{j=1}^{n}\lambda_{j}y_{r}^{j}\geq y_{r},
r=1,\ldots,s,$ 
$\displaystyle\sum_{j=1}^{n}\lambda_{j}x^{j}_{i}\leq x_{i},
i=1,\ldots,m,$ $\displaystyle\sum_{j=1}^{n}\lambda_{j}=1$,
$\lambda_{j}\geq 0$ , $j=1,\ldots,n \}$.\\
Then problem $(1)$ is converted to
\begin{equation}\begin{array}{cc}
\hspace{-1cm}\max\hspace{0.1cm}\{-x_{1},\ldots,-x_{m},y_{1},\ldots,y_{s}\}\\
\hspace{-0.9cm}s.t.\quad
\displaystyle\sum_{j=1}^{n}\lambda_{j}y^{j}_{r}\geq
y_{r},\hspace{0.5cm} r=1,\ldots,s, \\
\hspace{0.3cm}\displaystyle\sum_{j=1}^{n}\lambda_{j}x^{j}_{i}\leq
x_{i}, \hspace{0.5cm}
i=1,\ldots,m, \\
\hspace{-2.7cm}\displaystyle\sum_{j=1}^{n}\lambda_{j}=1,\\
\hspace{-1cm}\lambda_{j}\geq 0,\hspace{0.25cm}  j=1,\ldots,n,\\
\hspace{2.75cm} x_{i}\geq 0,\hspace{0.25cm}
i=1,\ldots,m,\hspace{0.2cm}
y_{r}\geq 0,\hspace{0.25cm} r=1,\ldots,s.\\
 \end{array}
\end{equation}
In model (2), the vector $(x_{1},\ldots,x_{m},y_{1},\ldots,y_{s})$ is the vector of input and output variables; we can obtain its values by solving model (2).
Note that $(X,Y,\lambda)$ is a feasible solution of problem (2), while $(-X,Y)$ is a vector belonging to the objective (outcome) space of problem (2).
By Definition 3.3, $(X^{\star},Y^{\star},\lambda^{\star})$ is called a Pareto solution of (2) if there does not exist a feasible $(X,Y,\lambda)$
such that $(-X^{\star},Y^{\star})\leq (-X,Y)$ and
$(-X^{\star},Y^{\star})\neq (-X,Y)$.\\\\
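For instance, for the hypothetical data $m=s=1$, $DMU_{1}=(2,4)$, $DMU_{2}=(4,6)$, problem (2) reads\\
$\max\{-x,y\}$ s.t. $4\lambda_{1}+6\lambda_{2}\geq y$, $2\lambda_{1}+4\lambda_{2}\leq x$, $\lambda_{1}+\lambda_{2}=1$, $\lambda_{1},\lambda_{2}\geq 0$, $x\geq 0$, $y\geq 0$,\\
and $(x,y,\lambda_{1},\lambda_{2})=(2,4,1,0)$ is a Pareto solution of this problem, since no feasible solution satisfies $(-x,y)\geq(-2,4)$ and $(-x,y)\neq(-2,4)$.\\\\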
{\large\bf Theorem 3.1.} Let $(X^{\star},Y^{\star})\in T_{v}$. Then\\
(i) $(X^{\star},Y^{\star},\lambda^{\star})$ is a Pareto solution of (2) if and only if $(X^{\star},Y^{\star})$ is an efficient unit in $T_{v}$.\\
(ii) $(X^{\star},Y^{\star},\lambda^{\star})$ is a weak Pareto solution of (2) if and only if $(X^{\star},Y^{\star})$ is a weak efficient unit in $T_{v}$.\\
{\bf Proof}. (i) Let $(X^{\star},Y^{\star},\lambda^{\star})$ be a Pareto
solution of $(2)$. We show that $(X^{\star},Y^{\star})$ is an
efficient point in $ T_{v}$. By contradiction, suppose
$(X^{\star},Y^{\star})$ is not an efficient point in $ T_{v}$;
then there is an $(\bar{X},\bar{Y})\in T_{v}$ such that
$(-\bar{X},\bar{Y})\geq (-X^{\star},Y^{\star})$ and
$(-\bar{X},\bar{Y})\neq (-X^{\star},Y^{\star})$. Since
$(\bar{X},\bar{Y})\in T_{v}$, there is a $\bar{\lambda}\in E^{n}$
such that $(\bar{X},\bar{Y},\bar{\lambda})$ is a feasible
solution of $(2)$. Since $(-\bar{X},\bar{Y})\geq
(-X^{\star},Y^{\star})$ and
$(-\bar{X},\bar{Y})\neq(-X^{\star},Y^{\star})$, the feasible solution
$(\bar{X},\bar{Y},\bar{\lambda})$ dominates $(X^{\star},Y^{\star},\lambda^{\star})$, contradicting the assumption that $(X^{\star},Y^{\star},\lambda^{\star})$ is a Pareto solution of $(2)$; therefore $(X^{\star},Y^{\star})$ is
an efficient point in $ T_{v}$.\\
Now suppose $(X^{\star},Y^{\star})$ is an efficient point in $
T_{v}$. Since $(X^{\star},Y^{\star})\in T_{v}$, there is a
 $\lambda^{\star}\in R^{n}$ such that
$(X^{\star},Y^{\star},\lambda^{\star})$ is a feasible solution of
$(2)$. As $(X^{\star},Y^{\star})$ is an efficient point in $
T_{v}$, there is no $(\bar{X},\bar{Y})\in T_{v}$ such that
$(-\bar{X},\bar{Y})\geq (-X^{\star},Y^{\star}) $ and
$(-\bar{X},\bar{Y})\neq (-X^{\star},Y^{\star})$. Moreover, every feasible
solution $(\bar{X},\bar{Y},\bar{\lambda})$ of $(2)$ satisfies $(\bar{X},\bar{Y})\in T_{v}$.
Regarding the above relations, there is no feasible solution $(\bar{X},\bar{Y},\bar{\lambda})$ of (2) such that
$(-\bar{X},\bar{Y})\geq (-X^{\star},Y^{\star}) $ and
$(-\bar{X},\bar{Y})\neq (-X^{\star},Y^{\star})$. Therefore $(X^{\star},Y^{\star},\lambda^{\star})$ 
is a Pareto solution of (2), and the proof is completed. \\
(ii) The proof is similar to that of (i). $\Box$ \\\\
{\large\bf Theorem 3.2.}
Let $V^{=}=\{(-x_{1},\ldots,-x_{m},y_{1},
\ldots,y_{s})\mid \\(x_{1},\ldots,x_{m},y_{1},
\ldots,y_{s})\in T_{v}\}$ ($V^{=}$ is called the outcome set of the MOLP problem); then dim $(V^{=})=m+s.$ \\
{\bf Proof}. Since $C=(C_{1}^{T},C_{2}^{T},\ldots,C_{m+s}^{T})$, where each $C_{j}^{T}$ is $e_{j}^{T}$ or $-e_{j}^{T}$, $j=1,\ldots,m+s$, we have
Rank$\{C_{1},\ldots,C_{m+s}\}=m+s$; hence $V^{=}$ is the image of the nonempty set $T_{v}$ under an invertible linear map of $E^{m+s}$, so dim $(V^{=})=$ dim $(T_{v})$. By the disposability of inputs and outputs, from any $(X,Y)\in T_{v}$ every point obtained by slightly increasing the inputs and slightly decreasing the outputs remains in $T_{v}$, so $T_{v}$ contains an $(m+s)$-dimensional box and dim $(T_{v})=m+s$. Therefore dim $(V^{=})=m+s$. $\Box$ \\\\
{\large\bf Theorem 3.3.} The optimal values of problem (2) are finite.\\
{\bf Proof}. Since
$\displaystyle\sum_{j=1}^{n}\lambda_{j}y^{j}_{r}\geq y_{r}$, $
r=1,\ldots,s$, with $\displaystyle\sum_{j=1}^{n}\lambda_{j}=1 $ and
$\lambda_{j}\geq 0$, $ j=1,\ldots,n $, we have $y_{r}\leq\displaystyle\max_{j}y_{r}^{j}$, so the $y_{r}$, $r=1,\ldots,s$,
are bounded above. Similarly,
$-\displaystyle\sum_{j=1}^{n}\lambda_{j}x^{j}_{i}\geq -x_{i}$, $
i=1,\ldots,m $, gives $-x_{i}\leq-\displaystyle\min_{j}x_{i}^{j}$, so the $(-x_{i})$,
$i=1,\ldots,m$, are bounded above. Therefore, the optimal values of problem $(2)$ are finite. $\Box$ \\\\
By using the observed DMUs, for each $ i=1,\ldots,m
$ and $r=1,\ldots,s$, we put\\
$v_{i}^{AL}=-\max\{x_{i}^{j}\mid j=1,\ldots,n\}$,\\
$v_{m+r}^{AL}=\min\{y_{r}^{j} \mid j=1,\ldots,n\}.$ \\
The vector $v^{AL}=(v_{1}^{AL}, v_{2}^{AL},\ldots, v_{m}^{AL}, v_{m+1}^{AL}, v_{m+2}^{AL},\ldots, v_{m+s}^{AL})\in R^{m+s}$
is called the anti-ideal point of the outcome set of problem (2). Let $\hat{v}\in R^{m+s}$ satisfy
$\hat{v}<v^{AL}$; we
define $V$ as follows:\\
$V=\{v=(v_{1},v_{2},\ldots,v_{m},v_{m+1},\ldots,v_{m+s})\mid
\hat{v}\leq v \leq \\ (-x_{1},\ldots,-x_{m},y_{1},\ldots,y_{s})$,
for some  $(x_{1},\ldots,x_{m},y_{1},\ldots,y_{s})\in T_{v}\}$.\\\\
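For the hypothetical data $m=s=1$, $DMU_{1}=(2,4)$, $DMU_{2}=(4,6)$, we obtain $v^{AL}=(-\max\{2,4\},\min\{4,6\})=(-4,4)$; choosing, for example, $\hat{v}=(-5,3)<v^{AL}$ gives $V=\{(v_{1},v_{2})\mid (-5,3)\leq(v_{1},v_{2})\leq(-x,y)$ for some $(x,y)\in T_{v}\}$.\\\\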
{\large\bf Theorem 3.4.} Set $V$  is a nonempty, bounded polyhedron in $R^{m+s}$  of dimension $m+s.$ \\
{\bf Proof}. Since $\hat{v}<v^{AL}\leq(-X^{j},Y^{j})$ for the observed DMUs and
$T_{v}\neq\emptyset$, the definition of $V$ together with Theorem 3.3 implies
that $V$ is a nonempty, bounded set in $R^{m+s}$. We may write
$V=V_{1}\cap \{V^{=}+V_{2}\}$, where \\ $V_{1}=\{v\in R^{m+s}\mid
v\geq \hat{v}\}$ and $V_{2}=\{Z\in R^{m+s}\mid Z\leq 0\}$. Since
$V_{1}$, $V_{2}$ and $V^{=}$ are polyhedral sets (see
Dauer and Gallagher [16]), $V$ is a polyhedral
set. Since $\hat{v}<(-X^{j},Y^{j})$ for the observed DMUs, we have
Int $V\neq\emptyset$ (Int $V$ denotes the set of interior points of $V$); hence, by
Theorem 3.2, the dimension of
$V$ is $m+s$, and the proof is complete.  $\Box$ \\\\
A point $v^{o}\in V$ is called an efficient (or admissible) point
of $V$ when no $v\in V$ exists such that $v\geq v^{o}$ and $v\neq
v^{o}$. When no $v\in V$ exists such that $v> v^{o}$, then $v^{o}$
is called a weakly efficient (or weakly admissible) point of
$V$. Let $V_{E}$ and $V_{WE}$ denote the sets of all
efficient and weakly efficient points of $V$, respectively.\\\\
{\large\bf Theorem 3.5.} Let $V_{E}^{=}$ be the set of efficient points of $V^{=}$; then $V_{E}^{=}=V_{E}$.\\
{\bf Proof}. Suppose that $(-X,Y)\in V_{E}^{=}$ but $(-X,Y)$
does not belong to $V_{E}$; then, by the definition of $V_{E}$,
there exists a point $v^{'}\in V$ such that $v^{'}\geq(-X,Y)$ and $v^{'}\neq(-X,Y)$.
Since $v^{'}\in V$, there exists a point $(X^{'}, Y^{'})\in T_{v}$ such that $v^{'}\leq(-X^{'}, Y^{'})$; therefore $(-X^{'},Y^{'})\geq(-X,Y)$ and $(-X^{'},Y^{'})\neq(-X,Y)$ (if $(-X^{'},Y^{'})=(-X,Y)$, then $v^{'}=(-X,Y)$, contradicting $v^{'}\neq(-X,Y)$).
This contradicts $(-X,Y)\in V_{E}^{=}$; therefore $v=(-X,Y)\in V_{E}.$\\
Now suppose $v \in V_{E}$; to show that $v \in V_{E}^{=}$, we first show that
$v=(-X,Y)$ for some $(X,Y)\in T_{v}$. Since
$v \in V$, we have $v\leq(-X,Y)$ for some $(X,Y)\in T_{v}$; since $v \in V_{E}$ and $(-X,Y)\in V$, it follows that $v=(-X,Y).$
Now let $(X^{'}, Y^{'})\in T_{v}$ satisfy $(-X,Y)\leq(-X^{'}, Y^{'})$ and $(-X,Y)\neq (-X^{'}, Y^{'})$. By the definition of $V$, since
$\hat{v}\leq(-X,Y)\leq(-X^{'}, Y^{'})$, we have $(-X^{'}, Y^{'})\in V$; then $v=(-X,Y)$ does not belong to $V_{E}$, which is a contradiction.
Therefore no such $(X^{'},Y^{'})$ exists, $v \in V_{E}^{=}$, and the proof is completed. $\Box$ \\
 Let
\begin{equation}\begin{array}{cc}
\hspace{-2.25cm}\beta=\max \displaystyle\sum_{r=1}^{s}
y_{r}-\displaystyle\sum_{i=1}^{m}x_{i}\\
\hspace{2cm}s.t.\quad
 (x_{1},x_{2},\ldots,x_{m},y_{1},y_{2},\ldots,y_{s})\in T_{v}.\\
 \end{array}
\end{equation}\\\\
By Theorem 3.3, $\beta$ is a finite number. If
$(\bar{X},\bar{Y})$ is an optimal solution of $(3)$, then
$(\bar{X},\bar{Y})$ yields a Pareto solution of $(2)$, since $(3)$ is the
weighted-sum problem of $(2)$ with the weights $w_{j}=1$, $j=1,\ldots,m+s$; see Zeleny [51].\\
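For the hypothetical data $m=s=1$, $DMU_{1}=(2,4)$, $DMU_{2}=(4,6)$: since $y-x\leq\displaystyle\sum_{j=1}^{2}\lambda_{j}(y^{j}-x^{j})\leq\displaystyle\max_{j}(y^{j}-x^{j})$ over $T_{v}$, we get $\beta=\max\{4-2,6-4\}=2$.\\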
For $i=1,\ldots,m+s$ and $j=1,\ldots,m+s$, we put
$q^{0}=\hat{v}=(\hat{v}_{1},\ldots,\hat{v}_{m+s})=(-\hat{X},\hat{Y})$, $\hat{v}< v^{AL}$, and
$q^{j}=(q_{1}^{j},q_{2}^{j},\ldots,q_{i}^{j},\ldots,q_{m+s}^{j})$
such that $q_{i}^{j}=\hat{v}_{i}$ for $i\neq j$ and
$q_{j}^{j}=\beta+\hat{v}_{j}-\displaystyle\sum_{k=1}^{m+s}
\hat{v}_{k}=\beta+\hat{v}_{j}-(\displaystyle\sum_{r=1}^{s}
\hat{y}_{r}-\displaystyle\sum_{i=1}^{m}\hat{x}_{i})$.\\\\
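For the hypothetical data $m=s=1$, $DMU_{1}=(2,4)$, $DMU_{2}=(4,6)$, with $\hat{v}=(-5,3)$ and $\beta=2$: here $\displaystyle\sum_{k=1}^{2}\hat{v}_{k}=-2$, so $q^{0}=(-5,3)$, $q^{1}=(2-5+2,\,3)=(-1,3)$ and $q^{2}=(-5,\,2+3+2)=(-5,7)$; note that $q^{1}$ and $q^{2}$ lie on the hyperplane $v_{1}+v_{2}=\beta$.\\\\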
{\large\bf Theorem 3.6.} The convex hull of $\{q^{0},q^{1},\ldots,q^{m+s}\}$ is an $(m+s)$-dimensional simplex and contains $V$.\\
{\bf Proof}. First we show that $\{q^{0},q^{1},\ldots,q^{m+s}\}$ is
affinely independent. We have $(q^{j}-q^{0})_{i}=0$ for $i\neq j$
and $(q^{j}-q^{0})_{j}=\beta-\displaystyle\sum_{r=1}^{s}\hat{y}_{r}+\displaystyle\sum_{i=1}^{m}\hat{x}_{i}$,
$j=1,\ldots,m+s$. We put
$\gamma=\beta-\displaystyle\sum_{r=1}^{s}\hat{y}_{r}+\displaystyle\sum_{i=1}^{m}\hat{x}_{i}$;
then $\gamma>0$, because $\hat{v}<v^{AL}$ implies
$\displaystyle\sum_{r=1}^{s}\hat{y}_{r}-\displaystyle\sum_{i=1}^{m}\hat{x}_{i}<\displaystyle\sum_{r=1}^{s}y_{r}^{j}-\displaystyle\sum_{i=1}^{m}x_{i}^{j}\leq\beta$
for any observed $DMU_{j}$. Let $
\displaystyle\sum_{j=1}^{m+s}c_{j}(q^{j}-q^{0})=0$; then
$\displaystyle\sum_{j=1}^{m+s}c_{j}(q^{j}-q^{0})=(c_{1}\gamma,c_{2}\gamma,\ldots, c_{m+s}\gamma)=0$. Since
$\gamma\neq0$, we obtain $c_{j}=0$, $j=1,\ldots,m+s$, so
$\{q^{0},q^{1},\ldots,q^{m+s}\}$ is affinely independent.\\
To show that the convex hull contains $V$, suppose $\bar{v}\in V$.
Since $\{q^{1}-q^{0},\ldots,q^{m+s}-q^{0}\}$ is a linearly independent
set, it is a basis for $R^{m+s}$; therefore, there is
$\alpha^{T}=(\alpha_{1},\ldots,\alpha_{m+s})$ such that
$\bar{v}-q^{0}=\displaystyle\sum_{j=1}^{m+s}\alpha_{j}(q^{j}-q^{0})$; moreover,
$\alpha_{j}=(\bar{v}-q^{0})_{j}/\gamma\geq0$, $j=1,\ldots,m+s$, because $\bar{v}\geq\hat{v}=q^{0}$. If
$\displaystyle\sum_{j=1}^{m+s}\alpha_{j}>1$, we have 
$\displaystyle\sum_{j=1}^{m+s}(\bar{v}-q^{0})_{j}=\gamma\displaystyle\sum_{j=1}^{m+s}\alpha_{j}>\gamma,$
but, since every $v\in V$ satisfies $\displaystyle\sum_{j=1}^{m+s}v_{j}\leq\beta$, we have $\displaystyle\sum_{j=1}^{m+s}(\bar{v}-q^{0})_{j}\leq \beta-
\displaystyle\sum_{j=1}^{m+s}\hat{v}_{j}=\beta-
\displaystyle\sum_{r=1}^{s}\hat{y}_{r}+\displaystyle\sum_{i=1}^{m}\hat{x}_{i}=\gamma$, 
a contradiction. Hence
$\displaystyle\sum_{j=1}^{m+s}\alpha_{j}\leq1$ and
$\bar{v}=(1-\displaystyle\sum_{j=1}^{m+s}\alpha_{j})q^{0}+\displaystyle\sum_{j=1}^{m+s}\alpha_{j}q^{j}$;
therefore $\bar{v}\in S$ ($S$ is the convex hull of
$\{q^{0},q^{1},\ldots,q^{m+s}\}$). Thus $V\subseteq
S$ and the proof is completed. $\Box$ \\\\
{\large\bf Theorem 3.7.} $S$ may also be written as follows:\\
$S=\{(-X,Y)\in
E^{m+s}\mid  (-\hat{X},\hat{Y})\leq(-X,Y), \displaystyle\sum_{r=1}^{s}y_{r}
-\displaystyle\sum_{i=1}^{m}x_{i}\leq\beta\}.$\\
{\bf Proof}. Suppose that $\bar{v}\in S$; then $\bar{v}=\displaystyle\sum_{j=0}^{m+s}\alpha_{j}q^{j}$ for some $\alpha_{j}\geq0$ with $\displaystyle\sum_{j=0}^{m+s}\alpha_{j}=1$, so\\
$\bar{v}=(1-\displaystyle\sum_{j=1}^{m+s}\alpha_{j})q^{0}+
\displaystyle\sum_{j=1}^{m+s}\alpha_{j}q^{j}=q^{0}+\displaystyle\sum_{j=1}^{m+s}\alpha_{j}(q^{j}-q^{0})
=(-\hat{X},\hat{Y})+(\alpha_{1}\gamma,\alpha_{2}\gamma,\ldots,\alpha_{m+s}\gamma)\geq (-\hat{X},
\hat{Y})=q^{0}$.\\
Therefore $\bar{v}\geq(-\hat{X},\hat{Y})$. On the other hand, we have\\
$\displaystyle\sum_{j=1}^{m+s}\bar{v}_{j}
=\displaystyle\sum_{j=1}^{m+s}\hat{v}_{j}+\gamma\displaystyle\sum_{j=1}^{m+s}\alpha_{j}\leq \displaystyle\sum_{j=1}^{m+s}\hat{v}_{j}+\gamma=\beta$; therefore\\
$\bar{v}\in \{(-X,Y)\in E^{m+s}\mid (-X,Y)\geq(-\hat{X},
\hat{Y}),\displaystyle\sum_{r=1}^{s}y_{r}
-\displaystyle\sum_{i=1}^{m}x_{i}\leq\beta\}.$\\
Now suppose that $(-\bar{X},\bar{Y})\in\{(-X,Y)\in
R^{m+s}\mid(-X,Y)\geq(-\hat{X},
\hat{Y}),\ \displaystyle\sum_{r=1}^{s}y_{r}
-\displaystyle\sum_{i=1}^{m}x_{i}\leq\beta\}.$ Let
$\alpha_{j}=(\frac{-\bar{x}_{j} + \hat{x}_{j}}{\gamma})$, $j=1,\ldots,m$, and
$\alpha_{m+j}=( \frac{\bar{y}_{j}- \hat{y}_{j}}{\gamma })$,
$j=1,\ldots,s$. We have $-\bar{x}_{j}+\hat{x}_{j}\geq0$ and $\bar{y}_{j}-\hat{y}_{j}\geq0$, so $\alpha_{j}\geq0$, $j=1,\ldots,m+s$; moreover,
$\displaystyle\sum_{r=1}^{s}\bar{y}_{r}-\displaystyle\sum_{j=1}^{m}\bar{x}_{j}-(\displaystyle\sum_{r=1}^{s}\hat{y}_{r}-\displaystyle\sum_{j=1}^{m}\hat{x}_{j})
\leq\beta-(\displaystyle\sum_{r=1}^{s}\hat{y}_{r}-\displaystyle\sum_{j=1}^{m}\hat{x}_{j})=\gamma,$ therefore 
$\displaystyle\sum_{j=1}^{m+s}\alpha_{j}=
\displaystyle\sum_{j=1}^{m}(\frac{-\bar{x}_{j} + \hat{x}_{j}}{\gamma})
+\displaystyle\sum_{j=1}^{s}( \frac{\bar{y}_{j}- \hat{y}_{j}}{\gamma })\leq1$.
By the definition of $q^{j}$, $j=1,\ldots,m+s$, we have
$q^{0}+\displaystyle\sum_{j=1}^{m+s}\alpha_{j}(q^{j}-q^{0})=q^{0}+(\alpha_{1}\gamma ,\alpha_{2}\gamma ,\ldots, \alpha_{m+s}\gamma )$\\
$=(-\hat{X},\hat{Y})+\gamma ( \frac{-\bar{x}_{1}+ \hat{x}_{1}}{\gamma },\ldots, \frac{-\bar{x}_{m}+ \hat{x}_{m}}{\gamma },  \frac{\bar{y}_{1}- \hat{y}_{1}}{\gamma },\ldots,  \frac{\bar{y}_{s}- \hat{y}_{s}}{\gamma })
= (-\hat{X},\hat{Y})+\big((-\bar{X},\bar{Y})-(-\hat{X},\hat{Y})\big)=(-\bar{X},\bar{Y})$; then $(-\bar{X},\bar{Y})=
\displaystyle\sum_{j=0}^{m+s}\alpha_{j}q^{j}=(1-\displaystyle\sum_{j=1}^{m+s}\alpha_{j})q^{0}+\displaystyle\sum_{j=1}^{m+s}\alpha_{j}q^{j}$ with $\alpha_{j}\geq0$ and $\displaystyle\sum_{j=1}^{m+s}\alpha_{j}\leq1$; therefore $\bar{v}=(-\bar{X},\bar{Y})\in S.$ $\Box$ \\\\
{\large\bf Theorem 3.8.} Let $(-X_{p},Y_{p})\in$ Int$(V)$ and suppose $(-X,Y)>(-\hat{X},\hat{Y})$ and $(-X,Y)$ does not belong to $V$. Let
 $(-X^{w},Y^{w})=(-(X+\lambda^{\ast}(X_{p}-X)), Y+\lambda^{\ast}(Y_{p}-Y))$, where $\lambda^{\ast}$ is the optimal solution of problem (4); then $(-X^{w},Y^{w})\in V_{WE}.$ \\ 
\begin{equation}\begin{array}{cc}
\hspace{-10.2cm}\max\hspace{0.1cm}\lambda\\
\hspace{-7.6cm}\hspace{4.25cm}s.t.\quad
 (-(X+\lambda(X_{p}-X)), Y+\lambda(Y_{p}-Y))\in V.\\
 \end{array}
\end{equation}\\
{\bf Proof}. Suppose $(-X^{w},Y^{w})$ does not belong to
$V_{WE}$. Then we may choose a point $(-X_{0},Y_{0})\in V$ such
that $(-X_{0},Y_{0})>(-X^{w},Y^{w})$.
Since $(-X_{p},Y_{p})\in Int(V)$, we have $(-X_{p},Y_{p})>(-\hat{X},\hat{Y})$; on the other hand, $(-X,Y)>(-\hat{X},\hat{Y})$, and $(-X^{w},Y^{w})$ is a convex combination of these two points, so $(-X^{w},Y^{w})>(-\hat{X},\hat{Y}).$ \\
Put $d_{1}=\min\{(y_{ro}-y^{w}_{r}),(-x_{io}+x^{w}_{i})\mid
r=1,\ldots,s,\ i=1,\ldots,m\},$\\
$d_{2}=\min\{(y^{w}_{r}-\hat{y}_{r}),(-x^{w}_{i}+\hat{x}_{i})\mid
r=1,\ldots,s,\ i=1,\ldots,m\}.$\\ Choose $\epsilon>0$ such that
$\epsilon<d_{1}$ and $\epsilon<d_{2}$. Let
$N_{\epsilon}(-X^{w},Y^{w})=\{(-X,Y)\in R^{m+s}\mid
 \parallel(-X,Y)-(-X^{w},Y^{w})\parallel<\epsilon\}.$ Suppose $Z\in
N_{\epsilon}(-X^{w},Y^{w})$; then, for each output component,
$-\epsilon<z_{r}-y^{w}_{r}<\epsilon$, that is, $y^{w}_{r}-\epsilon<z_{r}<y^{w}_{r}+\epsilon$.
Since $\epsilon<d_{1}$, we have $y_{ro}-y^{w}_{r}>\epsilon$, and since $\epsilon<d_{2}$, we have
$y^{w}_{r}-\hat{y}_{r}>\epsilon$; hence $\hat{y}_{r}<
z_{r}<y_{ro}$, $r=1,\ldots,s$.
Similarly, for the input components, $-\hat{x}_{i}<z_{i}<-x_{io}$,
$i=1,\ldots,m$. Therefore $Z\in V$, and we conclude that
$N_{\epsilon}(-X^{w},Y^{w})\subseteq V$. Since $\epsilon>0$, this
contradicts the fact that $(-X^{w},Y^{w})$ belongs to the boundary
of $V$ (consider problem (4)), so the proof is complete. $\Box$\\\\
From Rockafellar [40], Yu [48], and the weighted-sum problem, $F$ is a face of $V$ if and only if $F$ is equal to the optimal solution set of the following problem
\begin{equation}\begin{array}{cc}
\hspace{-0.7 cm}\max\hspace{0.1 cm}\displaystyle\sum_{r=1}^{s}
u_{r}y_{r}-\displaystyle\sum_{i=1}^{m}u_{i+s}x_{i}\\
\hspace{1.9cm}s.t.\quad
(x_{1},x_{2},\ldots,x_{m},y_{1},y_{2},\ldots,y_{s})\in T_{v},\\
 \end{array}
\end{equation}\\
for some $(u_{1},u_{2},\ldots,u_{s},u_{s+1},u_{s+2},\ldots,u_{s+m})\in
R^{s+m}$.\\
The variable vector $(u_{1},u_{2},\ldots,u_{s},u_{s+1},u_{s+2},\ldots,u_{s+m})$ collects the multipliers corresponding to the output and input vector\\ $(y_{1},y_{2},\ldots,y_{s},x_{1},x_{2},\ldots,x_{m}).$
We know that $(-X^{w},Y^{w})$ is weakly efficient if and only if
the optimal value of the following problem is zero.\\
\begin{equation}\begin{array}{cc}
\hspace{-8.25cm}\max\hspace{0.1cm}t\\
\hspace{-3.75cm}s.t.\quad
 (-X,Y)-et\geq (-X^{w},Y^{w})\\
 \hspace{-4.5cm}(X,Y)\in T_{v},t\geq0\\
 \end{array}
\end{equation}\\
The dual of the linear program (6) is as follows:\\
\begin{equation}\begin{array}{cc}
\hspace{-2.75cm}\min\hspace{0.1cm}-\displaystyle\sum_{r=1}^{s}
u_{r}y^{w}_{r}+ \displaystyle\sum_{i=1}^{m}u_{i+s}x^{w}_{i}+v_{m+s+1}\\
\hspace{-2.25cm}s.t.\quad\ -u_{r}+v_{r}\geq0\hspace{2cm}r=1,\ldots,s\\
\hspace{-1.25cm} u_{s+i}-v_{s+i}\geq0 \hspace{1.25cm}i=1,\ldots,m\\
\hspace{-3.5cm} \displaystyle\sum_{r=1}^{s}
u_{r}+\displaystyle\sum_{i=1}^{m}u_{i+s}\geq 1\\
\hspace{-0.75cm}-\displaystyle\sum_{r=1}^{s}
v_{r}y^{j}_{r}+\displaystyle\sum_{i=1}^{m}v_{i+s}x^{j}_{i}+v_{m+s+1}\geq0\hspace{0.75cm}j=1,\ldots,n\\
\hspace{-1.5cm}u_{j}\geq0,\ v_{j}\geq 0,\hspace{1cm}j=1,\ldots,m+s.
\end{array}
\end{equation}\\
By the duality theorem of linear programming, since the optimal
value of (6) is zero, problem (7) also has an optimal value equal to
zero; therefore $\displaystyle\sum_{r=1}^{s}
u^{\ast}_{r}y^{w}_{r}-\displaystyle\sum_{i=1}^{m}u^{\ast}_{i+s}x^{w}_{i}-v^{\ast}_{m+s+1}=0$
and \\
$(u^{\ast}_{1},u^{\ast}_{2},\ldots,u^{\ast}_{s},u^{\ast}_{s+1},u^{\ast}_{s+2},\ldots,u^{\ast}_{s+m})\geq0$,\\
$(u^{\ast}_{1},u^{\ast}_{2},\ldots,u^{\ast}_{s},u^{\ast}_{s+1},u^{\ast}_{s+2},\ldots,u^{\ast}_{s+m})\neq0$.
From Falk and Hoffman [23], we know that the optimal solutions of problem (5) correspond to weakly efficient faces of $V$ for
$(u_{1},u_{2},\ldots,u_{s},u_{s+1},u_{s+2},\ldots,u_{s+m})=\\
(u^{\ast}_{1},u^{\ast}_{2},\ldots,u^{\ast}_{s},
u^{\ast}_{s+1},u^{\ast}_{s+2},\ldots,u^{\ast}_{s+m})$. The inequality
$\displaystyle\sum_{r=1}^{s}
u_{r}^{\ast}y_{r}-\displaystyle\sum_{i=1}^{m}u_{i+s}^{\ast}x_{i}-v_{m+s+1}^{\ast}\leq
0$ provides the inequality cuts needed in the outer approximation algorithm for generating all efficient extreme
points of $V$.
\section{Generating All Efficient Extreme Points of the Production Possibility Set}
We apply the outer approximation algorithm to generate all efficient extreme points of the outcome set of problem (2).
All efficient extreme points of the production possibility set of the DEA model with VRS are then immediately
available upon termination of the algorithm, by converting points $(-X,Y)$ in the outcome set of problem (2) to the equivalent points $(X,Y)$ in $T_{v}$.\\\\
\textbf{The Outer Approximation Algorithm} proceeds as follows:\\
\textbf{Initialization step.} Compute a point $(-X_{p},Y_{p})\in
Int(V)$.\\ $(-X_{p},Y_{p})\in E^{m+s}$ may be set equal to any strict convex combination of $v^{AL}$ and $(-X^{\ast},Y^{\ast})$, where $(-X^{\ast},Y^{\ast})$ is any optimal solution
to the linear program (6) with $(-X^{w},Y^{w})=(-X^{AL},Y^{AL})=v^{AL}$. Construct the $(m+s)$-dimensional simplex $S^{0}=S$ containing $V$, as described in Theorems (3.6) and (3.7).
Store both the vertex set of $S$ given in Theorem (3.6) and the inequality representation of $S^{0}=S$ given in Theorem (3.7). Set $k=0$ and go to iteration $k$. Iteration $k$, $k\geq0$, consists of Steps 1 through 4 below.\\\\
\textbf{Step 1)} If all vertices of $S^{k}$ belong to $V$, then
stop: $S^{k}=V$. Otherwise, choose any vertex of $S^{k}$ that does
not belong to $V$, say $(-X^{k},Y^{k})$, and
continue. (To test whether a given $(-X^{k},Y^{k})$ belongs to $V$,
one may apply the phase-1 procedure of the simplex method to
problem (6), putting $(-X^{w},Y^{w})=(-X^{k},Y^{k})$.)\\\\
\textbf{Step 2)} Compute $(-X^{w},Y^{w})$ as described in Theorem (3.8), putting $(-X,Y)=(-X^{k},Y^{k})$.\\\\
\textbf{Step 3).} Set $S^{k+1}=S^{k}\cap\{(-X,Y)\in
R^{m+s}\mid\displaystyle\sum_{r=1}^{s}
u_{r}^{\ast}y_{r}-\displaystyle\sum_{i=1}^{m}u_{i+s}^{\ast}x_{i}\leq v_{m+s+1}^{\ast}
\}$ where $(u_{1}^{\ast},u_{2}^{\ast},\ldots,u_{s}^{\ast},u_{s+1}^{\ast},u_{s+2}^{\ast},\ldots,u_{s+m}^{\ast})$ is
any dual optimal solution to the linear program (7) with the
$(-X^{w},Y^{w})$ computed in Step 2.\\\\
\textbf{Step 4)} Using the vertices of $S^{k}$, the method
proposed by Falk and Hoffman [23], and the definition of $S^{k+1}$
given in Step 3, determine all vertices of $S^{k+1}$; set
$k=k+1$ and go to iteration $k$. \\\\By the definition of $S^{k+1}$ in Step
3, since $(-X^{k},Y^{k})$ does not belong to $S^{k+1}$ and
$V\subseteq S^{k+1}$, the algorithm generates
distinct polyhedra $S^{j}$, $j=0,1,\ldots,k,$ with $V\subseteq
S^{k}\subset S^{k-1}\subset\ldots\subset S^{1}\subset S^{0}$. This
implies that the algorithm is finite and must
terminate at some iteration $k\geq 0$. Each $S^{k}$ is an $(m+s)$-dimensional polytope containing $V$.\\\\
{\large\bf Theorem 4.1.} Let $k\geq 0$ denote the iteration
number at which $S^{k}=V$ and the outer approximation algorithm terminates. Let $E=\{(-X,Y)\mid
(-X,Y)$ belongs to the vertex set of $S^{k}$ and
$(-X,Y)>(-\hat{X},\hat{Y})\}$. Then $E$ is identical to
the set of all efficient extreme points of $V^{=}$.\\\\
{\bf Proof}. From before we have $S^{k}=V=\{(-X,Y)\in
R^{m+s}\mid (-X,Y)>(-\hat{X},\hat{Y}),\displaystyle\sum_{r=1}^{s}
y_{r}-\displaystyle\sum_{i=1}^{m}x_{i}\leq
\beta\}\cap(\cap_{n=0}^{k-1}H_{n})$ and for $n=0,1,\ldots,k-1$,
$H_{n}=\{(-X,Y)\in R^{m+s}\mid \displaystyle\sum_{r=1}^{s}
u_{r}^{n}y_{r}-\displaystyle\sum_{i=1}^{m}u_{s+i}^{n}x_{i}-v_{m+s+1}^{n}\leq0, (X, Y)\in T_{v}\}.$
$(u_{1}^{n},\ldots,u_{s}^{n},u_{s+1}^{n},\ldots,u_{s+m}^{n})$ is any dual optimal solution to the linear program (6), i.e.\ an optimal solution of (7), with the corresponding $(-X^{w},Y^{w})$ used in Step 3.
Notice also that \\
$\{(-X,Y)\in R^{m+s}\mid \displaystyle\sum_{r=1}^{s}
u_{r}^{n}y_{r}-\displaystyle\sum_{i=1}^{m}u_{s+i}^{n}x_{i}-v_{m+s+1}^{n}< 0, (X, Y)\in T_{v}\}\subseteq
V_{WE}.$
Suppose that $(-\bar{X},\bar{Y})\in E$; then $(-\bar{X},\bar{Y})$
belongs to the vertex set of $S^{k}=V$ and $(-\bar{X},\bar{Y})>(-\hat{X},\hat{Y})$. Therefore at least $m+s$
of the inequalities below must hold as equations at
$(-X,Y)=(-\bar{X},\bar{Y})$:\\ $\displaystyle\sum_{r=1}^{s}
y_{r}-\displaystyle\sum_{i=1}^{m}x_{i}\leq\beta,
\displaystyle\sum_{r=1}^{s}
u_{r}^{n}y_{r}-\displaystyle\sum_{i=1}^{m}u_{s+i}^{n}x_{i}\leq
v_{m+s+1}, n=0,\ldots,k-1.$\\
This implies that $(-\bar{X},\bar{Y})\in V_{WE}.$ We show $(-\bar{X},\bar{Y})\in V_{E}$ by contradiction: suppose $(-\bar{X},\bar{Y})$ does not belong to $V_{E}$;
then there is $(-X,Y)\in V$ such that $(-\bar{X},\bar{Y})\leq(-X,Y)$ and $(-\bar{X},\bar{Y})\neq(-X,Y).$\\
Let $I_{11}=\{j\mid y_{j}= \bar{y}_{j}\},$ $I_{12}=\{j\mid y_{j}\neq  \bar{y}_{j}\},$ $I_{21}=\{j\mid x_{j}= \bar{x}_{j}\},$ $I_{22}=\{j\mid x_{j}\neq \bar{x}_{j}\}.$ 
For $j\in I_{12}$ let $n_{j}=y_{j}-\bar{y}_{j}>0$, and for $j\in I_{22}$ let $m_{j}=-x_{j}+\bar{x}_{j}>0.$\\
We choose $M>0$ such that $\bar{y}_{j}-\frac{n_{j}}{M}>\hat{y}_{j}$ and $\bar{x}_{j}+\frac{m_{j}}{M}<\hat{x}_{j}$, and define
$y_{j}^{new}=\bar{y}_{j}$ for $j\in I_{11}$, $y_{j}^{new}=\bar{y}_{j}-\frac{n_{j}}{M}$ for $j\in I_{12}$, $x_{j}^{new}=\bar{x}_{j}$ for $j\in I_{21}$, and
$x_{j}^{new}=\bar{x}_{j}+\frac{m_{j}}{M}$ for $j\in I_{22}.$\\ Then $y_{j}=n_{j}+\bar{y}_{j}>0$ for $j\in I_{12},$ $y_{j}=\bar{y}_{j}$ for $j\in I_{11},$ and
$x_{j}=-m_{j}+\bar{x}_{j}>0$ for $j\in I_{22},$ $x_{j}=\bar{x}_{j}$ for $j\in I_{21}.$ \\
Then $\bar{y}_{j}=\frac{M}{M+1}y_{j}^{new}+\frac{1}{M+1}y_{j}$, $\bar{x}_{j}=\frac{M}{M+1}x_{j}^{new}+\frac{1}{M+1}x_{j}$ therefore
$(-\bar{X},\bar{Y})=\frac{1}{M+1}(-X,Y)+\frac{M}{M+1}(-X^{new},Y^{new}).$ Since $0<\frac{1}{M+1}<1$, $(-\bar{X},\bar{Y})$ is a strict convex combination of $(-X,Y)\in V$ and
$(-X^{new},Y^{new})\in V,$ which is a contradiction (because $\bar{v}=(-\bar{X},\bar{Y})$ belongs to the vertex set of $V=S^{k}$). Therefore $(-\bar{X},\bar{Y})\in V_{E}$, and from Theorem (3.5)
we have $(-\bar{X},\bar{Y})\in V_{E}^{=}.$ Since $V^{=}\subset V$, $(-\bar{X},\bar{Y})$ is an efficient extreme point of $V^{=}$.\\
Now suppose that $(-X,Y)$ is an efficient extreme point of $V^{=}$
but does not belong to the vertex set of
$S^{k}$. Then we may choose $(-X^{1},Y^{1}),(-X^{2},Y^{2})\in V$
and $\alpha\in R$, $0<\alpha<1$, such that
$(-X,Y)=\alpha(-X^{1},Y^{1})+(1-\alpha)(-X^{2},Y^{2}).$ Since
$(-X,Y)$ is an efficient extreme point of $V^{=}$, by the
weighted-sum scalarization of problem (2) (see Zeleny [49]) and
Theorem 3 of Benson [7], we may select a point
$(u_{1},u_{2},\ldots,u_{s},u_{s+1},u_{s+2},\ldots,u_{s+m})\in
R^{m+s}$\\
$(u_{1},u_{2},\ldots,u_{s},u_{s+1},u_{s+2},\ldots,u_{s+m})>0$,
such that $(-X,Y)$ is the unique optimal solution to the problem
\begin{equation}\begin{array}{cc}
\hspace{-4.75cm}\max\hspace{0.1cm}\displaystyle\sum_{r=1}^{s}
u_{r}y_{r}-\displaystyle\sum_{i=1}^{m}u_{i+s}x_{i}\\
\hspace{-5.5cm}\hspace{4.25cm}s.t.\quad
(-x_{1},-x_{2},\ldots,-x_{m}, y_{1},y_{2},\ldots,y_{s})\in V.\\
 \end{array}
\end{equation}\\
From the definition of $V$, this implies that $(-X,Y)$ is also the
unique optimal solution to problem (8). Since
$(-X^{1},Y^{1}),(-X^{2},Y^{2})\in V$, therefore\\
$\displaystyle\sum_{r=1}^{s}
u_{r}y^{1}_{r}-\displaystyle\sum_{i=1}^{m}u_{i+s}x^{1}_{i}\leq
\displaystyle\sum_{r=1}^{s}
u_{r}y_{r}-\displaystyle\sum_{i=1}^{m}u_{i+s}x_{i}$  and\\
 $\displaystyle\sum_{r=1}^{s}
u_{r}y^{2}_{r}-\displaystyle\sum_{i=1}^{m}u_{i+s}x^{2}_{i}\leq
\displaystyle\sum_{r=1}^{s}
u_{r}y_{r}-\displaystyle\sum_{i=1}^{m}u_{i+s}x_{i}$.\\
Since $(-X,Y)$ is the unique optimal solution of problem (8) and $0<\alpha<1$, at least one of $(-X^{1},Y^{1}),(-X^{2},Y^{2})$ differs from $(-X,Y)$, so at least one of these inequalities is strict. Hence\\
$\alpha(\displaystyle\sum_{r=1}^{s}
u_{r}y^{1}_{r}-\displaystyle\sum_{i=1}^{m}u_{i+s}x^{1}_{i})+(1-\alpha)(\displaystyle\sum_{r=1}^{s}
u_{r}y^{2}_{r}-\displaystyle\sum_{i=1}^{m}u_{i+s}x^{2}_{i})<\\
\displaystyle\sum_{r=1}^{s}
u_{r}y_{r}-\displaystyle\sum_{i=1}^{m}u_{i+s}x_{i}$.
Since $(-X,Y)=\alpha(-X^{1},Y^{1})+(1-\alpha)(-X^{2},Y^{2})$, the
left-hand side of the previous inequality equals
$\displaystyle\sum_{r=1}^{s}
u_{r}y_{r}-\displaystyle\sum_{i=1}^{m}u_{i+s}x_{i}$, yielding a
contradiction. Therefore, $(-X,Y)$ must belong to the set of efficient
extreme points of $S^{k}$, and the proof is complete. $\Box$
\section{Application and discussion}
In this section, we illustrate the method with two numerical examples.
\textbf{Example 1.}
Consider the case of seven units, each with one input and one output, whose data are given in Table 5.1.
\begin{center}
Table 5.1. The data of the seven DMUs.
 {\footnotesize
\begin{tabular}{|l| l l l l l l  l|}
\hline
 & $DMU_{1}$ & $DMU_{2}$  &$DMU_{3}$& $DMU_{4}$ & $DMU_{5}$ & $DMU_{6}$ & $DMU_{7}$ \\
\hline
Inputs&        1& 1& 2 & 4 &  5  & 6& 3\\
\hline
Outputs&        1& 3& 5 & 6 &  7  & 8& 2\\
 \hline
$\theta^{*}_{BCC}$&        1& 1& 1 & 0.83 &  0.93  & 1& 0.33\\
\hline
Efficiency status&       non-extreme& extreme& extreme & - &   - & extreme& -\\
 \hline
\end{tabular}\\
  }
\end{center}
The corresponding MOLP is
\begin{equation}\begin{array}{cc}
\hspace{-9cm}\max\hspace{0.1cm}\{-x_{1}, y_{1}\}\\
\hspace{-2.75cm}s.t\quad
 \lambda_{1}+\lambda_{2}+2\lambda_{3}+4\lambda_{4}+5\lambda_{5}+6\lambda_{6}+3\lambda_{7}
 \leq x_{1}
\\
\hspace{-1.75cm}\lambda_{1}+3\lambda_{2}+5\lambda_{3}+6\lambda_{4}+7\lambda_{5}+8\lambda_{6}+2\lambda_{7}
 \geq y_{1}
\\
\hspace{-3cm}\lambda_{1}+\lambda_{2}+\lambda_{3}+\lambda_{4}+\lambda_{5}+\lambda_{6}+\lambda_{7}=1\\
\hspace{-3 cm}\lambda_{1}\geq0, \lambda_{2}\geq0,\lambda_{3}\geq0,\lambda_{4}\geq0,\lambda_{5}\geq0 ,\\
\hspace{-4.3cm} \lambda_{6}\geq0,\lambda_{7}\geq0, x_{1}\geq0, y_{1}\geq0.\\
\end{array}
\end{equation}
The efficiency frontier of the production possibility set of this example is shown in Figure 5.1.
\begin{figure}
\begin{center}
\includegraphics{pic1}\\
\textit{Figure 5.1}: The efficiency frontier of production possibility set.
\end{center}
 \end{figure}
In the initialization step of the algorithm we have
$v^{AL}_{1}=-\max\{x_{1}^{j}\mid j=1,\ldots,7\}=-6,$ \\$v^{AL}_{2}=\min\{y_{1}^{j}\mid j=1,\ldots,7\}=1$, so
$v^{AL}=(v_{1},v_{2})=(-6,1).$\\ We select
$(-\hat{X},\hat{Y})=(-6.5,0.5).$ By the definition of $V$, we have\\ $V=\{(v_{1},v_{2})\mid 0.5\leq
v_{2}\leq y_{1},-6.5\leq v_{1}\leq
 -x_{1}$, for some $(x_{1},y_{1})$ belonging to the feasible region of
problem (9)\}.\\ Put $k=1$. Solving problem (3) gives $\beta=3$.\\ Therefore
$S^{0}=\{(-x_{1},y_{1})\mid 0.5\leq y_{1}, -6.5\leq -x_{1}$,
$y_{1}-x_{1}\leq 3 \}$, as shown in Figure 5.2.
\begin{figure}
\begin{center}
\includegraphics{pic2}\\
\textit{Figure 5.2}: The $S^{0}$ set in $(x_{1}, y_{1})$  space. 
\end{center}
 \end{figure}
The vertex set of $S^{0}$ given in
Theorem (3.6) is\\ $\{(-6.5,0.5),(-6.5,9.5),(2.5,0.5)\}$, computed as
follows:\\
$q^{0}=(-6.5,0.5)$; with $\gamma=\beta-(\hat{y}_{1}-\hat{x}_{1})=3-(0.5-6.5)=9$, we get $q^{1}_{1}=-6.5+9=2.5$, $q^{1}_{2}=0.5$,
$q^{2}_{1}=-6.5$, $q^{2}_{2}=0.5+9=9.5$; therefore $q^{1}=(2.5,0.5)$ and $q^{2}=(-6.5,9.5)$.\\ In Step 1 of the
algorithm, since $(-6.5,9.5)$ does not belong to $V$, we put
$(-X^{1},Y^{1})=(-6.5,9.5)$.\\ We go to Step 2. Solving
problem (6) with $(-X^{w},Y^{w})=v^{AL}=(v_{1},v_{2})=(-6,1)$
gives $(X^{\ast},Y^{\ast},t^{\ast})=(2,5,4)$.
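This solution of problem (6) can be checked numerically. The sketch below uses \texttt{scipy.optimize.linprog} (our tool choice, not the solver used in the paper), with variables $\lambda_{1},\ldots,\lambda_{7},x_{1},y_{1},t$:

```python
import numpy as np
from scipy.optimize import linprog

X = [1, 1, 2, 4, 5, 6, 3]            # inputs of DMU_1..DMU_7 (Table 5.1)
Y = [1, 3, 5, 6, 7, 8, 2]            # outputs
xw, yw = -6.0, 1.0                   # (-X^w, Y^w) = v^AL = (-6, 1)
n = len(X)

# variables: lambda_1..lambda_7, x_1, y_1, t ; maximise t
c = np.zeros(n + 3); c[-1] = -1.0
A_ub = [X + [-1, 0, 0],                  # sum(lambda_j x_j) <= x_1
        [-yj for yj in Y] + [0, 1, 0],   # y_1 <= sum(lambda_j y_j)
        [0]*n + [1, 0, 1],               # -x_1 - t >= -6  <=>  x_1 + t <= 6
        [0]*n + [0, -1, 1]]              #  y_1 - t >=  1  <=>  t - y_1 <= -1
b_ub = [0, 0, -xw, -yw]
A_eq = [[1]*n + [0, 0, 0]]               # VRS convexity: sum(lambda_j) = 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1])
print(res.x[n:])                         # x*, y*, t* -- (2, 5, 4) as in the text
```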
We put $(-X_{p},Y_{p})=0.5(-X^{\ast},Y^{\ast})+0.5(v_{1}^{AL},v_{2}^{AL})=0.5(-2,5)+0.5(-6,1)$, so $(-X_{p},Y_{p})=(-4,3).$
Now we solve problem (4) with $(-X,Y)=(-X^{1},Y^{1})=(-6.5,9.5)$ and $(-X_{p},Y_{p})=(-4,3)$, obtaining $\lambda^{\ast}=0.76$ and
$(-X^{w},Y^{w})=(-5.9,7.92)$, as shown in Figure 5.3.
\begin{figure}
\begin{center}
\includegraphics{pic3}\\
\textit{Figure 5.3}: The $(-X_{p},Y_{p}),$ $(-X^{w},Y^{w}),$ and $v^{AL}$ in $(x_{1}, y_{1})$  space. 
\end{center}
 \end{figure}
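The univariate search for $\lambda^{\ast}$ can be carried out by bisection against a membership oracle for $V$. The sketch below hand-codes the oracle for this one-input, one-output example from the piecewise-linear BCC frontier through $(1,3)$, $(2,5)$, $(6,8)$ (an assumption read off Table 5.1), and parametrises the segment from the interior point $(-X_{p},Y_{p})$ toward the vertex being projected:

```python
def f(x):
    # piecewise-linear VRS frontier through (1,3), (2,5), (6,8) (Table 5.1)
    if x <= 2:
        return 2.0*x + 1.0
    if x <= 6:
        return 0.75*x + 3.5
    return 8.0

def in_V(v1, v2):
    # membership oracle for V with (-Xhat, Yhat) = (-6.5, 0.5); every input is
    # at least 1, so v1 = -x1 can be at most -1
    return -6.5 <= v1 <= -1.0 and 0.5 <= v2 <= f(-v1)

def line_search(v_out, v_p, tol=1e-9):
    # largest lambda with v_p + lambda*(v_out - v_p) still inside V (problem (4))
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        p1 = v_p[0] + mid*(v_out[0] - v_p[0])
        p2 = v_p[1] + mid*(v_out[1] - v_p[1])
        lo, hi = (mid, hi) if in_V(p1, p2) else (lo, mid)
    return lo

lam = line_search((-6.5, 9.5), (-4.0, 3.0))
print(round(lam, 2))    # 0.76, matching lambda* above
```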
Solving problem (7) with $(-X^{w},Y^{w})=(-5.9,7.92)$ gives $u^{\ast}_{1}=0.57$, $u^{\ast}_{2}=0.43$,
$v^{\ast}_{3}=2,$ so the inequality cut is\\
$0.57y_{1}-0.43x_{1}\leq 2.$\\
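These multipliers can be recovered by solving (7) directly; in exact arithmetic they are $u^{\ast}_{1}=4/7\approx0.57$, $u^{\ast}_{2}=3/7\approx0.43$, $v^{\ast}_{3}=2$, so the cut is the rounded form of $4y_{1}-3x_{1}\leq14$. A sketch with \texttt{scipy.optimize.linprog} (again our tool choice):

```python
from scipy.optimize import linprog

X = [1, 1, 2, 4, 5, 6, 3]              # Table 5.1 data
Y = [1, 3, 5, 6, 7, 8, 2]
xw, yw = 5.9, 7.92                     # the projected point (-X^w, Y^w)

# variables: u1, u2, v1, v2, v3  (v3 is v_{m+s+1}, free in sign)
c = [-yw, xw, 0, 0, 1]                 # objective of problem (7)
A_ub = [[1, 0, -1, 0, 0],              # -u1 + v1 >= 0
        [0, -1, 0, 1, 0],              #  u2 - v2 >= 0
        [-1, -1, 0, 0, 0]]             #  u1 + u2 >= 1
b_ub = [0, 0, -1]
for xj, yj in zip(X, Y):               # -v1*y_j + v2*x_j + v3 >= 0 for each DMU
    A_ub.append([0, 0, yj, -xj, -1])
    b_ub.append(0)
bounds = [(0, None)]*4 + [(None, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
u1, u2, v1, v2, v3 = res.x
print(round(u1, 2), round(u2, 2), round(v3, 2))   # cut coefficients
```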
We go to Step 3 and form\\
$S^{1}=\{(-x_{1},y_{1})\mid 0.5\leq y_{1}, -6.5\leq
-x_{1}, y_{1}-x_{1}\leq 3, 0.57y_{1}-0.43x_{1}\leq 2\},$ \\
as shown in Figures 5.4 and 5.5. The vertex set of $S^{1}$ given in Theorem (3.6) is
$\{(-6.5,0.5), (-2,5), (2.5,0.5), (-6.5,8.375)\}.$ We put $k=2$.
Since $(-6.5,8.375)$ does not belong to $V$, we put $(-X^{2},Y^{2})=(-6.5,8.375).$
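In two dimensions, the vertex update of Step 4 can be verified by brute force: intersect every pair of bounding lines of $S^{1}$ and keep the points satisfying all constraints (Falk and Hoffman's method is of course more efficient; this is only a check). Using the exact cut $3v_{1}+4v_{2}\leq14$ in the $(v_{1},v_{2})=(-x_{1},y_{1})$ coordinates:

```python
import itertools
import numpy as np

# Halfspace form A v <= b of S^1 in (v1, v2) = (-x1, y1) coordinates:
# v2 >= 0.5, v1 >= -6.5, v1 + v2 <= 3, and the exact cut 3 v1 + 4 v2 <= 14
A = np.array([[0.0, -1.0], [-1.0, 0.0], [1.0, 1.0], [3.0, 4.0]])
b = np.array([-0.5, 6.5, 3.0, 14.0])

verts = set()
for i, j in itertools.combinations(range(len(A)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-9:
        continue                          # parallel lines: no intersection
    v = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ v <= b + 1e-9):         # keep only feasible intersections
        verts.add(tuple(np.round(v, 6)))
print(sorted(verts))                      # the four vertices of S^1
```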
\begin{figure}
\begin{center}
\includegraphics{pic4}\\
\textit{Figure 5.4}: The $S^{1}$  in $(x_{1}, y_{1})$  space. 
\end{center}
 \end{figure}
\begin{figure}
\begin{center}
\includegraphics{pic5}\\
\textit{Figure 5.5}: The $S^{2}$ in $(x_{1}, y_{1})$  space. 
\end{center}
 \end{figure}
In the next step, we solve problem (4) with $(-X,Y)=(-X^{2},Y^{2})=(-6.5,8.375)$ and $(-X_{p},Y_{p})=(-4,3)$, obtaining
$\lambda^{\ast}=0.924$ and $(-X^{w},Y^{w})=(-6.326,8).$ Solving problem (7) with $(-X^{w},Y^{w})=(-6.326,8)$ gives
$u^{\ast}_{1}=1, u^{\ast}_{2}=0,v^{\ast}_{3}=8,$ so the inequality cut is\\
$y_{1}\leq8.$\\ We go to Step 3 and form\\ $S^{2}=\{(-x_{1},y_{1})\mid 0.5\leq y_{1}, -6.5\leq
-x_{1}, y_{1}-x_{1}\leq 3, 0.57y_{1}-0.43x_{1}\leq 2, y_{1}\leq 8
\}.$\\ The vertex set of $S^{2}$ given in Theorem (3.6) is\\
$\{(-6.5,0.5),(-2,5),(2.5,0.5),(-6.5,8),(-6,8)\},$ as shown in Figure 5.5. We put $k=3$.
Since $(2.5,0.5)$ does not belong to $V$, we put\\ $(-X^{3},Y^{3})=(2.5,0.5)$.
In the next step, we solve problem (4) with
$(-X,Y)=(-X^{3},Y^{3})=(2.5,0.5)$ and $(-X_{p},Y_{p})=(-4,3)$,
obtaining $\lambda^{\ast}=0.462$ and
$(-X^{w},Y^{w})=(-1,1.8462)$. Solving problem (7) with
$(-X^{w},Y^{w})=(-1,1.8462)$ gives $u^{\ast}_{1}=0,
u^{\ast}_{2}=1,v^{\ast}_{3}=-1$,
so the inequality cut is\\
$-x_{1}\leq -1$.\\
\begin{figure}
\begin{center}
\includegraphics{pic6}\\
\textit{Figure 5.6}: The $S^{3}$ in $(x_{1}, y_{1})$  space. 
\end{center}
 \end{figure}
We go to Step 3 and form $S^{3}=\{(-x_{1},y_{1})\mid
0.5\leq y_{1}, -6.5\leq -x_{1}, y_{1}-x_{1}\leq 3,
0.57y_{1}-0.43x_{1}\leq 2, y_{1}\leq 8, -x_{1}\leq -1
\},$\\ as shown in Figure 5.6. The vertex set of $S^{3}$ given in Theorem (3.6) is
$\{(-6.5,0.5),(-1,4),(-1,0.5),(-2,5),(-6,8),(-6.5,8)\}.$\\ We put
$k=4$. Since $(-1,4)$ does not belong to $V$, we put
$(-X^{4},Y^{4})=(-1,4)$.
\begin{figure}
\begin{center}
\includegraphics{pic7}\\
\textit{Figure 5.7}: The $S^{4}$ in $(x_{1}, y_{1})$  space. 
\end{center}
 \end{figure}
In the next step, we solve problem (4) with
$(-X,Y)=(-X^{4},Y^{4})=(-1,4)$ and $(-X_{p},Y_{p})=(-4,3)$,
obtaining $\lambda^{\ast}=0.857$ and
$(-X^{w},Y^{w})=(-1.429,3.857)$. Solving problem (7) with
$(-X^{w},Y^{w})=(-1.429,3.857)$ gives
$u^{\ast}_{1}=0.3333, u^{\ast}_{2}=0.6666,v^{\ast}_{3}=0.3333$,
so the inequality cut is\\
$y_{1}-2x_{1}\leq 1$.\\
We go to Step 3 and form $S^{4}=\{(-x_{1},y_{1})\mid
0.5\leq y_{1}, -6.5\leq -x_{1}, y_{1}-x_{1}\leq 3,
0.57y_{1}-0.43x_{1}\leq 2, y_{1}\leq 8, -x_{1}\leq -1,
y_{1}-2x_{1}\leq 1
\}.$\\
As shown in Figure 5.7, the vertex set of $S^{4}$ given in Theorem (3.6) is
$\{(-6.5,0.5),(-1,3),(-1,0.5),(-2,5),(-6,8),(-6.5,8)\}.$\\
Since $(-6.5,0.5)$, $(-1,0.5)$ and $(-6.5,8)$ agree with
$\hat{V}=(-\hat{X},\hat{Y})=(-6.5,0.5)$ in at least one component, they do not
belong to $V^{=}$. Therefore, the set of efficient extreme points of $V^{=}$, that is, the vertex set of $V^{=}_{E}$, is\\
$\{(-1,3),(-2,5),(-6,8)\}.$ We obtain the corresponding extreme points of
$T_{v}$ by converting
$(-x_{1},y_{1})$ to $(x_{1},y_{1})$:\\
$(1,3),(2,5),(6,8)$.\\\\
The algorithm terminates.\\
\textbf{Example 2.}
Consider the case of five units, each with one input and two outputs, whose data are given in Table 5.2.
\begin{center}
Table 5.2. The input and outputs of the DMUs.
 {\footnotesize
\begin{tabular}{|l| l l l l l |}
\hline
 & $DMU_{1}$ & $DMU_{2}$  &$DMU_{3}$& $DMU_{4}$ & $DMU_{5}$ \\
\hline
Inputs&        1& 1& 1 & 1 &  1 \\
\hline
Output1&        6& 5& 2 & 3 &  2 \\
\hline
Output2&        2& 3.5& 5 & 3.5 & 2\\
 \hline
$\theta^{*}_{BCC}$&        1& 1& 1 & 0.833 & 0.5\\
\hline
Efficiency status& extreme& extreme& extreme & - & -\\
 \hline
\end{tabular}\\
  }
\end{center}
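Because every DMU in Table 5.2 uses one unit of input, the input-oriented radial BCC score is 1 for all five units; the $\theta^{\ast}_{BCC}$ row therefore appears to come from the output-oriented BCC model, reported as $1/\varphi^{\ast}$. A sketch with \texttt{scipy.optimize.linprog} (our reading and tool choice):

```python
from scipy.optimize import linprog

X  = [1, 1, 1, 1, 1]                   # inputs (Table 5.2)
Y1 = [6, 5, 2, 3, 2]                   # first output
Y2 = [2, 3.5, 5, 3.5, 2]               # second output
n = len(X)

def theta_bcc(o):
    # variables: lambda_1..lambda_n, phi ; maximise phi (minimise -phi)
    c = [0]*n + [-1]
    A_ub = [[-y for y in Y1] + [Y1[o]],    # phi*y1_o <= sum(lambda_j y1_j)
            [-y for y in Y2] + [Y2[o]],    # phi*y2_o <= sum(lambda_j y2_j)
            X + [0]]                       # sum(lambda_j x_j) <= x_o
    b_ub = [0, 0, X[o]]
    A_eq = [[1]*n + [0]]                   # VRS convexity: sum(lambda_j) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1])
    return 1.0 / res.x[-1]                 # theta = 1 / phi*
print([round(theta_bcc(o), 3) for o in range(n)])
```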
The corresponding MOLP is
\begin{equation}\begin{array}{cc}
\hspace{-9cm}\max\hspace{0.1cm}\{-x_{1}, y_{1}, y_{2}\}\\
\hspace{-6cm}s.t\quad
 \lambda_{1}+\lambda_{2}+\lambda_{3}+\lambda_{4}+\lambda_{5} \leq x_{1},
\\
\hspace{-4.25cm} 6\lambda_{1}+5\lambda_{2}+2\lambda_{3}+3\lambda_{4}+2\lambda_{5} \geq y_{1},
\\
\hspace{-3.6cm} 2\lambda_{1}+3.5\lambda_{2}+5\lambda_{3}+3.5\lambda_{4}+2\lambda_{5} \geq y_{2},\\
\hspace{-5.25 cm}\lambda_{1}+\lambda_{2}+\lambda_{3}+\lambda_{4}+\lambda_{5}=1,\\
\hspace{-3.6 cm}\lambda_{1}\geq0, \lambda_{2}\geq0,\lambda_{3}\geq0,\lambda_{4}\geq0,\lambda_{5}\geq0 ,\\
\hspace{-6.1cm} x_{1}\geq0, y_{1}\geq0, y_{2}\geq0.\\
\end{array}
\end{equation}
In the initialization step of the algorithm we have
$v^{AL}_{1}=-\max\{x_{1}^{j}\mid j=1,\ldots,5\}=-1,$ $v^{AL}_{2}=\min\{y_{1}^{j}\mid j=1,\ldots,5\}=2,$
$v^{AL}_{3}=\min\{y_{2}^{j}\mid j=1,\ldots,5\}=2$, so
$v^{AL}=(v_{1},v_{2},v_{3})=(-1,2,2).$\\ We select
$(-\hat{X},\hat{Y})=(-1.5,1.5,1.5).$ By the definition of $V$, we have\\ $V=\{(v_{1},v_{2},v_{3})\mid -1.5\leq v_{1}\leq
 -x_{1}, 1.5\leq v_{2}\leq y_{1},1.5 \leq v_{3}\leq y_{2},$ for some $(x_{1},y_{1},y_{2})$ belonging to the feasible region of
problem (10)\}.\\ Put $k=1$. Solving problem (3) gives $\beta=7.5$.\\ Therefore
$S^{0}=\{(-x_{1},y_{1},y_{2})\mid 1.5\leq y_{1},1.5\leq y_{2}, -1.5\leq -x_{1}$,
$y_{1}+y_{2}-x_{1}\leq 7.5\}.$ The vertex set of $S^{0}$ given in
Theorem (3.6) is\\ $\{(-1.5,1.5,1.5), (4.5,1.5,1.5), (-1.5,7.5,1.5), (-1.5,1.5,7.5)\}$, computed as
follows:\\
$q^{0}=(-1.5,1.5,1.5)$; with $\gamma=\beta-(\hat{y}_{1}+\hat{y}_{2}-\hat{x}_{1})=7.5-(1.5+1.5-1.5)=6$, we get $q^{1}_{1}=-1.5+6=4.5$, $q^{1}_{2}=1.5$, $q^{1}_{3}=1.5$,
$q^{2}_{1}=-1.5$, $q^{2}_{2}=1.5+6=7.5$, $q^{2}_{3}=1.5$, $q^{3}_{1}=-1.5$, $q^{3}_{2}=1.5$, $q^{3}_{3}=1.5+6=7.5$;
therefore $q^{1}=(4.5,1.5,1.5),$ $q^{2}=(-1.5,7.5,1.5)$ and $q^{3}=(-1.5,1.5,7.5)$.\\ In Step 1 of the
algorithm, since $(-1.5,7.5,1.5)$ does not belong to $V$, we put
$(-X^{1},Y^{1})=(-1.5,7.5,1.5)$.\\ We go to Step 2. Solving
problem (6) with $(-X^{w},Y^{w})=v^{AL}=(v_{1},v_{2},v_{3})=(-1,2,2)$ gives $(X^{\ast},Y^{\ast},t^{\ast})=(1,6,2,0)$.
We put $(-X_{p},Y_{p})=0.5(-X^{\ast},Y^{\ast})+0.5(v_{1}^{AL},v_{2}^{AL},v_{3}^{AL})=0.5(-1,6,2)+0.5(-1,2,2)$, so $(-X_{p},Y_{p})=(-1,4,2).$
Now we solve problem (4) with $(-X,Y)=(-X^{1},Y^{1})=(-1.5,7.5,1.5)$ and $(-X_{p},Y_{p})=(-1,4,2)$, obtaining $\lambda^{\ast}=0.571$ and
$(-X^{w},Y^{w})=(-1.286,6,1.715).$\\
Solving problem (7) with $(-X^{w},Y^{w})=(-1.286,6,1.715)$ gives $u^{\ast}_{1}=1, u^{\ast}_{2}=0, u^{\ast}_{3}=0,
v^{\ast}_{4}=6,$ so the inequality cut is\\
$y_{1}\leq 6.$\\ We go to Step 3 and form\\
$S^{1}=\{(-x_{1},y_{1},y_{2})\mid 1.5\leq y_{1}, 1.5\leq y_{2},-1.5 \leq -x_{1}, y_{1}+y_{2}-x_{1}\leq 7.5, y_{1}\leq 6 \}.$ \\
The vertex set of $S^{1}$ given in Theorem (3.6) is\\
$\{(-1.5,1.5,1.5), (0,6,1.5), (-1.5,1.5,7.5)\}.$ We put $k=2$.
Since $(-1.5,1.5,7.5)$ does not belong to $V$, we put $(-X^{2},Y^{2})=(-1.5,1.5,7.5).$
In the next step, we solve problem (4) with $(-X,Y)=(-X^{2},Y^{2})=(-1.5,1.5,7.5)$ and $(-X_{p},Y_{p})=(-1,4,2)$, obtaining
$\lambda^{\ast}=0.471$ and $(-X^{w},Y^{w})=(-1.236,2.823,4.591).$ Solving problem (7) with $(-X^{w},Y^{w})=(-1.236,2.823,4.591)$
gives $u^{\ast}_{1}=\frac{1}{3}, u^{\ast}_{2}=\frac{2}{3}, u^{\ast}_{3}=0, v^{\ast}_{4}=4,$ so the inequality cut is\\
$y_{1}+2y_{2}\leq12.$\\ We go to Step 3 and form
$S^{2}=\{(-x_{1},y_{1},y_{2})\mid 1.5\leq y_{1}, 1.5\leq y_{2}, -1.5 \leq -x_{1}, y_{1}+y_{2}-x_{1}\leq 7.5, y_{1}\leq 6, y_{1}+2y_{2}\leq12 \}.$ \\
The vertex set of $S^{2}$ given in Theorem (3.6) is\\
$\{(-1.5,1.5,1.5), (-1.5,6,3), (0,1.5,5.25), (0,3.5,4.5)\}.$ We put $k=3$.
Since $(-1.5,6,3)$ does not belong to $V$, we put\\ $(-X^{3},Y^{3})=(-1.5,6,3)$.
In the next step, we solve problem (4) with
$(-X,Y)=(-X^{3},Y^{3})=(-1.5,6,3)$ and $(-X_{p},Y_{p})=(-1,4,2)$,
obtaining $\lambda^{\ast}=0.75$ and
$(-X^{w},Y^{w})=(-1.375,5.5,2.75)$. Solving problem (7) with
$(-X^{w},Y^{w})=(-1.375,5.5,2.75)$ gives $u^{\ast}_{1}=0.6,
u^{\ast}_{2}=0.4, u^{\ast}_{3}=0, v^{\ast}_{4}=4.4$,
so the inequality cut is\\
$0.6 y_{1}+0.4 y_{2}\leq 4.4$.\\
We go to Step 3 and form \\
$S^{3}=\{(-x_{1},y_{1},y_{2})\mid 1.5\leq y_{1},1.5\leq y_{2}, -1.5 \leq -x_{1}, y_{1}+y_{2}-x_{1}\leq 7.5, y_{1}\leq 6, y_{1}+2y_{2}\leq12, 0.6 y_{1}+0.4 y_{2}\leq 4.4 \}.$
The vertex set of $S^{3}$ given in Theorem (3.6) is\\
$\{(-1,5,3.5), (-0.5,6,2), (0,1.5,5.25), (0,3,4.5), (-1.5,1.5,1.5)\}.$\\ We put
$k=4$. Since $(0,1.5,5.25)$ does not belong to $V$, we put
$(-X^{4},Y^{4})=(0,1.5,5.25)$.
In the next step, we solve problem (4) with
$(-X,Y)=(-X^{4},Y^{4})=(0,1.5,5.25)$ and $(-X_{p},Y_{p})=(-1,4,2)$,
obtaining $\lambda^{\ast}=0$ and
$(-X^{w},Y^{w})=(-1,4,2)$. Solving problem (7) with
$(-X^{w},Y^{w})=(-1,4,2)$ gives
$u^{\ast}_{1}=0, u^{\ast}_{2}=0, u^{\ast}_{3}=1,v^{\ast}_{4}=-1$,
so the inequality cut is\\
$-x_{1}\leq -1$.\\
We go to Step 3 and form
$S^{4}=\{(-x_{1},y_{1},y_{2})\mid 1.5\leq y_{1},1.5\leq y_{2}, -1.5 \leq -x_{1}, y_{1}+y_{2}-x_{1}\leq 7.5, y_{1}\leq 6, y_{1}+2y_{2}\leq12, 0.6 y_{1}+0.4 y_{2}\leq 4.4, -x_{1}\leq -1 \}.$
The vertex set of $S^{4}$ given in Theorem (3.6) is\\
$\{(-1,5,3.5), (-1,6,2), (-1,1.5,5.25), (-1.5,1.5,1.5)\}.$\\
We put
$k=5$. Since $(-1,1.5,5.25)$ does not belong to $V$, we put
$(-X^{5},Y^{5})=(-1,1.5,5.25)$.
In the next step, we solve problem (4) with
$(-X,Y)=(-X^{5},Y^{5})=(-1,1.5,5.25)$ and $(-X_{p},Y_{p})=(-1,4,2)$,
obtaining $\lambda^{\ast}=0.923$ and
$(-X^{w},Y^{w})=(-1,1.693,5)$. Solving problem (7) with
$(-X^{w},Y^{w})=(-1,1.693,5)$ gives
$u^{\ast}_{1}=0, u^{\ast}_{2}=1, u^{\ast}_{3}=0,v^{\ast}_{4}=5$,
so the inequality cut is\\
$y_{2}\leq 5$\\
(note that the cut must hold with equality at the boundary point $(-1,1.693,5)$).
We go to Step 3 and form \\
$S^{5}=\{(-x_{1},y_{1},y_{2})\mid 1.5\leq y_{1},1.5\leq y_{2}, -1.5 \leq -x_{1}, y_{1}+y_{2}-x_{1}\leq 7.5, y_{1}\leq 6, y_{1}+2y_{2}\leq12, 0.6 y_{1}+0.4 y_{2}\leq 4.4, -x_{1}\leq -1, y_{2}\leq 5\}.$
The vertex set of $S^{5}$ given in Theorem (3.6) is\\
$\{(-1,5,3.5), (-1,6,2), (-1,2,5), (-1.5,1.5,1.5)\}.$\\
Since $(-1.5,1.5,1.5)$ has the same components as $\hat{V}=(-\hat{X},\hat{Y})=(-1.5,1.5,1.5),$ it does not belong to
$V^{=}$. Therefore, the set of efficient extreme points of $V^{=}$ is\\
$\{(-1,5,3.5), (-1,6,2), (-1,2,5)\}.$
We obtain the corresponding extreme points of
$T_{v}$ by converting
$(-x_{1},y_{1},y_{2})$ to $(x_{1},y_{1},y_{2})$:\\
$\{(1,5,3.5), (1,6,2), (1,2,5)\}.$\\
The algorithm terminates.
\section{Computational experiment and statistical analysis}
To conduct a preliminary computational experiment for our proposed approach, the preliminary VS-Fortran code of the outer linear approximation algorithm can be used; see Benson [8]. The Horst--Thoai--De Vries method [29] is used to execute the fourth step of the algorithm; bisection is used for the univariate search in the second step; and the linear programming problems are solved with the simplex algorithm, as implemented by the IMSL subroutines [51]. Benson [8] reports the number of iterations, the number of efficient extreme points, and the CPU times for thirty multiple objective linear programming problems of different dimensions. In the present research, we use the GAMS software to solve our DEA models and the LINDO software to solve the linear programming problems. Note that in order to determine the efficient extreme points using traditional DEA models, we need to solve at least $n$ models, which is computationally demanding; it would also be quite difficult to obtain information about the efficient surfaces. However, in this article, we arrive at all the efficient extreme points by solving only one MOLP model, and the model does not depend on the unit under evaluation. The model has $m+s+n$ variables in the decision set and $m+s$ variables in the outcome set. Since the outcome set is smaller, we can convert the efficient extreme points of this set to efficient extreme points of the decision set through a simple calculation; thus, using the presented algorithm, we can determine the efficient extreme points of the production possibility set and its efficient surfaces by solving one model and a few iterations of the algorithm. In Example 1, we evaluate seven decision-making units (DMUs) under VRS technology, each having one input and one output.
In this example, there are 9 variables in the decision set and 2 variables in the outcome set; we obtained all efficient extreme points after four iterations of the algorithm. In the second step of the algorithm, a linear programming problem is solved to find the optimal value of $\lambda^{\ast}$, and in the third step a linear programming problem is solved in each iteration to form the cutting planes. As can be observed, this method involves fewer calculations for finding the extreme points than traditional DEA models, which require solving $n$ models for the same purpose. For Example 2, the interpretation is similar. The statistical analysis of the examples is given in Table 6.1.

\begin{center}
Table 6.1. The statistical analysis of the examples.
 {\footnotesize
\begin{tabular}{|l|l|l|}
\hline
 & Example 1 & Example 2 \\
\hline
Number of inputs and outputs & 2 & 3\\
\hline
Number of DMUs & 7 & 5\\
\hline
Number of variables in the feasible region & 9 & 8\\
\hline
Number of efficient extreme points in the feasible region & 84 & 70\\
\hline
Number of efficient extreme points in the outcome space & 3 & 3\\
\hline
Number of algorithm iterations & 4 & 5\\
\hline
Number of solved LPs & 9 & 12\\
\hline
\end{tabular}\\
  }
\end{center}
The presented algorithm has several useful computational advantages over previous approaches for determining the efficient extreme points of the production possibility set:\\
1.	Since the algorithm produces all efficient extreme points of the outcome set rather than the decision set, and the outcome set is smaller, fewer calculations are needed to find these points.\\
2.	The proposed algorithm is exact and finite; by solving one MOLP model and a number of iterations, we arrive at all efficient extreme points and efficient surfaces.\\
3.	The algorithm does not suffer from the issues of previous algorithms for producing the efficient extreme points, such as infeasibility and degeneracy.\\
4.	The presented approach makes a new connection between DEA and MOLP problems; in particular, we can identify all efficient surfaces by solving one MOLP problem and a number of iterations of the algorithm.\\
5.	The presented approach provides a new method for obtaining all efficient extreme points.\\\\
\section{Conclusion} 
The purpose of this paper was to develop a new method for generating the efficient extreme points
of the production possibility set under VRS. We proposed an MOLP problem whose feasible region coincides with the production possibility set and applied the outer approximation algorithm to generate the efficient extreme points of this MOLP problem. Since the number of efficient extreme points in the outcome set is, on average, smaller than in the decision set, the proposed method is well suited to this task. The efficient frontier is obtained by solving a single MOLP problem, and the outer approximation algorithm can be implemented relatively easily using search methods, linear programming techniques, and any of several methods from the global optimization literature for generating the new vertices that arise as linear inequality cuts are added to the containing polyhedron built by the algorithm. The algorithm requires few calculations to produce all the extreme points. The results of this paper can be applied to various DEA-related problems and suggest a path for extending the analysis of production efficiency further.
For future research, we suggest extending the presented approach to determine the units' returns-to-scale class; moreover, the proposed MOLP problem could be solved by other methods for obtaining extreme points, such as vector maximization, and the results compared.
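The remark above about generating new vertices as cuts are added can be illustrated in the plane. This is a minimal 2D sketch of the idea (the function name and the 2D restriction are ours): when the cut $a^{T}y \le \beta$ is added, kept vertices survive unchanged and only boundary edges crossing the cut must be examined, each contributing one new vertex.

```python
import numpy as np

def add_cut_vertices(verts, a, beta, tol=1e-9):
    """Vertex set of a convex 2D polytope after adding the cut a.y <= beta.
    Vertices satisfying the cut are kept; each polygon edge that crosses the
    cut hyperplane contributes the intersection point as a new vertex."""
    verts = np.asarray(verts, float)
    c = verts.mean(axis=0)                        # order vertices around the centroid
    poly = verts[np.argsort(np.arctan2(verts[:, 1] - c[1], verts[:, 0] - c[0]))]
    a = np.asarray(a, float)
    out = []
    for i in range(len(poly)):
        v, w = poly[i], poly[(i + 1) % len(poly)]
        fv, fw = a @ v - beta, a @ w - beta
        if fv <= tol:                             # v still satisfies the cut: keep it
            out.append(v)
        if fv * fw < -tol:                        # edge v-w crosses the cut
            out.append(v + (fv / (fv - fw)) * (w - v))
    return np.array(out)
```

Because only the edges near the cut are touched, the vertex set can be updated incrementally rather than recomputed from scratch at every iteration, which is the source of the algorithm's modest computational cost.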
\begin{center}
\begin{thebibliography}{99} % Enter references in alphabetical order and according to the following format.
\bibitem{RefJ1} R. Allen, A.D. Athanassopoulos, R.G. Dyson, E. Thanassoulis, Weights restrictions and value judgements in data envelopment analysis: evolution,
development and future directions, Ann. Oper. Res., 73 (1997), 13-34.

\bibitem{RefJ2} R. Banker, H. Chang, W.W. Cooper, Equivalence and implementation of alternative methods for determining returns to scale in data envelopment analysis, Eur. J. Oper. Res., 89 (1996), 473-481.

\bibitem{RefJ2} R. Banker, A. Charnes, W.W. Cooper, Some models for estimating technical and scale inefficiencies in data envelopment analysis, Manage. Sci., 30(9) (1984), 1078-1092. 

\bibitem{RefJ2} H.P. Benson, A Geometrical Analysis of the Efficient Outcome Set in Multiple-Objective Convex Programs with Linear Criterion Functions, J. Global Optim., 6 (1995), 231-251. 

\bibitem{RefJ2} H.P. Benson, Vector Maximization with Two Objective Functions, J. Optim. Theory Appl., 28 (1979), 253-257. 

\bibitem{RefJ2}H.P. Benson, Generating the Efficient Outcome Set in Multiple Objective Linear Programs: The Bicriteria Case, Acta Math. Vietnam. 22 (1997), 29-51. 

\bibitem{RefJ2} H.P. Benson, Admissible Points of a Convex Polyhedron, J Optim Theory Appl., 38 (1982), 341-361. 
\bibitem{RefJ2} H.P. Benson, An Outer Approximation Algorithm for Generating All Efficient Extreme Points in the Outcome Set of a Multiple Objective Linear Programming Problem, J. Global Optim., 13 (1998), 1-24. 

\bibitem{RefJ2} H.P. Benson, S. Erjiang, A weight set decomposition algorithm for finding all efficient extreme points in the outcome set of a multiple
objective linear program, Eur. J. Oper. Res., 139 (2002), 26-41. 

\bibitem{RefJ2} H.P. Benson, S. Sayin, Towards Finding Global Representations of the Efficient Set in Multiple Objective Mathematical Programming, Nav. Res. Log., 44 (1997), 47-67. 

\bibitem{RefJ2} G.R. Bitran, Theory and Algorithms for Linear Multiple Objective Programs with Zero one Variables, Math. Program., 17 (1979), 362-390. 

\bibitem{RefJ2}A. Charnes, W.W. Cooper, E. Rhodes, Measuring the efficiency of decision making units, Eur. J. Oper. Res., 2(6) (1978), 429-444. 

\bibitem{RefJ2}J.L. Cohon, Multiobjective Programming and Planning, Academic Press, New York (1978). 

\bibitem{RefJ2} J.P. Dauer, Analysis of the Objective Space in Multiple Objective Linear Programming, J. Math. Anal. Appl., 126 (1987), 579-593. 

\bibitem{RefJ2} J.P. Dauer, On Degeneracy and Collapsing in the Construction of the Set of Objective Values in a Multiple Objective Linear Program, Ann. Oper. Res., 47 (1993), 279-292. 

\bibitem{RefJ2} J.P. Dauer, R.J. Gallagher, A Combined Constraint-Space, Objective-Space Approach for Determining High-Dimensional Maximal Efficient Faces of Multiple Objective Linear Programs, Eur. J. Oper. Res., 88 (1996), 368-381. 

\bibitem{RefJ2}J.P. Dauer, Y.H. Liu, Solving Multiple Objective Linear Programs in Objective Space, Eur. J. Oper. Res., 46(1990), 350-357. 

\bibitem{RefJ2}J.P. Dauer, O.A. Saleh, Constructing the Set of Efficient Objective Values in Multiple Objective Linear Programs, Eur. J. Oper. Res., 46 (1990), 358-357. 

\bibitem{RefJ2} K. De Witte, R. Marques, Influential observations in frontier models, a robust non-oriented approach to the water sector, Ann. Oper. Res., 181 (2010), 377-392. 

\bibitem{RefJ2} J. Doyle, R. Green, Data envelopment analysis and multiple criteria decision making, Omega, 21 (6) (1993), 713-715. 

\bibitem{RefJ2} A. Ebrahimnejad, M. Tavana, An interactive MOLP method for identifying target units in output-oriented DEA models: The NATO enlargement problem, Measurement, 52 (2014), 124-134. 

\bibitem{RefJ2} G.W. Evans, An Overview of Techniques for Solving Multiobjective Mathematical Programs, Manage. Sci., 30 (1984), 1268-1282. 

\bibitem{RefJ2} J.E. Falk, K.L. Hoffman, A Successive Underestimating Method for Concave Minimization Problems, Math. Oper. Res., 1 (1976), 251-259. 

\bibitem{RefJ2} R.J. Gallagher, O.A. Saleh, A Representation of an Efficiency Equivalent Polyhedron for the Objective Set of a Multiple Objective Linear Program, Eur. J. Oper. Res., 80 (1995), 204-212. 

\bibitem{RefJ2}A. Goicoechea, D.R. Hansen, L. Duckstein, Multiobjective Decision Analysis with Engineering and Business Applications, John Wiley and Sons, New York (1982). 

\bibitem{RefJ2} B. Golany, An interactive MOLP procedure for the extension of DEA to effectiveness analysis, J. Oper. Res. Soc., 39 (8) (1988), 725-734. 

\bibitem{RefJ2} F. Hosseinzadeh Lotfi, G.R. Jahanshahloo, M.R. Mozaffari, J. Gerami, Finding DEA-efficient hyperplanes using MOLP efficient faces, J. Comput. Appl. Math., 235 (2011), 1227-1231. 

\bibitem{RefJ2} F. Hosseinzadeh Lotfi, G.R. Jahanshahloo, M. Soltanifar, A. Ebrahimnejad, S.M. Manosourzadeh, Relationship between MOLP and DEA based on output-orientated CCR dual model, Expert Syst. Appl., 37 (6) (2010), 4331-4336. 

\bibitem{RefJ2} F. Hosseinzadeh Lotfi, A. A. Noora, G. R. Jahanshahloo, J. Jablonsky, M. R. Mozaffari, J. Gerami, An MOLP based procedure for finding efficient units in DEA models, Cent Eur J Oper Res, 17 (2009), 1-11. 

\bibitem{RefJ2} R. Horst, N.V. Thoai, J. Devries, On Finding the New Vertices and Redundant Constraints in Cutting Plane Algorithms for Global Optimization, Oper. Res. Lett, 7 (1988), 85-90. 

\bibitem{RefJ2} H. Isermann, The Enumeration of the Set of All Efficient Solutions for a Linear Multiple Objective Program, Oper. Res. Q., 28 (1977), 711-725. 

\bibitem{RefJ2}G.R. Jahanshahloo, F. Hosseinzadeh Lotfi, H. Zhiani Rezai, F. Rezai Balf, Finding strong defining hyperplanes of Production Possibility Set, Eur. J. Oper. Res., 177 (2007), 42-54. 

\bibitem{RefJ2} G.R. Jahanshahloo, F. Hosseinzadeh Lotfi, M. Zohrehbandian, Finding the piecewise linear frontier production function in Data Envelopment Analysis,
Appl. Math. Comput., 163 (1) (2005), 483-488. 

\bibitem{RefJ2} R. Joro, P. Korhonen, J. Wallenius, Structural comparison of data envelopment analysis and multiple objective linear programming. Manage. Sci., 44 (7) (1998), 962-970. 

\bibitem{RefJ2} P. Korhonen, Searching the efficient frontier in Data Envelopment Analysis, IIASA Interim Report IR-97-79 (1997). 

\bibitem{RefJ2} M.M. Kostreva, M.M. Wiecek, Time Dependency in Multiple Objective Dynamic Programming, J. Math. Anal. Appl., 173 (1993), 289-307. 

\bibitem{RefJ2}H.W. Kuhn, A.W. Tucker, Nonlinear Programming, in J. Neyman (ed.), Proceedings of the 2nd Berkeley Symposium on Mathematical Statistics and Probability, University of California Press, Berkeley, California, (1950), pp. 481-492. 

\bibitem{RefJ2}D.T. Luc, Theory of Vector Optimization, Springer Verlag, Berlin/New York (1989). 

\bibitem{RefJ2} N. Malekmohammadi, F. Hosseinzadeh Lotfi, A.B. Jaafar, Target setting in data envelopment analysis using MOLP, Appl. Math. Model., 35 (2011), 328-338. 

\bibitem{RefJ2} R.T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, New Jersey (1970). 

\bibitem{RefJ2}Y. Sawaragi, H. Nakayama, T. Tanino, Theory of Multiobjective Optimization,
Academic Press, Orlando, Florida (1985). 

\bibitem{RefJ2}S. Sayin, An algorithm based on facial decomposition for finding the efficient set in multiple objective linear programming, Oper. Res. Lett, 19 (1996), 87-94. 

\bibitem{RefJ2}R.E. Steuer, Multiple Criteria Optimization: Theory, Computation, and Application, John Wiley and Sons, New York (1986). 

\bibitem{RefJ2} R.E. Steuer, ADBASE Multiple Objective Linear Programming Package, University of Georgia, Athens, Georgia (1989). 

\bibitem{RefJ2} B. Villarreal, M.H. Karwan, Multicriteria Integer Programming: A (Hybrid) Dynamic Programming Recursive Approach, Math. Program., 21 (1981), 204-223.

\bibitem{RefJ2}B.Y.H. Wong, M. Luque, J.B. Yang, Using interactive multiobjective
methods to solve DEA problems with value judgements, Comput. Oper. Res., 36 (2009), 623-636. 

\bibitem{RefJ2} J.B. Yang, B.Y.H. Wong, D.L. Xu, T.J. Stewart, Integrating DEA-oriented performance assessment and target setting using interactive MOLP methods, Eur. J. Oper. Res., 195 (2009), 205-222.

\bibitem{RefJ2}P.L. Yu, Multiple Criteria Decision Making, Plenum, New York (1985). 

\bibitem{RefJ2} P.L. Yu, M. Zeleny, The set of all nondominated solutions in linear cases and multicriteria simplex method, J. Math. Anal. Appl., 49 (1975), 430-468. 

\bibitem{RefJ2} M. Zeleny, Linear Multiobjective Programming, Springer Verlag, Berlin/New York (1974).  

\bibitem{RefJ2} M. Zeleny, Multiple Criteria Decision Making, McGraw-Hill, New York (1982). 

\bibitem{RefJ2}International Mathematical and Statistical Libraries, Inc., The IMSL Library Reference Manual, IMSL, Houston, Texas (1991). 
\end{thebibliography}
\end{center}

{\small

\noindent{\bf J. Gerami}

\noindent Assistant Professor of Mathematics

\noindent Department of Mathematics, Shiraz Branch, Islamic Azad University, Shiraz, Iran

\noindent E-mail: geramijavad@gmail.com}\\

\end{document}