890 results for Fuzzy Set Theory
Abstract:
Does entrepreneurial optimism affect business performance? Using a unique data set based on a repeated survey design, we investigate this relationship empirically. Our measures of ‘optimism’ and ‘realism’ are derived from comparing the turnover growth expectations of 133 owner-managers with the actual outcomes one year later. Our results indicate that entrepreneurial optimists perform significantly better in terms of profits than pessimists. Moreover, it is the optimist-realist combination that performs best. We interpret our results using regulatory focus theory.
Abstract:
This paper proposes a more profound discussion of the philosophical underpinnings of sustainability than currently exists in the MOT literature and considers their influence on the construction of theories of green operations and technology management. Ultimately, it also debates the link between theory and practice in this subject area. The paper is derived from insights gained in three research projects completed during the past twelve years, primarily involving the first author. From 2000 to 2002, an investigation using scenario analysis, aimed at reducing atmospheric pollution in urban centres by substituting natural gas for petrol and diesel, provided the first set of insights about public policy, environmental impacts, investment analysis, and technological feasibility. The second research project, from 2003 to 2005, using a survey questionnaire, was aimed at improving environmental performance in livestock farming and explored the issues of green supply chain scope, environmental strategy, and priorities. Finally, the third project, from 2006 to 2011, investigated environmental decisions in manufacturing organisations through case study research and examined the underlying sustainability drivers and decision-making processes. By integrating the findings and conclusions from these projects, the link between the philosophy, theory, and practice of green operations and technology management is debated. The findings from all these studies show that the philosophical debate seems to have had little influence on theory building so far. For instance, although ‘sustainable development’ emphasises ‘meeting the needs of current and future generations’, no theory links essentiality and environmental impacts. Likewise, there is a weak link between theory and the practical issues of green operations and technology management. For example, the well-known ‘life-cycle analysis’ has little application in many cases because the life cycle of products today is dispersed across global production and consumption systems, with different stakeholders at each life-cycle stage. The results from this paper are relevant to public policy making and to corporate environmental strategy and decision making. Most past and current studies on the subject of green operations and sustainability management deal with only a single sustainability dimension at any one time. The value and originality of this paper therefore lie in its integration of the philosophy, theory, and practice of green technology and operations management.
Abstract:
Data envelopment analysis (DEA), as introduced by Charnes, Cooper, and Rhodes (1978), is a linear programming technique that has been widely used to evaluate the relative efficiency of a set of homogeneous decision making units (DMUs). In many real applications, the input-output variables cannot be precisely measured. This is particularly important when assessing the efficiency of DMUs with DEA, since the efficiency scores of inefficient DMUs are very sensitive to possible data errors. Hence, several approaches have been proposed to deal with imprecise data. Perhaps the most popular fuzzy DEA model is based on the α-cut. One drawback of the α-cut approach is that it cannot include all information about uncertainty. This paper aims to introduce an alternative linear programming model that can include some uncertainty information from the intervals within the α-cut approach. We introduce the concept of the "local α-level" to develop a multi-objective linear programming model that measures the efficiency of DMUs under uncertainty. An example is given to illustrate the use of this method.
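The α-cut idea above can be made concrete with a small sketch. The following Python fragment is an illustration under stated assumptions, not the model proposed in the paper: it computes the optimistic (upper-bound) CCR efficiency of one DMU once the fuzzy inputs and outputs have been reduced to α-cut intervals. The array names `x_low`, `x_up`, `y_low`, `y_up` and the use of `scipy.optimize.linprog` are assumptions for the example.

```python
# A minimal sketch (not the paper's exact formulation) of the upper-bound CCR
# efficiency of one DMU at a given alpha-level, assuming the fuzzy inputs and
# outputs have already been reduced to alpha-cut intervals [low, up].
import numpy as np
from scipy.optimize import linprog

def ccr_upper_efficiency(x_low, x_up, y_low, y_up, o):
    """Optimistic efficiency of DMU `o` from interval data (n DMUs, m inputs, s outputs)."""
    n, m = x_low.shape
    s = y_low.shape[1]
    # Decision variables: output weights u (s entries) followed by input weights v (m entries).
    c = np.zeros(s + m)
    c[:s] = -y_up[o]                      # maximise u . y_o^U  ->  minimise the negative
    A_eq = np.zeros((1, s + m))
    A_eq[0, s:] = x_low[o]                # normalisation: v . x_o^L = 1
    b_eq = [1.0]
    A_ub, b_ub = [], []
    for j in range(n):                    # u . y_j - v . x_j <= 0 for every DMU j
        yj = y_up[j] if j == o else y_low[j]
        xj = x_low[j] if j == o else x_up[j]
        A_ub.append(np.concatenate([yj, -xj]))
        b_ub.append(0.0)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return -res.fun                       # efficiency score in (0, 1]
```

The pessimistic (lower-bound) score would be obtained by swapping the roles of the interval endpoints.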
Abstract:
Performance evaluation in conventional data envelopment analysis (DEA) requires crisp numerical values. However, the observed values of the input and output data in real-world problems are often imprecise or vague. These imprecise and vague data can be represented by linguistic terms characterised by fuzzy numbers in DEA to reflect the decision-makers' intuition and subjective judgements. This paper extends the conventional DEA models to a fuzzy framework by proposing a new fuzzy additive DEA model for evaluating the efficiency of a set of decision-making units (DMUs) with fuzzy inputs and outputs. The contribution of this paper is threefold: (1) we consider ambiguous, uncertain and imprecise input and output data in DEA, (2) we propose a new fuzzy additive DEA model derived from the α-level approach and (3) we demonstrate the practical aspects of our model with two numerical examples and show its comparability with five different fuzzy DEA methods in the literature. Copyright © 2011 Inderscience Enterprises Ltd.
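For orientation, the sketch below shows the crisp additive DEA model that such a fuzzy extension starts from. It is a minimal illustration, not the authors' fuzzy formulation; in the fuzzy case the data matrices would be replaced by their α-level bounds as in the previous sketch. The helper name and the use of `scipy.optimize.linprog` are assumptions.

```python
# A minimal sketch of the crisp additive DEA model; fuzzy inputs and outputs
# would enter via their alpha-level bounds. Variable names are illustrative.
import numpy as np
from scipy.optimize import linprog

def additive_inefficiency(X, Y, o):
    """Maximise the total input and output slacks of DMU `o` (n DMUs, m inputs, s outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    # Variables: lambda (n entries), input slacks (m entries), output slacks (s entries).
    c = np.concatenate([np.zeros(n), -np.ones(m + s)])   # maximise the sum of slacks
    A_eq = np.zeros((m + s + 1, n + m + s))
    A_eq[:m, :n] = X.T                                   # sum_j lambda_j x_ij + s_i^- = x_io
    A_eq[:m, n:n + m] = np.eye(m)
    A_eq[m:m + s, :n] = Y.T                              # sum_j lambda_j y_rj - s_r^+ = y_ro
    A_eq[m:m + s, n + m:] = -np.eye(s)
    A_eq[-1, :n] = 1.0                                   # convexity: sum_j lambda_j = 1
    b_eq = np.concatenate([X[o], Y[o], [1.0]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return -res.fun                                      # zero means DMU o is additive-efficient
```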
Abstract:
Data envelopment analysis (DEA) is a methodology for measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. Crisp input and output data are fundamentally indispensable in conventional DEA. However, the observed values of the input and output data in real-world problems are sometimes imprecise or vague. Many researchers have proposed various fuzzy methods for dealing with the imprecise and ambiguous data in DEA. In this study, we provide a taxonomy and review of the fuzzy DEA methods. We present a classification scheme with four primary categories, namely, the tolerance approach, the α-level based approach, the fuzzy ranking approach and the possibility approach. We discuss each classification scheme and group the fuzzy DEA papers published in the literature over the past 20 years. To the best of our knowledge, this paper appears to be the only review and complete source of references on fuzzy DEA. © 2011 Elsevier B.V. All rights reserved.
Abstract:
Selecting the best alternative in group decision making is the subject of many recent studies. The most popular method proposed for ranking the alternatives is based on the distance of each alternative from the ideal alternative. The ideal alternative may never exist; hence the ranking results are biased towards the ideal point. The main aim of this study is to calculate a fuzzy ideal point that is more realistic than the crisp ideal point. In addition, Data Envelopment Analysis (DEA) has recently been used to find the optimum weights for ranking the alternatives. This paper proposes a four-stage approach based on DEA in a fuzzy environment to aggregate preference rankings. An application to a preferential voting system shows how the new model can be applied to rank a set of alternatives. Two further examples indicate the advantages of the proposed method compared with some other suggested methods.
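As a point of reference, the sketch below illustrates the plain distance-to-ideal ranking that the paper takes as its starting point. The four-stage fuzzy DEA procedure itself is not reproduced; the score matrix, weights, and Euclidean metric are assumptions for the example.

```python
# A minimal sketch of ranking alternatives by their weighted distance to a
# crisp ideal point; all data below are illustrative.
import numpy as np

def rank_by_distance_to_ideal(scores, weights):
    """Rank alternatives (rows) by weighted distance to the ideal point (best value per criterion)."""
    ideal = scores.max(axis=0)                          # crisp ideal point, one entry per criterion
    d = np.sqrt(((weights * (scores - ideal)) ** 2).sum(axis=1))
    return np.argsort(d)                                # indices of alternatives, best first

scores = np.array([[0.7, 0.9, 0.4],
                   [0.8, 0.6, 0.9],
                   [0.5, 0.8, 0.7]])
weights = np.array([0.5, 0.3, 0.2])
print(rank_by_distance_to_ideal(scores, weights))
```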
Abstract:
Data Envelopment Analysis (DEA) is recognized as a modern approach to assessing the performance of a set of homogeneous Decision Making Units (DMUs) that use similar inputs to produce similar outputs. While DEA is commonly used with precise data, several approaches have recently been introduced for evaluating DMUs with uncertain data. In the existing approaches, much of the information about uncertainty is lost. For example, uncertainty information is lost in the defuzzification, α-level and fuzzy ranking approaches, while in the tolerance approach the inequality or equality signs are fuzzified but the fuzzy coefficients (inputs and outputs) are not treated directly. The purpose of this paper is to develop a new model to evaluate DMUs under uncertainty using fuzzy DEA and to incorporate the α-level into the model under a fuzzy environment. An example is given to illustrate this method in detail.
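A common building block of such models is the reduction of each fuzzy observation to an interval at a chosen α-level. The sketch below shows this step for a triangular fuzzy number; the (l, m, u) representation is an assumption for the example, not necessarily the paper's.

```python
# A minimal sketch: the alpha-cut of a triangular fuzzy number (l, m, u) is the
# closed interval that would feed a fuzzy DEA model such as the ones sketched above.
def alpha_cut(l, m, u, alpha):
    """Return the interval of a triangular fuzzy number at membership level alpha."""
    low = l + alpha * (m - l)
    up = u - alpha * (u - m)
    return low, up

print(alpha_cut(2.0, 3.0, 5.0, 0.5))   # -> (2.5, 4.0)
```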
Abstract:
If history matters for organization theory, then we need greater reflexivity regarding the epistemological problem of representing the past; otherwise, history might be seen as merely a repository of ready-made data. To facilitate this reflexivity, we set out three epistemological dualisms derived from historical theory to explain the relationship between history and organization theory: (1) in the dualism of explanation, historians are preoccupied with narrative construction, whereas organization theorists subordinate narrative to analysis; (2) in the dualism of evidence, historians use verifiable documentary sources, whereas organization theorists prefer constructed data; and (3) in the dualism of temporality, historians construct their own periodization, whereas organization theorists treat time as constant for chronology. These three dualisms underpin our explication of four alternative research strategies for organizational history: corporate history, consisting of a holistic, objectivist narrative of a corporate entity; analytically structured history, narrating theoretically conceptualized structures and events; serial history, using replicable techniques to analyze repeatable facts; and ethnographic history, reading documentary sources "against the grain." Ultimately, we argue that our epistemological dualisms will enable organization theorists to justify their theoretical stance in relation to a range of strategies in organizational history, including narratives constructed from documentary sources found in organizational archives. Copyright of the Academy of Management, all rights reserved.
Abstract:
In this paper, we present syllable-based duration modelling in the context of a prosody model for Standard Yorùbá (SY) text-to-speech (TTS) synthesis applications. Our prosody model is conceptualised around a modular, holistic framework. This framework is implemented using the Relational Tree (R-Tree) technique. An important feature of our R-Tree framework is its flexibility, in that it facilitates the independent implementation of the different dimensions of prosody, i.e. duration, intonation, and intensity, using different techniques, and their subsequent integration. We applied the Fuzzy Decision Tree (FDT) technique to model the duration dimension. In order to evaluate the effectiveness of FDT in duration modelling, we also developed a Classification And Regression Tree (CART) based duration model using the same speech data. Each of these models was integrated into our R-Tree based prosody model. We performed both quantitative (i.e. Root Mean Square Error (RMSE) and Correlation (Corr)) and qualitative (i.e. intelligibility and naturalness) evaluations of the two duration models. The results show that CART models the training data more accurately than FDT. The FDT model, however, shows a better ability to generalise from the training data, since it achieved better accuracy on the test data set. Our qualitative evaluation results show that our FDT model produces synthesised speech that is perceived to be more natural than that of our CART model. In addition, we also observed that the expressiveness of FDT is much better than that of CART, because the representation in FDT is not restricted to a set of piece-wise or discrete constant approximations. We therefore conclude that the FDT approach is a practical approach for duration modelling in SY TTS applications. © 2006 Elsevier Ltd. All rights reserved.
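For the quantitative part of such an evaluation, a minimal sketch is given below: it fits a CART duration model and reports RMSE and correlation on held-out syllables. The feature set and the use of scikit-learn's `DecisionTreeRegressor` are assumptions, not the authors' implementation, and the FDT model itself is not reproduced here.

```python
# A minimal sketch of a CART duration baseline and the RMSE/correlation
# evaluation described above; features and hyperparameters are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def evaluate_cart(train_feats, train_dur, test_feats, test_dur):
    """Fit a CART duration model and report RMSE and correlation on held-out syllables."""
    cart = DecisionTreeRegressor(min_samples_leaf=5).fit(train_feats, train_dur)
    pred = cart.predict(test_feats)
    rmse = np.sqrt(np.mean((pred - test_dur) ** 2))
    corr = np.corrcoef(pred, test_dur)[0, 1]
    return rmse, corr
```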
Abstract:
This paper presents a novel intonation modelling approach and demonstrates its applicability using the Standard Yorùbá language. Our approach is motivated by the theory that abstract and realised forms of intonation and other dimensions of prosody should be modelled within a modular and unified framework. In our model, this framework is implemented using the Relational Tree (R-Tree) technique. The R-Tree is a sophisticated data structure for representing a multi-dimensional waveform in the form of a tree. Our R-Tree for an utterance is generated in two steps. First, the abstract structure of the waveform, called the Skeletal Tree (S-Tree), is generated using tone phonological rules for the target language. Second, the numerical values of the perceptually significant peaks and valleys on the S-Tree are computed using a fuzzy logic based model. The resulting points are then joined by applying interpolation techniques. The actual intonation contour is synthesised with the Pitch Synchronous Overlap and Add (PSOLA) technique using the Praat software. We performed both quantitative and qualitative evaluations of our model. The preliminary results suggest that, although the model does not predict the numerical speech data as accurately as contemporary data-driven approaches, it produces synthetic speech with comparable intelligibility and naturalness. Furthermore, our model is easy to implement, interpret and adapt to other tone languages.
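The final interpolation step can be illustrated with a short sketch: once the perceptually significant peaks and valleys have been assigned numerical values, they are joined into a continuous F0 contour that can be handed to PSOLA resynthesis. The cubic interpolation and the anchor values below are assumptions for the example, not the paper's data.

```python
# A minimal sketch of joining peak/valley targets into a continuous F0 contour;
# the anchor times and values are illustrative only.
import numpy as np
from scipy.interpolate import interp1d

times_s = np.array([0.00, 0.18, 0.35, 0.60, 0.82])      # anchor times of peaks and valleys
f0_hz = np.array([120.0, 165.0, 130.0, 150.0, 110.0])   # values produced by the fuzzy model

contour = interp1d(times_s, f0_hz, kind="cubic")
frame_times = np.arange(0.0, 0.82, 0.01)                 # 10 ms frames
print(contour(frame_times)[:5])                          # F0 targets handed to PSOLA resynthesis
```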
Abstract:
Health care organizations must continuously improve their productivity to sustain long-term growth and profitability. Sustainable productivity performance is mostly assumed to be a natural outcome of successful health care management. Data envelopment analysis (DEA) is a popular mathematical programming method for comparing the inputs and outputs of a set of homogeneous decision making units (DMUs) by evaluating their relative efficiency. The Malmquist productivity index (MPI) is widely used for productivity analysis; it relies on constructing a best-practice frontier and calculating the relative performance of a DMU across different time periods. Conventional DEA requires accurate and crisp data to calculate the MPI. However, real-world data are often imprecise and vague. In this study, the authors propose a novel productivity measurement approach in fuzzy environments with the MPI. An application of the proposed approach in health care is presented to demonstrate the simplicity and efficacy of the procedures and algorithms in a hospital efficiency study conducted for a State Office of Inspector General in the United States. © 2012, IGI Global.
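Once the four DEA distance (efficiency) scores are available, whether from a crisp or a fuzzy model, the MPI itself is a simple ratio; the sketch below shows that final step only, with illustrative names and numbers.

```python
# A minimal sketch of the Malmquist productivity index from four efficiency
# scores; each score could come from a crisp or a fuzzy DEA model.
import math

def malmquist_index(e_t_t, e_t_t1, e_t1_t, e_t1_t1):
    """MPI of a DMU; e_a_b = efficiency of period-b data against the period-a frontier."""
    return math.sqrt((e_t_t1 / e_t_t) * (e_t1_t1 / e_t1_t))

# MPI > 1 indicates productivity growth between the two periods.
print(malmquist_index(0.82, 0.88, 0.79, 0.91))
```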
Abstract:
The so-called “Plural Uncertainty Model” is considered, in which the statistical, maxmin, interval, and fuzzy models of uncertainty are embedded. For the fuzzy case, the external and internal contradictions of the theory are investigated, and a modified definition of fuzzy sets is proposed to overcome the difficulties of the classical variant of fuzzy subsets due to L. Zadeh. The general variants of logit and probit regression are models of the modified fuzzy sets. Within this modification of the theory, it becomes possible to speak about observations. The concept of a “situation” is proposed within the modified fuzzy theory, and the classification problem is considered. A classification algorithm for situations is proposed as an analogue of the statistical maximum likelihood method (MLM). An example concerning the possible observation of a distribution from a collection of distributions is considered.
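As a heavily hedged illustration of the remark that logit- and probit-type curves can serve as membership functions of the modified fuzzy sets, the sketch below simply evaluates such curves; the parameterisation is an assumption, not the author's definition.

```python
# An illustration only: logit- and probit-type curves used as membership
# functions; the (loc, scale) parameterisation is an assumption.
import math
from statistics import NormalDist

def logit_membership(x, loc=0.0, scale=1.0):
    """Logistic (logit-type) membership curve in [0, 1]."""
    return 1.0 / (1.0 + math.exp(-(x - loc) / scale))

def probit_membership(x, loc=0.0, scale=1.0):
    """Gaussian-CDF (probit-type) membership curve in [0, 1]."""
    return NormalDist(mu=loc, sigma=scale).cdf(x)

print(logit_membership(1.0), probit_membership(1.0))
```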
Abstract:
In terms of binary relations, the author analyses the task of an individual consumer's choice on a set of teaching excerpts. It is suggested that the consumer's value function be analysed as an additive reduction. For localising the vector of weighting coefficients of the additive reduction, procedures based on metrics of the distance of objects from the ideal point are suggested.
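One possible reading of this procedure is sketched below: the value of an object is an additive (weighted-sum) reduction of its criterion scores, and candidate weight vectors are screened by how close the weighted evaluation lies to the ideal point. All data and the screening rule are assumptions for the example, not the author's procedure.

```python
# A heavily hedged sketch: additive value function plus a simple
# distance-to-ideal screening of candidate weight vectors.
import numpy as np

def additive_value(obj, weights):
    """Additive reduction: weighted sum of the object's criterion scores."""
    return float(np.dot(weights, obj))

def localize_weights(objects, candidate_weights):
    """Keep the candidate weight vector whose best object lies nearest the ideal point."""
    ideal = objects.max(axis=0)
    def gap(w):
        best = objects[np.argmax(objects @ w)]
        return np.linalg.norm(w * (best - ideal))
    return min(candidate_weights, key=gap)

objects = np.array([[0.6, 0.9], [0.8, 0.5], [0.7, 0.7]])
w = localize_weights(objects, [np.array([0.5, 0.5]), np.array([0.7, 0.3])])
print(w, additive_value(objects[0], w))
```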
Abstract:
A system for predicting the development of unstable processes is presented. It is based on a decision-tree method. A technique for processing the expert information that is indispensable for constructing and processing the decision tree is offered; in particular, the data are given in fuzzy form. Original search algorithms for the optimal paths of development of the forecast process are described; they are oriented towards processing trees of large dimension with vector estimations of arcs.
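A minimal sketch of the path-search step is given below: every arc of the forecast tree carries an expert estimate (here a single defuzzified score, whereas the paper works with fuzzy and vector estimates), and the highest-scoring root-to-leaf path is returned. The tree encoding is an assumption for the example.

```python
# A minimal sketch of searching a forecast tree for the best development path;
# arcs carry a single defuzzified score here, and the encoding is illustrative.
def best_path(tree, node):
    """Return (total score, path) of the highest-scoring root-to-leaf path from `node`."""
    children = tree.get(node, [])
    if not children:                                   # leaf: the path ends here
        return 0.0, [node]
    best = max(((score + best_path(tree, child)[0], child, score)
                for child, score in children), key=lambda t: t[0])
    total, child, _ = best
    return total, [node] + best_path(tree, child)[1]

# Each entry maps a node to its children together with defuzzified arc estimates.
tree = {"start": [("a", 0.6), ("b", 0.8)],
        "a": [("a1", 0.9)],
        "b": [("b1", 0.3), ("b2", 0.5)]}
print(best_path(tree, "start"))    # -> (1.5, ['start', 'a', 'a1'])
```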
Abstract:
In this paper, a novel method for edge detection, an application of digital image processing, is developed. Fuzzy logic, a key concept of artificial intelligence, is used to implement fuzzy relative pixel value algorithms that find and highlight all the edges associated with an image by checking relative pixel values, thus providing an algorithm that bridges the concepts of digital image processing and artificial intelligence. The image is exhaustively scanned using a windowing technique and subjected to a set of fuzzy conditions that compare each pixel's value with those of its adjacent pixels to check the pixel magnitude gradient in the window. After testing the fuzzy conditions, appropriate values are allocated to the pixels in the window under test to produce an image with all the associated edges highlighted.
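A short sketch in the spirit of this description is given below: the image is scanned with a 3x3 window, and a pixel is marked as an edge when the magnitude gradient relative to its neighbours exceeds a threshold derived from a simple membership ramp. The membership shape and the 0.5 cut are assumptions, not the paper's fuzzy rules.

```python
# A minimal sketch of window-based edge marking using a fuzzy-like membership
# ramp over the relative pixel magnitude gradient; thresholds are illustrative.
import numpy as np

def fuzzy_relative_edges(img, low=10.0, high=40.0):
    """Return a binary edge map; `low`/`high` bound the 'large difference' membership ramp."""
    img = img.astype(float)
    h, w = img.shape
    edges = np.zeros((h, w), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = img[i - 1:i + 2, j - 1:j + 2]
            diff = np.abs(window - img[i, j]).max()          # relative pixel magnitude gradient
            membership = np.clip((diff - low) / (high - low), 0.0, 1.0)
            edges[i, j] = 255 if membership >= 0.5 else 0    # defuzzify with a 0.5 cut
    return edges

# Example: a dark square on a bright background produces edges along its boundary.
demo = np.full((8, 8), 200.0); demo[2:6, 2:6] = 50.0
print(fuzzy_relative_edges(demo).sum() > 0)
```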