22 results for fuzzy rule base models
Abstract:
Although crisp data are fundamentally indispensable for determining the profit Malmquist productivity index (MPI), the observed values in real-world problems are often imprecise or vague. These imprecise or vague data can be suitably characterized with fuzzy and interval methods. In this paper, we reformulate the conventional profit MPI problem as an imprecise data envelopment analysis (DEA) problem, and propose two novel methods for measuring the overall profit MPI when the inputs, outputs, and price vectors are fuzzy or vary in intervals. We develop a fuzzy version of the conventional MPI model by using a ranking method, and solve the model with a commercial off-the-shelf DEA software package. In addition, we define an interval for the overall profit MPI of each decision-making unit (DMU) and divide the DMUs into six groups according to the intervals obtained for their overall profit efficiency and MPIs. We also present two numerical examples to demonstrate the applicability of the two proposed models and exhibit the efficacy of the procedures and algorithms. © 2011 Elsevier Ltd.
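As a rough illustration of the interval idea only (not the paper's overall profit MPI formulation), the following Python sketch bounds the standard Malmquist productivity index when the four distance-function scores of a DMU are known only as intervals; the function names and the sample numbers are hypothetical.

```python
import math

def interval_div(num, den):
    """Divide two positive intervals: [a, b] / [c, d] = [a/d, b/c]."""
    (a, b), (c, d) = num, den
    return (a / d, b / c)

def interval_mpi(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
    """Bounds on the standard Malmquist productivity index
    MPI = sqrt( (D^t(x^{t+1},y^{t+1}) / D^t(x^t,y^t))
              * (D^{t+1}(x^{t+1},y^{t+1}) / D^{t+1}(x^t,y^t)) )
    when each distance-function score is given as an interval (lo, hi)."""
    r1_lo, r1_hi = interval_div(d_t_t1, d_t_t)
    r2_lo, r2_hi = interval_div(d_t1_t1, d_t1_t)
    return (math.sqrt(r1_lo * r2_lo), math.sqrt(r1_hi * r2_hi))

# Hypothetical interval scores for one DMU (lower, upper):
lo, hi = interval_mpi(d_t_t=(0.80, 0.90), d_t_t1=(0.95, 1.05),
                      d_t1_t=(0.70, 0.80), d_t1_t1=(0.85, 0.95))
print(f"MPI lies in [{lo:.3f}, {hi:.3f}]")  # interval above 1 => productivity growth
```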
Abstract:
This book contains 13 papers, carefully reviewed and selected from 40 submissions, from the 7th Workshop on Global Sourcing, held in Val d'Isere, France, during March 11-14, 2013. They draw on a vast empirical base brought together by leading researchers in information systems, strategic management, and operations. This volume is intended for students, academics, and practitioners interested in research results and experiences on outsourcing and offshoring of information technology and business processes. The topics discussed represent both client and supplier perspectives on the sourcing of global services, combine theoretical and practical insights regarding challenges that both clients and vendors face, and include case studies from client and vendor organizations.
Abstract:
Uncertainty can be defined as the difference between information that is represented in an executing system and the information that is both measurable and available about the system at a certain point in its lifetime. A software system can be exposed to multiple sources of uncertainty produced by, for example, ambiguous requirements and unpredictable execution environments. A runtime model is a dynamic knowledge base that abstracts useful information about the system, its operational context and the extent to which the system meets its stakeholders' needs. A software system can successfully operate in multiple dynamic contexts by using runtime models that augment information available at design time with information monitored at runtime. This chapter explores the role of runtime models as a means to cope with uncertainty. To this end, we introduce a well-suited terminology about models, runtime models and uncertainty, and present a state-of-the-art summary of model-based techniques for addressing uncertainty at both development time and runtime. Using a case study about robot systems, we discuss how current techniques and the MAPE-K loop can be used together to tackle uncertainty. Furthermore, we propose possible extensions of the MAPE-K loop architecture with runtime models to further handle uncertainty at runtime. The chapter concludes by identifying key challenges and enabling technologies for using runtime models to address uncertainty, as well as closely related research communities that can foster ideas for resolving the challenges raised. © 2014 Springer International Publishing.
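As a minimal, hypothetical sketch of the MAPE-K idea mentioned above (monitor, analyse, plan, execute over a shared knowledge base), the Python loop below keeps a simple runtime model of a robot's battery level and adapts when monitored values drift from the model; all class names, thresholds, and numbers are illustrative assumptions, not the chapter's own design.

```python
import random
from dataclasses import dataclass

@dataclass
class RuntimeModel:
    """Knowledge base (the K in MAPE-K): the system's current belief
    about itself and its context, updated from monitored data."""
    expected_battery: float = 100.0
    observed_battery: float = 100.0
    mode: str = "explore"

def monitor(model: RuntimeModel) -> None:
    # Stand-in for real sensor readings; noise mimics environmental uncertainty.
    model.observed_battery = max(0.0, model.observed_battery - random.uniform(1.0, 4.0))

def analyse(model: RuntimeModel) -> bool:
    # Uncertainty shows up as a gap between the runtime model and observations.
    drift = abs(model.expected_battery - model.observed_battery)
    return drift > 10.0 or model.observed_battery < 20.0

def plan(model: RuntimeModel) -> str:
    return "return_to_base" if model.observed_battery < 20.0 else "reduce_speed"

def execute(model: RuntimeModel, action: str) -> None:
    model.mode = action
    model.expected_battery = model.observed_battery  # re-align the runtime model

model = RuntimeModel()
for step in range(30):
    monitor(model)
    if analyse(model):
        execute(model, plan(model))
    print(step, model.mode, round(model.observed_battery, 1))
```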
Abstract:
In this paper, we present syllable-based duration modelling in the context of a prosody model for Standard Yorùbá (SY) text-to-speech (TTS) synthesis applications. Our prosody model is conceptualised around a modular holistic framework. This framework is implemented using the Relational Tree (R-Tree) technique. An important feature of our R-Tree framework is its flexibility: it facilitates the independent implementation of the different dimensions of prosody, i.e. duration, intonation, and intensity, using different techniques and their subsequent integration. We applied the Fuzzy Decision Tree (FDT) technique to model the duration dimension. In order to evaluate the effectiveness of FDT in duration modelling, we also developed a Classification And Regression Tree (CART) based duration model using the same speech data. Each of these models was integrated into our R-Tree based prosody model. We performed both quantitative (i.e. Root Mean Square Error (RMSE) and Correlation (Corr)) and qualitative (i.e. intelligibility and naturalness) evaluations on the two duration models. The results show that CART models the training data more accurately than FDT. The FDT model, however, shows a better ability to extrapolate from the training data, since it achieved better accuracy on the test data set. Our qualitative evaluation results show that our FDT model produces synthesised speech that is perceived to be more natural than that produced by our CART model. In addition, we observed that the expressiveness of FDT is much better than that of CART, because the representation in FDT is not restricted to a set of piecewise or discrete constant approximations. We therefore conclude that FDT is a practical approach for duration modelling in SY TTS applications. © 2006 Elsevier Ltd. All rights reserved.
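For readers unfamiliar with the CART baseline and the quantitative metrics cited above, the sketch below fits a scikit-learn regression tree to invented syllable features and reports RMSE and Pearson correlation; the features, data, and parameters are hypothetical and do not reproduce the paper's SY corpus or its FDT model.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Hypothetical syllable features: [number of phones, stressed?, phrase-final?]
X = rng.integers(0, 2, size=(200, 3)).astype(float)
X[:, 0] = rng.integers(1, 5, size=200)            # 1-4 phones per syllable
# Hypothetical durations in ms, loosely driven by the features plus noise
y = 80 + 30 * X[:, 0] + 25 * X[:, 1] + 40 * X[:, 2] + rng.normal(0, 15, 200)

X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

cart = DecisionTreeRegressor(max_depth=4).fit(X_train, y_train)  # CART-style tree
pred = cart.predict(X_test)

rmse = np.sqrt(np.mean((pred - y_test) ** 2))      # Root Mean Square Error
corr = np.corrcoef(pred, y_test)[0, 1]             # Pearson correlation
print(f"RMSE = {rmse:.1f} ms, Corr = {corr:.3f}")
```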
Abstract:
This paper provides a description of the features and mechanisms of facetted short crack growth in Ni-base superalloys, and briefly reviews existing short crack growth models in terms of their application to Ni-base alloys. The concept of “soft barriers” is introduced to produce a new two-phase model for local microstructural effects on short crack growth in Waspaloy. This is derived from detailed observations of crack growth through individual grains. The model differs from all previous approaches in highlighting the importance of crack path perturbations within grains. Potential applications of the model in alloy development are discussed.
Abstract:
Traditionally, research on model-driven engineering (MDE) has mainly focused on the use of models at the design, implementation, and verification stages of development. This work has produced relatively mature techniques and tools that are currently being used in industry and academia. However, software models also have the potential to be used at runtime, to monitor and verify particular aspects of runtime behavior, and to implement self-* capabilities (e.g., adaptation technologies used in self-healing, self-managing, self-optimizing systems). A key benefit of using models at runtime is that they can provide a richer semantic base for runtime decision-making related to runtime system concerns associated with autonomic and adaptive systems. This book is one of the outcomes of the Dagstuhl Seminar 11481 on models@run.time held in November/December 2011, discussing foundations, techniques, mechanisms, state of the art, research challenges, and applications for the use of runtime models. The book comprises four research roadmaps, written by the original participants of the Dagstuhl Seminar over the course of two years following the seminar, and seven research papers from experts in the area. The roadmap papers provide insights into key features of the use of runtime models and identify the following research challenges: the need for a reference architecture, tackling uncertainty with runtime models, mechanisms for leveraging runtime models for self-adaptive software, and the use of models at runtime to address assurance for self-adaptive systems.
Abstract:
Software architecture plays an essential role in the high-level description of a system design, where the structure and communication are emphasized. Despite its importance in the software engineering process, the lack of formal description and automated verification hinders the development of good software architecture models. In this paper, we present an approach to support the rigorous design and verification of software architecture models using semantic web technology. We view software architecture models as ontology representations, where their structures and communication constraints are captured by the Web Ontology Language (OWL) and the Semantic Web Rule Language (SWRL). Specific configurations of the design are represented as concrete instances of the ontology, to which their structures and dynamic behaviors must conform. Furthermore, ontology reasoning tools can be applied to perform various automated verification tasks on the design to ensure correctness, such as consistency checking, style recognition, and behavioral inference.
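To make the ontology view concrete, here is a small sketch assuming the owlready2 Python package (the abstract does not name a specific toolkit): components and connectors become OWL classes, one configuration becomes individuals, and an external OWL reasoner performs a basic consistency check; SWRL rules and the paper's architectural styles are omitted.

```python
from owlready2 import (get_ontology, Thing, ObjectProperty,
                       AllDisjoint, sync_reasoner, default_world)

onto = get_ontology("http://example.org/architecture.owl")  # hypothetical IRI

with onto:
    class Component(Thing): pass
    class Connector(Thing): pass
    AllDisjoint([Component, Connector])          # nothing may be both at once

    class connects(ObjectProperty):              # connectors link components
        domain = [Connector]
        range  = [Component]

    # One concrete configuration expressed as ontology individuals
    client = Component("Client")
    server = Component("Server")
    rpc    = Connector("RpcLink")
    rpc.connects = [client, server]

# Consistency checking via an external OWL reasoner (requires a Java runtime).
# An empty list means no class was inferred to be unsatisfiable; a contradictory
# configuration would instead make sync_reasoner() raise an inconsistency error.
sync_reasoner()
print(list(default_world.inconsistent_classes()))
```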