975 results for swarm intelligence models


Relevance: 30.00%

Abstract:

The issue of information sharing and exchange is one of the most important issues in the areas of artificial intelligence and knowledge-based systems (KBSs), and indeed in the broader field of computer and information technology. This paper addresses a special case of this issue through a case study of information sharing between two well-known heterogeneous uncertain reasoning models: the certainty factor model and the subjective Bayesian method. More precisely, the paper derives a family of exact isomorphic transformations between these two uncertain reasoning models. More interestingly, different transformation functions in this family can accommodate different degrees to which a domain expert is positive or negative when performing the transformation. The investigation is directly motivated by a practical consideration: in the past, expert systems relied mainly on these two models to handle uncertainty, so many stand-alone expert systems built on them are available. Given a sound transformation mechanism between the two models, the Internet can be used to couple such pre-existing expert systems so that the integrated systems can exchange and share useful information, thereby improving their performance through cooperation. The issue of transformation between heterogeneous uncertain reasoning models is also significant for multi-agent systems, because different agents may employ expert systems with heterogeneous uncertain reasoning models for action selection, and information sharing and exchange between agents is unavoidable. In addition, we clarify the relationship between the certainty factor model and probability theory.
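
The exact isomorphic family derived in the paper is not reproduced here, but a minimal sketch of the standard probabilistic reading of MYCIN-style certainty factors (following Heckerman's reformulation) illustrates the kind of invertible mapping between the two calculi that such a transformation builds on; the function names and the prior value are illustrative assumptions.

```python
# Minimal sketch: the classical probabilistic interpretation of MYCIN-style
# certainty factors (Heckerman's reformulation), illustrating the kind of
# mapping between the certainty factor model and probability that the paper
# studies. Function names and the `prior` value are illustrative assumptions.

def cf_from_probabilities(posterior: float, prior: float) -> float:
    """Certainty factor CF(h, e) from P(h|e) and P(h), with 0 < prior < 1."""
    if posterior >= prior:
        return (posterior - prior) / (1.0 - prior)   # confirming evidence
    return (posterior - prior) / prior               # disconfirming evidence

def probability_from_cf(cf: float, prior: float) -> float:
    """Invert the mapping: recover P(h|e) from CF(h, e) and the prior P(h)."""
    if cf >= 0:
        return prior + cf * (1.0 - prior)
    return prior + cf * prior

if __name__ == "__main__":
    prior = 0.3
    for posterior in (0.05, 0.3, 0.9):
        cf = cf_from_probabilities(posterior, prior)
        assert abs(probability_from_cf(cf, prior) - posterior) < 1e-12
        print(f"P(h|e)={posterior:.2f}  ->  CF={cf:+.3f}")
```

Round-tripping through the two functions recovers the original probability, which is the invertibility property the paper generalizes to a whole family of such mappings.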

Relevance: 30.00%

Abstract:

The adoption of simulation as a powerful enabling method for knowledge management is hampered by the relatively high cost of model construction and maintenance. A two-step procedure, based on a divide-and-conquer strategy, is proposed in this paper. First, a simulation program is partitioned based on a reinterpretation of the model-view-controller architecture. The individual parts are then connected through abstractions, to guard against changes that result from shifting user requirements. We explore the applicability of these design principles through a detailed discussion of an industry case study. The knowledge-based perspective guides the design of the architecture to accommodate the need for emulation without compromising the integrity of the simulation program. The synergy between simulation and a knowledge management perspective, as shown in the case study, has the potential to achieve rapid model development at low maintenance cost. This could, in turn, facilitate wider use of simulation in the knowledge management domain.
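
The paper's industry case study is not reproduced here; the following is a minimal sketch, under assumed class and method names, of how a simulation program might be partitioned along model-view-controller lines so that the model logic stays isolated from shifting presentation and control requirements.

```python
# Minimal sketch of an MVC-style partition for a simulation program, as the
# design principle above suggests. All class and method names are assumptions
# for illustration, not the architecture from the paper's case study.
import random

class QueueModel:
    """Model: simulation state and dynamics only; knows nothing of views."""
    def __init__(self, arrival_rate: float, service_rate: float):
        self.arrival_rate = arrival_rate
        self.service_rate = service_rate
        self.queue_length = 0

    def step(self) -> None:
        if random.random() < self.arrival_rate:
            self.queue_length += 1
        if self.queue_length and random.random() < self.service_rate:
            self.queue_length -= 1

class ConsoleView:
    """View: rendering is isolated, so it can change without touching the model."""
    def render(self, model: QueueModel, tick: int) -> None:
        print(f"t={tick:3d}  queue length = {model.queue_length}")

class SimulationController:
    """Controller: drives the run loop and mediates between model and view."""
    def __init__(self, model: QueueModel, view: ConsoleView):
        self.model, self.view = model, view

    def run(self, ticks: int) -> None:
        for t in range(ticks):
            self.model.step()
            self.view.render(self.model, t)

if __name__ == "__main__":
    SimulationController(QueueModel(0.5, 0.4), ConsoleView()).run(10)
```

Swapping ConsoleView for, say, an emulation front-end would then leave the model untouched, which is the kind of insulation against shifting requirements the abstract describes.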

Relevance: 30.00%

Abstract:

An unresolved but pertinent issue in the field of emotional intelligence (EI) is factorial validity. Numerous studies have investigated this issue (Gignac, 2005; Mayer, Salovey, Caruso, & Sitarenios, 2003; Petrides & Furnham, 2000; Saklofske, Austin, & Minski, 2003), but most are based on correlations among subscale scores from relevant measures, making the implicit assumption that subscale scores are unidimensional rather than questioning the structure of the subscales themselves. Accordingly, the present study adopts the Anderson and Gerbing (1988) two-step strategy of first considering the structure within subscales before examining the relationships between subscales. An evaluation was undertaken using the Emotional Intelligence Scale (EIS; Schutte et al., 1998), the Work Profile Questionnaire – Emotional Intelligence Version (WPQei; Cameron, 1999) and the Mayer–Salovey–Caruso Emotional Intelligence Test (MSCEIT V.2; Mayer, Salovey, & Caruso, 1999b). Results were characterised by instability, heterogeneity and inconsistency. Specifically, the EIS was not found to form the homogeneous structure postulated by its authors. Similarly, support was not found for the seven-factor model of the WPQei. Large discrepancies exist between the one-, two- and four-factor models described by Mayer et al. (2003) for the MSCEIT V.2 and the 21 components revealed at the primary level in the current analyses. Additionally, reliability statistics for the MSCEIT V.2 were less than optimal. Questions remain regarding the clarity, reliability and validity of the instruments examined.

Relevance: 30.00%

Abstract:

In this philosophical and practical-critical inquiry, I address two significant and closely related problems - whether and how those involved in the enterprise of education conceptualise a need for educational change, and the observed resistance of school cultures to change efforts. I address the apparent lack of a clear, coherent and viable theory of learning, agency and change, capable of making explicit the need, substantive nature and means of educational change. Based on a meta-analysis of numerous theories and perspectives on human knowing, learning, intelligence, agency and change, I synthesise a 'Dynamic Paradigm of Learning and Change', characterised by fifteen Constructs. I argue that this more viable Paradigm is capable of informing both design and critique of systemic curriculum and assessment policies, school organisation and planning models, professional learning and pedagogical practice, and student learning and action.

The Dynamic Paradigm of Learning and Change contrasts with the assumptions reflected in the prevailing culture of institutionalised education, and I argue that dominant views of knowledge and human agency are both theoretically and practically non-viable and unsustainable. I argue that the prevailing culture and experience of schooling contributes to the formation of assumptions, identities, dispositions and orientations to the world characterised by alienation. The Dynamic Paradigm of Learning and Change also contrasts with the assumptions reflected in some educational reform efforts recently promoted at system level in Queensland, Australia. I use the Dynamic Paradigm as the reference point for a formal critique of two influential reform programs, Authentic Pedagogy and the New Basics Project, identifying significant limitations in both the conceptualisation of educational ends and means, and the implementation of these reform agendas.

Within the Dynamic Paradigm of Learning and Change, knowledge and learning serve the individual's need for more adaptive or viable functioning in the world. I argue that students' attainment of knowledge of major ways in which others in our culture organise experience (interpret the world) is a legitimate goal of schooling. However, it is more viable to think of the primary function of schooling as providing for the young inspiration, opportunities and support for purposeful doing, and for assisting them in understanding the processes of 'action scheme' change to make such doing more viable.

Through the practical-critical components of the inquiry, undertaken in the context of the ferment of pedagogical and curricular discussion and exploration in Queensland between 1999 and 2003, I develop the Key Abilities Model and associated guidelines and resources relating to forms of pedagogy, curriculum organisation and assessment consistent with the Dynamic Paradigm of Learning and Change. I argue the importance of showing teachers why and how their existing visions and conceptions of learning and teaching may be inadequate, and of emphasising teachers' conceptions of learning, knowing, agency and teaching, and their identities, dispositions and orientations to the world, as things that might need to change, in order to realise the intent of educational change focused on transformational student outcomes serving both the individual and collective good.

A recommendation is made for implementation and research of a school-based trial of the Key Abilities Model, informed by and reflecting the Dynamic Paradigm of Learning and Change, as an important investment in the development and expression of 'authentic' human intelligence.

Relevance: 30.00%

Abstract:

Investigation of the role of hypothesis formation in complex (business) problem solving has resulted in a new approach to hypothesis generation. A prototypical hypothesis generation paradigm for management intelligence has been developed, reflecting a widespread need to support management in such areas as fraud detection and intelligent decision analysis. This dissertation presents this new paradigm and its application to goal-directed problem solving methodologies, including case-based reasoning. The hypothesis generation model, which is supported by a dynamic hypothesis space, consists of three components: the Anomaly Detection, Abductive Reasoning, and Conflict Resolution models. Anomaly detection activates the hypothesis generation model by scanning for anomalous data and relations in its working environment. The respective heuristics are triggered by initial indications of anomalous behaviour, based on evidence from historical patterns, linkages with other cases, inconsistencies, and so on. Abductive reasoning, as implemented in this paradigm, is based on joining conceptual graphs, and provides an inference process that can incorporate a new observation into a world model by determining what assumptions should be added to the world so that it can explain the new observation. Abductive inference is a weak mechanism for generating explanations and hypotheses: although a practical conclusion cannot be guaranteed, the cues provided by the inference are very beneficial. Conflict resolution is crucial for the evaluation of explanations, especially those generated by a weak (abductive) mechanism. The measures developed in this research for explanations and hypotheses provide an indirect way of estimating the 'quality' of an explanation for given evidence. Such methods are realistic for complex domains such as fraud detection, where the prevailing hypothesis may not always be relevant to the new evidence. In order to survive in rapidly changing environments, it is necessary to bridge the gap that exists between the system's view of the world and reality. Our research has demonstrated the value of Case-Based Interaction, which utilises a hypothesis structure for the representation of relevant planning and strategic knowledge. Under the guidance of case-based interaction, users are active agents empowered by system knowledge, and the system acquires auxiliary information and knowledge from this external source. Case studies using the new paradigm, drawn from the insurance industry, have attracted wide interest. A prototypical fraud detection system for motor vehicle insurance, based on a hypothesis-guided problem solving mechanism, is now under commercial development. The initial feedback from claims managers is promising.
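
The dissertation's conceptual-graph join operation is not shown here; as a minimal sketch under assumed rule and predicate names, the following illustrates the basic abductive step of finding assumptions that, added to a world model, would explain a new observation.

```python
# Minimal sketch of the abductive step described above: given simple
# Horn-style rules, find sets of assumptions that would explain an
# observation when added to the known facts. Rule contents and names are
# illustrative assumptions, not the dissertation's conceptual-graph machinery.

RULES = {
    # conclusion: list of alternative antecedent sets
    "inflated_claim": [{"staged_accident"}, {"prior_fraud", "inconsistent_story"}],
    "inconsistent_story": [{"conflicting_witness_reports"}],
}

def abduce(observation: str, facts: set[str]) -> list[set[str]]:
    """Return candidate assumption sets that explain `observation`."""
    if observation in facts:
        return [set()]                      # already explained by known facts
    explanations = []
    for antecedents in RULES.get(observation, []):
        combos = [set()]
        for literal in antecedents:
            combos = [c | extra for c in combos for extra in abduce(literal, facts)]
        explanations.extend(combos)
    if not explanations:
        explanations = [{observation}]      # no rule applies: assume it outright
    return explanations

if __name__ == "__main__":
    facts = {"prior_fraud"}
    for hypothesis in abduce("inflated_claim", facts):
        print("explain via assumptions:", hypothesis)
```

As the abstract notes, such inference is weak: each returned assumption set is only a candidate explanation, which is why a separate conflict-resolution step to rank explanations against the evidence is needed.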

Relevance: 30.00%

Abstract:

The hierarchical hidden Markov model (HHMM) is an extension of the hidden Markov model to include a hierarchy of the hidden states. This form of hierarchical modeling has been found useful in applications such as handwritten character recognition, behavior recognition, video indexing, and text retrieval. Nevertheless, the state hierarchy in the original HHMM is restricted to a tree structure. This prohibits two different states from having the same child, and thus does not allow for sharing of common substructures in the model. In this paper, we present a general HHMM in which the state hierarchy can be a lattice allowing arbitrary sharing of substructures. Furthermore, we provide a method for numerical scaling to avoid underflow, an important issue in dealing with long observation sequences. We demonstrate the working of our method in a simulated environment where a hierarchical behavioral model is automatically learned and later used for recognition.
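
The paper's scaling scheme for the general HHMM is not reproduced here; as a minimal sketch, the log-sum-exp recursion below shows the standard way of avoiding underflow in a flat HMM forward pass over long sequences, which is the numerical issue the scaling method addresses. All variable names are assumptions.

```python
# Minimal sketch of an underflow-safe forward recursion for a flat HMM using
# log-sum-exp, illustrating the numerical-scaling issue the paper addresses
# for HHMMs over long observation sequences. Variable names are illustrative.
import numpy as np

def log_forward(log_pi, log_A, log_B, obs):
    """log P(observations) via forward recursion entirely in log space.

    log_pi: (S,) log initial state probabilities
    log_A:  (S, S) log transition matrix, log_A[i, j] = log P(j | i)
    log_B:  (S, V) log emission matrix
    obs:    sequence of observation indices
    """
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        # log-sum-exp over previous states keeps values in a safe range
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return np.logaddexp.reduce(alpha)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S, V, T = 3, 4, 10_000            # long sequence: naive products underflow
    A = rng.dirichlet(np.ones(S), size=S)
    B = rng.dirichlet(np.ones(V), size=S)
    pi = rng.dirichlet(np.ones(S))
    obs = rng.integers(V, size=T)
    print("log-likelihood:", log_forward(np.log(pi), np.log(A), np.log(B), obs))
```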

Relevance: 30.00%

Abstract:

This chapter presents an introduction to computational intelligence (CI) paradigms. A number of CI definitions are first presented to provide a general picture of this new, innovative computing field. The main constituents of CI, which include artificial neural networks, fuzzy systems, and evolutionary algorithms, are explained. In addition, different hybrid CI models arising from the synergy of neural, fuzzy, and evolutionary computational paradigms are discussed.

Relevance: 30.00%

Abstract:

This paper presents the design and implementation of a new t-way test generation strategy, known as the Particle Swarm Test Generator (PSTG). Complementing existing work on t-way testing strategies, PSTG serves as our research vehicle to investigate the applicability of Particle Swarm Optimization to t-way test data generation. The experimental results demonstrate that PSTG is capable of outperforming some existing strategies in terms of test suite size. The evaluation also indicates that PSTG is effective in generating efficient test suites for testing purposes.
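
PSTG's actual implementation is not reproduced here; a minimal sketch of the core fitness idea in PSO-driven t-way generation, with a greedy loop standing in for the swarm search, looks as follows. The parameter domains and function names are illustrative assumptions.

```python
# Minimal sketch of the core fitness idea in PSO-based t-way test
# generation: score a candidate test case by how many not-yet-covered
# t-way parameter-value tuples it would cover. This illustrates the
# general approach, not PSTG's implementation; names are assumed.
from itertools import combinations, product

def all_tuples(domains, t):
    """Every t-way interaction: ((parameter indices), (their values))."""
    tuples_ = set()
    for params in combinations(range(len(domains)), t):
        for values in product(*(domains[p] for p in params)):
            tuples_.add((params, values))
    return tuples_

def fitness(test, uncovered, t):
    """Number of still-uncovered t-way tuples this test case covers."""
    covered = {(params, tuple(test[p] for p in params))
               for params in combinations(range(len(test)), t)}
    return len(covered & uncovered)

if __name__ == "__main__":
    domains = [[0, 1], [0, 1], [0, 1, 2]]     # 3 parameters
    uncovered = all_tuples(domains, t=2)      # pairwise coverage target
    suite = []
    # Greedy stand-in for the swarm: in PSTG, PSO searches for the test
    # case maximizing this fitness at each iteration.
    while uncovered:
        best = max(product(*domains), key=lambda tc: fitness(tc, uncovered, 2))
        suite.append(best)
        uncovered -= {(p, tuple(best[i] for i in p))
                      for p in combinations(range(len(best)), 2)}
    print(f"{len(suite)} tests:", suite)
```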

Relevance: 30.00%

Abstract:

Prediction intervals (PIs) are excellent tools for quantifying the uncertainties associated with point forecasts and predictions. This paper adopts and develops the lower upper bound estimation (LUBE) method for the construction of PIs using neural network (NN) models. The method is fast and simple and does not require the heavy matrix calculations of traditional methods. Moreover, it makes no assumption about the data distribution. A new width-based index is proposed to quantify how informative PIs are. Using this measure and the coverage probability of PIs, a multi-objective optimization problem is formulated for training NN models in the LUBE method. The optimization problem is then transformed into a training problem through the definition of a PI-based cost function. Particle swarm optimization (PSO) with a mutation operator is used to minimize the cost function. Experiments with synthetic and real-world case studies indicate that the proposed PSO-based LUBE method constructs higher quality PIs in a simpler and faster manner.
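
The paper's width index and cost function are not reproduced here; a minimal sketch of a typical LUBE-style objective, combining PI coverage probability with a normalized width term, is shown below. The penalty form and constants are illustrative assumptions, not the paper's definitions.

```python
# Minimal sketch of a LUBE-style prediction-interval cost, combining PI
# coverage probability (PICP) with a normalized mean width term. The form
# of the penalty and the constants below are illustrative assumptions.
import numpy as np

def pi_cost(lower, upper, y, confidence=0.90, eta=50.0):
    """Cost to minimize: narrow intervals, penalized when PICP < confidence."""
    picp = np.mean((y >= lower) & (y <= upper))           # coverage probability
    nmpiw = np.mean(upper - lower) / (y.max() - y.min())  # normalized mean width
    shortfall = max(0.0, confidence - picp)               # coverage shortfall
    return nmpiw * (1.0 + np.exp(eta * shortfall))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    y = rng.normal(size=200)
    # A two-output NN would produce these bounds; here we fake a fixed band.
    lower, upper = np.full(200, -1.5), np.full(200, 1.5)
    print("cost:", pi_cost(lower, upper, y))
```

In the LUBE setting, a PSO particle encodes the NN weights, and this scalar cost is what the swarm minimizes, turning the two-objective problem (coverage vs. width) into a single training objective.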

Relevance: 30.00%

Abstract:

This paper presents the application of an improved particle swarm optimization (PSO) technique for training an artificial neural network (ANN) to predict water levels for the Heshui watershed, China. Daily values of rainfall and water levels from 1988 to 2000 were first analyzed using ANNs trained with the conjugate-gradient, gradient descent and Levenberg-Marquardt neural network (LM-NN) algorithms. The best results were obtained from LM-NN and these results were then compared with those from PSO-based ANNs, including conventional PSO neural network (CPSONN) and improved PSO neural network (IPSONN) with passive congregation. The IPSONN algorithm improves PSO convergence by using the selfish herd concept in swarm behavior. Our results show that the PSO-based ANNs performed better than LM-NN. For models run using a single parameter (rainfall) as input, the root mean square error (RMSE) of the testing dataset for IPSONN was the lowest (0.152 m) compared to those for CPSONN (0.161 m) and LM-NN (0.205 m). For multi-parameter (rainfall and water level) inputs, the RMSE of the testing dataset for IPSONN was also the lowest (0.089 m) compared to those for CPSONN (0.105 m) and LM-NN (0.145 m). The results also indicate that the LM-NN model performed poorly in predicting the low and peak water levels, in comparison to the PSO-based ANNs. Moreover, the IPSONN model was superior to CPSONN in predicting extreme water levels. Lastly, IPSONN had a quicker convergence rate compared to CPSONN.
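
IPSONN's passive-congregation mechanism is not reproduced here; a minimal sketch of a PSO velocity update with a passive congregation term (a third attraction, toward a randomly chosen swarm member) looks as follows. The coefficient values are illustrative assumptions, not those tuned in the paper.

```python
# Minimal sketch of a PSO velocity update with a passive congregation term,
# the mechanism IPSONN uses to improve convergence. Coefficients are assumed
# illustrative values.
import numpy as np

def update_velocity(v, x, pbest, gbest, swarm, rng,
                    w=0.7, c1=1.5, c2=1.5, c3=0.6):
    r1, r2, r3 = rng.random(3)
    peer = swarm[rng.integers(len(swarm))]   # random member: passive congregation
    return (w * v
            + c1 * r1 * (pbest - x)          # cognitive pull toward personal best
            + c2 * r2 * (gbest - x)          # social pull toward global best
            + c3 * r3 * (peer - x))          # passive congregation pull

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    swarm = rng.normal(size=(20, 5))         # 20 particles, 5 NN weights each
    v = np.zeros(5)
    x, pbest, gbest = swarm[0], swarm[0], swarm.mean(axis=0)
    print(update_velocity(v, x, pbest, gbest, swarm, rng))
```

When PSO trains an ANN, each particle position is a full weight vector and the fitness is the network's prediction error, so the update above is applied per particle per iteration.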

Relevance: 30.00%

Abstract:

We investigate the relationship between consensus measures used in different settings depending on how voters or experts express their preferences. We propose some new models for single-preference voting, which we derive from the evenness concept in ecology, and show that some of these can be placed within the framework of existing consensus measures using the discrete distance. Finally, we suggest some generalizations of the single-preference consensus measures allowing the incorporation of more general notions of distance.
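
The paper's specific models are not reproduced here; as a rough sketch of the connection it draws, one simple single-preference consensus measure using the discrete distance counts pairwise agreement among voters, while an ecology-style evenness index summarizes how votes spread across candidates. Both formulas below are standard illustrative choices, not necessarily the paper's.

```python
# Rough sketch relating single-preference consensus to evenness. The
# pairwise-agreement measure uses the discrete distance (0 if two voters
# agree, 1 otherwise); the evenness index is Shannon evenness from ecology.
# Both are standard illustrative choices, not necessarily the paper's models.
import math
from collections import Counter
from itertools import combinations

def consensus_discrete(votes):
    """Mean pairwise agreement under the discrete distance."""
    pairs = list(combinations(votes, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

def shannon_evenness(votes, n_candidates):
    """Evenness of the vote distribution; low evenness ~ high consensus."""
    counts = Counter(votes)
    h = -sum((c / len(votes)) * math.log(c / len(votes))
             for c in counts.values())
    return h / math.log(n_candidates)

if __name__ == "__main__":
    votes = ["a", "a", "a", "b", "c"]
    print("consensus:", consensus_discrete(votes))
    print("evenness: ", shannon_evenness(votes, n_candidates=3))
```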

Relevance: 30.00%

Abstract:

Type reduction (TR) is one of the key components of interval type-2 fuzzy logic systems (IT2FLSs). Minimizing computational requirements has been one of the key design criteria for developing TR algorithms, and researchers often favour computationally less expensive TR algorithms. This paper evaluates and compares five frequently used TR algorithms based on their contribution to the forecasting performance of IT2FLS models. Algorithms are judged on the generalization power of IT2FLS models developed using them. Synthetic and real-world case studies with different levels of uncertainty are considered to examine the effects of TR algorithms on forecast accuracy. According to the results, the Coupland-John TR algorithm leads to models with higher and more stable forecasting performance. However, there is no obvious and consistent relationship between the width of the type-reduced set and the TR algorithm.
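
The five algorithms compared in the paper are not reproduced here; as a minimal sketch, the classic Karnik-Mendel iterative procedure below computes the left endpoint of the type-reduced set, illustrating what a TR algorithm does in an IT2FLS. Variable names and the example values are illustrative assumptions.

```python
# Minimal sketch of the Karnik-Mendel iterative procedure for the left
# endpoint of the type-reduced set, illustrating what a TR algorithm
# computes in an IT2FLS. Names and example values are illustrative; the
# five algorithms compared in the paper are not reproduced here.
import numpy as np

def km_left_endpoint(y, f_lo, f_hi, max_iter=100):
    """y: rule consequent centroids; f_lo/f_hi: lower/upper firing strengths."""
    order = np.argsort(y)
    y, f_lo, f_hi = y[order], f_lo[order], f_hi[order]
    f = (f_lo + f_hi) / 2.0                   # initial weight guess
    y_l = np.dot(f, y) / f.sum()
    for _ in range(max_iter):
        # minimize the weighted average: upper weights on centroids <= y_l
        f_new = np.where(y <= y_l, f_hi, f_lo)
        y_new = np.dot(f_new, y) / f_new.sum()
        if np.isclose(y_new, y_l):
            return y_new
        f, y_l = f_new, y_new
    return y_l

if __name__ == "__main__":
    y = np.array([1.0, 2.0, 3.0, 4.0])
    f_lo = np.array([0.2, 0.3, 0.2, 0.1])
    f_hi = np.array([0.6, 0.7, 0.5, 0.4])
    print("y_l =", km_left_endpoint(y, f_lo, f_hi))
```

The mirror-image iteration (lower weights on small centroids) yields the right endpoint; the iterative switch-point search is exactly the computational cost that cheaper TR algorithms, such as the ones the paper compares, try to reduce.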