975 results for swarm intelligence models


Relevance: 30.00%

Abstract:

Hughes, N., Chou, E., Price, C. J., & Lee, M. H. (1999). Automating Mechanical FMEA Using Functional Models. Proceedings of the 12th International Florida AI Research Society Conference (FLAIRS-99), AAAI Press, May 1999, pp. 394-398.

Relevance: 30.00%

Abstract:

Coghill, G. M., Garrett, S. M., & King, R. D. (2004). Learning Qualitative Metabolic Models. European Conference on Artificial Intelligence (ECAI'04).

Relevance: 30.00%

Abstract:

Simulation of pedestrian evacuations of smart buildings in emergencies is a powerful tool for building analysis, dynamic evacuation planning and real-time response to the evolving state of an evacuation. Macroscopic pedestrian models are low-complexity models that are well suited to algorithmic analysis and planning, but are quite abstract. Microscopic simulation models allow for a high level of simulation detail but can be computationally intensive. By combining micro- and macro-models we can use each to overcome the shortcomings of the other and enable new capabilities and applications for pedestrian evacuation simulation that would not be possible with either alone. We develop the EvacSim multi-agent pedestrian simulator and procedurally generate macroscopic flow graph models of building space, integrating micro- and macroscopic approaches to simulation of the same emergency space. By "coupling" flow graph parameters to microscopic simulation results, the graph model captures some of the higher detail and fidelity of the complex microscopic simulation model. The coupled flow graph is used for analysis and prediction of the movement of pedestrians in the microscopic simulation, and we investigate the performance of dynamic evacuation planning in simulated emergencies using a variety of strategies for the allocation of macroscopic evacuation routes to microscopic pedestrian agents. The predictive capability of the coupled flow graph is exploited for the decomposition of microscopic simulation space into multiple future states in a scalable manner. Simulating multiple future states of the emergency in short time frames enables a sensing strategy based on simulation scenario pattern matching, which we show achieves fast scenario matching and enables rich, real-time feedback in emergencies in buildings with meagre sensing capabilities.
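
As an illustration of the "coupling" step, the sketch below, which is not EvacSim itself (the node names, travel times, and networkx representation are assumptions for illustration), builds a macroscopic flow graph whose edge costs are overwritten with travel times observed in a microscopic run before an evacuation route is planned on it.

# A toy coupled flow graph: static edge estimates are replaced by
# hypothetical travel times measured from microscopic agents.
import networkx as nx

G = nx.DiGraph()
# Static (uncoupled) travel-time estimates in seconds.
G.add_edge("room_a", "corridor", time=10.0)
G.add_edge("room_b", "corridor", time=12.0)
G.add_edge("corridor", "stairwell", time=30.0)
G.add_edge("stairwell", "exit", time=20.0)

# Congestion observed in the microscopic simulation raises the cost of
# the corridor-to-stairwell edge (values are hypothetical).
observed = {("corridor", "stairwell"): 55.0}
for edge, t in observed.items():
    G.edges[edge]["time"] = t      # couple graph parameter to micro result

# Plan a macroscopic evacuation route on the coupled graph; in the paper's
# setting such routes would be allocated back to microscopic agents.
route = nx.shortest_path(G, "room_a", "exit", weight="time")
print(route)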

Relevance: 30.00%

Abstract:

Numerical models are important tools used in engineering fields to predict the behaviour and the impact of physical elements. There may be advantages to be gained by combining Case-Based Reasoning (CBR) techniques with numerical models. This paper considers how CBR can be used as a flexible query engine to improve the usability of numerical models. In particular, it can help to solve inverse and mixed problems, and to solve constraint problems. We discuss this idea with reference to the illustrative example of a pneumatic conveyor problem. The paper describes example problems faced by design engineers in this context and the issues that need to be considered in this approach. The solution of these problems requires methods to handle constraints in both the retrieval phase and the adaptation phase of a typical CBR cycle. We show approaches to the solution of these problems via a CBR tool.
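
A minimal sketch of constrained retrieval for such an inverse query, assuming a toy case base of conveyor designs (all field names and values are hypothetical; this is not the paper's CBR tool): the desired outcome is fixed, hard constraints filter the cases, and the closest feasible case is retrieved for adaptation.

# Each case maps conveyor design parameters to a previously simulated
# outcome (throughput, in hypothetical units).
case_base = [
    {"pipe_diameter": 0.10, "air_velocity": 18.0, "throughput": 12.0},
    {"pipe_diameter": 0.15, "air_velocity": 22.0, "throughput": 25.0},
    {"pipe_diameter": 0.20, "air_velocity": 20.0, "throughput": 31.0},
]

def retrieve(cases, target_throughput, constraint):
    # Enforce hard constraints first, then rank by closeness of the
    # outcome variable (inverse query: outcome known, design sought).
    feasible = [c for c in cases if constraint(c)]
    return min(feasible, key=lambda c: abs(c["throughput"] - target_throughput))

best = retrieve(case_base, target_throughput=24.0,
                constraint=lambda c: c["pipe_diameter"] <= 0.15)
print(best)    # the adaptation phase would then tune air_velocity, etc.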

Relevance: 30.00%

Abstract:

This paper presents two new approaches for use in complete process monitoring. The first concerns the identification of nonlinear principal component models. This involves the application of linear principal component analysis (PCA), prior to the identification of a modified autoassociative neural network (AAN) as the required nonlinear PCA (NLPCA) model. The benefits are that (i) the number of the reduced set of linear principal components (PCs) is smaller than the number of recorded process variables, and (ii) the set of PCs is better conditioned, as redundant information is removed. The result is a new set of input data for a modified neural representation, referred to as a T2T network. The T2T NLPCA model is then used for complete process monitoring, involving fault detection, identification and isolation. The second approach introduces a new variable reconstruction algorithm, developed from the T2T NLPCA model. Variable reconstruction can enhance the findings of the contribution charts still widely used in industry by reconstructing the outputs from faulty sensors to produce more accurate fault isolation. These ideas are illustrated using recorded industrial data relating to developing cracks in an industrial glass melter process. A comparison of linear and nonlinear models, together with the combined use of contribution charts and variable reconstruction, is presented.
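
A minimal sketch of the two-stage idea using generic scikit-learn pieces rather than the authors' modified T2T network (the data, layer sizes, and fault statistic are illustrative assumptions): linear PCA first reduces and conditions the recorded variables, then an autoassociative network trained to reproduce its inputs through a bottleneck provides the NLPCA model, whose reconstruction error can be monitored for faults.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic plant data: 12 recorded variables driven by 3 latent effects.
latent = rng.normal(size=(500, 3))
X = latent @ rng.normal(size=(3, 12)) + 0.1 * rng.normal(size=(500, 12))

pca = PCA(n_components=6)          # fewer, better-conditioned PCs
T = pca.fit_transform(X)

# Autoassociative network: mapping, bottleneck and demapping layers,
# trained with the inputs as targets.
aan = MLPRegressor(hidden_layer_sizes=(8, 3, 8), activation="tanh",
                   max_iter=5000, random_state=0)
aan.fit(T, T)

# Squared reconstruction error per sample; unusually large values would
# flag a fault in a monitoring application.
spe = ((T - aan.predict(T)) ** 2).sum(axis=1)
print(spe[:5])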

Relevance: 30.00%

Abstract:

The eng-genes concept involves the use of fundamental known system functions as activation functions in a neural model to create a 'grey-box' neural network. One of the main issues in eng-genes modelling is to produce a parsimonious model given a model construction criterion. The challenges are that (1) the eng-genes model in most cases is a heterogeneous network consisting of more than one type of nonlinear basis function, and each basis function may have a different set of parameters to be optimised; and (2) the number of hidden nodes has to be chosen based on a model selection criterion. This is a hard mixed-integer problem, and this paper investigates the use of a forward selection algorithm to optimise both the network structure and the parameters of the system-derived activation functions. Results are included from case studies performed on a simulated continuously stirred tank reactor process and on actual data from a pH neutralisation plant. The resulting eng-genes networks demonstrate superior simulation performance and transparency over a range of network sizes when compared to conventional neural models.
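
A minimal sketch of forward selection over a heterogeneous pool of basis functions, in the spirit described above (the candidate pool, data, and BIC-style criterion are illustrative assumptions, not the paper's exact algorithm): output weights are refitted by least squares as each candidate node is tried, and nodes are added greedily while the criterion improves.

import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=200)
y = np.exp(-x ** 2) + 0.5 * np.tanh(2 * x) + 0.05 * rng.normal(size=200)

# Heterogeneous candidates: 'system-derived' Gaussians plus sigmoids.
candidates = [lambda z, c=c: np.exp(-(z - c) ** 2) for c in np.linspace(-2, 2, 9)]
candidates += [lambda z, c=c: np.tanh(z - c) for c in np.linspace(-2, 2, 9)]

cols = [np.ones_like(y)]           # start from a bias column
best_bic = np.inf
while candidates:
    scores = []
    for i, f in enumerate(candidates):
        Phi = np.column_stack(cols + [f(x)])
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        rss = ((y - Phi @ w) ** 2).sum()
        bic = len(y) * np.log(rss / len(y)) + Phi.shape[1] * np.log(len(y))
        scores.append((bic, i))
    bic, i = min(scores)
    if bic >= best_bic:
        break                      # criterion no longer improves: stop
    best_bic = bic
    cols.append(candidates.pop(i)(x))

print(f"selected {len(cols) - 1} nodes, BIC {best_bic:.1f}")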

Relevance: 30.00%

Abstract:

The majority of learning methods reported to date for Takagi-Sugeno-Kang fuzzy neural models mainly focus on the improvement of their accuracy. However, one of the key design requirements in building an interpretable fuzzy model is that each obtained rule consequent must match well with the system's local behaviour when all the rules are aggregated to produce the overall system output. This is one of the characteristics that distinguishes such models from black-box models such as neural networks. Therefore, how to find a desirable set of fuzzy partitions and, hence, to identify the corresponding consequent models which can be directly explained in terms of system behaviour presents a critical step in fuzzy neural modelling. In this paper, a new learning approach considering both nonlinear parameters in the rule premises and linear parameters in the rule consequents is proposed. Unlike the conventional two-stage optimization procedure widely practised in the field, where the two sets of parameters are optimized separately, the consequent parameters are transformed into a set dependent on the premise parameters, thereby enabling the introduction of a new integrated gradient descent learning approach. A new Jacobian matrix is thus proposed and efficiently computed to achieve a more accurate approximation of the cost function when using the second-order Levenberg-Marquardt optimization method. Several other interpretability issues concerning the fuzzy neural model are also discussed and integrated into this new learning approach. Numerical examples are presented to illustrate the resultant structure of the fuzzy neural models and the effectiveness of the proposed algorithm, and the results are compared with those from some well-known methods.
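
A minimal sketch of the integration idea on a one-input model (the data, rule count, and use of SciPy's Levenberg-Marquardt driver are assumptions; this is not the authors' Jacobian derivation): the consequent parameters are written as the least-squares solution implied by the current premise parameters, so a single Levenberg-Marquardt run over the premise parameters optimizes both sets jointly.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
x = np.linspace(-3, 3, 150)
y = np.sin(x) + 0.05 * rng.normal(size=x.size)

def residuals(theta, n_rules=3):
    centers, widths = theta[:n_rules], np.abs(theta[n_rules:]) + 1e-3
    mu = np.exp(-((x[:, None] - centers) ** 2) / widths ** 2)
    w = mu / mu.sum(axis=1, keepdims=True)          # normalized firing
    # Regressors for the local linear consequents a_i * x + b_i.
    Phi = np.hstack([w * x[:, None], w])
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # consequents depend
    return Phi @ coef - y                           # on the premises only

theta0 = np.concatenate([[-2.0, 0.0, 2.0], np.ones(3)])
sol = least_squares(residuals, theta0, method="lm")  # Levenberg-Marquardt
print(f"final RMSE {np.sqrt(np.mean(sol.fun ** 2)):.4f}")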

Relevance: 30.00%

Abstract:

Ashton and colleagues concede in their response (Ashton, Lee, & Visser, in this issue) that neuroimaging methods provide a relatively unambiguous measure of the levels to which cognitive tasks co-recruit different functional brain networks (task mixing). It is also evident from their response that they now accept that task mixing differs from the blended models of the classic literature. However, they still have not grasped how the neuroimaging data can help to constrain models of the neural basis of higher order ‘g’. Specifically, they claim that our analyses are invalid because we assume that functional networks have uncorrelated capacities. They use the simple analogy of a set of exercises that recruit multiple muscle groups to varying extents and highlight the fact that individual differences in strength may correlate across muscle groups. Contrary to their claim, we did not assume in the original article (Hampshire, Highfield, Parkin, & Owen, 2012) that functional networks had uncorrelated capacities; instead, the analyses were specifically designed to estimate the scale of those correlations, which we referred to as spatially ‘diffuse’ factors.

Relevance: 30.00%

Abstract:

What makes one person more intellectually able than another? Can the entire distribution of human intelligence be accounted for by just one general factor? Is intelligence supported by a single neural system? Here, we provide a perspective on human intelligence that takes into account how general abilities or "factors" reflect the functional organization of the brain. By comparing factor models of individual differences in performance with factor models of brain functional organization, we demonstrate that different components of intelligence have their analogs in distinct brain networks. Using simulations based on neuroimaging data, we show that the higher-order factor "g" is accounted for by cognitive tasks corecruiting multiple networks. Finally, we confirm the independence of these components of intelligence by dissociating them using questionnaire variables. We propose that intelligence is an emergent property of anatomically distinct cognitive systems, each of which has its own capacity.
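
A minimal sketch of that simulation logic (the mixing weights and sizes are hypothetical, not the paper's fitted values): subjects have independent capacities on distinct networks, tasks co-recruit the networks in different proportions, and the induced task correlations alone yield a dominant "g"-like first component.

import numpy as np

rng = np.random.default_rng(3)
n_subj, n_net = 1000, 3
capacity = rng.normal(size=(n_subj, n_net))  # independent by construction

# Each row: how strongly one task recruits each network (task mixing).
mixing = np.array([[0.9, 0.1, 0.0],
                   [0.5, 0.5, 0.0],
                   [0.0, 0.8, 0.2],
                   [0.1, 0.2, 0.7],
                   [0.3, 0.3, 0.4]])
scores = capacity @ mixing.T + 0.3 * rng.normal(size=(n_subj, 5))

R = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(R)[::-1]        # descending eigenvalues
print(f"first component explains {eigvals[0] / eigvals.sum():.0%} of variance")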

Relevance: 30.00%

Abstract:

Long-term contractual decisions are the basis of efficient risk management. However, those decisions have to be supported by a robust price forecast methodology. This paper reports a different approach to long-term price forecasting that aims to answer that need. Making use of regression models, the proposed methodology has as its main objective to find the maximum and minimum Market Clearing Price (MCP) for a specific programming period, with a desired confidence level α. Due to the problem's complexity, the meta-heuristic Particle Swarm Optimization (PSO) was used to find the best regression parameters, and the results are compared with those obtained using a Genetic Algorithm (GA). To validate these models, results from realistic data are presented and discussed in detail.
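
A minimal sketch under stated assumptions (synthetic prices, a linear model, and a hand-rolled global-best PSO; the paper's regression form and settings may differ): PSO searches the regression coefficients of an upper MCP bound at confidence level α by minimizing the pinball (quantile) loss; the same loss with 1 - α would give the lower bound.

import numpy as np

rng = np.random.default_rng(4)
t = np.arange(200, dtype=float)                        # programming periods
price = 40 + 0.05 * t + 5 * rng.normal(size=t.size)    # synthetic MCP series
X = np.column_stack([np.ones_like(t), t])

def pinball(theta, alpha=0.95):
    # Quantile loss: minimized when X @ theta is the alpha-quantile line.
    r = price - X @ theta
    return np.mean(np.maximum(alpha * r, (alpha - 1) * r))

n_part, w, c1, c2 = 30, 0.7, 1.5, 1.5
pos = rng.normal(scale=10, size=(n_part, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([pinball(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(300):
    r1, r2 = rng.random((2, n_part, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([pinball(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print(f"upper-bound model: MCP <= {gbest[0]:.2f} + {gbest[1]:.3f} * t")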

Relevance: 30.00%

Abstract:

The elastic behavior of demand consumption, used jointly with other available resources such as distributed generation (DG), can play a crucial role in the success of smart grids. The intensive use of Distributed Energy Resources (DER) and the technical and contractual constraints result in large-scale nonlinear optimization problems that require computational intelligence methods to be solved. This paper proposes a Particle Swarm Optimization (PSO) based methodology to support the minimization of the operation costs of a virtual power player that manages the resources in a distribution network and the network itself. Resources include the DER available in the considered time period and the energy that can be bought from external energy suppliers. Network constraints are considered. The proposed approach uses Gaussian mutation of the strategic parameters and contextual self-parameterization of the maximum and minimum particle velocities. The case study considers a real 937-bus distribution network with 20,310 consumers and 548 distributed generators. The obtained solutions are compared with those of a deterministic approach, of PSO without mutation, and of Evolutionary PSO, the latter two using self-parameterization.
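
A minimal sketch of the two mechanisms named above grafted onto a toy PSO (the quadratic objective stands in for the virtual power player's cost model, and all settings are illustrative): each particle carries its own strategic parameters (w, c1, c2) subjected to Gaussian mutation each generation, and the velocity limits are derived from the search bounds rather than fixed by hand.

import numpy as np

rng = np.random.default_rng(5)
dim, n_part = 5, 20
lo, hi = -5.0, 5.0
vmax = 0.2 * (hi - lo)           # velocity limit tied to the search range

def cost(x):                     # stand-in for the VPP operation cost
    return np.sum(x ** 2, axis=-1)

pos = rng.uniform(lo, hi, (n_part, dim))
vel = np.zeros((n_part, dim))
strat = np.tile([0.7, 1.5, 1.5], (n_part, 1))    # per-particle w, c1, c2
pbest, pbest_f = pos.copy(), cost(pos)
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(200):
    # Gaussian mutation of the strategic parameters, kept within bounds.
    strat = np.clip(strat + 0.05 * rng.normal(size=strat.shape), 0.1, 2.5)
    w, c1, c2 = strat.T[:, :, None]
    r1, r2 = rng.random((2, n_part, dim))
    vel = np.clip(w * vel + c1 * r1 * (pbest - pos)
                  + c2 * r2 * (gbest - pos), -vmax, vmax)
    pos = np.clip(pos + vel, lo, hi)
    f = cost(pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print(f"best cost {pbest_f.min():.4f}")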

Relevance: 30.00%

Abstract:

Demand response programs and models have been developed and implemented to improve the performance of electricity markets, taking full advantage of smart grids. Studying and addressing consumers' flexibility and network operation scenarios makes it possible to design improved demand response models and programs. The methodology proposed in the present paper addresses the definition of demand response programs that consider demand shifting between periods during multi-period demand response events. The optimization model focuses on minimizing the network and resource operation costs for a Virtual Power Player. Quantum Particle Swarm Optimization is used to obtain solutions for the optimization model, which is applied to a large set of operation scenarios. The implemented case study illustrates the use of the proposed methodology to support the decisions of the Virtual Power Player regarding the duration of each demand response event.
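
A minimal sketch of the Quantum PSO update on a toy cost (the objective stands in for the network and resource cost model; β and all sizes are illustrative): each particle is resampled around an attractor between its personal best and the global best, with a jump scale set by the mean of the personal bests, which is the standard QPSO delta-potential-well rule.

import numpy as np

rng = np.random.default_rng(6)
dim, n_part, beta = 4, 25, 0.75

def cost(x):                     # stand-in for the VPP cost model
    return np.sum((x - 1.0) ** 2, axis=-1)

pos = rng.uniform(-5, 5, (n_part, dim))
pbest, pbest_f = pos.copy(), cost(pos)
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(300):
    mbest = pbest.mean(axis=0)                   # mean-best position
    phi = rng.random((n_part, dim))
    attractor = phi * pbest + (1 - phi) * gbest
    u = rng.random((n_part, dim))
    sign = np.where(rng.random((n_part, dim)) < 0.5, 1.0, -1.0)
    pos = attractor + sign * beta * np.abs(mbest - pos) * np.log(1.0 / u)
    f = cost(pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print(f"best cost {pbest_f.min():.6f}")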

Relevance: 30.00%

Abstract:

The Wechsler Intelligence Scale for Children, fourth edition (WISC-IV) recognizes a four-factor scoring structure in addition to the Full Scale IQ (FSIQ) score: the Verbal Comprehension (VCI), Perceptual Reasoning (PRI), Working Memory (WMI), and Processing Speed (PSI) indices. However, several authors have suggested that models based on the Cattell-Horn-Carroll (CHC) theory with five or six factors provide a better fit to the data than the current four-factor solution. By comparing the current four-factor structure to CHC-based models, this research aimed to investigate the factorial structure and the constructs underlying the WISC-IV subtest scores with French-speaking Swiss children (N = 249). To this end, confirmatory factor analyses (CFAs) were conducted. Results showed that a CHC-based model with five factors fitted the French-Swiss data better than did the current WISC-IV scoring structure. Altogether, these results support the hypothesis of the appropriateness of the CHC model for French-speaking children.
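
A minimal sketch of the model-comparison workflow using the semopy package, with synthetic data in place of the Swiss sample (the subtest groupings are abbreviated and the data are random, so the fit values are meaningless; only the mechanics are shown): the four-factor scoring structure and a five-factor CHC-style alternative are fitted to the same subtest scores and their fit statistics compared.

import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(7)
subtests = ["SI", "VC", "CO", "BD", "PC", "MR", "AR", "DS", "LN", "CD", "SS"]
g = rng.normal(size=(249, 1))                    # one common factor
df = pd.DataFrame(0.7 * g + 0.7 * rng.normal(size=(249, 11)),
                  columns=subtests)

four_factor = """
VCI =~ SI + VC + CO
PRI =~ BD + PC + MR
WMI =~ DS + LN + AR
PSI =~ CD + SS
"""
chc_five = """
Gc =~ SI + VC + CO
Gv =~ BD + PC
Gf =~ MR + AR
Gsm =~ DS + LN
Gs =~ CD + SS
"""
for name, desc in [("WISC-IV four-factor", four_factor),
                   ("CHC five-factor", chc_five)]:
    model = semopy.Model(desc)
    model.fit(df)
    stats = semopy.calc_stats(model)             # CFI, RMSEA, AIC, BIC, ...
    print(name)
    print(stats[["CFI", "RMSEA", "AIC"]].round(3).to_string(index=False))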

Relevance: 30.00%

Abstract:

Experimental Extended X-ray Absorption Fine Structure (EXAFS) spectra carry information about the chemical structure of metal protein complexes. However, predicting the structure of such complexes from EXAFS spectra is not a simple task. Currently, methods such as Monte Carlo optimization or simulated annealing are used in structure refinement of EXAFS. These methods have proven somewhat successful in structure refinement but have not been successful in finding the global minimum. Multiple population-based algorithms, including a genetic algorithm, a restarting genetic algorithm, differential evolution, and particle swarm optimization, are studied for their effectiveness in structure refinement of EXAFS. The oxygen-evolving complex in the S1 state is used as a benchmark for comparing the algorithms. These algorithms were successful in finding new atomic structures that produced improved calculated EXAFS spectra over atomic structures previously found.
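
A minimal sketch of population-based refinement in this spirit, using SciPy's differential evolution (the damped-sinusoid signal model is a drastic simplification of a real EXAFS calculation, and all values are illustrative): candidate scattering-path distances generate a toy spectrum, and the optimizer minimizes the misfit to a "measured" one.

import numpy as np
from scipy.optimize import differential_evolution

k = np.linspace(2, 12, 200)        # photoelectron wavenumber grid

def chi(distances):
    # One damped sine per absorber-scatterer path at distance R.
    return sum(np.sin(2 * k * R) * np.exp(-0.01 * k ** 2) / R ** 2
               for R in distances)

true_R = [1.8, 2.7, 3.3]           # 'unknown' structure to recover
measured = chi(true_R) + 0.002 * np.random.default_rng(8).normal(size=k.size)

def misfit(R):
    return np.sum((chi(R) - measured) ** 2)

result = differential_evolution(misfit, bounds=[(1.5, 4.0)] * 3,
                                seed=8, tol=1e-10, maxiter=500)
print(np.sort(result.x), "vs true", true_R)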