978 results for neural algorithms
Abstract:
Artificial neural networks have been used to analyze a number of engineering problems, including settlement caused by different tunneling methods in various types of ground mass. This paper focuses on settlement over shotcrete-supported tunnels on Sao Paulo subway line 2 (West Extension) that were excavated in Tertiary sediments using the sequential excavation method. The adjusted network is a good tool for predicting settlement above new tunnels to be excavated in similar conditions. The influence of network training parameters on the quality of results is also discussed. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
This paper presents a strategy for WDM optical network planning, specifically the problem of Routing and Wavelength Allocation (RWA) with the objective of minimizing the number of wavelengths used; in this form, the problem is known as Min-RWA. Two meta-heuristics (Tabu Search and Simulated Annealing) are applied to obtain solutions of good quality and high performance. The key point is the degradation of the maximum load on the virtual links in favor of minimizing the number of wavelengths used; the objective is to find a good compromise between the metrics of the virtual topology (load in Gb/s) and of the physical topology (number of wavelengths). The simulations suggest good results when compared to some found in the literature.
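As an illustration of the Simulated Annealing side of this approach, the sketch below anneals the order in which lightpaths receive first-fit wavelengths, counting a conflict whenever two lightpaths share a physical link. It is a minimal toy version of Min-RWA, not the paper's actual formulation; the request representation (sets of link identifiers) and all parameter values are assumptions.

```python
import math
import random

def first_fit_wavelengths(requests, order):
    """Greedy first-fit: give each lightpath, in the given order, the lowest
    wavelength not used by any lightpath sharing a physical link with it."""
    assignment = {}
    for i in order:
        used = {assignment[j] for j in assignment if requests[i] & requests[j]}
        w = 0
        while w in used:
            w += 1
        assignment[i] = w
    return assignment

def anneal_min_rwa(requests, iters=2000, t0=2.0, cooling=0.995, seed=0):
    """Simulated annealing over the assignment order; the cost is the number
    of distinct wavelengths the first-fit pass ends up using."""
    rng = random.Random(seed)
    order = list(range(len(requests)))
    cur = max(first_fit_wavelengths(requests, order).values()) + 1
    best, best_order, t = cur, order[:], t0
    for _ in range(iters):
        a, b = rng.sample(range(len(order)), 2)   # swap two requests
        order[a], order[b] = order[b], order[a]
        cost = max(first_fit_wavelengths(requests, order).values()) + 1
        if cost <= cur or rng.random() < math.exp(-(cost - cur) / t):
            cur = cost
            if cur < best:
                best, best_order = cur, order[:]
        else:
            order[a], order[b] = order[b], order[a]  # reject: undo the swap
        t *= cooling
    return best, best_order
```

Two lightpaths sharing any link must receive different wavelengths, so minimizing the count amounts to coloring the conflict graph of the requests.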
Abstract:
This technical note develops information filter and array algorithms for the linear minimum mean square error estimation of discrete-time Markovian jump linear systems. A numerical example for a two-mode Markovian jump linear system is provided to show the advantage of using array algorithms to filter this class of systems.
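The information form propagates the inverse covariance instead of the covariance itself. As a hedged illustration only, here is a scalar (single-state, single-mode) information filter step; the paper's algorithms additionally handle the Markovian mode switching and use array (square-root) updates, which this sketch omits.

```python
def info_filter_step(x, Y, z, a, q, h, r):
    """One predict+update step of a scalar information filter.
    State estimate x, information Y = 1/P (inverse of the error variance).
    Model: x' = a*x + w with Var(w) = q; measurement z = h*x + v, Var(v) = r."""
    # Predict: propagate mean and variance, then return to information form.
    P_pred = a * a / Y + q
    x_pred = a * x
    Y_pred = 1.0 / P_pred
    # Update: measurements add information additively in this form.
    Y_new = Y_pred + h * h / r
    y_new = Y_pred * x_pred + h * z / r   # information-vector update
    return y_new / Y_new, Y_new
```

Starting from a nearly uninformative prior, repeated measurements of a constant state drive the estimate toward the measured value while the information Y accumulates.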
Abstract:
The continuous growth of peer-to-peer networks has made them responsible for a considerable portion of the current Internet traffic. For this reason, improvements in P2P network resource usage are of central importance. One effective approach for addressing this issue is the deployment of locality algorithms, which allow the system to optimize the peer selection policy for different network situations and, thus, maximize performance. To date, several locality algorithms have been proposed for use in P2P networks. However, they usually adopt heterogeneous criteria for measuring the proximity between peers, which hinders a coherent comparison between the different solutions. In this paper, we develop a thorough review of popular locality algorithms, based on three main characteristics: the adopted network architecture, the distance metric, and the resulting peer selection algorithm. As a result of this study, we propose a novel and generic taxonomy for locality algorithms in peer-to-peer networks, aiming to enable a better and more coherent evaluation of any individual locality algorithm.
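The shared core of the algorithms surveyed here can be reduced to "rank candidates by some distance metric and keep the nearest"; the metric is where proposals differ. A minimal sketch (the peer names and measurements below are invented for illustration):

```python
def select_closest_peers(candidates, distance, k=3):
    """Generic locality-aware peer selection: rank candidate peers by a
    pluggable distance metric and keep the k closest. The metric may be
    measured RTT, AS-path hops, geographic distance, etc."""
    return sorted(candidates, key=distance)[:k]

# Hypothetical measurements for the same three peers under two metrics:
rtt_ms = {'peer-a': 80.0, 'peer-b': 10.0, 'peer-c': 40.0}
as_hops = {'peer-a': 2, 'peer-b': 5, 'peer-c': 3}
```

Swapping the metric changes which peers get selected, which is precisely why heterogeneous metrics make direct comparisons between locality algorithms difficult.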
Abstract:
This paper presents an Adaptive Maximum Entropy (AME) approach for modeling biological species. The Maximum Entropy algorithm (MaxEnt) is one of the most widely used methods for modeling the geographical distribution of biological species. The approach presented here is an alternative to the classical algorithm: instead of using the same set of features throughout training, the AME approach tries to insert or remove a single feature at each iteration. The aim is to reach convergence faster without affecting the performance of the generated models. The preliminary experiments showed improvements in both accuracy and execution time. Comparisons with other algorithms are beyond the scope of this paper. Several important research directions are proposed as future work.
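The single-feature insert/remove idea can be sketched generically as a greedy toggle step: at each iteration, evaluate every one-feature change and keep the best improving one. This is an interpretation of the described strategy with an abstract `score` callback, not the authors' actual MaxEnt update.

```python
def adaptive_feature_step(features, active, score):
    """Try inserting or removing one feature; keep the best improving move.
    `score(active_set)` is assumed to return higher-is-better model quality."""
    base = score(active)
    best_move, best_score = None, base
    for f in features:
        trial = active ^ {f}          # toggle: insert if absent, remove if present
        s = score(trial)
        if s > best_score:
            best_move, best_score = f, s
    if best_move is not None:
        active = active ^ {best_move}
    return active, best_score
```

Iterating the step until no single toggle improves the score yields a locally optimal feature set, one change per iteration as in the AME description.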
Abstract:
In this paper, a computational implementation of an evolutionary algorithm (EA) is presented to tackle the problem of reconfiguring radial distribution systems. The developed module considers power quality indices such as long-duration interruptions and customer process disruptions due to voltage sags, by using the Monte Carlo simulation method. Power quality costs are modeled into the mathematical problem formulation and added to the cost of network losses. As for the proposed EA codification, a decimal representation is used. The EA operators considered for the reconfiguration algorithm, namely selection, recombination and mutation, are herein analyzed. A number of selection procedures are analyzed, namely tournament, elitism and a mixed technique using both elitism and tournament. The recombination operator was developed by considering a chromosome structure representation that maps the network branches and system radiality, and another structure that takes into account the network topology and feasibility of network operation to exchange genetic material. The topologies in the initial population are randomly produced so that radial configurations are generated through the Prim and Kruskal algorithms, which rapidly build minimum spanning trees. (C) 2009 Elsevier B.V. All rights reserved.
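Random radial initial topologies can be produced with a randomized Prim-style growth, in the spirit of the minimum-spanning-tree construction mentioned above; this sketch assumes a connected branch list and ignores branch weights.

```python
import random

def random_radial_config(nodes, branches, seed=None):
    """Build one random radial topology (a spanning tree) by randomized
    Prim-style growth: repeatedly pick a random branch that connects the
    current tree to a node not yet reached. Assumes the graph is connected."""
    rng = random.Random(seed)
    tree, visited = [], {nodes[0]}
    while len(visited) < len(nodes):
        # frontier: branches with exactly one endpoint already in the tree
        frontier = [(u, v) for (u, v) in branches
                    if (u in visited) != (v in visited)]
        u, v = rng.choice(frontier)
        tree.append((u, v))
        visited |= {u, v}
    return tree
```

Every tree returned has exactly n-1 branches and touches every node, i.e. it is radial by construction, which is why such generators are convenient for seeding an EA population with feasible individuals.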
Abstract:
This work presents the development and implementation of an artificial neural network based algorithm for transmission lines distance protection. This algorithm was developed to be used in any transmission line regardless of its configuration or voltage level. The described ANN-based algorithm does not need any topology adaptation or ANN parameters adjustment when applied to different electrical systems. This feature makes this solution unique since all ANN-based solutions presented until now were developed for particular transmission lines, which means that those solutions cannot be implemented in commercial relays. (c) 2011 Elsevier Ltd. All rights reserved.
Abstract:
There are several ways to attempt to model a building and its heat gains from external as well as internal sources in order to evaluate proper operation, audit retrofit actions, and forecast energy consumption. Different techniques, varying from simple regression to models based on physical principles, can be used for simulation. A frequent hypothesis for all these models is that the input variables should be based on realistic data when they are available; otherwise, the evaluation of energy consumption might be highly under- or overestimated. In this paper, a comparison is made between a simple model based on an artificial neural network (ANN) and a model based on physical principles (EnergyPlus), used as auditing and prediction tools for forecasting building energy consumption. The Administration Building of the University of Sao Paulo is used as a case study. The building energy consumption profiles are collected, as well as the campus meteorological data. Results show that both models are suitable for energy consumption forecasting. Additionally, a parametric analysis is carried out for the considered building in EnergyPlus in order to evaluate the influence of several parameters, such as the building occupation profile and weather data, on such forecasting. (C) 2008 Elsevier B.V. All rights reserved.
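At the "simple regression" end of the modelling spectrum mentioned above, a one-variable least-squares fit is enough to illustrate the idea, e.g. daily consumption against outdoor temperature (the variables here are illustrative, not the study's actual inputs):

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x, computed from the closed-form
    normal equations: b = cov(x, y) / var(x), a = mean(y) - b * mean(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b
```

ANN and EnergyPlus models generalize this baseline by capturing non-linear relationships and physically grounded ones, respectively.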
Abstract:
This paper presents a family of algorithms for approximate inference in credal networks (that is, models based on directed acyclic graphs and set-valued probabilities) that contain only binary variables. Such networks can represent incomplete or vague beliefs, lack of data, and disagreements among experts; they can also encode models based on belief functions and possibilistic measures. All algorithms for approximate inference in this paper rely on exact inferences in credal networks based on polytrees with binary variables, as these inferences have polynomial complexity. We are inspired by approximate algorithms for Bayesian networks; thus the Loopy 2U algorithm resembles Loopy Belief Propagation, while the Iterated Partial Evaluation and Structured Variational 2U algorithms are, respectively, based on Localized Partial Evaluation and variational techniques. (C) 2007 Elsevier Inc. All rights reserved.
Abstract:
In this work, the oxidation of the model pollutant phenol has been studied by means of the O3, O3-UV, and O3-H2O2 processes. Experiments were carried out in a fed-batch system to investigate the effects of the initial dissolved organic carbon (DOC) concentration, the initial ozone concentration in the gas phase, the presence or absence of UVC radiation, and the initial hydrogen peroxide concentration. Experimental results were used in the modeling of the degradation processes by neural networks in order to simulate DOC-time profiles and evaluate the relative importance of the process variables.
Abstract:
In order to model the synchronization of brain signals, a three-node fully-connected network is presented. The nodes are considered to be voltage controlled oscillator neurons (VCON), allowing one to conjecture about how the whole process depends on synaptic gains, free-running frequencies and delays. The VCON, represented by phase-locked loops (PLL), are fully connected and, as a consequence, an asymptotically stable synchronous state appears. Here, an expression for the synchronous state frequency is derived and the parameter dependence of its stability is discussed. Numerical simulations are performed, providing conditions for the use of the derived formulae. The model differential equations are difficult to treat analytically, but some simplifying assumptions combined with simulations provide an alternative formulation for the long-term behavior of the fully-connected VCON network. Regarding this kind of network as a model for brain frequency signal processing, with each PLL representing a neuron (VCON), conditions for their synchronization are proposed, considering the different bands of brain activity signals and relating them to synaptic gains, delays and free-running frequencies. For the delta waves, the synchronous state depends strongly on the delays. However, for alpha, beta and theta waves, the free-running individual frequencies determine the synchronous state. (C) 2011 Elsevier B.V. All rights reserved.
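A crude, hedged stand-in for the fully-connected VCON network is a trio of phase oscillators with sinusoidal coupling, integrated with the Euler method; it reproduces the qualitative point that sufficient gain locks nodes with different free-running frequencies onto a common synchronous frequency. Delays, which the paper shows matter for the delta band, are omitted here, and the sinusoidal coupling is an assumption, not the paper's PLL model.

```python
import math

def simulate_oscillator_network(omega, gain, dt=0.001, steps=20000):
    """Euler simulation of a fully connected network of phase oscillators:
    d(theta_i)/dt = omega_i + gain * sum_j sin(theta_j - theta_i).
    Returns final phases and the instantaneous frequency of each node."""
    n = len(omega)
    theta = [0.0] * n
    for _ in range(steps):
        theta = [theta[i] + dt * (omega[i] + gain *
                 sum(math.sin(theta[j] - theta[i]) for j in range(n)))
                 for i in range(n)]
    # effective frequencies at the end of the run
    freqs = [omega[i] + gain *
             sum(math.sin(theta[j] - theta[i]) for j in range(n))
             for i in range(n)]
    return theta, freqs
```

With detunings of 0.5 rad/s around a 10 rad/s center and gain 2.0, the three nodes lock onto a common frequency near the mean of the free-running frequencies.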
Abstract:
The flowshop scheduling problem with blocking in-process is addressed in this paper. In this environment, there are no buffers between successive machines; therefore, intermediate queues of jobs waiting in the system for their next operations are not allowed. Heuristic approaches are proposed to minimize the total tardiness criterion. A constructive heuristic that explores specific characteristics of the problem is presented. Moreover, a GRASP-based heuristic is proposed and coupled with a path relinking strategy to search for better outcomes. Computational tests are presented, and the comparisons made with an adaptation of the NEH algorithm and with a branch-and-bound algorithm indicate that the new approaches are promising. (c) 2007 Elsevier Ltd. All rights reserved.
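To make the blocking constraint concrete, the sketch below computes total tardiness under blocking (a job stays on its machine until the next one is free) and uses it inside a small NEH-style insertion heuristic seeded by earliest due date. This is an illustrative adaptation, not the paper's constructive or GRASP heuristic.

```python
def blocking_flowshop_tardiness(seq, proc, due):
    """Total tardiness of sequence `seq` in a blocking flowshop (no buffers).
    proc[j][i] = processing time of job j on machine i; due[j] = due date."""
    m = len(proc[0])
    prev = [0.0] * (m + 1)   # prev[i]: departure of previous job from machine i
    total = 0.0
    for j in seq:
        dep = [prev[1]] + [0.0] * m   # job enters machine 1 when it is free
        for i in range(1, m + 1):
            dep[i] = dep[i - 1] + proc[j][i - 1]
            if i < m:                 # blocked until the next machine is free
                dep[i] = max(dep[i], prev[i + 1])
        prev = dep
        total += max(0.0, dep[m] - due[j])
    return total

def neh_total_tardiness(proc, due):
    """NEH-style insertion heuristic for total tardiness: seed with an
    earliest-due-date order, then insert each job at its best position."""
    jobs = sorted(range(len(proc)), key=lambda j: due[j])
    seq = []
    for j in jobs:
        seq = min((seq[:k] + [j] + seq[k:] for k in range(len(seq) + 1)),
                  key=lambda s: blocking_flowshop_tardiness(s, proc, due))
    return seq, blocking_flowshop_tardiness(seq, proc, due)
```

Note how a long operation downstream (5 time units on machine 2 in the test below) blocks the next job on machine 1 even after its processing has finished, which is exactly the effect that distinguishes this environment from the classical flowshop.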
Abstract:
When building genetic maps, it is necessary to choose from several marker ordering algorithms and criteria, and the choice is not always simple. In this study, we evaluate the efficiency of the algorithms try (TRY), seriation (SER), rapid chain delineation (RCD), recombination counting and ordering (RECORD) and unidirectional growth (UG), as well as the criteria PARF (product of adjacent recombination fractions), SARF (sum of adjacent recombination fractions), SALOD (sum of adjacent LOD scores) and LHMC (likelihood through hidden Markov chains), used with the RIPPLE algorithm for error verification, in the construction of genetic linkage maps. A linkage map of a hypothetical diploid and monoecious plant species was simulated, containing one linkage group and 21 markers with a fixed distance of 3 cM between them. In all, 700 F2 populations were randomly simulated with 100 and 400 individuals, with different combinations of dominant and co-dominant markers, as well as 10 and 20% of missing data. The simulations showed that, in the presence of co-dominant markers only, any combination of algorithm and criterion may be used, even for a reduced population size. In the case of a smaller proportion of dominant markers, any of the algorithms and criteria investigated (except SALOD) may be used. In the presence of high proportions of dominant markers and smaller samples (around 100 individuals), the probability of repulsion linkage between markers increases and, in this case, the algorithms TRY and SER associated with RIPPLE under the LHMC criterion provide better results. Heredity (2009) 103, 494-502; doi:10.1038/hdy.2009.96; published online 29 July 2009
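Of the ordering criteria above, SARF is the simplest to state: sum the recombination fractions between adjacent markers and prefer orders that minimize it. A small sketch with an exhaustive search, feasible only for a handful of markers and shown purely to illustrate the criterion (the recombination fractions in the test are invented):

```python
from itertools import permutations

def sarf(order, rf):
    """Sum of Adjacent Recombination Fractions (SARF) for a marker order.
    rf[(a, b)] = recombination fraction between markers a and b (keys are
    sorted pairs); lower SARF indicates a better (shorter) order."""
    return sum(rf[tuple(sorted((order[i], order[i + 1])))]
               for i in range(len(order) - 1))

def best_order_by_sarf(markers, rf):
    """Exhaustive SARF minimisation over all permutations."""
    return min(permutations(markers), key=lambda o: sarf(o, rf))
```

Real mapping software avoids the factorial search with heuristics such as TRY, SER or RECORD; the criterion being minimized, however, is exactly this sum.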
Abstract:
The coordination of movement is governed by a coalition of constraints. The expression of these constraints ranges from the concrete (the restricted range of motion offered by the mechanical configuration of our muscles and joints) to the abstract (the difficulty that we experience in combining simple movements into complex rhythms). We seek to illustrate that the various constraints on coordination are complementary and inclusive, and that their expression and interaction are mediated systematically by the integrative action of the central nervous system (CNS). Beyond identifying the general principles at the behavioural level that govern the mutual interplay of constraints, we attempt to demonstrate that these principles have as their foundation specific functional properties of the cortical motor systems. We propose that regions of the brain upstream of the motor cortex may play a significant role in mediating interactions between the functional representations of muscles engaged in sensorimotor coordination tasks. We also argue that activity in these "supramotor" regions may mediate the stabilising role of augmented sensory feedback.