907 results for "computationally efficient algorithm"
Abstract:
This report examines how to estimate the parameters of a chaotic system given noisy observations of the state behavior of the system. Investigating parameter estimation for chaotic systems is interesting because of possible applications for high-precision measurement and for use in other signal processing, communication, and control applications involving chaotic systems. In this report, we examine theoretical issues regarding parameter estimation in chaotic systems and develop an efficient algorithm to perform parameter estimation. We discover two properties that are helpful for performing parameter estimation on non-structurally stable systems. First, it turns out that most data in a time series of state observations contribute very little information about the underlying parameters of a system, while a few sections of data may be extraordinarily sensitive to parameter changes. Second, for one-parameter families of systems, we demonstrate that there is often a preferred direction in parameter space governing how easily trajectories of one system can "shadow" trajectories of nearby systems. This asymmetry of shadowing behavior in parameter space is proved for certain families of maps of the interval. Numerical evidence indicates that similar results may be true for a wide variety of other systems. Using the two properties cited above, we devise an algorithm for performing parameter estimation. Standard parameter estimation techniques such as the extended Kalman filter perform poorly on chaotic systems because of divergence problems. The proposed algorithm achieves accuracies several orders of magnitude better than the Kalman filter and has good convergence properties for large data sets.
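The sensitivity property above is easy to illustrate numerically. The following is a minimal sketch, not the report's algorithm: it fits the parameter r of the logistic map by brute-force search over a short window of noisy observations, where chaotic divergence makes even a few samples highly informative about r.

```python
import numpy as np

def logistic_trajectory(r, x0, n):
    """Iterate the logistic map x_{t+1} = r * x_t * (1 - x_t)."""
    x = np.empty(n)
    x[0] = x0
    for t in range(n - 1):
        x[t + 1] = r * x[t] * (1.0 - x[t])
    return x

rng = np.random.default_rng(0)
r_true, x0, n = 3.9, 0.2, 50
obs = logistic_trajectory(r_true, x0, n) + rng.normal(0.0, 0.01, n)

# Brute-force search over r: simulate a short segment seeded with the first
# observation and compare it to the observed segment. Because trajectories
# of a chaotic map diverge exponentially for wrong r, a short window carries
# most of the parameter information.
r_grid = np.linspace(3.8, 4.0, 2001)
window = 10
errors = [np.mean((logistic_trajectory(r, obs[0], window) - obs[:window]) ** 2)
          for r in r_grid]
r_hat = r_grid[int(np.argmin(errors))]
print(f"true r = {r_true}, estimated r = {r_hat:.4f}")
```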
Abstract:
We present a set of techniques that can be used to represent and detect shapes in images. Our methods revolve around a particular shape representation based on the description of objects using triangulated polygons. This representation is similar to the medial axis transform and has important properties from a computational perspective. The first problem we consider is the detection of non-rigid objects in images using deformable models. We present an efficient algorithm to solve this problem in a wide range of situations, and show examples in both natural and medical images. We also consider the problem of learning an accurate non-rigid shape model for a class of objects from examples. We show how to learn good models while constraining them to the form required by the detection algorithm. Finally, we consider the problem of low-level image segmentation and grouping. We describe a stochastic grammar that generates arbitrary triangulated polygons while capturing Gestalt principles of shape regularity. This grammar is used as a prior model over random shapes in a low-level algorithm that detects objects in images.
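A property worth spelling out, since it underlies the computational claims: the dual graph of a triangulated simple polygon, with triangles adjacent across shared diagonals, is always a tree, which is what makes dynamic-programming-style detection efficient. Below is a minimal sketch of the representation with hypothetical types; it is not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class TriangulatedPolygon:
    vertices: list   # [(x, y), ...] boundary vertices in order
    triangles: list  # [(i, j, k), ...] triples of vertex indices

    def dual_edges(self):
        """Pairs of triangle indices sharing an internal diagonal."""
        edges = {}
        for t, (i, j, k) in enumerate(self.triangles):
            for e in ((i, j), (j, k), (k, i)):
                edges.setdefault(frozenset(e), []).append(t)
        return [tuple(ts) for ts in edges.values() if len(ts) == 2]

# A square split along one diagonal: the dual graph is a single edge, and
# for any triangulated simple polygon the dual graph is a tree.
poly = TriangulatedPolygon([(0, 0), (1, 0), (1, 1), (0, 1)],
                           [(0, 1, 2), (0, 2, 3)])
print(poly.dual_edges())  # [(0, 1)]
```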
Abstract:
In this paper, we present a P2P-based database sharing system that provides information sharing capabilities through keyword-based search techniques. Our system requires neither a global schema nor schema mappings between different databases, and our keyword-based search algorithms are robust in the presence of frequent changes in the content and membership of peers. To facilitate data integration, we introduce a keyword join operator to combine partial answers containing different keywords into complete answers. We also present an efficient algorithm that optimizes the keyword join operations for partial answer integration. Our experimental study on both real and synthetic datasets demonstrates the effectiveness of our algorithms and the efficiency of the proposed query processing strategies.
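A minimal sketch of the keyword join idea under an assumed, simplified data model (partial answers carry the keywords they cover plus a join key; none of the identifiers below come from the paper): partial answers covering different subsets of the query keywords are combined into complete answers.

```python
from itertools import combinations

query = {"smith", "database", "2003"}

# Each partial answer: (keywords covered, join key, payload tuple).
partials = [
    ({"smith"}, "p42", {"author": "Smith"}),
    ({"database", "2003"}, "p42", {"title": "Databases", "year": 2003}),
    ({"database"}, "p77", {"title": "Another DB paper"}),
]

# Keyword join: two partial answers combine when they are joinable (shared
# key here) and together cover all query keywords.
complete = []
for (kw_a, key_a, row_a), (kw_b, key_b, row_b) in combinations(partials, 2):
    if key_a == key_b and kw_a | kw_b == query:
        complete.append({**row_a, **row_b})

print(complete)  # one complete answer built from two partial answers
```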
Abstract:
Two-dimensional flood inundation modelling is a widely used tool to aid flood risk management. In urban areas, the model spatial resolution required to represent flows through a typical street network often results in an impractical computational cost at the city scale. This paper presents the calibration and evaluation of a recently developed formulation of the LISFLOOD-FP model, which is more computationally efficient at these resolutions. Aerial photography was available for model evaluation on three days between 24 and 31 July. The new formulation was benchmarked against the original version of the model at 20 and 40 m resolutions, demonstrating equally accurate simulation given the evaluation data while running 67 times faster. The July event was then simulated at the 2 m resolution of the available airborne LiDAR DEM. This resulted in more accurate simulation of the floodplain drying dynamics compared with the coarse-resolution models, although maximum inundation levels were simulated equally well at all resolutions tested.
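For readers unfamiliar with the model, the following is a sketch assuming the "recently developed formulation" refers to the simplified inertial scheme published for LISFLOOD-FP by Bates et al. (2010); it shows one explicit update of the unit-width flux q between two cells, with the friction term treated semi-implicitly.

```python
g = 9.81  # gravitational acceleration, m/s^2

def inertial_flux(q, h, slope, n, dt):
    """One explicit step of the simplified inertial flux update:
    q is unit-width discharge (m^2/s), h flow depth (m), slope the
    water-surface slope, n Manning's roughness, dt the time step (s)."""
    if h <= 0.0:
        return 0.0
    # Semi-implicit Manning friction keeps the update stable as h -> 0.
    return (q - g * h * dt * slope) / (
        1.0 + g * dt * n ** 2 * abs(q) / h ** (7.0 / 3.0))

# Example: 2 m deep flow, water-surface slope 1e-4, n = 0.035, 10 s step.
print(inertial_flux(q=1.0, h=2.0, slope=1e-4, n=0.035, dt=10.0))
```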
Abstract:
When speech is in competition with interfering sources in rooms, monaural indicators of intelligibility fail to take account of the listener’s abilities to separate target speech from interfering sounds using the binaural system. In order to incorporate these segregation abilities and their susceptibility to reverberation, Lavandier and Culling [J. Acoust. Soc. Am. 127, 387–399 (2010)] proposed a model which combines effects of better-ear listening and binaural unmasking. A computationally efficient version of this model is evaluated here under more realistic conditions that include head shadow, multiple stationary noise sources, and real-room acoustics. Three experiments are presented in which speech reception thresholds were measured in the presence of one to three interferers using real-room listening over headphones, simulated by convolving anechoic stimuli with binaural room impulse responses measured with dummy-head transducers in five rooms. Without fitting any parameter of the model, there was close correspondence between measured and predicted differences in threshold across all tested conditions. The model’s components of better-ear listening and binaural unmasking were validated both in isolation and in combination. The computational efficiency of this prediction method allows the generation of complex “intelligibility maps” from room designs.
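A minimal illustrative sketch, not the paper's exact model: the spirit of the approach is to compute an effective signal-to-noise ratio per frequency band as the better-ear SNR plus a binaural unmasking advantage. All numbers below are made-up placeholders.

```python
import numpy as np

# Hypothetical per-band target-to-interferer ratios at each ear (dB) and
# binaural masking level differences (dB); real values would come from
# binaural room impulse responses.
snr_left  = np.array([-6.0, -4.0, -8.0])
snr_right = np.array([-2.0, -7.0, -5.0])
bmld      = np.array([ 3.0,  2.0,  1.0])

better_ear = np.maximum(snr_left, snr_right)  # pick the more favourable ear
effective  = better_ear + bmld                # add the unmasking benefit
print(effective.mean())                       # crude broadband effective SNR
```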
Abstract:
Simulating spiking neural networks is of great interest to scientists wanting to model the functioning of the brain. However, large-scale models are expensive to simulate due to the number and interconnectedness of neurons in the brain. Furthermore, where such simulations are used in an embodied setting, the simulation must run in real time in order to be useful. In this paper we present NeMo, a platform for such simulations that achieves high performance through the use of highly parallel commodity hardware in the form of graphics processing units (GPUs). NeMo makes use of the Izhikevich neuron model, which provides a range of realistic spiking dynamics while being computationally efficient. Our GPU kernel can deliver up to 400 million spikes per second. This corresponds to a real-time simulation of around 40 000 neurons under biologically plausible conditions with 1000 synapses per neuron and a mean firing rate of 10 Hz.
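The Izhikevich model mentioned above is defined by two coupled equations with a reset rule; a scalar reference version is sketched below for illustration (NeMo runs many such updates in parallel on the GPU; this is not its kernel code).

```python
def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """One Euler step (dt in ms) of the Izhikevich model:
    v' = 0.04 v^2 + 5 v + 140 - u + I,  u' = a (b v - u),
    with reset v <- c, u <- u + d when v reaches 30 mV."""
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:              # spike
        return c, u + d, True
    return v, u, False

# Regular-spiking neuron driven by constant input current.
v, u = -65.0, -13.0
for t in range(100):
    v, u, fired = izhikevich_step(v, u, I=10.0)
    if fired:
        print(f"spike at t = {t} ms")
```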
Abstract:
Ensemble-based data assimilation is rapidly proving itself as a computationally efficient and skilful assimilation method for numerical weather prediction, which can provide a viable alternative to more established variational assimilation techniques. However, a fundamental shortcoming of ensemble techniques is that the resulting analysis increments can only span a limited subspace of the state space, whose dimension is less than the ensemble size. This limits the amount of observational information that can effectively constrain the analysis. In this paper, a data selection strategy that aims to assimilate only the observational components that matter most, and that can be used with both stochastic and deterministic ensemble filters, is presented. This avoids unnecessary computations, reduces round-off errors and minimizes the risk of importing observation bias into the analysis. When an ensemble-based assimilation technique is used to assimilate high-density observations, the data selection procedure allows the use of larger localization domains that may lead to a more balanced analysis. Results from the use of this data selection technique with a two-dimensional linear and a nonlinear advection model, using both in situ and remote sounding observations, are discussed.
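To fix notation, here is a textbook sketch of a stochastic (perturbed-observation) EnKF analysis step, not the paper's data selection scheme: because the Kalman gain is built from ensemble perturbations, the analysis increments lie in a subspace of dimension less than the ensemble size, which is exactly the limitation the selection strategy addresses.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, N = 5, 3, 4                       # state dim, obs dim, ensemble size
X = rng.normal(size=(n, N))             # forecast ensemble (columns)
H = rng.normal(size=(m, n))             # observation operator
R = 0.5 * np.eye(m)                     # observation-error covariance
y = rng.normal(size=m)                  # observations

Xp = X - X.mean(axis=1, keepdims=True)          # ensemble perturbations
P = Xp @ Xp.T / (N - 1)                         # sample covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain

# Perturbed-observation update: each member assimilates its own noisy copy
# of the observations; increments K(...) lie in the span of Xp's columns.
Y = y[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T
Xa = X + K @ (Y - H @ X)
print(Xa.shape)  # analysis ensemble
```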
Abstract:
FAMOUS fills an important role in the hierarchy of climate models, explicitly resolving atmospheric and oceanic dynamics while being sufficiently computationally efficient that either very long simulations or large ensembles are possible. An improved set of carbon cycle parameters for this model has been found using a perturbed physics ensemble technique. This is an important step towards building the "Earth System" modelling capability of FAMOUS, which is a reduced-resolution, and hence faster-running, version of the Hadley Centre climate model HadCM3. Two separate 100-member perturbed parameter ensembles were performed: one for the land surface and one for the ocean. The land surface scheme was tested against present-day and past representations of vegetation, and the ocean ensemble was tested against observations of nitrate. An advantage of using a relatively fast climate model is that a large number of simulations can be run and hence the model parameter space (a large source of climate model uncertainty) can be more thoroughly sampled. This has the associated benefit of being able to assess the sensitivity of model results to changes in each parameter. The climatologies of surface and tropospheric air temperature and precipitation are improved relative to previous versions of FAMOUS. The improved representation of upper-atmosphere temperatures is driven by improved ozone concentrations near the tropopause and better upper-level winds.
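A minimal sketch of how a perturbed-parameter ensemble is typically drawn, using Latin hypercube sampling; the parameter names and ranges are hypothetical placeholders, not FAMOUS's actual parameters, and the sampling design here is generic and may differ from the study's.

```python
import numpy as np

rng = np.random.default_rng(42)
ranges = {"param_a": (0.1, 1.0),     # hypothetical uncertain parameters
          "param_b": (1e-4, 1e-3),
          "param_c": (0.5, 2.0)}
n_members = 100

# Latin hypercube: one stratified sample per ensemble member in each
# dimension, with independent random permutations across dimensions.
strata = np.tile(np.arange(n_members), (len(ranges), 1))
u = (rng.permuted(strata, axis=1) + rng.random((len(ranges), n_members))) / n_members
ensemble = {name: lo + u[i] * (hi - lo)
            for i, (name, (lo, hi)) in enumerate(ranges.items())}
print({k: v[:3].round(4) for k, v in ensemble.items()})
```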
Abstract:
The study of the mechanical energy budget of the oceans using Lorenz available potential energy (APE) theory is based on knowledge of the adiabatically re-arranged Lorenz reference state of minimum potential energy. The compressible and nonlinear character of the equation of state for seawater has been thought to cause the reference state to be ill-defined, casting doubt on the usefulness of APE theory for investigating ocean energetics under realistic conditions. Using a method based on the volume frequency distribution of parcels as a function of temperature and salinity in the context of the seawater Boussinesq approximation, which we illustrate using climatological data, we show that compressibility effects are in fact minor. The reference state can be regarded as a well-defined one-dimensional function of depth, which forms a surface in temperature, salinity and density space between the surface and the bottom of the ocean. For a very small proportion of water masses, this surface can be multivalued and water parcels can have up to two statically stable levels in the reference density profile, of which the shallowest is energetically more accessible. Classifying parcels from the surface to the bottom gives a different reference density profile than classifying in the opposite direction; however, this difference is negligible. We show that the reference state obtained by standard sorting methods is equivalent to that given by the volume frequency distribution approach, although the sorting methods are computationally more expensive. The approach we present can be applied systematically and in a computationally efficient manner to investigate the APE budget of the ocean circulation using models or climatological data.
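A minimal sketch of the "standard sorting method" for the Lorenz reference state, using a toy linear equation of state (the nonlinear seawater equation of state is precisely the complication the paper addresses with the volume frequency distribution): equal-volume parcels are re-stacked so that density increases with depth.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
T = rng.uniform(0.0, 25.0, n)   # parcel temperatures, degC (toy data)
S = rng.uniform(33.0, 37.0, n)  # salinities, g/kg (toy data)
rho = 1027.0 * (1.0 - 2e-4 * T + 8e-4 * (S - 35.0))  # toy linear EOS, kg/m^3

# Reference state: with index 0 at the surface, sorting ascending puts the
# lightest parcel on top and the densest at the bottom, giving a statically
# stable profile of minimum potential energy.
rho_ref = np.sort(rho)
print(rho_ref[0], rho_ref[-1])  # surface and bottom reference densities
```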
Abstract:
The pipe sizing of water networks via evolutionary algorithms is of great interest because it allows the selection of alternative economical solutions that meet a set of design requirements. However, available evolutionary methods are numerous, and methodologies to compare the performance of these methods beyond obtaining a minimal solution for a given problem are currently lacking. A methodology to compare algorithms based on an efficiency rate (E) is presented here and applied to the pipe-sizing problem of four medium-sized benchmark networks (Hanoi, New York Tunnel, GoYang and R-9 Joao Pessoa). E numerically determines the performance of a given algorithm while considering both the quality of the obtained solution and the required computational effort. From the wide range of available evolutionary algorithms, four were selected to implement the methodology: a PseudoGenetic Algorithm (PGA), Particle Swarm Optimization (PSO), Harmony Search (HS) and a modified Shuffled Frog Leaping Algorithm (SFLA). After more than 500,000 simulations, a statistical analysis was performed based on the specific parameters each algorithm requires to operate, and finally, E was analyzed for each network and algorithm. The efficiency measure indicated that PGA is the most efficient algorithm for problems of greater complexity and that HS is the most efficient algorithm for less complex problems. However, the main contribution of this work is that the proposed efficiency rate provides a neutral strategy to compare optimization algorithms and may be useful in the future for selecting the most appropriate algorithm for different types of optimization problems.
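The abstract does not give the formula for E, so the sketch below is hypothetical: it only illustrates the general shape of such a measure, rewarding solution quality while penalizing computational effort. The function name and the normalization are inventions for illustration.

```python
def efficiency_rate(cost, best_known_cost, evaluations, max_evaluations):
    """Hypothetical efficiency measure: quality of the found solution
    relative to the best-known cost, divided by the fraction of the
    evaluation budget consumed. Higher is better."""
    quality = best_known_cost / cost         # 1.0 when the optimum is found
    effort = evaluations / max_evaluations   # fraction of budget used
    return quality / effort

# An algorithm reaching a cost 2% above best-known using 40% of its budget:
print(efficiency_rate(cost=1.02, best_known_cost=1.0,
                      evaluations=40_000, max_evaluations=100_000))
```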
Abstract:
The issue of how children learn the meaning of words is fundamental to developmental psychology. Recent attempts to develop or evolve efficient communication protocols among interacting robots or virtual agents have brought that issue to a central place in more applied research fields as well, such as computational linguistics and neural networks. An attractive approach to learning an object-word mapping is so-called cross-situational learning. This learning scenario is based on the intuitive notion that a learner can determine the meaning of a word by finding something in common across all observed uses of that word. Here we show how the deterministic Neural Modeling Fields (NMF) categorization mechanism can be used by the learner as an efficient algorithm to infer the correct object-word mapping. To achieve this, we first reduce the original on-line learning problem to a batch learning problem in which the inputs to the NMF mechanism are all possible object-word associations that could be inferred from the cross-situational learning scenario. Since many of those associations are incorrect, they are treated as clutter or noise and discarded automatically by a clutter detector model included in our NMF implementation. With these two key ingredients - batch learning and clutter detection - the NMF mechanism was able to infer the correct object-word mapping perfectly.
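A minimal sketch of plain cross-situational learning by co-occurrence counting, shown here to make the learning scenario concrete; this is not the NMF mechanism, which replaces the counting with a deterministic categorization dynamic and a clutter detector.

```python
from collections import defaultdict

# Toy data: each situation pairs the objects present with the words heard.
situations = [
    ({"ball", "dog"}, {"ball", "dog"}),
    ({"ball", "cup"}, {"ball", "cup"}),
    ({"dog", "cup"},  {"dog", "cup"}),
]

# Every co-occurrence is a candidate association; across situations the
# correct object is the one that consistently appears with a word.
counts = defaultdict(lambda: defaultdict(int))
for objects, words in situations:
    for w in words:
        for o in objects:
            counts[w][o] += 1

mapping = {w: max(cands, key=cands.get) for w, cands in counts.items()}
print(mapping)  # {'ball': 'ball', 'dog': 'dog', 'cup': 'cup'}
```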
Abstract:
The subgradient optimization method is a simple and flexible iterative algorithm for linear programming problems. It is much simpler than Newton's method and can be applied to a wider variety of problems; it also converges when the objective function is non-differentiable. Since an efficient algorithm should not only produce a good solution but also take less computing time, a simpler algorithm that delivers high-quality solutions is always preferable. In this study, a series of step size parameters in the subgradient equation is studied. Performance is compared for a general piecewise function and a specific p-median problem. We examine how the quality of the solution changes under five forms of the step size parameter.
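A minimal sketch of the subgradient method on a piecewise-linear objective, with one standard diminishing step size t_k = c/(k+1); the five step-size forms compared in the study are not reproduced here.

```python
import numpy as np

# Convex piecewise-linear objective f(x) = max_i(a_i * x + b_i).
a = np.array([-2.0, 0.5, 3.0])
b = np.array([ 1.0, 0.0, -2.0])

def f_and_subgrad(x):
    vals = a * x + b
    i = int(np.argmax(vals))   # active piece
    return vals[i], a[i]       # its slope is a valid subgradient at x

x, c = 5.0, 1.0
for k in range(200):
    fx, g = f_and_subgrad(x)
    x -= (c / (k + 1)) * g     # diminishing step size t_k = c / (k + 1)

print(x, f_and_subgrad(x)[0])  # approaches the minimizer x = 0.4, f = 0.2
```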
Abstract:
Background: qtl.outbred is an extendible interface in the statistical environment R for combining quantitative trait loci (QTL) mapping tools. It is built as an umbrella package that enables outbred genotype probabilities to be calculated and/or imported into the software package R/qtl. Findings: Using qtl.outbred, the genotype probabilities from outbred line cross data can be calculated by interfacing with a new and efficient algorithm developed for analyzing arbitrarily large datasets (included in the package), or imported from other sources such as the web-based tool GridQTL. Conclusion: qtl.outbred will improve the speed of calculating probabilities and the ability to analyse large future datasets. This package enables the user to analyse outbred line cross data accurately, with effort similar to that required for inbred line cross data.
Abstract:
The aim of this paper is to develop a flexible model for analysis of quantitative trait loci (QTL) in outbred line crosses, which includes both additive and dominance effects. Our flexible intercross analysis (FIA) model accounts for QTL that are not fixed within founder lines and is based on the variance component framework. Genome scans with FIA are performed using a score statistic, which does not require variance component estimation. Results: Simulations of a pedigree with 800 F2 individuals showed that the power of FIA including both additive and dominance effects was almost 50% for a QTL with equal allele frequencies in both lines, complete dominance and a moderate effect, whereas the power of a traditional regression model was equal to the chosen significance level of 5%. The power of FIA without dominance effects included in the model was close to that obtained for FIA with dominance for all simulated cases except for QTL with overdominant effects. A genome-wide linkage analysis of experimental data from an F2 intercross between Red Jungle Fowl and White Leghorn was performed with both additive and dominance effects included in FIA. The score values for chicken body weight at 200 days of age were similar to those obtained in FIA analysis without dominance. Conclusion: We have extended FIA to include QTL dominance effects. The power of FIA was superior, or similar, to standard regression methods for QTL effects with dominance. The difference in power for FIA with or without dominance is expected to be small as long as the QTL effects are not overdominant. We suggest that FIA with only additive effects should be the standard model to be used, especially since it is more computationally efficient.
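A minimal illustrative sketch, not FIA's actual statistic (which also models dominance and line-origin effects): variance-component score tests for a QTL commonly take a form like Q = r'Kr, where r are residuals under the null model of no QTL and K is a local relationship matrix at the tested position, so no variance component needs to be estimated at each scan position.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
g = rng.integers(0, 3, n).astype(float)  # genotype dosages 0/1/2 at a locus
gc = g - g.mean()
y = 0.3 * gc + rng.normal(0.0, 1.0, n)   # phenotype with a modest QTL effect

r = y - y.mean()                         # residuals under the null (no QTL)
K = np.outer(gc, gc)                     # additive local relationship matrix
Q = (r @ K @ r) / (r @ r / n)            # score-type statistic: large => QTL
print(Q)
```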
Abstract:
A program based on the differential evolution technique was developed to determine the optimum genetic contributions when selecting breeding candidates. The objective function to be optimized combined the expected genetic merit of the future progeny with the average coancestry of the breeding animals. Real and simulated data sets from populations with overlapping generations were used to validate and test the performance of the program. The program proved computationally efficient and practical to apply, and the expected consequences of its application, compared with empirical procedures for controlling inbreeding and/or with selection based solely on expected genetic value, would be improved future genetic response and more effective limitation of the rate of inbreeding.
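A minimal sketch of a generic differential evolution loop (DE/rand/1/bin), not the study's program: mutation uses a scaled difference of two population members, crossover mixes mutant and parent, and greedy selection keeps the better vector. The objective here is a placeholder for the study's combination of genetic merit and average coancestry.

```python
import numpy as np

rng = np.random.default_rng(5)

def objective(x):
    # Placeholder objective: the study's function combines expected genetic
    # merit of the progeny with average coancestry; a sphere stands in here.
    return float(np.sum(x ** 2))

NP, D, F, CR, GENS = 20, 5, 0.8, 0.9, 200
pop = rng.uniform(-5.0, 5.0, (NP, D))
fit = np.array([objective(x) for x in pop])

for _ in range(GENS):
    for i in range(NP):
        idx = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        mutant = a + F * (b - c)           # scaled difference vector
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True      # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        ft = objective(trial)
        if ft < fit[i]:                    # greedy one-to-one selection
            pop[i], fit[i] = trial, ft

print(fit.min())
```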