72 results for Nature inspired algorithms

at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance:

80.00%

Abstract:

The goal of this project has been the development of biologically inspired algorithms for artificial olfaction. To achieve it, we built on the support vector machine paradigm. We constructed algorithms that mimic the computational processes of the different systems forming the insect olfactory system, in particular that of the desert locust Schistocerca gregaria. We focused on the antennal lobes and on the mushroom body. The former is regarded as an odour-coding device that, from the temporal response of the olfactory receptors on the antennae, generates a spatial and temporal activation pattern. The mushroom body, in turn, is thought to act as an odour memory and as a centre for multi-sensory integration. The first step was the construction of detailed models of both systems. We then used these models to process different kinds of signals, with the aim of abstracting the underlying computational principles. Finally, we evaluated the capabilities of these abstract models and applied them to the processing of gas-sensor data. The results show that the abstract models are more robust to noise and have a larger memory-storage capacity than more classical models, such as Hopfield associative memories, and, under certain circumstances, even than Support Vector Machines themselves.
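As a reference point, the Hopfield associative memory used above as a classical baseline can be sketched in a few lines; this is a minimal, generic implementation with illustrative patterns, not the project's insect-inspired models:

```python
import numpy as np

# Two illustrative +/-1 patterns to store in the associative memory
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1,  1,  1, -1, -1, -1, -1],
])
n = patterns.shape[1]

# Hebbian learning rule: outer-product weights, zero self-connections
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    """Iterate the synchronous sign-threshold update until (hopefully) a stored pattern."""
    s = state.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# Corrupt the first pattern by one bit and recover it
noisy = patterns[0].copy()
noisy[0] *= -1
```

With orthogonal patterns like these, a single update already corrects the flipped bit; capacity and noise robustness degrade as more correlated patterns are stored, which is the weakness the abstract models above are compared against.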

Relevance:

30.00%

Abstract:

We present building blocks for algorithms for the efficient reduction of square factors, i.e. direct repetitions in strings. The basic problem is this: given a string, compute all strings that can be obtained by reducing factors of the form zz to z. Two types of algorithms are treated: an offline algorithm may compute a data structure on the given string in advance, before the actual search for squares begins; in contrast, an online algorithm receives all input only at the time a request is made. For offline algorithms we treat the following problem: let u and w be two strings such that w is obtained from u by reducing a square factor zz to z. If we are further given the suffix table of u, how can we derive the suffix table of w without computing it from scratch? As the suffix table plays a key role in online algorithms for the detection of squares in a string, this derivation can make the iterated reduction of squares more efficient. We also show how a suffix array, used for the offline detection of squares, can be adapted to the new string resulting from the deletion of a square. Because the deletion is a very local change, this adaptation is more efficient than computing the new suffix array from scratch.
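The core zz → z reduction can be illustrated with a brute-force sketch; this is a quadratic-time enumeration for illustration only, not the suffix-table or suffix-array machinery the paper develops:

```python
def square_reductions(s):
    """Return all strings obtainable from s by reducing one square factor zz to z."""
    results = set()
    n = len(s)
    for i in range(n):
        # Try every candidate square zz starting at position i
        for length in range(1, (n - i) // 2 + 1):
            z = s[i:i + length]
            if s[i + length:i + 2 * length] == z:
                # Reduce zz to z: keep one copy, drop the other
                results.add(s[:i + length] + s[i + 2 * length:])
    return results
```

For example, `square_reductions("aabb")` yields both one-step reductions, `"abb"` (from `aa`) and `"aab"` (from `bb`); iterating the function gives the full set of reachable strings the paper is concerned with.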

Relevance:

30.00%

Abstract:

In this paper, we present a comprehensive study of different Independent Component Analysis (ICA) algorithms for the calculation of coherency and sharpness of electroencephalogram (EEG) signals, in order to investigate the possibility of early detection of Alzheimer's disease (AD). We found that ICA algorithms can help in artifact rejection and noise reduction, improving the discriminative property of features in high frequency bands (especially in the high-alpha and beta ranges). In addition to comparing different ICA algorithms, we investigate the optimum number of selected components, to support decision processes in future work.
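The artifact-rejection idea — decompose the recording, drop an artifactual component, reconstruct — can be sketched with scikit-learn's FastICA on synthetic signals; the signals, mixing matrix and choice of rejected component are illustrative assumptions, not the paper's data or method:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
# Synthetic "EEG" sources: an alpha-band sine, a beta-band sine, and a slow drift artifact
sources = np.c_[np.sin(2 * np.pi * 10 * t),
                np.sin(2 * np.pi * 25 * t),
                np.cumsum(rng.normal(size=1000))]
mixing = rng.normal(size=(3, 3))
observed = sources @ mixing.T          # what the "electrodes" would record

ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(observed)   # estimated independent components

# Artifact rejection: zero out the component judged artifactual (index chosen
# for illustration; in practice it must be identified, e.g. by its spectrum),
# then project back to sensor space
components[:, 2] = 0
cleaned = ica.inverse_transform(components)
```

The cleaned signals keep the oscillatory content while the rejected component's contribution is removed, which is what improves the discriminative power of band-limited features.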

Relevance:

30.00%

Abstract:

In this paper we present a quantitative comparison of different independent component analysis (ICA) algorithms in order to investigate their potential use in preprocessing electroencephalogram (EEG) data (e.g. noise reduction and feature extraction) for early detection of Alzheimer's disease (AD), or for discrimination between AD (or mild cognitive impairment, MCI) patients and age-matched control subjects.

Relevance:

30.00%

Abstract:

Self-organization is a growing interdisciplinary field of research about a phenomenon that can be observed in the Universe, in Nature and in social contexts. Research on self-organization tries to describe and explain the forms, complex patterns and behaviours that arise from a collection of entities without an external organizer. As researchers in artificial systems, our aim is not to mimic self-organizing phenomena arising in Nature, but to understand and control the underlying mechanisms that allow the desired emergence of forms, complex patterns and behaviours. Rather than attempting to eliminate such self-organization in artificial systems, we think it can be deliberately harnessed to reach desirable global properties. In this paper we analyze three forms of self-organization: stigmergy, reinforcement mechanisms and cooperation. The amplification phenomena found in stigmergic or reinforcement processes are different forms of positive feedback that play a major role in building group activity or social organization. Cooperation is a functional form of self-organization because of its ability to guide local behaviours so as to obtain a relevant collective one. For each form of self-organization, we present a case study to show how we transposed it to artificial systems, and then analyse the strengths and weaknesses of such an approach.

Relevance:

30.00%

Abstract:

We adapt the Shout and Act algorithm to Digital Objects Preservation, where agents explore file systems looking for digital objects to be preserved (victims). When they find something they “shout” so that agent mates can hear it. The louder the shout, the more urgent or important the finding; louder shouts can also indicate closeness. We perform several experiments to show that this system scales very well, and that heterogeneous teams of agents outperform homogeneous ones over a wide range of task complexities. The target at-risk documents are MS Office documents (including an RTF file) with Excel content or in Excel format. An interesting conclusion from the experiments is that fewer heterogeneous (varying-skill) agents can equal the performance of many homogeneous (combined super-skilled) agents, implying significant performance increases with lower overall cost growth. Our results impact the design of Digital Objects Preservation teams: a properly designed combination of heterogeneous teams is cheaper and more scalable when confronted with uncertain maps of digital objects that need to be preserved. A cost pyramid is proposed for engineers to use when modeling the most effective agent combinations.

Relevance:

20.00%

Abstract:

It is common to find in experimental data persistent oscillations in the aggregate outcomes and high levels of heterogeneity in individual behavior. Furthermore, it is not unusual to find significant deviations from aggregate Nash equilibrium predictions. In this paper, we employ an evolutionary model with boundedly rational agents to explain these findings. We use data from common property resource experiments (Casari and Plott, 2003). Instead of positing individual-specific utility functions, we model decision makers as selfish and identical. Agent interaction is simulated using an individual learning genetic algorithm, where agents have constraints in their working memory, a limited ability to maximize, and experiment with new strategies. We show that the model replicates most of the patterns that can be found in common property resource experiments.
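A minimal sketch of such an individual-learning genetic algorithm, assuming a hypothetical quadratic common-pool payoff and illustrative parameters (not those of Casari and Plott): each selfish, identical agent keeps a small working memory of candidate extraction levels, samples from it rather than maximizing, and updates it by selection plus mutation (experimentation).

```python
import random

A, B = 20.0, 0.2               # hypothetical payoff parameters
N_AGENTS, MEMORY, GENS = 8, 6, 200

def payoff(x_i, total):
    # Illustrative common-pool payoff: private benefit shrinks as total appropriation grows
    return x_i * (A - B * total)

random.seed(1)
# Each agent's working memory: a small pool of candidate extraction levels
pools = [[random.uniform(0, 100) for _ in range(MEMORY)] for _ in range(N_AGENTS)]

for _ in range(GENS):
    # Bounded maximization: each agent samples a strategy instead of optimizing
    plays = [random.choice(pool) for pool in pools]
    total = sum(plays)
    for i, pool in enumerate(pools):
        # Individual learning: evaluate own candidates against the others' observed total
        others = total - plays[i]
        fitness = [payoff(x, others + x) for x in pool]
        ranked = [x for _, x in sorted(zip(fitness, pool), reverse=True)]
        # Keep the better half, refill with mutated copies (experimentation)
        survivors = ranked[:MEMORY // 2]
        pools[i] = survivors + [max(0.0, x + random.gauss(0, 5)) for x in survivors]

avg_play = sum(max(p) for p in pools) / N_AGENTS
```

Because agents sample from noisy, limited memories, aggregate play keeps oscillating around (rather than settling exactly on) the equilibrium level, which is the kind of persistent heterogeneity the model is meant to reproduce.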

Relevance:

20.00%

Abstract:

In this paper we carefully link knowledge flows to and from a firm's innovation process with the firm's investment decisions. Three types of investments are considered: investments in applied research, investments in basic research, and investments in intellectual property protection. Only when basic research is performed can the firm effectively access incoming knowledge flows, and these incoming spillovers serve to increase the efficiency of its own applied research. The firm can at the same time influence outgoing knowledge flows, improving the appropriability of its innovations, by investing in protection. Our results indicate that firms with small budgets for innovation will not invest in basic research. This occurs in the short run, when the budget for know-how creation is restricted, or in the long run, when market opportunities are low, when legal protection is not very important, or when the pool of accessible and relevant external know-how is limited. The ratio of basic to applied research is non-decreasing in the size of the pool of accessible external know-how, the size and opportunity of the market, and the effectiveness of intellectual property rights protection. This indicates the existence of economies of scale in basic research due to external, market-related factors. Empirical evidence from a sample of innovative manufacturing firms in Belgium confirms the economies of scale in basic research as a consequence of the firm's capacity to access external knowledge flows and to protect intellectual property, as well as the complementarity between legal and strategic investments.

Relevance:

20.00%

Abstract:

"Vegeu el resum a l'inici del fitxer adjunt."

Relevance:

20.00%

Abstract:

We present experimental and theoretical analyses of data requirements for haplotype inference algorithms. Our experiments include a broad range of problem sizes under two standard models of tree distribution and were designed to yield statistically robust results despite the size of the sample space. Our results validate Gusfield's conjecture that a population size of n log n is required to give (with high probability) sufficient information to deduce the n haplotypes and their complete evolutionary history. We complement these experimental findings with theoretical bounds on the population size. We also analyze the population size required to deduce some fixed fraction of the evolutionary history of a set of n haplotypes and establish linear bounds on the required sample size; these linear bounds are also shown theoretically.

Relevance:

20.00%

Abstract:

We study the properties of the well-known Replicator Dynamics when applied to a finitely repeated version of the Prisoners' Dilemma game. We characterize the behavior of such dynamics under strongly simplifying assumptions (i.e. only 3 strategies are available) and show that the basin of attraction of defection shrinks as the number of repetitions increases. After discussing the difficulties involved in trying to relax these strongly simplifying assumptions, we approach the same model by means of simulations based on genetic algorithms. The resulting simulations describe a behavior of the system very close to the one predicted by the replicator dynamics, without imposing any of the assumptions of the analytical model. Our main conclusion is that analytical and computational models are good complements for research in the social sciences: computational models are extremely useful for extending the scope of the analysis to complex scenarios.
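The three-strategy analytical setting can be reproduced with a small discrete-time replicator simulation. The strategy set (Always-Defect, Tit-for-Tat, Always-Cooperate), the stage-game payoffs and the repetition count below are illustrative assumptions, not necessarily the paper's exact parameters:

```python
import numpy as np

# Standard Prisoners' Dilemma stage payoffs, repeated for ROUNDS rounds
T_, R_, P_, S_ = 5, 3, 1, 0
ROUNDS = 10

# Row player's total payoff: 0 = Always-Defect, 1 = Tit-for-Tat, 2 = Always-Cooperate
A = np.array([
    [P_ * ROUNDS,          T_ + P_ * (ROUNDS - 1), T_ * ROUNDS],
    [S_ + P_ * (ROUNDS - 1), R_ * ROUNDS,           R_ * ROUNDS],
    [S_ * ROUNDS,           R_ * ROUNDS,            R_ * ROUNDS],
], dtype=float)

def replicate(x, steps=500):
    """Discrete replicator dynamics: shares grow in proportion to relative fitness."""
    x = np.array(x, dtype=float)
    for _ in range(steps):
        fitness = A @ x
        x = x * fitness / (x @ fitness)
    return x

coop_heavy = replicate([0.10, 0.45, 0.45])    # population starting near cooperation
defect_heavy = replicate([0.90, 0.05, 0.05])  # population starting near defection
```

Starting near cooperation, defection dies out as Tit-for-Tat takes over; starting near defection, Always-Defect takes over. Lengthening ROUNDS shifts the boundary between these two outcomes, illustrating the shrinking basin of attraction of defection.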

Relevance:

20.00%

Abstract:

Forest fires are a serious threat to humans and nature from an ecological, social and economic point of view. Predicting their behaviour by simulation still delivers unreliable results and remains a challenging task. Recent approaches try to calibrate input variables, often tainted with imprecision, using optimisation techniques such as Genetic Algorithms. To converge faster towards fitter solutions, the GA is guided with knowledge obtained from historical or synthetic fires. We developed a robust and efficient knowledge storage and retrieval method. Nearest-neighbour search is applied to find the fire configuration from the knowledge base most similar to the current configuration. To this end, a distance measure was developed and implemented in several ways. Experiments show the performance of the different implementations regarding occupied storage and retrieval time, with very satisfactory results.
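A minimal sketch of the nearest-neighbour retrieval step, with hypothetical configuration features and weights; the paper's actual distance measure, feature set and storage layout are not specified here:

```python
import math

# Hypothetical fire-configuration features: (wind speed, wind direction in degrees, fuel moisture)
knowledge_base = {
    "fire_a": (12.0,  90.0, 0.08),
    "fire_b": ( 3.5, 270.0, 0.20),
    "fire_c": (11.0, 100.0, 0.10),
}

# Illustrative weights to balance the very different feature scales
WEIGHTS = (1.0, 0.02, 50.0)

def distance(p, q):
    """Weighted Euclidean distance between two fire configurations."""
    return math.sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(WEIGHTS, p, q)))

def nearest(query):
    """Linear-scan nearest-neighbour retrieval from the knowledge base."""
    return min(knowledge_base, key=lambda k: distance(knowledge_base[k], query))
```

A linear scan is the simplest implementation; the trade-offs the paper measures (occupied storage versus retrieval time) come from replacing it with indexed structures such as k-d trees over the same distance.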

Relevance:

20.00%

Abstract:

In this paper, we develop numerical algorithms with small requirements of storage and operations for the computation of invariant tori in Hamiltonian systems (exact symplectic maps and Hamiltonian vector fields). The algorithms are based on the parameterization method and follow closely the proof of the KAM theorem given in [LGJV05] and [FLS07]. They essentially consist in solving, by a Newton method, a functional equation satisfied by the invariant tori. Using some geometric identities, it is possible to perform a Newton step using little storage and few operations. In this paper we focus on the numerical issues of the algorithms (speed, storage and stability) and refer to the mentioned papers for the rigorous results. We show how to compute efficiently both maximal invariant tori and whiskered tori, together with their associated stable and unstable invariant manifolds. Moreover, we present fast algorithms for the iteration of the quasi-periodic cocycles and the computation of the invariant bundles, which is a preliminary step for the computation of invariant whiskered tori. Since quasi-periodic cocycles appear in other contexts, this section may be of independent interest. The numerical methods presented here allow us to compute in a unified way primary and secondary invariant KAM tori; secondary tori are invariant tori that can be contracted to a periodic orbit. We present some preliminary results showing that the methods are indeed implementable and fast. We postpone to a future paper optimized implementations and results on the breakdown of invariant tori.

Relevance:

20.00%

Abstract:

This paper discusses the use of probabilistic or randomized algorithms for solving combinatorial optimization problems. Our approach employs non-uniform probability distributions to add a biased random behavior to classical heuristics, so that a large set of alternative good solutions can be quickly obtained in a natural way and without complex configuration processes. This procedure is especially useful in problems where properties such as non-smoothness or non-convexity lead to a highly irregular solution space, for which traditional optimization methods, both exact and approximate, may fail to reach their full potential. The results obtained are promising enough to suggest that randomizing classical heuristics is a powerful method that can be successfully applied in a variety of cases.
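One common way to bias a classical heuristic is to pick, at each constructive step, from the rank-sorted candidate list with a skewed (here geometric) distribution instead of always taking the greedy choice. A sketch on a toy travelling-salesman instance follows; the cities, the beta value and the geometric bias are illustrative assumptions, not necessarily the paper's exact scheme:

```python
import math
import random

# Small symmetric TSP instance with hypothetical coordinates
CITIES = {"A": (0, 0), "B": (2, 0), "C": (2, 2), "D": (0, 2), "E": (1, 3)}

def dist(a, b):
    (x1, y1), (x2, y2) = CITIES[a], CITIES[b]
    return math.hypot(x1 - x2, y1 - y2)

def biased_pick(candidates, beta):
    # Geometric bias: rank-k candidate chosen with probability ~ beta * (1 - beta)^k,
    # so the greedy (rank-0) choice stays the most likely one
    k = min(int(math.log(random.random()) / math.log(1 - beta)), len(candidates) - 1)
    return candidates[k]

def randomized_nearest_neighbour(start="A", beta=0.6):
    """Classical nearest-neighbour construction with biased-random candidate selection."""
    tour, remaining = [start], set(CITIES) - {start}
    while remaining:
        ranked = sorted(remaining, key=lambda c: dist(tour[-1], c))
        nxt = biased_pick(ranked, beta)
        tour.append(nxt)
        remaining.remove(nxt)
    return tour, sum(dist(a, b) for a, b in zip(tour, tour[1:] + [start]))

random.seed(7)
# Many cheap biased runs yield a set of alternative good solutions; keep the best
best = min((randomized_nearest_neighbour() for _ in range(100)), key=lambda r: r[1])
```

Each run stays close to the greedy heuristic but explores differently, so repeated runs cover many good solutions with no parameter tuning beyond the single bias value beta.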
