821 results for Algorithms complexity
Abstract:
This essay is an attempt to measure complexity in a three-trophic-level system by using a convex function of the informational entropy. The complexity measure defined here is compatible with the fact that real complexity lies between ordered and disordered states. Applying this measure to data collected for two three-trophic-level systems yields some hints about their organization. (C) 2008 Elsevier B.V. All rights reserved.
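A minimal sketch of how such a convex measure can behave, assuming the common form C = 4h(1 − h) over the normalized Shannon entropy h, which vanishes at both the fully ordered and fully disordered extremes (the paper's exact convex function is not given in the abstract):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (natural log) of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def convex_complexity(p):
    """Convex complexity: zero for fully ordered and fully disordered
    states, maximal in between.  The form 4*h*(1 - h) is an assumption
    for illustration, not the paper's exact function."""
    h = shannon_entropy(p) / np.log(len(p))  # normalized entropy in [0, 1]
    return 4.0 * h * (1.0 - h)

# Hypothetical relative abundances across three trophic levels:
print(convex_complexity([1.0, 0.0, 0.0]))    # ordered      -> 0
print(convex_complexity([1/3, 1/3, 1/3]))    # disordered   -> ~0
print(convex_complexity([0.7, 0.2, 0.1]))    # intermediate -> ~0.79
```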
Abstract:
This letter addresses the optimization and complexity reduction of switch-reconfigured antennas. A new optimization technique based on graph models is investigated. This technique is used to minimize the redundancy in a reconfigurable antenna structure and reduce its complexity. A graph modeling rule for switch-reconfigured antennas is proposed, and examples are presented.
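As a rough illustration of the graph-modeling idea, one can represent antenna parts as vertices and switches as edges; a switch is then redundant if the parts it joins stay connected with the switch open. This is a simplified stand-in for the paper's modeling rule, and all names and data are hypothetical:

```python
from collections import defaultdict

def connected(edges, u, v):
    """Depth-first connectivity test between antenna parts u and v."""
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, stack = {u}, [u]
    while stack:
        x = stack.pop()
        if x == v:
            return True
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return False

def redundant_switches(switches):
    """Flag switch i as redundant if the two parts it joins remain
    connected with the switch open (i.e., the edge lies on a cycle).
    A simplified stand-in for the paper's graph-modeling rule."""
    return [i for i, (u, v) in enumerate(switches)
            if connected(switches[:i] + switches[i + 1:], u, v)]

# Hypothetical 4-part antenna; the (0, 2) switch closes an extra loop.
print(redundant_switches([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))
```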
Abstract:
In this work, a broad analysis of local search multiuser detection (LS-MUD) for direct sequence/code division multiple access (DS/CDMA) systems under multipath channels is carried out, considering the performance-complexity trade-off. The robustness of the LS-MUD to variations in loading, Eb/N0, the near-far effect, the number of fingers of the Rake receiver, and errors in the channel coefficient estimates is verified. A comparative analysis of the bit error rate (BER) and complexity trade-off is carried out for LS, the genetic algorithm (GA) and particle swarm optimization (PSO). Based on the deterministic behavior of the LS algorithm, simplifications of the cost function calculation are also proposed, yielding more efficient algorithms (simplified and combined LS-MUD versions) and creating new perspectives for MUD implementation. The computational complexity is expressed in terms of the number of operations needed to converge. Our conclusions point out that the simplified LS (s-LS) method is always more efficient, independent of the system conditions, achieving better performance with lower complexity than the other heuristic detectors. In addition, its deterministic strategy and absence of input parameters make the s-LS algorithm the most appropriate for the MUD problem. (C) 2008 Elsevier GmbH. All rights reserved.
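A minimal sketch of the 1-opt local search idea behind LS-MUD, assuming the standard ML metric Omega(b) = 2 y^T b − b^T R b over bit vectors b in {−1,+1}^K, where y is the matched-filter bank output and R the code correlation matrix; the O(K) incremental cost update illustrates the kind of simplification the s-LS variant exploits. Symbols and parameters are illustrative:

```python
import numpy as np

def ls_mud(y, R, b0, max_iter=100):
    """1-opt local search multiuser detection (illustrative sketch).
    Maximizes Omega(b) = 2 y^T b - b^T R b over b in {-1,+1}^K,
    flipping one bit per step (unitary Hamming distance) and using an
    O(K) incremental cost update instead of a full recomputation."""
    b = b0.copy()
    Rb = R @ b                       # kept up to date incrementally
    for _ in range(max_iter):
        # Gain of flipping bit k, derived from Omega(b - 2 b_k e_k):
        gains = 4.0 * (b * (Rb - y) - np.diag(R))
        k = int(np.argmax(gains))
        if gains[k] <= 0:            # local optimum reached
            break
        Rb -= 2.0 * b[k] * R[:, k]   # update R @ b for the flip
        b[k] = -b[k]
    return b

# Hypothetical 3-user system: R = code correlations, y = matched-filter output
R = np.array([[1.0, 0.2, 0.1], [0.2, 1.0, 0.3], [0.1, 0.3, 1.0]])
b_true = np.array([1.0, -1.0, 1.0])
y = R @ b_true                       # noiseless for illustration
print(ls_mud(y, R, np.sign(y)))      # recovers b_true here
```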
Abstract:
The most popular algorithms for blind equalization are the constant-modulus algorithm (CMA) and the Shalvi-Weinstein algorithm (SWA). It is well known that SWA presents a higher convergence rate than CMA, at the expense of higher computational complexity. If the forgetting factor is not sufficiently close to one, if the initialization is distant from the optimal solution, or if the signal-to-noise ratio is low, SWA can converge to undesirable local minima or even diverge. In this paper, we show that divergence can be caused by an inconsistency in the nonlinear estimate of the transmitted signal, or (when the algorithm is implemented in finite precision) by the loss of positiveness of the estimate of the autocorrelation matrix, or by a combination of both. In order to avoid the first cause of divergence, we propose a dual-mode SWA. In the first mode of operation, the new algorithm works as SWA; in the second mode, it rejects inconsistent estimates of the transmitted signal. Assuming the persistence-of-excitation condition, we present a deterministic stability analysis of the new algorithm. To avoid the second cause of divergence, we propose a dual-mode lattice SWA, which is stable even in finite-precision arithmetic, and has a computational complexity that increases linearly with the number of adjustable equalizer coefficients. The good performance of the proposed algorithms is confirmed through numerical simulations.
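For reference, a compact NumPy sketch of the baseline CMA update that SWA is compared against (the dual-mode SWA itself requires the consistency test described in the paper); the channel, step size and signal model below are illustrative:

```python
import numpy as np

def cma_equalize(x, num_taps=11, mu=1e-3, R2=1.0):
    """Constant-modulus algorithm (CMA), the low-complexity baseline.
    Drives the equalizer output y toward constant modulus sqrt(R2)
    with the stochastic-gradient update  w <- w - mu * e* * u,
    where e = y (|y|^2 - R2).  Parameters are illustrative."""
    w = np.zeros(num_taps, dtype=complex)
    w[num_taps // 2] = 1.0                  # center-spike initialization
    y_out = np.empty(len(x) - num_taps, dtype=complex)
    for n in range(len(y_out)):
        u = x[n:n + num_taps][::-1]         # regressor (most recent first)
        y = w.conj() @ u
        e = y * (abs(y) ** 2 - R2)          # constant-modulus error term
        w -= mu * np.conj(e) * u            # gradient step
        y_out[n] = y
    return w, y_out

# e.g. equalize a QPSK stream distorted by a toy channel:
rng = np.random.default_rng(0)
s = (rng.choice([-1, 1], 4000) + 1j * rng.choice([-1, 1], 4000)) / np.sqrt(2)
x = np.convolve(s, [1.0, 0.35 + 0.2j], mode="same")
w, y = cma_equalize(x, R2=1.0)
```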
Abstract:
This work proposes the use of evolutionary computation to jointly solve the maximum-likelihood multiuser channel estimation (MuChE) and detection problems in direct sequence code division multiple access (DS/CDMA). The effectiveness of the proposed heuristic approach is proven by comparing performance and complexity figures of merit with those obtained by traditional methods found in the literature. Simulation results for a genetic algorithm (GA) applied to multipath DS/CDMA MuChE and multiuser detection (MuD) show that the proposed genetic algorithm multiuser channel estimation (GAMuChE) yields a normalized mean square estimation error (nMSE) below 11% under slowly varying multipath fading channels, a large range of Doppler frequencies and medium system load, while exhibiting lower complexity than both maximum likelihood multiuser channel estimation (MLMuChE) and the gradient descent method (GrdDsc). A near-optimum multiuser detector based on the genetic algorithm (GAMuD), also proposed in this work, provides a significant reduction in computational complexity compared to the optimum multiuser detector (OMuD). In addition, the complexity of the GAMuChE and GAMuD algorithms is (jointly) analyzed in terms of the number of operations necessary to reach convergence, and compared to other joint MuChE and MuD strategies. The joint GAMuChE-GAMuD scheme can be regarded as a promising alternative for implementing third-generation (3G) and fourth-generation (4G) wireless systems in the near future. Copyright (C) 2010 John Wiley & Sons, Ltd.
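A bare-bones sketch of the GA-based detection idea (GAMuD), again over b in {−1,+1}^K with the ML metric 2 y^T b − b^T R b; the operators, parameters and toy system are illustrative, not the paper's tuned configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_mud(y, R, pop_size=40, gens=60, pm=0.05):
    """Toy genetic-algorithm multiuser detector: truncation selection,
    one-point crossover and bit-flip mutation over b in {-1,+1}^K,
    maximizing the ML metric 2 y^T b - b^T R b."""
    K = len(y)
    pop = rng.choice([-1.0, 1.0], size=(pop_size, K))
    fitness = lambda P: 2.0 * P @ y - np.einsum('ij,jk,ik->i', P, R, P)
    for _ in range(gens):
        order = np.argsort(fitness(pop))[::-1]
        parents = pop[order[:pop_size // 2]]          # truncation selection
        cuts = rng.integers(1, K, size=pop_size // 2) # one-point crossover
        kids = parents.copy()
        for i, c in enumerate(cuts):
            kids[i, c:] = parents[(i + 1) % len(parents), c:]
        kids[rng.random(kids.shape) < pm] *= -1.0     # bit-flip mutation
        pop = np.vstack([parents, kids])
    return pop[int(np.argmax(fitness(pop)))]

R = np.array([[1.0, 0.2, 0.1], [0.2, 1.0, 0.3], [0.1, 0.3, 1.0]])
b_true = np.array([1.0, -1.0, 1.0])
print(ga_mud(R @ b_true, R))   # typically recovers b_true
```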
Abstract:
This paper analyzes the complexity-performance trade-off of several heuristic near-optimum multiuser detection (MuD) approaches applied to the uplink of synchronous single/multiple-input multiple-output multicarrier code division multiple access (S/MIMO MC-CDMA) systems. Genetic algorithm (GA), short-term tabu search (STTS) and reactive tabu search (RTS), simulated annealing (SA), particle swarm optimization (PSO), and 1-opt local search (1-LS) heuristic multiuser detection algorithms (Heur-MuDs) are analyzed in detail, using a single-objective antenna-diversity-aided optimization approach. Monte Carlo simulations show that, after convergence, the performances reached by all near-optimum Heur-MuDs are similar. However, the computational complexities may differ substantially, depending on the system operating conditions. Their complexities are carefully analyzed in order to obtain a general complexity-performance comparison framework and to show that unitary Hamming distance search MuD (uH-ds) approaches (1-LS, SA, RTS and STTS) reach the best convergence rates, and that, among them, the 1-LS-MuD provides the best trade-off between implementation complexity and bit error rate (BER) performance.
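Of the uH-ds family, simulated annealing differs from 1-LS only in occasionally accepting downhill flips; a compact sketch under the same illustrative ML metric (the cooling schedule and parameters are placeholders):

```python
import numpy as np

def sa_mud(y, R, b0, T0=2.0, alpha=0.95, steps=2000, seed=1):
    """Simulated annealing with unitary-Hamming-distance moves:
    propose one bit flip, always accept uphill moves, and accept
    downhill moves with probability exp(delta / T)."""
    rng = np.random.default_rng(seed)
    b, T = b0.copy(), T0
    omega = lambda b: 2.0 * y @ b - b @ R @ b
    f = omega(b)
    for _ in range(steps):
        k = rng.integers(len(b))
        b[k] = -b[k]                 # tentative flip
        f_new = omega(b)
        delta = f_new - f
        if delta >= 0 or rng.random() < np.exp(delta / T):
            f = f_new                # accept the flip
        else:
            b[k] = -b[k]             # reject: undo the flip
        T *= alpha                   # geometric cooling
    return b
```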
Abstract:
The flowshop scheduling problem with blocking in-process is addressed in this paper. In this environment, there are no buffers between successive machines; therefore, intermediate queues of jobs waiting in the system for their next operations are not allowed. Heuristic approaches are proposed to minimize the total tardiness criterion. A constructive heuristic that explores specific characteristics of the problem is presented. Moreover, a GRASP-based heuristic is proposed and coupled with a path-relinking strategy to search for better outcomes. Computational tests are presented, and the comparisons made with an adaptation of the NEH algorithm and with a branch-and-bound algorithm indicate that the new approaches are promising. (c) 2007 Elsevier Ltd. All rights reserved.
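The core evaluation inside such heuristics is computing departure times under blocking and the resulting total tardiness; a sketch using the standard recurrence, where a job finished on machine k must stay there until machine k+1 is free (the data below are hypothetical):

```python
def total_tardiness_blocking(seq, p, due):
    """Total tardiness of a job permutation `seq` in a flowshop with
    blocking (no intermediate buffers).  p[j][k] is the processing
    time of job j on machine k; d[k] holds departure times from
    machine k (1-based; d[0] = release on machine 1)."""
    m = len(p[0])
    prev = [0.0] * (m + 1)            # departures of the previous job
    tardiness = 0.0
    for j in seq:
        d = [0.0] * (m + 1)
        for k in range(m):
            start = max(d[k], prev[k + 1])   # machine k+1 must be free
            d[k + 1] = start + p[j][k]
            if k + 2 <= m:                   # blocked until next machine frees
                d[k + 1] = max(d[k + 1], prev[k + 2])
        tardiness += max(0.0, d[m] - due[j])
        prev = d
    return tardiness

p = [[2, 3], [1, 2], [4, 1]]          # 3 jobs x 2 machines (hypothetical)
due = [5, 7, 9]
print(total_tardiness_blocking([0, 1, 2], p, due))   # -> 1.0
```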
Abstract:
When building genetic maps, it is necessary to choose from several marker ordering algorithms and criteria, and the choice is not always simple. In this study, we evaluate the efficiency of the algorithms try (TRY), seriation (SER), rapid chain delineation (RCD), recombination counting and ordering (RECORD) and unidirectional growth (UG), as well as the criteria PARF (product of adjacent recombination fractions), SARF (sum of adjacent recombination fractions), SALOD (sum of adjacent LOD scores) and LHMC (likelihood through hidden Markov chains), used with the RIPPLE algorithm for error verification, in the construction of genetic linkage maps. A linkage map of a hypothetical diploid and monoecious plant species was simulated, containing one linkage group and 21 markers with a fixed distance of 3 cM between them. In all, 700 F2 populations were randomly simulated with 100 and 400 individuals, with different combinations of dominant and co-dominant markers, as well as 10 and 20% missing data. The simulations showed that, in the presence of co-dominant markers only, any combination of algorithm and criterion may be used, even for a reduced population size. In the case of a smaller proportion of dominant markers, any of the algorithms and criteria (except SALOD) investigated may be used. In the presence of high proportions of dominant markers and smaller samples (around 100), the probability of linkage in repulsion between markers increases and, in this case, the algorithms TRY and SER associated with RIPPLE under the LHMC criterion would provide better results. Heredity (2009) 103, 494-502; doi:10.1038/hdy.2009.96; published online 29 July 2009.
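As an illustration of one evaluated criterion, SARF simply sums the estimated recombination fractions between adjacent markers in a candidate order, and a better order gives a smaller sum (PARF is the analogous product); the matrix below is hypothetical:

```python
import numpy as np
from itertools import permutations

def sarf(order, rf):
    """Sum of adjacent recombination fractions for a marker order.
    rf[i][j] is the estimated recombination fraction between
    markers i and j."""
    return sum(rf[a][b] for a, b in zip(order, order[1:]))

# Hypothetical 4-marker recombination-fraction matrix (symmetric):
rf = np.array([[0.00, 0.10, 0.22, 0.31],
               [0.10, 0.00, 0.12, 0.24],
               [0.22, 0.12, 0.00, 0.13],
               [0.31, 0.24, 0.13, 0.00]])

best = min(permutations(range(4)), key=lambda o: sarf(o, rf))
print(best, sarf(best, rf))   # (0, 1, 2, 3), SARF = 0.35
```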
Abstract:
Since their discovery 150 years ago, Neanderthals have been considered incapable of behavioural change and innovation. Traditional synchronic approaches to the study of Neanderthal behaviour have perpetuated this view and shaped our understanding of their lifeways and eventual extinction. In this thesis I implement an innovative diachronic approach to the analysis of Neanderthal faunal extraction, technology and symbolic behaviour as contained in the archaeological record of the critical period between 80,000 and 30,000 years BP. The thesis demonstrates patterns of change in Neanderthal behaviour which are at odds with traditional perspectives and which are consistent with an interpretation of increasing behavioural complexity over time, an idea that has been suggested but never thoroughly explored in Neanderthal archaeology. Demonstrating an increase in behavioural complexity in Neanderthals provides much needed new data with which to fuel the debate over the behavioural capacities of Neanderthals and the first appearance of Modern Human Behaviour in Europe. It supports the notion that Neanderthal populations were active agents of behavioural innovation prior to the arrival of Anatomically Modern Humans in Europe and, ultimately, that they produced an early Upper Palaeolithic cultural assemblage (the Châtelperronian) independent of modern humans. Overall, this thesis provides an initial step towards the development of a quantitative approach to measuring behavioural complexity which provides fresh insights into the cognitive and behavioural capabilities of Neanderthals.
Abstract:
Diachronic approaches provide potential for a more sophisticated framework within which to examine change in Neanderthal behavioural complexity, using archaeological proxies such as symbolic artefacts, faunal assemblages and technology. Analysis of the temporal appearance and distribution of such artefacts and assemblages provides the basis for identifying changes in Neanderthal behavioural complexity in terms of symbolism, faunal extraction and technology respectively. Although changes in technology and faunal extraction were examined in the wider study, only the results of the symbolic study are presented below to illustrate the potential of the approach.
Abstract:
This paper presents an analysis of dysfluencies in two oral tellings of a familiar children's story by a young boy with autism. Thurber & Tager-Flusberg (1993) postulate a lower degree of cognitive and communicative investment to explain the lower frequency of non-grammatical pauses observed in elicited narratives of children with autism in comparison to typically developing and intellectually disabled controls. We also found a very low frequency of non-grammatical pauses in our data, but indications of high engagement and cognitive and communicative investment. We point to a wider range of dysfluencies as indicators of cognitive load, and show that the kind and location of the dysfluencies produced may reveal which aspects of the narrative task create the greatest cognitive demand: here, mental state ascription, perspectivization, and adherence to story schema. This paper thus generates analytical options and hypotheses that can be explored further in a larger population of children with autism and typically developing controls.
Abstract:
This paper presents a new relative measure of signal complexity, referred to here as relative structural complexity, which is based on the matching pursuit (MP) decomposition. By relative, we refer to the fact that this new measure is highly dependent on the decomposition dictionary used by MP. The structural part of the definition points to the fact that this new measure is related to the structure, or composition, of the signal under analysis. After a formal definition, the proposed relative structural complexity measure is used in the analysis of newborn EEG. To do this, firstly, a time-frequency (TF) decomposition dictionary is specifically designed to compactly represent the newborn EEG seizure state using MP. We then show, through the analysis of synthetic and real newborn EEG data, that the relative structural complexity measure can indicate changes in EEG structure as it transitions between the two EEG states; namely seizure and background (non-seizure).
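A minimal sketch of one plausible reading of the measure: run MP over a given dictionary and score complexity by how many atoms are needed to capture a fixed fraction of the signal energy, relative to the signal length. The threshold and normalization are assumptions for illustration; the authors' exact definition is in the paper:

```python
import numpy as np

def mp_atom_count(x, D, energy_frac=0.95, max_atoms=100):
    """Greedy matching pursuit: repeatedly subtract the unit-norm
    dictionary atom (column of D) best correlated with the residual,
    until `energy_frac` of the signal energy is captured."""
    r = np.asarray(x, dtype=float).copy()
    e0, n = float(r @ r), 0
    while r @ r > (1.0 - energy_frac) * e0 and n < max_atoms:
        c = D.T @ r
        k = int(np.argmax(np.abs(c)))
        r = r - c[k] * D[:, k]
        n += 1
    return n

def relative_structural_complexity(x, D):
    """Illustrative stand-in: the more atoms MP needs per sample, the
    more structurally complex the signal is *with respect to D* --
    the measure is relative because it depends on the dictionary."""
    return mp_atom_count(x, D) / len(x)
```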
Abstract:
In this paper we follow the BOID (Belief, Obligation, Intention, Desire) architecture to describe agents and agent types in Defeasible Logic. We argue, in particular, that the introduction of obligations can provide a new reading of the concepts of intention and intentionality. Then we examine the notion of social agent (i.e., an agent where obligations prevail over intentions) and discuss some computational and philosophical issues related to it. We show that the notion of social agent either requires more complex computations or has some philosophical drawbacks.
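A toy sketch of the agent-type idea only (not Defeasible Logic proper): conflicting candidate conclusions are resolved by a priority order over their source attitudes, with obligations prevailing over intentions for the social agent. The orders and names below are illustrative:

```python
AGENT_TYPES = {
    "social":  ["B", "O", "I", "D"],   # obligations prevail over intentions
    "selfish": ["B", "I", "O", "D"],   # intentions prevail over obligations
}

def resolve(candidates, agent_type):
    """Among conflicting candidates (p vs -p), keep the one whose
    source attitude ranks highest for the given agent type."""
    rank = {s: i for i, s in enumerate(AGENT_TYPES[agent_type])}
    best = {}
    for lit, src in candidates:
        atom = lit.lstrip("-")
        if atom not in best or rank[src] < rank[best[atom][1]]:
            best[atom] = (lit, src)
    return [lit for lit, _ in best.values()]

# Obligation says "pay taxes", intention says the opposite:
cands = [("pay_taxes", "O"), ("-pay_taxes", "I")]
print(resolve(cands, "social"))    # ['pay_taxes']
print(resolve(cands, "selfish"))   # ['-pay_taxes']
```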
Abstract:
Despite many successes of conventional DNA sequencing methods, some DNAs remain difficult or impossible to sequence. Unsequenceable regions occur in the genomes of many biologically important organisms, including the human genome. Such regions range in length from tens to millions of bases, and may contain valuable information such as the sequences of important genes. The authors have recently developed a technique that renders a wide range of problematic DNAs amenable to sequencing. The technique is known as sequence analysis via mutagenesis (SAM). This paper presents a number of algorithms for analysing and interpreting data generated by this technique.
Abstract:
There are many factors which affect the L2 learner's performance at the levels of phonology, morphology and syntax. Consequently, when L2 learners attempt to communicate in the target language, their language production will show systematic variability across the above-mentioned linguistic domains. This variation can be attributed to factors such as interlocutors, topic familiarity, prior knowledge, task condition, planning time and task type. This paper reports the results of ongoing research investigating variability attributed to task type. It is hypothesized that the particular type of task learners are required to perform will result in variation in their performance. Results of the statistical analyses of this study, which investigated variation in the performance of twenty L2 learners at the English department of Tabriz University, provided evidence in support of the hypothesis that the performance of L2 learners shows systematic variability attributable to task type.