963 results for random search algorithms
Abstract:
Functional magnetic resonance imaging studies have indicated that efficient feature search (FS) and inefficient conjunction search (CS) activate partially distinct frontoparietal cortical networks. However, it remains a matter of debate whether the differences in these networks reflect differences in the early processing during FS and CS. In addition, the relationship between the differences in the networks and spatial shifts of attention also remains unknown. We examined these issues by applying a spatio-temporal analysis method to high-resolution visual event-related potentials (ERPs) and investigated how spatio-temporal activation patterns differ for FS and CS tasks. Within the first 450 msec after stimulus onset, scalp potential distributions (ERP maps) revealed 7 different electric field configurations for each search task. Configuration changes occurred simultaneously in the two tasks, suggesting that contributing processes were not significantly delayed in one task compared to the other. Despite this high spatial and temporal correlation, two ERP maps (120-190 and 250-300 msec) differed between the FS and CS. Lateralized distributions were observed only in the ERP map at 250-300 msec for the FS. This distribution corresponds to that previously described as the N2pc component (a negativity in the time range of the N2 complex over posterior electrodes of the hemisphere contralateral to the target hemifield), which has been associated with the focusing of attention onto potential target items in the search display. Thus, our results indicate that the cortical networks involved in feature and conjunction searching partially differ as early as 120 msec after stimulus onset and that the differences between the networks employed during the early stages of FS and CS are not necessarily caused by spatial attention shifts.
Abstract:
In this article we compare the performance of two systems for recognising characteristic points (keypoints) in images: the first uses the basic Random Ferns technique, while the second (which we call Ferns with Mutual Information, or FIM) applies a technique for selecting Ferns based on a simplified mutual-information criterion.
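The abstract does not spell out the simplified criterion used in FIM; as a point of reference, the standard mutual information between a fern's discrete output F and the class label C can be estimated from observed co-occurrence counts. The function below is a generic illustration, not the paper's criterion:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Estimate I(F;C) = sum_{f,c} p(f,c) * log( p(f,c) / (p(f) p(c)) )
    from a list of observed (feature_value, class_label) pairs."""
    n = len(pairs)
    joint = Counter(pairs)                    # counts of (f, c)
    pf = Counter(f for f, _ in pairs)         # marginal counts of f
    pc = Counter(c for _, c in pairs)         # marginal counts of c
    return sum((j / n) * math.log((j / n) / ((pf[f] / n) * (pc[c] / n)))
               for (f, c), j in joint.items())
```

A feature-selection scheme would keep the ferns whose outputs score highest under this measure on training data.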
Abstract:
The goal of this work is to design and implement an electronic-voting simulation system, using an elliptic-curve adaptation of the ElGamal cryptosystem, in order to study its feasibility, focusing on security issues and in particular on the vote-mixing process that unlinks a vote from the person who cast it.
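The work itself adapts ElGamal to elliptic curves; as an illustration of the two operations a vote-mixing simulation relies on, encryption and re-encryption, here is a minimal sketch over the multiplicative group of integers mod p, with deliberately tiny and insecure parameters:

```python
import random

# Toy parameters for illustration only; a real system would use an
# elliptic-curve group with roughly 256-bit order.
P = 467  # small prime
G = 2    # group element used as the base

def keygen():
    """Return an ElGamal key pair (private x, public g^x mod p)."""
    x = random.randrange(1, P - 1)
    return x, pow(G, x, P)

def encrypt(pk, m):
    """Encrypt message m (an integer < P) under public key pk."""
    r = random.randrange(1, P - 1)
    return pow(G, r, P), (m * pow(pk, r, P)) % P

def reencrypt(pk, ct):
    """Re-randomise a ciphertext without decrypting it - the core
    operation a mix-net applies to unlink votes from voters."""
    a, b = ct
    s = random.randrange(1, P - 1)
    return (a * pow(G, s, P)) % P, (b * pow(pk, s, P)) % P

def decrypt(x, ct):
    """Recover m = b / a^x mod p (inverse via Fermat's little theorem)."""
    a, b = ct
    return (b * pow(pow(a, x, P), P - 2, P)) % P
```

Because re-encryption changes the ciphertext while preserving the plaintext, a shuffle of re-encrypted ballots cannot be matched back to the original submissions without the private key.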
Abstract:
Classical treatments of problems of sequential mate choice assume that the distribution of the quality of potential mates is known a priori. This assumption, made for analytical purposes, may seem unrealistic, opposing empirical data as well as evolutionary arguments. Using stochastic dynamic programming, we develop a model that includes the possibility for searching individuals to learn about the distribution and in particular to update mean and variance during the search. In a constant environment, a priori knowledge of the parameter values brings strong benefits in both time needed to make a decision and average value of mate obtained. Knowing the variance yields more benefits than knowing the mean, and benefits increase with variance. However, the costs of learning become progressively lower as more time is available for choice. When parameter values differ between demes and/or searching periods, a strategy relying on fixed a priori information might lead to erroneous decisions, which confers advantages on the learning strategy. However, time for choice plays an important role as well: if a decision must be made rapidly, a fixed strategy may do better even when the fixed image does not coincide with the local parameter values. These results help in delineating the ecological-behavior context in which learning strategies may spread.
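The online updating of the mean and variance of mate quality during the search can be sketched with a standard incremental (Welford-style) estimator. This is only an illustrative building block; the stochastic dynamic programming model itself is not reproduced here:

```python
class OnlineEstimator:
    """Incrementally update the mean and sample variance of the
    qualities observed so far (Welford's method)."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations

    def update(self, x):
        """Incorporate one newly inspected candidate of quality x."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        """Current sample variance (0 until two observations exist)."""
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0
```

A searcher using such an estimator refines its picture of the local quality distribution with every candidate inspected, which is what gives the learning strategy its advantage when parameters differ between demes.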
Exact asymptotics and limit theorems for supremum of stationary chi-processes over a random interval
Abstract:
Long polymers in solution frequently adopt knotted configurations. To understand the physical properties of knotted polymers, it is important to find out whether the knots formed at thermodynamic equilibrium are spread over the whole polymer chain or rather are localized as tight knots. We present here a method to analyze the knottedness of short linear portions of simulated random chains. Using this method, we observe that knot-determining domains are usually very tight, so that, for example, the preferred size of the trefoil-determining portions of knotted polymer chains corresponds to just seven freely jointed segments.
Abstract:
In a thermally fluctuating long linear polymeric chain in a solution, the ends, from time to time, approach each other. At such an instance, the chain can be regarded as closed and thus will form a knot or rather a virtual knot. Several earlier studies of random knotting demonstrated that simpler knots show a higher occurrence for shorter random walks than do more complex knots. However, up to now there have been no rules that could be used to predict the optimal length of a random walk, i.e. the length for which a given knot reaches its highest occurrence. Using numerical simulations, we show here that a power law accurately describes the relation between the optimal lengths of random walks leading to the formation of different knots and the previously characterized lengths of ideal knots of a corresponding type.
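A power law y = a·x^b of the kind reported here can be fitted by ordinary linear regression in log-log space. The sketch below uses made-up data, not the simulation results:

```python
import math

def fit_power_law(xs, ys):
    """Least-squares fit of y = a * x**b via linear regression on
    (log x, log y); returns the pair (a, b)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx = sum(lx) / n
    my = sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b
```

Fitting (optimal walk length, ideal knot length) pairs this way yields the exponent of the reported relation.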
Abstract:
Sleep apnea syndrome (SAS) consists of nocturnal snoring interrupted by obstructive apneas and of daytime symptoms such as hypersomnolence resulting from sleep fragmentation. The cardiovascular morbidity and mortality associated with this syndrome justify early detection and appropriate treatment. Polysomnography is still a frequently used method for early detection; however, several disadvantages such as duration, discomfort and expense have led to a search for alternatives. Since the beginning of the eighties, oximetry has allowed the nocturnal oxygen saturation of hemoglobin to be recorded even at home. Nocturnal oximetry reveals the O2 desaturations associated with apneas and thus often makes it possible to diagnose or exclude SAS. SAS is diagnosed when at least 20 desaturations per hour with an amplitude of at least 4% are recorded; conversely, a normal nocturnal oximetry recording nearly excludes SAS. In those cases where nocturnal oximetry is not diagnostic, polysomnography remains the method of choice. Based on the published work, a model for SAS detection, relying mainly on nocturnal oximetry, is proposed.
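The published criterion (at least 20 desaturations per hour, each of at least 4% amplitude) can be expressed as a simple rule over an oximetry trace. The event definition below, a drop from a running baseline with recovery to within 1% ending the event, is a simplifying assumption for illustration, not the clinical scoring standard:

```python
def count_desaturations(spo2, min_drop=4.0):
    """Count desaturation events in a sequence of SpO2 values (%):
    an event starts when saturation falls at least `min_drop` points
    below the running baseline, and ends on recovery near baseline."""
    events = 0
    baseline = spo2[0]
    in_event = False
    for v in spo2[1:]:
        if not in_event:
            if baseline - v >= min_drop:
                events += 1
                in_event = True
            else:
                baseline = max(baseline, v)  # track the local baseline
        elif v >= baseline - 1.0:            # assumed recovery margin
            in_event = False
            baseline = v
    return events

def suggests_sas(spo2, hours, threshold_per_hour=20):
    """Apply the criterion of >= 20 desaturations (>= 4%) per hour."""
    return count_desaturations(spo2) / hours >= threshold_per_hour
```

A real implementation would also resample the signal, reject movement artefacts, and follow a published scoring definition.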
Abstract:
We present building blocks for algorithms for the efficient reduction of square factors, i.e. direct repetitions in strings. The basic problem is this: given a string, compute all strings that can be obtained by reducing factors of the form zz to z. Two types of algorithms are treated: an offline algorithm is one that can compute a data structure on the given string in advance, before the actual search for squares begins; in contrast, an online algorithm receives all input only at the time a request is made. For offline algorithms we treat the following problem: let u and w be two strings such that w is obtained from u by reducing a square factor zz to z. If we are further given the suffix table of u, how can we derive the suffix table of w without computing it from scratch? As the suffix table plays a key role in online algorithms for the detection of squares in a string, this derivation can make the iterated reduction of squares more efficient. On the other hand, we also show how a suffix array, used for the offline detection of squares, can be adapted to the new string resulting from the deletion of a square. Because the deletion is a very local change, this adaptation is more efficient than computing the new suffix array from scratch.
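The basic reduction zz → z can be stated very compactly in code. The brute-force sketch below costs O(n^3) per reduction, which is precisely the expense the suffix-table and suffix-array techniques described above are designed to avoid:

```python
def reduce_one_square(s):
    """Find the first (leftmost, shortest) square factor zz in s and
    reduce it to z; return None if s is square-free."""
    n = len(s)
    for i in range(n):
        for half in range(1, (n - i) // 2 + 1):
            if s[i:i + half] == s[i + half:i + 2 * half]:
                return s[:i + half] + s[i + 2 * half:]
    return None

def fully_reduce(s):
    """Iterate square reduction until the string is square-free."""
    while True:
        t = reduce_one_square(s)
        if t is None:
            return s
        s = t
```

Note that the result of iterated reduction can depend on the order in which squares are chosen, which is why the problem statement above asks for all strings obtainable by such reductions.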
Abstract:
Biochemical systems are commonly modelled by systems of ordinary differential equations (ODEs). A particular class of such models called S-systems have recently gained popularity in biochemical system modelling. The parameters of an S-system are usually estimated from time-course profiles. However, finding these estimates is a difficult computational problem. Moreover, although several methods have been recently proposed to solve this problem for ideal profiles, relatively little progress has been reported for noisy profiles. We describe a special feature of a Newton-flow optimisation problem associated with S-system parameter estimation. This enables us to significantly reduce the search space, and also lends itself to parameter estimation for noisy data. We illustrate the applicability of our method by applying it to noisy time-course data synthetically produced from previously published 4- and 30-dimensional S-systems. In addition, we propose an extension of our method that allows the detection of network topologies for small S-systems. We introduce a new method for estimating S-system parameters from time-course profiles. We show that the performance of this method compares favorably with competing methods for ideal profiles, and that it also allows the determination of parameters for noisy profiles.
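An S-system couples power-law production and degradation terms, dX_i/dt = alpha_i * prod_j X_j^g_ij − beta_i * prod_j X_j^h_ij. A minimal simulator for generating time-course profiles, using forward Euler and illustrative parameters rather than anything from the paper, might look like:

```python
def s_system_rhs(x, alpha, g, beta, h):
    """Right-hand side of an S-system:
    dX_i/dt = alpha_i * prod_j x_j**g[i][j] - beta_i * prod_j x_j**h[i][j]."""
    n = len(x)
    dx = []
    for i in range(n):
        prod_g = 1.0
        prod_h = 1.0
        for j in range(n):
            prod_g *= x[j] ** g[i][j]
            prod_h *= x[j] ** h[i][j]
        dx.append(alpha[i] * prod_g - beta[i] * prod_h)
    return dx

def euler_course(x0, alpha, g, beta, h, dt=0.01, steps=1000):
    """Generate a time-course profile by forward Euler integration."""
    x = list(x0)
    course = [list(x)]
    for _ in range(steps):
        dx = s_system_rhs(x, alpha, g, beta, h)
        x = [xi + dt * di for xi, di in zip(x, dx)]
        course.append(list(x))
    return course
```

Parameter estimation then amounts to searching for (alpha, g, beta, h) whose simulated course matches an observed, possibly noisy, profile; the reduced search space described above makes that search tractable.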
Abstract:
Annotation of protein-coding genes is a key goal of genome sequencing projects. In spite of tremendous recent advances in computational gene finding, comprehensive annotation remains a challenge. Peptide mass spectrometry is a powerful tool for researching the dynamic proteome and suggests an attractive approach to discover and validate protein-coding genes. We present algorithms to construct and efficiently search spectra against a genomic database, with no prior knowledge of encoded proteins. By searching a corpus of 18.5 million tandem mass spectra (MS/MS) from human proteomic samples, we validate 39,000 exons and 11,000 introns at the level of translation. We present translation-level evidence for novel or extended exons in 16 genes, confirm translation of 224 hypothetical proteins, and discover or confirm over 40 alternative splicing events. Polymorphisms are efficiently encoded in our database, allowing us to observe variant alleles for 308 coding SNPs. Finally, we demonstrate the use of mass spectrometry to improve automated gene prediction, adding 800 correct exons to our predictions using a simple rescoring strategy. Our results demonstrate that proteomic profiling should play a role in any genome sequencing project.
Abstract:
The construction of metagenomic libraries has permitted the study of microorganisms resistant to isolation, and the analysis of 16S rDNA sequences has been used for over two decades to examine bacterial biodiversity. Here, we show that analysing random sequence reads (RSRs) instead of 16S is a suitable shortcut for estimating the biodiversity of a bacterial community from metagenomic libraries. We generated 10,010 RSRs from a metagenomic library of microorganisms found in human faecal samples and then searched them with the program BLASTN against a prokaryotic sequence database to assign a taxon to each RSR. The results were compared with those obtained by screening and analysing the clones containing 16S rDNA sequences in the whole library. We found that the biodiversity observed by RSR analysis is consistent with that obtained from 16S rDNA. We also show that RSRs are suitable for comparing the biodiversity of different metagenomic libraries. RSRs can thus provide a good estimate of the biodiversity of a metagenomic library and, as an alternative to 16S, this approach is both faster and cheaper.
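Once each RSR has been assigned a taxon, the biodiversity of the library can be summarised with a standard index. The abstract does not specify which measure was used, so the Shannon index below is only an illustrative choice:

```python
import math
from collections import Counter

def shannon_diversity(taxa):
    """Shannon index H' = -sum_i p_i * ln(p_i) over a list of taxon
    assignments (e.g. the best BLASTN hit of each RSR)."""
    counts = Counter(taxa)
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())
```

Comparing the index (or the full taxon frequency profiles) between libraries is one way to carry out the cross-library comparison mentioned above.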