976 results for False-medideira caterpillar


Relevance: 10.00%

Abstract:

The problem of discovering frequent poly-regions (i.e. regions of high occurrence of a set of items or patterns of a given alphabet) in a sequence is studied, and three efficient approaches are proposed to solve it. The first one is entropy-based and applies a recursive segmentation technique that produces a set of candidate segments which may potentially lead to a poly-region. The key idea of the second approach is the use of a set of sliding windows over the sequence. Each sliding window covers a sequence segment and keeps a set of statistics that mainly include the number of occurrences of each item or pattern in that segment. Combining these statistics efficiently yields the complete set of poly-regions in the given sequence. The third approach applies a technique based on the majority vote, achieving linear running time with a minimal number of false negatives. After identifying the poly-regions, the sequence is converted to a sequence of labeled intervals (each one corresponding to a poly-region). An efficient algorithm for mining frequent arrangements of intervals is applied to the converted sequence to discover frequently occurring arrangements of poly-regions in different parts of DNA, including coding regions. The proposed algorithms are tested on various DNA sequences producing results of significant biological meaning.
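The majority-vote idea behind the third approach can be sketched with a generic Boyer-Moore majority vote over a window (an illustrative stand-in, not the authors' exact algorithm; `window`, `majority_candidate` and `is_majority` are hypothetical names):

```python
def majority_candidate(seq):
    """Boyer-Moore majority vote: returns the only possible
    majority element of seq in O(n) time and O(1) space.
    A verification pass must confirm it is a true majority
    (more than len(seq)/2 occurrences)."""
    candidate, count = None, 0
    for item in seq:
        if count == 0:
            candidate, count = item, 1
        elif item == candidate:
            count += 1
        else:
            count -= 1
    return candidate

def is_majority(seq, item):
    """Verification pass over the window."""
    return seq.count(item) > len(seq) // 2

window = "AATATAAAGA"          # hypothetical sequence segment
c = majority_candidate(window)  # candidate item for this window
assert is_majority(window, c)   # 'A' dominates this segment
```

Applied per sliding window, the verified candidate marks a window as part of a poly-region for that item; this is why the approach runs in linear time with only a small number of false negatives.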

Relevance: 10.00%

Abstract:

When we look at a scene, how do we consciously see surfaces infused with lightness and color at the correct depths? Random Dot Stereograms (RDS) probe how binocular disparity between the two eyes can generate such conscious surface percepts. Dense RDS do so despite the fact that they include multiple false binocular matches. Sparse stereograms do so even across large contrast-free regions with no binocular matches. Stereograms that define occluding and occluded surfaces lead to surface percepts wherein partially occluded textured surfaces are completed behind occluding textured surfaces at a spatial scale much larger than that of the texture elements themselves. Earlier models suggest how the brain detects binocular disparity, but not how RDS generate conscious percepts of 3D surfaces. A neural model predicts how the layered circuits of visual cortex generate these 3D surface percepts using interactions between visual boundary and surface representations that obey complementary computational rules.

Relevance: 10.00%

Abstract:

This article introduces a new neural network architecture, called ARTMAP, that autonomously learns to classify arbitrarily many, arbitrarily ordered vectors into recognition categories based on predictive success. This supervised learning system is built up from a pair of Adaptive Resonance Theory modules (ARTa and ARTb) that are capable of self-organizing stable recognition categories in response to arbitrary sequences of input patterns. During training trials, the ARTa module receives a stream {a^(p)} of input patterns, and ARTb receives a stream {b^(p)} of input patterns, where b^(p) is the correct prediction given a^(p). These ART modules are linked by an associative learning network and an internal controller that ensures autonomous system operation in real time. During test trials, the remaining patterns a^(p) are presented without b^(p), and their predictions at ARTb are compared with b^(p). Tested on a benchmark machine learning database in both on-line and off-line simulations, the ARTMAP system learns orders of magnitude more quickly, efficiently, and accurately than alternative algorithms, and achieves 100% accuracy after training on less than half the input patterns in the database. It achieves these properties by using an internal controller that conjointly maximizes predictive generalization and minimizes predictive error by linking predictive success to category size on a trial-by-trial basis, using only local operations. This computation increases the vigilance parameter ρa of ARTa by the minimal amount needed to correct a predictive error at ARTb. Parameter ρa calibrates the minimum confidence that ARTa must have in a category, or hypothesis, activated by an input a^(p) in order for ARTa to accept that category, rather than search for a better one through an automatically controlled process of hypothesis testing.
Parameter ρa is compared with the degree of match between a^(p) and the top-down learned expectation, or prototype, that is read out subsequent to activation of an ARTa category. Search occurs if the degree of match is less than ρa. ARTMAP is hereby a type of self-organizing expert system that calibrates the selectivity of its hypotheses based upon predictive success. As a result, rare but important events can be quickly and sharply distinguished even if they are similar to frequent events with different consequences. Between input trials, ρa relaxes to a baseline vigilance ρ̄a. When ρa is large, the system runs in a conservative mode, wherein predictions are made only if the system is confident of the outcome. Very few false-alarm errors then occur at any stage of learning, yet the system reaches asymptote with no loss of speed. Because ARTMAP learning is self-stabilizing, it can continue learning one or more databases, without degrading its corpus of memories, until its full memory capacity is utilized.
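The vigilance mechanism described above can be sketched in miniature. The match function below is the standard fuzzy-ART match |I ∧ w| / |I| (component-wise minimum), and `match_tracking` raises ρa by the minimal amount ε above the current match after a predictive error, as the abstract describes; the names, data and the value of ε are illustrative, not taken from the article:

```python
import numpy as np

def fuzzy_match(input_vec, weight):
    """Fuzzy ART match function: |I ∧ w| / |I|,
    where ∧ is the component-wise minimum."""
    return np.minimum(input_vec, weight).sum() / input_vec.sum()

def match_tracking(rho_a, match, epsilon=0.001):
    """On a predictive error at ARTb, raise ARTa's vigilance
    just above the current match value, forcing a search for a
    different category (hypothesis); otherwise leave it alone."""
    return match + epsilon if rho_a <= match else rho_a

I = np.array([0.8, 0.2, 0.1, 0.9])   # hypothetical input
w = np.array([0.7, 0.3, 0.2, 0.8])   # hypothetical category prototype
m = fuzzy_match(I, w)                # category accepted iff m >= rho_a
rho = match_tracking(0.5, m)         # after an ARTb error, rho rises just above m
```

The key property, visible in `match_tracking`, is that vigilance is raised only as far as needed, so category granularity stays as coarse as predictive success permits.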

Relevance: 10.00%

Abstract:

Investment funds provide a low-cost method of sharing in the rewards from capitalism. Recently “alternative investments” such as hedge funds have grown rapidly, and the trading strategies open to hedge funds are now becoming available to mutual funds and even to ordinary retail investors. In this paper we analyze problems in assessing fund performance and the prospects for investment fund sectors. Choosing genuine outperformers among top funds requires a careful assessment of non-normality, order statistics and the possibility of false discoveries. The risk-adjusted performance of the average hedge fund over the last 10-15 years is actually not that impressive, although the “top” funds do appear to have statistically significant positive alphas.
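One standard way to guard against false discoveries when screening many funds for positive alpha is the Benjamini-Hochberg step-up procedure. The sketch below is a generic illustration of that procedure with hypothetical p-values; it is not necessarily the authors' exact method:

```python
def benjamini_hochberg(pvalues, q=0.10):
    """Benjamini-Hochberg step-up procedure: returns the indices
    of hypotheses rejected while controlling the false discovery
    rate at level q. Finds the largest rank k with
    p_(k) <= q * k / m and rejects the k smallest p-values."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= q * rank / m:
            k = rank
    return sorted(order[:k])

# hypothetical alpha-test p-values for eight funds
pvals = [0.001, 0.008, 0.039, 0.041, 0.2, 0.5, 0.6, 0.9]
rejected = benjamini_hochberg(pvals, q=0.10)  # the four smallest survive
```

Note the step-up behaviour: fund 2 (p = 0.039) fails its own threshold of 0.0375 but is still rejected because fund 3 (p = 0.041) clears the rank-4 threshold of 0.05.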

Relevance: 10.00%

Abstract:

This dissertation sets out to provide immanent critique and deconstruction of ecological modernisation, or ecomodernism. It does so from a critical social theory approach, in order to correctly address the essential issues at the heart of the environmental crisis that ecomodernism purports to address. This critical approach argues that the solution to the environmental crisis can only be concretely achieved by recognising its root cause as being foremost the issue of material interaction between classes in society, and not simply between society and nature in any structurally meaningful way. Based on a metaphysic of false dualism, ecological modernisation attributes a materiality of exchange value relations to issues of society, while simultaneously offering a non-material ontology to issues of nature. Thus ecomodernism serves asymmetrical relations of power whereby, as a polysemic policy discourse, it serves the material interests of those who have the power to impose abstract interpretations on the materiality of actual phenomena. The research of this dissertation is conducted by the critical evaluation of the empirical data from two exemplary Irish case studies. Discovery of the causal processes of the various public issues in the case studies, and thereafter the revelation of the meaning structures underpinning such causal processes, is a theoretically driven task requiring analysis of those social practices found in the cognitive, cultural and structural constitutions respectively of actors, mediations and systems. Therefore, the immanent critique of the case study paradigms serves as a research strategy for comprehending Ireland’s nature-society relations as influenced essentially by a systems (techno-corporatist) ecomodernist discourse.
Moreover, the deconstruction of this systems ideological discourse serves not only to demonstrate how weak ecomodernism practically undermines its declared ecological objectives, but also indicates how such objectives intervene as systemic contradictions at the cultural heart of Ireland’s late modernisation.

Relevance: 10.00%

Abstract:

This thesis investigates the extent and range of the ocular vocabulary and themes employed by the playwright Thomas Middleton in the context of early modern scientific, medical, and moral-philosophical writing on vision. More specifically, this thesis concerns Middleton’s revelation of the substance or essence of outward forms through mimesis. This paradoxical stance implies Middleton’s use of an illusory (theatrical) art form to explore hidden truths. This can be related to the early modern belief in the imagination (or fantasy) as chief mediator between the corporeal and spiritual worlds as well as to a reformed belief in the power of signs to indicate divine truth. This thesis identifies striking parallels between Middleton’s policy of social diagnosis and cure and an increased preoccupation with knowledge of interior man which culminates in Robert Burton’s Anatomy of Melancholy of 1621. All of these texts seek a cure for diseased internal sense faculties (such as fantasy and will) which cause the raging passions to destroy the individual. The purpose of this thesis is to demonstrate how Middleton takes a similar ‘mental-medicinal’ approach which investigates the idols created by the imagination before ‘purging’ the same and restoring order (Corneanu and Vermeir 184). The idea of infection incurred through the eyes which are fixed on vice (or error) has moral, religious, and political implications, and discovery of corruption involves stripping away the illusions of false appearances to reveal the truth within, whereby disease can be cured and order restored. Finally, Middleton’s use of theatrical fantasy to detect the idols of the diseased imagination can be read as a Paracelsian, rather than Galenic, form of medicine whereby like is ‘joined with their like’ (Bostocke C7r) to restore health.

Relevance: 10.00%

Abstract:

As a by-product of the ‘information revolution’ which is currently unfolding, lifetimes of man (and indeed computer) hours are being allocated for the automated and intelligent interpretation of data. This is particularly true in medical and clinical settings, where research into machine-assisted diagnosis of physiological conditions gains momentum daily. Of the conditions which have been addressed, however, automated classification of allergy has not been investigated, even though the numbers of allergic persons are rising, and undiagnosed allergies are most likely to elicit fatal consequences. On the basis of the observations of allergists who conduct oral food challenges (OFCs), activity-based analyses of allergy tests were performed. Algorithms were investigated and validated by a pilot study which verified that accelerometer-based inquiry of human movements is particularly well-suited for objective appraisal of activity. However, when these analyses were applied to OFCs, accelerometer-based investigations were found to provide very poor separation between allergic and non-allergic persons, and it was concluded that the avenues explored in this thesis are inadequate for the classification of allergy. Heart rate variability (HRV) analysis is known to provide very significant diagnostic information for many conditions. Owing to this, electrocardiograms (ECGs) were recorded during OFCs for the purpose of assessing the effect that allergy induces on HRV features. It was found that with appropriate analysis, excellent separation between allergic and nonallergic subjects can be obtained. These results were, however, obtained with manual QRS annotations, and these are not a viable methodology for real-time diagnostic applications. Even so, this was the first work which has categorically correlated changes in HRV features to the onset of allergic events, and manual annotations yield undeniable affirmation of this. 
Fostered by the successful results which were obtained with manual classifications, automatic QRS detection algorithms were investigated to facilitate the fully automated classification of allergy. The results which were obtained by this process are very promising. Most importantly, the work that is presented in this thesis did not obtain any false positive classifications. This is a most desirable result for OFC classification, as it allows complete confidence to be attributed to classifications of allergy. Furthermore, these results could be particularly advantageous in clinical settings, as machine-based classification can detect the onset of allergy, which can allow for early termination of OFCs. Consequently, machine-based monitoring of OFCs has in this work been shown to possess the capacity to significantly and safely advance the current state of clinical art of allergy diagnosis.
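For intuition, two standard time-domain HRV features that can be computed from a QRS-annotated recording are SDNN and RMSSD over the RR intervals. This is a minimal sketch with hypothetical data, not the thesis's actual feature set:

```python
import math

def hrv_features(rr_ms):
    """Two standard time-domain HRV features computed from a
    list of RR intervals in milliseconds: SDNN (overall
    variability, population standard deviation here) and RMSSD
    (short-term, beat-to-beat variability)."""
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    sdnn = math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / n)
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return sdnn, rmssd

# hypothetical RR series recorded during an oral food challenge
baseline = [820, 810, 830, 825, 815]
sdnn, rmssd = hrv_features(baseline)
```

A shift in features like these between pre-challenge and post-ingestion windows is the kind of signal the thesis correlates with the onset of allergic events.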

Relevance: 10.00%

Abstract:

The electroencephalogram (EEG) is a medical technology that is used in the monitoring of the brain and in the diagnosis of many neurological illnesses. Although coarse in its precision, the EEG is a non-invasive tool that requires minimal set-up times, and is suitably unobtrusive and mobile to allow continuous monitoring of the patient, either in clinical or domestic environments. Consequently, the EEG is the current tool-of-choice with which to continuously monitor the brain where temporal resolution, ease-of-use and mobility are important. Traditionally, EEG data are examined by a trained clinician who identifies neurological events of interest. However, recent advances in signal processing and machine learning techniques have allowed the automated detection of neurological events for many medical applications. In doing so, the burden of work on the clinician has been significantly reduced, improving the response time to illness, and allowing the relevant medical treatment to be administered within minutes rather than hours. However, as typical EEG signals are of the order of microvolts (μV), contamination by signals arising from sources other than the brain is frequent. These extra-cerebral sources, known as artefacts, can significantly distort the EEG signal, making its interpretation difficult, and can dramatically disimprove automatic neurological event detection classification performance. This thesis, therefore, contributes to the further improvement of automated neurological event detection systems, by identifying some of the major obstacles in deploying these EEG systems in ambulatory and clinical environments so that the EEG technologies can emerge from the laboratory towards real-world settings, where they can have a real impact on the lives of patients.
In this context, the thesis tackles three major problems in EEG monitoring, namely: (i) the problem of head-movement artefacts in ambulatory EEG, (ii) the high numbers of false detections in state-of-the-art, automated, epileptiform activity detection systems and (iii) false detections in state-of-the-art, automated neonatal seizure detection systems. To accomplish this, the thesis employs a wide range of statistical, signal processing and machine learning techniques drawn from mathematics, engineering and computer science. The first body of work outlined in this thesis proposes a system to automatically detect head-movement artefacts in ambulatory EEG and utilises supervised machine learning classifiers to do so. The resulting head-movement artefact detection system is the first of its kind and offers accurate detection of head-movement artefacts in ambulatory EEG. Subsequently, additional physiological signals, in the form of gyroscopes, are used to detect head-movements and in doing so, bring additional information to the head-movement artefact detection task. A framework for combining EEG and gyroscope signals is then developed, offering improved head-movement artefact detection. The artefact detection methods developed for ambulatory EEG are subsequently adapted for use in an automated epileptiform activity detection system. Information from support vector machine classifiers used to detect epileptiform activity is fused with information from artefact-specific detection classifiers in order to significantly reduce the number of false detections in the epileptiform activity detection system. By this means, epileptiform activity detection which compares favourably with other state-of-the-art systems is achieved.
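The classifier-fusion step can be illustrated with a simple veto-style rule: down-weight an epileptiform detection to the extent that an artefact-specific classifier believes the epoch is contaminated. This is a hypothetical rule for intuition only; the thesis's actual fusion of classifier outputs is not reproduced here:

```python
def fuse(epileptiform_prob, artefact_prob, threshold=0.5):
    """Veto-style fusion: keep an epileptiform detection only to
    the extent the artefact detector believes the epoch is clean.
    Hypothetical rule for illustration."""
    fused = epileptiform_prob * (1.0 - artefact_prob)
    return fused >= threshold

keep = fuse(0.9, 0.1)       # confident detection, clean epoch: kept
suppressed = fuse(0.9, 0.8)  # likely artefact: detection suppressed
```

The point of any such rule is the one the abstract makes: artefact-specific information removes false detections without having to retrain the underlying event detector.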
Finally, the problem of false detections in automated neonatal seizure detection is approached in an alternative manner; blind source separation techniques, complemented with information from additional physiological signals, are used to remove respiration artefact from the EEG. In utilising these methods, some encouraging advances have been made in detecting and removing respiration artefacts from the neonatal EEG, and in doing so, the performance of the underlying diagnostic technology is improved, bringing its deployment in the real-world, clinical domain one step closer.

Relevance: 10.00%

Abstract:

Helicobacter pylori is a gastric pathogen which infects ~50% of the global population and can lead to the development of gastritis, gastric and duodenal ulcers and carcinoma. Genome sequencing of H. pylori revealed high levels of genetic variability; this pathogen is known for its adaptability due to mechanisms including phase variation, recombination and horizontal gene transfer. Motility is essential for efficient colonisation by H. pylori. The flagellum is a complex nanomachine which has been studied in detail in E. coli and Salmonella. In H. pylori, key differences have been identified in the regulation of flagellum biogenesis, warranting further investigation. In this study, the genomes of two H. pylori strains (CCUG 17874 and P79) were sequenced and published as draft genome sequences. Comparative studies identified the potential role of restriction modification systems and the comB locus in transformation efficiency differences between these strains. Core genome analysis of 43 H. pylori strains including 17874 and P79 defined a more refined core genome for the species than previously published. Comparative analysis of the genome sequences of strains isolated from individuals suffering from H. pylori related diseases resulted in the identification of “disease-specific” genes. Structure-function analysis of the essential motility protein HP0958 was performed to elucidate its role during flagellum assembly in H. pylori. The previously reported HP0958-FliH interaction could not be substantiated in this study and appears to be a false positive. Site-directed mutagenesis confirmed that the coiled-coil domain of HP0958 is involved in the interaction with RpoN (74-284), while the Zn-finger domain is required for direct interaction with the full length flaA mRNA transcript. Complementation of a non-motile hp0958-null derivative strain of P79 with site-directed mutant alleles of hp0958 resulted in cells producing flagellar-type extrusions from non-polar positions. 
Thus, HP0958 may have a novel function in spatial localisation of flagella in H. pylori.

Relevance: 10.00%

Abstract:

The objective of spatial downscaling strategies is to increase the information content of coarse datasets at smaller scales. In the case of quantitative precipitation estimation (QPE) for hydrological applications, the goal is to close the scale gap between the spatial resolution of coarse datasets (e.g., gridded satellite precipitation products at resolution L × L) and the high resolution (l × l; L ≫ l) necessary to capture the spatial features that determine spatial variability of water flows and water stores in the landscape. In essence, the downscaling process consists of weaving subgrid-scale heterogeneity over a desired range of wavelengths in the original field. The defining question is, which properties, statistical and otherwise, of the target field (the known observable at the desired spatial resolution) should be matched, with the caveat that downscaling methods be as general as possible and therefore ideally without case-specific constraints and/or calibration requirements? Here, the attention is focused on two simple fractal downscaling methods using iterated function systems (IFS) and fractal Brownian surfaces (FBS) that meet this requirement. The two methods were applied to disaggregate spatially 27 summertime convective storms in the central United States during 2007 at three consecutive times (1800, 2100, and 0000 UTC, thus 81 fields overall) from the Tropical Rainfall Measuring Mission (TRMM) version 6 (V6) 3B42 precipitation product (~25-km grid spacing) to the same resolution as the NCEP stage IV products (~4-km grid spacing). Results from bilinear interpolation are used as the control. A fundamental distinction between IFS and FBS is that the latter implies a distribution of downscaled fields and thus an ensemble solution, whereas the former provides a single solution.
The downscaling effectiveness is assessed using fractal measures (the spectral exponent β, fractal dimension D, Hurst coefficient H, and roughness amplitude R) and traditional operational skill scores [false alarm rate (FR), probability of detection (PD), threat score (TS), and Heidke skill score (HSS)], as well as bias and the root-mean-square error (RMSE). The results show that both IFS and FBS fractal interpolation perform well with regard to operational skill scores, and they meet the additional requirement of generating structurally consistent fields. Furthermore, confidence intervals can be directly generated from the FBS ensemble. The results were used to diagnose errors relevant for hydrometeorological applications, in particular a spatial displacement with characteristic length of at least 50 km (2500 km2) in the location of peak rainfall intensities for the cases studied. © 2010 American Meteorological Society.
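The operational scores named above are all computable from a 2×2 contingency table of detections versus observations. A minimal sketch with hypothetical counts (here FR is taken as the false-alarm ratio b/(a+b); the study's exact thresholds for "rain/no rain" at each grid cell are not reproduced):

```python
def verification_scores(hits, misses, false_alarms, correct_negs):
    """Standard 2x2 verification scores: probability of detection
    (PD), false alarm rate (FR), threat score (TS) and Heidke
    skill score (HSS). a = hits, b = false alarms, c = misses,
    d = correct negatives."""
    a, b, c, d = hits, false_alarms, misses, correct_negs
    n = a + b + c + d
    pd_ = a / (a + c)                 # fraction of observed events detected
    fr = b / (a + b)                  # fraction of detections that were false
    ts = a / (a + b + c)              # correct negatives excluded
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n
    hss = (a + d - expected) / (n - expected)  # skill relative to chance
    return pd_, fr, ts, hss

pd_, fr, ts, hss = verification_scores(hits=40, misses=10,
                                       false_alarms=20, correct_negs=30)
```

HSS is the only one of the four that credits correct negatives while discounting agreement expected by chance, which is why it is usually reported alongside PD and TS.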

Relevance: 10.00%

Abstract:

BACKGROUND: There is considerable interest in the development of methods to efficiently identify all coding variants present in large sample sets of humans. There are three approaches possible: whole-genome sequencing, whole-exome sequencing using exon capture methods, and RNA-Seq. While whole-genome sequencing is the most complete, it remains sufficiently expensive that cost-effective alternatives are important. RESULTS: Here we provide a systematic exploration of how well RNA-Seq can identify human coding variants by comparing variants identified through high-coverage whole-genome sequencing to those identified by high-coverage RNA-Seq in the same individual. This comparison allowed us to directly evaluate the sensitivity and specificity of RNA-Seq in identifying coding variants, and to evaluate how key parameters such as the degree of coverage and the expression levels of genes interact to influence performance. We find that only 40% of exonic variants identified by whole-genome sequencing were captured using RNA-Seq; this number rose to 81% when concentrating on genes known to be well expressed in the source tissue. We also find that a high false positive rate can be problematic when working with RNA-Seq data, especially at higher levels of coverage. CONCLUSIONS: We conclude that as long as a tissue relevant to the trait under study is available and suitable quality control screens are implemented, RNA-Seq is a fast and inexpensive alternative approach for finding coding variants in genes with sufficiently high expression levels.
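The sensitivity comparison reduces to set overlap between the RNA-Seq calls and the whole-genome-sequencing "truth" set. A minimal sketch with hypothetical variant identifiers (the real comparison also involves coverage and quality filters not shown here):

```python
def sensitivity(called_by_rnaseq, truth_from_wgs):
    """Fraction of true (WGS) variants recovered by RNA-Seq:
    TP / (TP + FN)."""
    tp = len(called_by_rnaseq & truth_from_wgs)
    return tp / len(truth_from_wgs)

# hypothetical variant call sets for one individual
wgs = {"chr1:100A>G", "chr1:250C>T", "chr2:80G>A", "chr3:15T>C", "chr7:9G>C"}
rnaseq = {"chr1:100A>G", "chr2:80G>A", "chr9:1G>T"}  # chr9 call: a false positive
sens = sensitivity(rnaseq, wgs)
```

Restricting `wgs` to variants in well-expressed genes before computing the ratio is what raises the recovered fraction in the study from 40% to 81%.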

Relevance: 10.00%

Abstract:

People often do not realize they are being influenced by an incidental emotional state. As a result, decisions based on a fleeting incidental emotion can become the basis for future decisions and hence outlive the original cause for the behavior (i.e., the emotion itself). Using a sequence of ultimatum and dictator games, we provide empirical evidence for the enduring impact of transient emotions on economic decision making. Behavioral consistency and false consensus are presented as potential underlying processes. © 2009 Elsevier Inc. All rights reserved.

Relevance: 10.00%

Abstract:

Technological advances in genotyping have given rise to hypothesis-based association studies of increasing scope. As a result, the scientific hypotheses addressed by these studies have become more complex and more difficult to address using existing analytic methodologies. Obstacles to analysis include inference in the face of multiple comparisons, complications arising from correlations among the SNPs (single nucleotide polymorphisms), choice of their genetic parametrization and missing data. In this paper we present an efficient Bayesian model search strategy that searches over the space of genetic markers and their genetic parametrization. The resulting method for Multilevel Inference of SNP Associations, MISA, allows computation of multilevel posterior probabilities and Bayes factors at the global, gene and SNP level, with the prior distribution on SNP inclusion in the model providing an intrinsic multiplicity correction. We use simulated data sets to characterize MISA's statistical power, and show that MISA has higher power to detect association than standard procedures. Using data from the North Carolina Ovarian Cancer Study (NCOCS), MISA identifies variants that were not identified by standard methods and have been externally "validated" in independent studies. We examine sensitivity of the NCOCS results to prior choice and method for imputing missing data. MISA is available in an R package on CRAN.
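The intrinsic multiplicity correction from a prior on SNP inclusion can be illustrated with a Beta-Binomial(1, 1) prior on model size (a generic sketch, not MISA's exact prior): the prior mass on any particular small model shrinks as the number of candidate SNPs grows, automatically penalising larger search spaces.

```python
import math

def log_prior_beta_binomial(k, m, a=1.0, b=1.0):
    """Log prior probability of one particular model that
    includes k of m candidate SNPs, under a Beta-Binomial(a, b)
    prior on inclusion. For a = b = 1 this is k!(m-k)!/(m+1)!."""
    return (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
            + math.lgamma(k + a) + math.lgamma(m - k + b)
            - math.lgamma(m + a + b))

# the same 2-SNP model receives far less prior weight when the
# candidate pool grows from 10 to 100 SNPs
p10 = math.exp(log_prior_beta_binomial(2, 10))
p100 = math.exp(log_prior_beta_binomial(2, 100))
```

This is the sense in which a prior on SNP inclusion provides a multiplicity correction: no per-test p-value adjustment is needed, because the prior itself discounts hypotheses in proportion to how many were searched.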

Relevance: 10.00%

Abstract:

BACKGROUND: A public that is an informed partner in clinical research is important for ethical, methodological, and operational reasons. There are indications that the public is unaware or misinformed, and not sufficiently engaged in clinical research, but studies on the topic are lacking. PARTAKE - Public Awareness of Research for Therapeutic Advancements through Knowledge and Empowerment - is a program aimed at increasing public awareness and partnership in clinical research. The PARTAKE Survey is a component of the program. OBJECTIVE: To study public knowledge and perceptions of clinical research. METHODS: A 40-item questionnaire combining multiple-choice and open-ended questions was administered to 175 English- or Hindi-speaking individuals in 8 public locations representing various socioeconomic strata in New Delhi, India. RESULTS: Interviewees were 18-84 years old (mean: 39.6, SD ± 16.6), 23.6% female, 68.6% employed, 7.3% illiterate; 26.3% had heard of research, 2.9% had participated, and 58.9% expressed willingness to participate in clinical research. The following perceptions were reported (% true/% false/% not aware): 'research benefits society' (94.1%/3.5%/2.3%), 'the government protects against unethical clinical research' (56.7%/26.3%/16.9%), 'research hospitals provide better care' (67.2%/8.7%/23.9%), 'confidentiality is adequately protected' (54.1%/12.3%/33.5%), 'participation in research is voluntary' (85.3%/5.8%/8.7%); 'participants treated like guinea pigs' (20.7%/53.2%/26.0%), and 'compensation for participation is adequate' (24.7%/12.9%/62.3%). CONCLUSIONS: Results suggest the Indian public is aware of some key features of clinical research (e.g., purpose, value, voluntary nature of participation), and supports clinical research in general, but is unaware of other key features (e.g., compensation, confidentiality, protection of human participants) and exhibits some distrust in the conduct and reporting of clinical trials.
Larger, cross-cultural surveys are required to inform educational programs addressing these issues.

Relevance: 10.00%

Abstract:

Pezdek, Blandon-Gitlin, and Gabbay (2006) found that perceptions of the plausibility of events increase the likelihood that imagination may induce false memories of those events. Using a survey conducted by Gallup, we asked a large sample of the general population how plausible it would be for a person with longstanding emotional problems and a need for psychotherapy to be a victim of childhood sexual abuse, even though the person could not remember the abuse. Only 18% indicated that it was implausible or very implausible, whereas 67% indicated that such an occurrence was either plausible or very plausible. Combined with Pezdek et al.'s findings, and counter to their conclusions, our findings imply that there is a substantial danger of inducing false memories of childhood sexual abuse through imagination in psychotherapy.