865 results for "false problem"
Abstract:
Includes bibliography
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Abstract Background A large number of probabilistic models used in sequence analysis assign non-zero probability values to most input sequences. The most common way to decide whether a given probability is sufficient is Bayesian binary classification, in which the probability that the model characterizes the sequence family of interest is compared with that of an alternative probability model, such as a null model. This is the scoring technique used by sequence analysis tools such as HMMER, SAM and INFERNAL. The most prevalent null models are position-independent residue distributions, including the uniform distribution, the genomic distribution, the family-specific distribution and the target sequence distribution. This paper presents a study evaluating the impact of the choice of null model on the final classification result. In particular, we are interested in minimizing the number of false predictions, a crucial issue for reducing the cost of biological validation. Results In all tests with random sequences, the target null model produced the lowest number of false positives. The study was performed on DNA sequences using GC content as the measure of compositional bias, but the results should also be valid for protein sequences. To broaden the applicability of the results, the study used randomly generated sequences. Previous studies were performed on amino acid sequences, using only one probabilistic model (HMM) and a specific benchmark, and lack more general conclusions about the performance of null models. Finally, a benchmark test with P. falciparum confirmed these results. Conclusions Of the evaluated models, the best suited for classification are the uniform model and the target model. However, the uniform model presents a GC bias that can cause more false positives for candidate sequences with extreme compositional bias, a characteristic not described in previous studies.
In these cases, the target model is more dependable for biological validation due to its higher specificity.
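As a concrete illustration of the scoring scheme described above, the sketch below computes a log-odds score against a position-independent null model, including the target-sequence null that estimates residue frequencies from the candidate sequence itself. The model log-probability here is a made-up placeholder, not HMMER's actual output:

```python
from collections import Counter
from math import log

def null_log_prob(seq, freqs):
    """Log-probability of seq under a position-independent residue model."""
    return sum(log(freqs[c]) for c in seq)

def target_null_freqs(seq):
    """Target null model: residue frequencies estimated from seq itself."""
    counts = Counter(seq)
    n = len(seq)
    return {c: counts[c] / n for c in counts}

def log_odds(seq, model_log_prob, null_freqs):
    """Bayesian binary classification score: positive favors the family model."""
    return model_log_prob - null_log_prob(seq, null_freqs)

seq = "ACGTGCGC"                       # GC-rich candidate sequence
uniform = {c: 0.25 for c in "ACGT"}    # uniform null model
score_uniform = log_odds(seq, -10.0, uniform)  # -10.0 is a hypothetical model log-probability
score_target = log_odds(seq, -10.0, target_null_freqs(seq))
```

For this GC-rich sequence the uniform null yields a higher score than the target null, mirroring the GC bias of the uniform model reported above.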
Abstract:
Rare variants are becoming the new candidates in the search for genetic variants that predispose individuals to a phenotype of interest. Their low prevalence in a population requires the development of dedicated detection and analytical methods. A family-based approach could greatly enhance their detection and interpretation because rare variants are nearly family specific. In this report, we test several distinct approaches for analyzing the information provided by rare and common variants and how they can be effectively used to pinpoint putative candidate genes for follow-up studies. The analyses were performed on the mini-exome data set provided by Genetic Analysis Workshop 17. Eight approaches were tested, four using the trait's heritability estimates and four using QTDT models. The sensitivity, specificity, and positive and negative predictive values of these methods were compared in light of the simulation parameters. Our results highlight important limitations of current methods for dealing with rare and common variants: all methods presented reduced specificity and were consequently prone to false-positive associations. Methods analyzing common-variant information showed enhanced sensitivity compared to rare-variant methods. Furthermore, our limited knowledge of how to use biological databases for gene annotations, possibly as covariates in regression models, imposes a barrier to further research.
Abstract:
False-positive and false-negative values were calculated for five different designs of the trend test, and it was demonstrated that a design suggested by Portier and Hoel in 1984 for a different problem produced the lowest false-positive and false-negative rates when applied to historical spontaneous tumor rate data for Fischer rats.
Abstract:
Praying mantids use binocular cues to judge whether their prey is in striking distance. When there are several moving targets within their binocular visual field, mantids need to solve the correspondence problem. They must select between the possible pairings of retinal images in the two eyes so that they can strike at a single real target. In this study, mantids were presented with two targets in various configurations, and the resulting fixating saccades that precede the strike were analyzed. The distributions of saccades show that mantids consistently prefer one out of several possible matches. Selection is in part guided by the position and the spatiotemporal features of the target image in each eye. Selection also depends upon the binocular disparity of the images, suggesting that insects can perform local binocular computations. The pairing rules ensure that mantids tend to aim at real targets and not at “ghost” targets arising from false matches.
Abstract:
Brewin and Andrews (2016) propose that just 15% of people, or even fewer, are susceptible to false childhood memories. If this figure were true, then false memories would still be a serious problem. But the figure is higher than 15%. False memories occur even after a few short and low-pressure interviews, and with each successive interview they become richer, more compelling, and more likely to occur. It is therefore dangerously misleading to claim that the scientific data provide an “upper bound” on susceptibility to memory errors. We also raise concerns about the peer review process.
Abstract:
Due to the powerful nature of confession evidence, it is imperative that we investigate the factors that affect the likelihood of obtaining true and false confessions. Previous research has been conducted with a paradigm limited to the study of false confessions to an act of negligence, thereby limiting the generalizability of the findings. The first goal of the current study was to introduce a novel paradigm involving a more serious, intentional act that can be used in the study of both true and false confessions. The second goal was to explore the effects of two police interrogation tactics, minimization and an offer of leniency, on true and false confession rates. Three hundred and thirty-four undergraduates at a large southeastern university were recruited to participate in a study on problem-solving and decision-making. During the course of the laboratory experiment, participants were induced to intentionally break or not break an experimental rule, an act that was characterized as "cheating." All participants (i.e., both innocent and guilty) were later accused of the act and interrogated. For half of the participants, the interrogator used minimization tactics, which involved downplaying the seriousness of the offense, expressing sympathy, and providing face-saving excuses, in order to encourage the participant to confess. An offer of leniency was also manipulated, in which half the participants were offered a "deal" that involved the option of confessing and accepting a known punishment or not confessing and facing the threat of harsher punishment. Results indicated that guilty persons were more likely to confess than innocent persons, and that the use of minimization and an explicit offer of leniency increased both the true and false confession rates.
Furthermore, a cumulative effect of techniques was observed, such that the diagnosticity of the interrogation (the ratio of true confessions to false confessions) decreased as the number of techniques used increased. Taken together, the results suggest that caution should be used when implementing these techniques in the interrogation room.
Abstract:
*Designated as an exemplary master's project for 2015-16*
The American approach to disparities in educational achievement is deficit focused and based on false assumptions of equal educational opportunity and social mobility. The labels attached to children served by compensatory early childhood education programs have evolved, e.g., from "culturally deprived" to "at-risk" for school failure, yet remain rooted in deficit discourses and ideology. Drawing on multiple bodies of literature, this thesis analyzes the rhetoric of compensatory education as viewed through the conceptual lens of the deficit thinking paradigm, in which school failure is attributed to perceived genetic, cultural, or environmental deficiencies rather than institutional and societal inequalities. With a focus on the evolution of deficit thinking, the thesis begins with late 19th-century U.S. early childhood education as it set the stage for more than a century of compensatory education responses to the needs of children, inadequacies of immigrant and minority families, and threats to national security. Key educational research and publications on genetic, cultural, and environmental deficits are aligned with trends in achievement gaps and compensatory education initiatives, beginning mid-20th century following the Brown v. Board of Education decision of 1954 and continuing to the present. This analysis then highlights patterns in the oppression, segregation, and disenfranchisement experienced by low-income and minority students, largely ignored within the mainstream compensatory education discourse. This thesis concludes with a heterodox analysis of how the deficit thinking paradigm is dependent on assumptions of equal educational opportunity and social mobility, which helps perpetuate the cycle of school failure amid larger social injustices.
Biased Random-Key Genetic Algorithms for the Winner Determination Problem in Combinatorial Auctions
Abstract:
Abstract In this paper, we address the problem of picking a subset of bids in a general combinatorial auction so as to maximize the overall profit using the first-price model. This winner determination problem assumes that a single bidding round is held to determine both the winners and the prices to be paid. We introduce six variants of biased random-key genetic algorithms for this problem. Three of them use a novel initialization technique that uses solutions of intermediate linear programming relaxations of an exact mixed integer linear programming model as the initial chromosomes of the population. An experimental evaluation compares the effectiveness of the proposed algorithms with the standard mixed integer linear programming formulation, a specialized exact algorithm, and the best-performing heuristics proposed for this problem. The proposed algorithms are competitive and offer strong results, mainly for large-scale auctions.
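The core of any biased random-key genetic algorithm is a decoder that maps a vector of random keys to a feasible solution. The sketch below illustrates this for winner determination on a made-up four-bid instance; it is a minimal single-population template, not any of the six variants proposed in the paper:

```python
import random

# Hypothetical instance: each bid is (set_of_items, price).
bids = [({1, 2}, 10.0), ({2, 3}, 8.0), ({3}, 5.0), ({4}, 3.0)]

def decode(keys, bids):
    """Decoder: visit bids in the order given by their random keys and
    accept each bid greedily if its items do not clash with items already sold."""
    order = sorted(range(len(bids)), key=lambda i: keys[i])
    sold, profit = set(), 0.0
    for i in order:
        items, price = bids[i]
        if not items & sold:
            sold |= items
            profit += price
    return profit

def brkga(bids, pop=30, elite=6, mutants=4, gens=50, bias=0.7, seed=0):
    """Minimal BRKGA loop: elitism, random mutants, and biased crossover
    that copies each key from the elite parent with probability `bias`."""
    rng = random.Random(seed)
    n = len(bids)
    population = [[rng.random() for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda k: -decode(k, bids))
        nxt = population[:elite]                      # elite survive unchanged
        nxt += [[rng.random() for _ in range(n)] for _ in range(mutants)]
        while len(nxt) < pop:                         # biased crossover
            e = rng.choice(population[:elite])
            o = rng.choice(population[elite:])
            nxt.append([e[j] if rng.random() < bias else o[j] for j in range(n)])
        population = nxt
    return max(decode(k, bids) for k in population)

best = brkga(bids)  # optimal profit for this toy instance is 18.0
```

Because all problem knowledge lives in the decoder, the same evolutionary loop can be reused across the paper's variants by changing only the decoding and initialization rules.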
Abstract:
Ecological science contributes to solving a broad range of environmental problems. However, lack of ecological literacy in practice often limits application of this knowledge. In this paper, we highlight a critical but often overlooked demand on ecological literacy: to enable professionals of various careers to apply scientific knowledge when faced with environmental problems. Current university courses on ecology often fail to persuade students that ecological science provides important tools for environmental problem solving. We propose problem-based learning to improve the understanding of ecological science and its usefulness for real-world environmental issues that professionals in careers as diverse as engineering, public health, architecture, social sciences, or management will address. Courses should set clear learning objectives for cognitive skills they expect students to acquire. Thus, professionals in different fields will be enabled to improve environmental decision-making processes and to participate effectively in multidisciplinary work groups charged with tackling environmental issues.
Abstract:
This paper addresses the capacitated lot sizing problem (CLSP) in a single stage composed of multiple plants, items, and periods, with setup carry-over among the periods. The CLSP is well studied and many heuristics have been proposed to solve it. Nevertheless, few studies have explored the multi-plant capacitated lot sizing problem (MPCLSP), so few solution methods have been proposed for it. Furthermore, to our knowledge, no study of the MPCLSP with setup carry-over exists in the literature. This paper presents a mathematical model and a GRASP (Greedy Randomized Adaptive Search Procedure) with path relinking for the MPCLSP with setup carry-over. This solution method is an extension and adaptation of a methodology previously applied to the problem without setup carry-over. Computational tests showed that including setup carry-over yields a significant improvement in solution value with only a small increase in computational time.
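The GRASP template underlying the method above alternates a greedy randomized construction, which samples from a restricted candidate list (RCL), with a local search phase. The sketch below illustrates the template on a toy knapsack-style instance, not on the MPCLSP itself, and omits the path-relinking step:

```python
import random

# Toy instance for illustration: (weight, value) items under a capacity limit.
items = [(4, 10.0), (3, 7.0), (2, 5.0), (5, 9.0)]
capacity = 7

def value(sol):
    return sum(items[i][1] for i in sol)

def weight(sol):
    return sum(items[i][0] for i in sol)

def construct(rng, alpha=0.3):
    """Greedy randomized construction: repeatedly pick at random from the
    RCL of candidates whose value/weight ratio is within alpha of the best."""
    solution, total_w = [], 0
    candidates = list(range(len(items)))
    while True:
        feasible = [i for i in candidates if total_w + items[i][0] <= capacity]
        if not feasible:
            return solution
        ratios = {i: items[i][1] / items[i][0] for i in feasible}
        best, worst = max(ratios.values()), min(ratios.values())
        rcl = [i for i in feasible if ratios[i] >= best - alpha * (best - worst)]
        pick = rng.choice(rcl)
        solution.append(pick)
        total_w += items[pick][0]
        candidates.remove(pick)

def local_search(sol):
    """First-improvement swap neighborhood: exchange one chosen item for one
    unchosen item whenever it raises the value and stays feasible."""
    improved = True
    while improved:
        improved = False
        outside = [i for i in range(len(items)) if i not in sol]
        for i in sol:
            for j in outside:
                cand = [k for k in sol if k != i] + [j]
                if weight(cand) <= capacity and value(cand) > value(sol):
                    sol, improved = cand, True
                    break
            if improved:
                break
    return sol

def grasp(iters=50, seed=0):
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        sol = local_search(construct(rng))
        if value(sol) > value(best):
            best = sol
    return value(best)
```

In the paper's method, the construction and neighborhood would operate on lot-sizing decisions (plant assignments and setup carry-overs) instead of knapsack items, and path relinking would additionally explore trajectories between elite solutions.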
Abstract:
Introduction: Work disability is a major consequence of rheumatoid arthritis (RA), associated not only with traditional disease activity variables, but also more significantly with demographic, functional, occupational, and societal variables. Recent reports suggest that the use of biologic agents offers potential for reduced work disability rates, but the conclusions are based on surrogate disease activity measures derived from studies primarily from Western countries. Methods: The Quantitative Standard Monitoring of Patients with RA (QUEST-RA) multinational database of 8,039 patients in 86 sites in 32 countries, 16 with high gross domestic product (GDP) (>24K US dollars (USD) per capita) and 16 low-GDP countries (<11K USD), was analyzed for work and disability status at onset and over the course of RA and clinical status of patients who continued working or had stopped working in high-GDP versus low-GDP countries according to all RA Core Data Set measures. Associations of work disability status with RA Core Data Set variables and indices were analyzed using descriptive statistics and regression analyses. Results: At the time of first symptoms, 86% of men (range 57%-100% among countries) and 64% (19%-87%) of women <65 years were working. More than one third (37%) of these patients reported subsequent work disability because of RA. Among 1,756 patients whose symptoms had begun during the 2000s, the probabilities of continuing to work were 80% (95% confidence interval (CI) 78%-82%) at 2 years and 68% (95% CI 65%-71%) at 5 years, with similar patterns in high-GDP and low-GDP countries. Patients who continued working versus stopped working had significantly better clinical status for all clinical status measures and patient self-report scores, with similar patterns in high-GDP and low-GDP countries. However, patients who had stopped working in high-GDP countries had better clinical status than patients who continued working in low-GDP countries. 
The most significant identifier of work disability in all subgroups was the Health Assessment Questionnaire (HAQ) functional disability score. Conclusions: Work disability rates remain high among people with RA during this millennium. In low-GDP countries, people remain at work despite high levels of disability and disease activity. Cultural and economic differences between societies affect work disability as an outcome measure for RA.
Abstract:
Aims. An analytical solution for the discrepancy between observed core-like profiles and predicted cusp profiles in dark matter halos is studied. Methods. We calculate the distribution function for Navarro-Frenk-White halos and extract energy from the distribution, taking into account the effects of baryonic physics processes. Results. We show with a simple argument that we can reproduce the evolution of a cusp to a flat density profile by a decrease of the initial potential energy.
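For reference, the Navarro-Frenk-White profile discussed above has the standard form, with characteristic density $\rho_0$ and scale radius $r_s$:

```latex
\rho_{\mathrm{NFW}}(r) = \frac{\rho_0}{\left(r/r_s\right)\left(1 + r/r_s\right)^2}
```

The $\rho \propto r^{-1}$ behaviour at small $r$ is the central cusp that the observed core-like profiles contradict.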
Abstract:
The energy spectrum of an electron confined in a quantum dot (QD) with a three-dimensional anisotropic parabolic potential in a tilted magnetic field was found analytically. The theory exactly describes the mixing of the in-plane and out-of-plane motions of an electron caused by a tilted magnetic field, which could be seen, for example, in level anticrossing. For charged QDs in a tilted magnetic field, we predict three strong resonant lines in the far-infrared absorption spectra.
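The three-dimensional anisotropic parabolic confinement referred to above is conventionally written in the standard textbook form (not necessarily the paper's exact notation), with effective mass $m^*$ and confinement frequencies $\omega_x$, $\omega_y$, $\omega_z$:

```latex
V(x, y, z) = \frac{m^*}{2}\left(\omega_x^2 x^2 + \omega_y^2 y^2 + \omega_z^2 z^2\right)
```

A magnetic field tilted by an angle $\theta$ from the $z$ axis, e.g. $\mathbf{B} = B(\sin\theta, 0, \cos\theta)$, couples the in-plane and out-of-plane motions, producing the level anticrossings the abstract mentions.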