146 results for error rates

at Université de Lausanne, Switzerland


Relevance:

70.00%

Publisher:

Abstract:

Restriction site-associated DNA sequencing (RADseq) provides researchers with the ability to record genetic polymorphism across thousands of loci for nonmodel organisms, potentially revolutionizing the field of molecular ecology. However, as with other genotyping methods, RADseq is prone to a number of sources of error that may have consequential effects for population genetic inferences, and these have received only limited attention in terms of the estimation and reporting of genotyping error rates. Here we use individual sample replicates, under the expectation of identical genotypes, to quantify genotyping error in the absence of a reference genome. We then use sample replicates to (i) optimize de novo assembly parameters within the program Stacks, by minimizing error and maximizing the retrieval of informative loci; and (ii) quantify error rates for loci, alleles and single-nucleotide polymorphisms. As an empirical example, we use a double-digest RAD data set of a nonmodel plant species, Berberis alpina, collected from high-altitude mountains in Mexico.
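The replicate-based error estimate described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the 0/1/2 allele-dosage encoding, the function name, and the toy genotype vectors are assumptions.

```python
# Hypothetical sketch: estimate SNP genotyping error from a pair of
# sample replicates that are expected to have identical genotypes.
def snp_error_rate(rep1, rep2):
    """Fraction of SNP calls that disagree between two replicates of
    the same sample; missing calls (None) are skipped."""
    compared = [(a, b) for a, b in zip(rep1, rep2)
                if a is not None and b is not None]
    if not compared:
        return 0.0
    mismatches = sum(1 for a, b in compared if a != b)
    return mismatches / len(compared)

# Two replicate genotype vectors for one sample (0/1/2 allele dosage,
# None = missing call); all values invented for illustration.
rep_a = [0, 1, 2, 1, None, 0, 2, 1]
rep_b = [0, 1, 2, 2, 1,    0, 2, None]
print(snp_error_rate(rep_a, rep_b))  # 1 mismatch over 6 comparable sites
```

In the paper's setting, such a rate would be computed per assembly-parameter combination in Stacks to pick the setting that minimizes error while keeping informative loci.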

Relevance:

60.00%

Publisher:

Abstract:

The purpose of this paper is to review the scientific literature from August 2007 to July 2010. The review is focused on more than 420 published papers; it does not cover information from international meetings available only in abstract form. Fingermarks constitute an important chapter, with coverage of the identification process as well as detection techniques on various surfaces. We note that research has been very dense, both in exploring and understanding current detection methods and in bringing groundbreaking techniques to increase the number of marks detected on various objects. The recent report from the US National Research Council (NRC) is a milestone that has promoted a critical discussion on the state of forensic science and its associated research. We can expect a surge of interest in research relating to cognitive aspects of mark and print comparison, the establishment of relevant forensic error rates and statistical modelling of the selectivity of marks' attributes. Other biometric means of forensic identification, such as footmarks or earmarks, are also covered in the report. Compared to previous years, we noted a decrease in the number of submissions in these areas. No doubt the NRC report has sown the seeds for further investigation of these fields as well.

Relevance:

60.00%

Publisher:

Abstract:

Limited information is available on the methodology required to characterize hashish seizures when assessing the presence or absence of a chemical link between two seizures. This casework report presents the methodology used to assess whether two different police seizures came from the same block before it was split. The chemical signature was extracted by GC-MS analysis, and the implemented methodology consists of a study of intra- and inter-variability distributions based on measuring the similarity of chemical profiles across a number of hashish seizures with the Pearson correlation coefficient. Different statistical scenarios (i.e., combinations of data pretreatment techniques and selections of target compounds) were tested to find the most discriminating one. Seven compounds showing high discrimination capability were selected, to which a specific statistical data pretreatment was applied. Based on the results, the statistical model built for comparing hashish seizures leads to low error rates. The implemented methodology is therefore suitable for the chemical profiling of hashish seizures.
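As a rough illustration of the similarity measure named above (not the authors' pipeline), the sketch below computes the Pearson correlation between two profiles of the seven target compounds; the peak-area values and their normalization are invented.

```python
# Illustrative sketch: Pearson correlation between two GC-MS chemical
# profiles, each a vector of normalized peak areas for seven compounds.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical normalized peak areas for two seizures
seizure_1 = [0.12, 0.30, 0.08, 0.25, 0.05, 0.15, 0.05]
seizure_2 = [0.11, 0.31, 0.09, 0.24, 0.06, 0.14, 0.05]
r = pearson(seizure_1, seizure_2)
# Scores near 1 suggest a common source; the linked/unlinked decision
# threshold would come from the intra-/inter-variability distributions.
```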

Relevance:

60.00%

Publisher:

Abstract:

BACKGROUND: In vitro aggregating brain cell cultures containing all types of brain cells have been shown to be useful for neurotoxicological investigations. The cultures are used for the detection of nervous system-specific effects of compounds by measuring multiple endpoints, including changes in enzyme activities. Concentration-dependent neurotoxicity is determined at several time points. METHODS: A Markov model was set up to describe the dynamics of brain cell populations exposed to potentially neurotoxic compounds. Brain cells were assumed to be either in a healthy or stressed state, with only stressed cells being susceptible to cell death. Cells could switch between these states or die, with concentration-dependent transition rates. Since cell numbers were not directly measurable, intracellular lactate dehydrogenase (LDH) activity was used as a surrogate. Assuming that changes in cell numbers are proportional to changes in intracellular LDH activity, stochastic enzyme activity models were derived. Maximum likelihood and least squares regression techniques were applied for estimation of the transition rates. Likelihood ratio tests were performed to test hypotheses about the transition rates. Simulation studies were used to investigate the performance of the transition rate estimators and to analyze the error rates of the likelihood ratio tests. The stochastic time-concentration activity model was applied to intracellular LDH activity measurements after 7 and 14 days of continuous exposure to propofol. The model describes transitions from healthy to stressed cells and from stressed cells to death. RESULTS: The model predicted that propofol would affect stressed cells more than healthy cells. Increasing propofol concentration from 10 to 100 μM reduced the mean waiting time for transition to the stressed state by 50%, from 14 to 7 days, whereas the mean duration to cellular death fell more dramatically, from 2.7 days to 6.5 hours.
CONCLUSION: The proposed stochastic modeling approach can be used to discriminate between different biological hypotheses regarding the effect of a compound on the transition rates. The effects of different compounds on the transition rate estimates can be quantitatively compared. Data can be extrapolated at late measurement time points to investigate whether costs and time-consuming long-term experiments could possibly be eliminated.
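The mean waiting times reported above follow directly from the exponential holding times of a continuous-time Markov chain, where a state with exit rate k has mean waiting time 1/k. A minimal sketch using the figures from the abstract (the per-day rate parameterization itself is an assumption for illustration):

```python
# Healthy -> stressed -> dead two-transition Markov chain: for an
# exponential holding time with exit rate k (per day), the mean
# waiting time in the state is 1/k days.
def mean_waiting_time(rate_per_day):
    return 1.0 / rate_per_day

# Hypothetical transition rates (per day) at the two propofol
# concentrations, chosen to reproduce the waiting times in the text
# (14 d vs 7 d to stressed; 2.7 d vs 6.5 h to death).
rates = {
    10.0:  {"healthy_to_stressed": 1 / 14, "stressed_to_dead": 1 / 2.7},
    100.0: {"healthy_to_stressed": 1 / 7,  "stressed_to_dead": 24 / 6.5},
}

for conc, k in rates.items():
    t_stress = mean_waiting_time(k["healthy_to_stressed"])
    t_death = mean_waiting_time(k["stressed_to_dead"])
    print(f"{conc} uM: to stressed {t_stress:.1f} d, to death {t_death:.2f} d")
```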

Relevance:

60.00%

Publisher:

Abstract:

Due to the growing use of biometric technologies in our modern society, spoofing attacks are becoming a serious concern. Many solutions have been proposed to detect the use of fake "fingerprints" on an acquisition device. In this paper, we propose to take advantage of an intrinsic feature of friction ridge skin: pores. The aim of this study is to investigate the potential of using pores to detect spoofing attacks. Results show that the use of pores is a promising approach. Four major observations were made. First, results confirmed that the reproduction of pores on fake "fingerprints" is possible. Second, the distributions of the total number of pores on fake and genuine fingerprints cannot be discriminated. Third, the difference in pore quantity between a query image and a reference image (genuine or fake) can be used as a discriminating factor in a linear discriminant analysis. In our sample, the observed error rates were as follows: 45.5% false positives (the fake passed the test) and 3.8% false negatives (a genuine print was rejected). Finally, performance is improved by using the difference in pore quantity between a distorted query fingerprint and a non-distorted reference fingerprint. With this approach, the error rates improved to a 21.2% false acceptance rate and an 8.3% false rejection rate.
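The third observation, using the pore-quantity difference as a single feature in a linear discriminant analysis, can be sketched as follows. With one feature and pooled variance, Fisher's LDA reduces to a midpoint threshold between the class means; all counts below are invented for illustration.

```python
# Hedged sketch: one-feature linear discriminant on the difference in
# pore counts between a query print and a reference print.
def lda_threshold(genuine_diffs, fake_diffs):
    """With a single feature and pooled variance, Fisher's LDA reduces
    to a threshold halfway between the two class means."""
    m_g = sum(genuine_diffs) / len(genuine_diffs)
    m_f = sum(fake_diffs) / len(fake_diffs)
    return (m_g + m_f) / 2

# Hypothetical pore-count differences |query - reference|
genuine = [2, 3, 1, 4, 2]     # genuine prints: few pores lost
fakes   = [9, 12, 8, 11, 10]  # fakes: many pores missing
thr = lda_threshold(genuine, fakes)

def classify(diff):
    return "fake" if diff > thr else "genuine"
```

The false acceptance and false rejection rates reported above would then be the misclassification frequencies of fake and genuine queries, respectively, under this rule.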

Relevance:

60.00%

Publisher:

Abstract:

We report 22 new polymorphic microsatellites for the Ivory gull (Pagophila eburnea), and we describe how they can be efficiently co-amplified using multiplexed polymerase chain reactions. In addition, we report DNA concentration, amplification success, rates of genotyping errors and the number of genotyping repetitions required to obtain reliable data with three types of noninvasive or nondestructive samples: shed feathers collected in colonies, feathers plucked from living individuals and buccal swabs. In two populations from Greenland (n=21) and Russia (Severnaya Zemlya Archipelago, n=21), the number of alleles per locus varied between 2 and 17, and expected heterozygosity per population ranged from 0.18 to 0.92. Twenty of the markers conformed to Hardy-Weinberg and linkage equilibrium expectations. Most markers were easily amplified and highly reliable when analysed from buccal swabs and plucked feathers, showing that buccal swabbing is a very efficient approach allowing good-quality DNA retrieval. Although DNA amplification success using single shed feathers was generally high, the genotypes obtained from this type of sample were prone to error and thus needed to be amplified several times. The set of microsatellite markers described here, together with multiplex amplification conditions and genotyping error rates, will be useful for population genetic studies of the Ivory gull.

Relevance:

60.00%

Publisher:

Abstract:

Swain corrects the chi-square overidentification test (i.e., likelihood ratio test of fit) for structural equation models, whether with or without latent variables. The chi-square statistic is asymptotically correct; however, it does not behave as expected in small samples and/or when the model is complex (cf. Herzog, Boomsma, & Reinecke, 2007). Thus, particularly in situations where the ratio of sample size (n) to the number of parameters estimated (p) is relatively small (i.e., the p to n ratio is large), the chi-square test will tend to overreject correctly specified models. To obtain a closer approximation to the distribution of the chi-square statistic, Swain (1975) developed a correction: a scaling factor, which converges to 1 asymptotically, is multiplied with the chi-square statistic. The correction better approximates the chi-square distribution, resulting in more appropriate Type I error (rejection) rates (see Herzog & Boomsma, 2009; Herzog et al., 2007).
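Mechanically, the correction rescales the test statistic before the p-value is computed. The sketch below illustrates only that step: the scaling-factor value and the fit statistic are invented, and the exact formula for the factor is given in Swain (1975) and is not reproduced here.

```python
# Illustrative only: applying a multiplicative small-sample correction
# (such as Swain's scaling factor s, with s -> 1 as n -> infinity) to a
# chi-square fit statistic before computing the p-value.
from math import exp

def chi2_sf_even_df(x, df):
    """Survival function of the chi-square distribution for even df,
    via the closed-form Poisson series; enough for this sketch."""
    k = df // 2
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2) / i
        total += term
    return exp(-x / 2) * total

chi2_stat, df = 35.2, 24  # hypothetical model-fit statistic
s = 0.93                  # hypothetical scaling factor (< 1 in small samples)
p_uncorrected = chi2_sf_even_df(chi2_stat, df)
p_corrected = chi2_sf_even_df(s * chi2_stat, df)
# The corrected statistic is smaller, so the p-value is larger and a
# correctly specified model is rejected less often, which is the
# intended taming of the Type I error inflation.
```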

Relevance:

60.00%

Publisher:

Abstract:

The aim of this research was to evaluate how fingerprint analysts would incorporate information from newly developed tools into their decision making processes. Specifically, we assessed effects using the following: (1) a quality tool to aid in the assessment of the clarity of the friction ridge details, (2) a statistical tool to provide likelihood ratios representing the strength of the corresponding features between compared fingerprints, and (3) consensus information from a group of trained fingerprint experts. The measured variables for the effect on examiner performance were the accuracy and reproducibility of the conclusions against the ground truth (including the impact on error rates) and the analyst accuracy and variation for feature selection and comparison.

The results showed that participants using the consensus information from other fingerprint experts demonstrated more consistency and accuracy in minutiae selection. They also demonstrated higher accuracy, sensitivity, and specificity in the decisions reported. The quality tool also affected minutiae selection (which, in turn, had limited influence on the reported decisions); the statistical tool did not appear to influence the reported decisions.

Relevance:

60.00%

Publisher:

Abstract:

Integrated approaches using different in vitro methods in combination with bioinformatics can (i) increase the success rate and speed of drug development; (ii) improve the accuracy of toxicological risk assessment; and (iii) increase our understanding of disease. Three-dimensional (3D) cell culture models are important building blocks of this strategy, which has emerged in recent years. The majority of these models are organotypic, i.e., they aim to reproduce major functions of an organ or organ system. This implies in many cases that more than one cell type forms the 3D structure, and often matrix elements play an important role. This review summarizes the state of the art concerning commonalities of the different models. For instance, the theory of mass transport/metabolite exchange in 3D systems and the special analytical requirements for test endpoints in organotypic cultures are discussed in detail. In the next part, 3D model systems for selected organs--liver, lung, skin, brain--are presented and characterized in dedicated chapters. Also, 3D approaches to the modeling of tumors are presented and discussed. All chapters give a historical background, illustrate the large variety of approaches, and highlight upsides and downsides as well as specific requirements. Moreover, they refer to the application in disease modeling, drug discovery and safety assessment. Finally, consensus recommendations indicate a roadmap for the successful implementation of 3D models in routine screening. It is expected that the use of such models will accelerate progress by reducing error rates and wrong predictions from compound testing.

Relevance:

60.00%

Publisher:

Abstract:

Many of the most interesting questions ecologists ask lead to analyses of spatial data. Yet, perhaps confused by the large number of statistical models and fitting methods available, many ecologists seem to believe this is best left to specialists. Here, we describe the issues that need consideration when analysing spatial data and illustrate these using simulation studies. Our comparative analysis involves using methods including generalized least squares, spatial filters, wavelet revised models, conditional autoregressive models and generalized additive mixed models to estimate regression coefficients from synthetic but realistic data sets, including some which violate standard regression assumptions. We assess the performance of each method using two measures and using statistical error rates for model selection. Methods that performed well included the generalized least squares family of models and a Bayesian implementation of the conditional autoregressive model. Ordinary least squares also performed adequately in the absence of model selection, but its Type I error rates were poorly controlled, so it did not show the performance improvements under model selection seen with the methods above. Removing large-scale spatial trends in the response led to poor performance. These are empirical results; hence extrapolation of these findings to other situations should be performed cautiously. Nevertheless, our simulation-based approach provides much stronger evidence for comparative analysis than assessments based on single or small numbers of data sets, and should be considered a necessary foundation for statements of this type in future.
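The generalized least squares estimator at the core of the best-performing family is beta = (X'V^-1 X)^-1 X'V^-1 y, where V is the assumed spatial covariance of the errors. A toy sketch, with an invented exponential covariance along a one-dimensional transect:

```python
# Toy GLS vs OLS on synthetic spatially autocorrelated data.
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + covariate
beta_true = np.array([1.0, 2.0])

# Invented spatial covariance: exponential decay with distance along a transect
coords = np.arange(n, dtype=float)
V = np.exp(-np.abs(coords[:, None] - coords[None, :]) / 5.0)
y = X @ beta_true + rng.multivariate_normal(np.zeros(n), V)

# GLS: beta = (X' V^-1 X)^-1 X' V^-1 y; OLS ignores V entirely.
Vinv = np.linalg.inv(V)
beta_gls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
# Both estimators are unbiased, but GLS exploits the spatial structure
# and is more efficient; OLS standard errors would also be misleading here.
```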

Relevance:

60.00%

Publisher:

Abstract:

False identity documents constitute a potential powerful source of forensic intelligence because they are essential elements of transnational crime and provide cover for organized crime. In previous work, a systematic profiling method using false documents' visual features has been built within a forensic intelligence model. In the current study, the comparison process and metrics lying at the heart of this profiling method are described and evaluated. This evaluation takes advantage of 347 false identity documents of four different types seized in two countries whose sources were known to be common or different (following police investigations and dismantling of counterfeit factories). Intra-source and inter-sources variations were evaluated through the computation of more than 7500 similarity scores. The profiling method could thus be validated and its performance assessed using two complementary approaches to measuring type I and type II error rates: a binary classification and the computation of likelihood ratios. Very low error rates were measured across the four document types, demonstrating the validity and robustness of the method to link documents to a common source or to differentiate them. These results pave the way for an operational implementation of a systematic profiling process integrated in a developed forensic intelligence model.
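The binary-classification side of this evaluation amounts to thresholding similarity scores and counting the two kinds of mistakes. A minimal sketch with invented scores (the real study derives them from the documents' visual features):

```python
# Type I / type II error rates from similarity scores for known
# same-source ("linked") and different-source ("unlinked") pairs.
def error_rates(linked, unlinked, threshold):
    # Type I: unlinked pair wrongly declared linked (score >= threshold)
    type1 = sum(s >= threshold for s in unlinked) / len(unlinked)
    # Type II: linked pair wrongly declared unlinked (score < threshold)
    type2 = sum(s < threshold for s in linked) / len(linked)
    return type1, type2

# Hypothetical similarity scores in [0, 1]
linked_scores   = [0.91, 0.88, 0.95, 0.79, 0.93]
unlinked_scores = [0.12, 0.35, 0.28, 0.51, 0.07]
t1, t2 = error_rates(linked_scores, unlinked_scores, threshold=0.7)
print(t1, t2)  # 0.0 0.0 for these toy scores at this threshold
```

Sweeping the threshold over the intra-source and inter-source score distributions traces out the trade-off between the two error rates; the likelihood-ratio approach mentioned above replaces the hard threshold with a graded measure of evidential strength.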

Relevance:

60.00%

Publisher:

Abstract:

We show how nonlinear embedding algorithms popular for use with shallow semi-supervised learning techniques such as kernel methods can be applied to deep multilayer architectures, either as a regularizer at the output layer, or on each layer of the architecture. This provides a simple alternative to existing approaches to deep learning whilst yielding competitive error rates compared to those methods, and existing shallow semi-supervised techniques.
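The core idea, penalizing the distance between a layer's outputs for neighbouring inputs (a term of the form sum over i, j of W_ij * ||f(x_i) - f(x_j)||^2 added to the supervised loss), can be sketched as follows; the neighbourhood matrix and activations below are invented:

```python
# Sketch of a nonlinear-embedding regularizer on a layer's outputs.
import numpy as np

def embedding_penalty(outputs, W):
    """outputs: (n, d) layer activations; W: (n, n) neighbour weights.
    Returns sum_ij W[i, j] * ||outputs[i] - outputs[j]||^2, which pulls
    representations of neighbouring (e.g. similar unlabeled) inputs together."""
    loss = 0.0
    n = outputs.shape[0]
    for i in range(n):
        for j in range(n):
            if W[i, j]:
                diff = outputs[i] - outputs[j]
                loss += W[i, j] * (diff @ diff)
    return loss

outs = np.array([[0.0, 1.0], [0.1, 0.9], [5.0, 5.0]])
W = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])  # points 0 and 1 are neighbours
penalty = embedding_penalty(outs, W)  # small: the neighbours already map close
```

In the paper's framing, this penalty can be attached either at the output layer alone or at every layer of the deep architecture.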

Relevance:

60.00%

Publisher:

Abstract:

Involuntary choreiform movements are a clinical hallmark of Huntington's disease. Studies in clinically affected patients suggest a shift of motor activations to parietal cortices in response to progressive neurodegeneration. Here, we studied pre-symptomatic gene carriers to examine the compensatory mechanisms that underlie the phenomenon of retained motor function in the presence of degenerative change. Fifteen pre-symptomatic gene carriers and 12 matched controls performed button presses paced by a metronome at either 0.5 or 2 Hz with four fingers of the right hand whilst being scanned with functional magnetic resonance imaging. Subjects pressed buttons either in the order of a previously learnt 10-item finger sequence, from left to right, or kept still. Error rates ranged from 2% to 7% in the pre-symptomatic gene carriers and from 0.5% to 4% in controls, depending on the condition. No significant difference in task performance was found between groups for any of the conditions. Activations in the supplementary motor area (SMA) and superior parietal lobe differed with gene status. Compared with healthy controls, gene carriers showed greater activations of left caudal SMA with all movement conditions. Activations correlated with increasing speed of movement were greater the closer the gene carriers were to estimated clinical diagnosis, defined by the onset of unequivocal motor signs. Activations associated with increased movement complexity (i.e. with the pre-learnt 10-item sequence) decreased in the rostral SMA with nearing diagnostic onset. The left superior parietal lobe showed reduced activation with increased movement complexity in gene carriers compared with controls, while the right superior parietal lobe showed greater activations with all but the most demanding movements. We identified a complex pattern of motor compensation in pre-symptomatic gene carriers.
The results show that preclinical compensation goes beyond a simple shift of activity from premotor to parietal regions involving multiple compensatory mechanisms in executive and cognitive motor areas. Critically, the pattern of motor compensation is flexible depending on the actual task demands on motor control.

Relevance:

60.00%

Publisher:

Abstract:

Many ants forage in complex environments and use a combination of trail pheromone information and route memory to navigate between food sources and the nest. Previous research has shown that foraging routes differ in how easily they are learned. In particular, it is easier to learn feeding locations that are reached by repeating (e.g. left-left or right-right) than alternating choices (left-right or right-left) along a route with two T-bifurcations. This raises the hypothesis that the learnability of the feeding sites may influence overall colony foraging patterns. We studied this in the mass-recruiting ant Lasius niger. We used mazes with two T-bifurcations, and allowed colonies to exploit two equidistant food sources that differed in how easily their locations were learned. In experiment 1, learnability was manipulated by using repeating versus alternating routes from nest to feeder. In experiment 2, we added visual landmarks along the route to one food source. Our results suggest that colonies preferentially exploited the feeding site that was easier to learn. This was the case even if the more difficult to learn feeding site was discovered first. Furthermore, we show that these preferences were at least partly caused by lower error rates (experiment 1) and greater foraging speeds (experiment 2) of foragers visiting the more easily learned feeder locations. Our results indicate that the learnability of feeding sites is an important factor influencing collective foraging patterns of ant colonies under more natural conditions, given that in natural environments foragers often face multiple bifurcations on their way to food sources.