951 results for unavoidable type ii error


Relevance:

100.00%

Publisher:

Abstract:

False identity documents constitute a potentially powerful source of forensic intelligence because they are essential elements of transnational crime and provide cover for organized crime. In previous work, a systematic profiling method using false documents' visual features has been built within a forensic intelligence model. In the current study, the comparison process and metrics lying at the heart of this profiling method are described and evaluated. This evaluation takes advantage of 347 false identity documents of four different types, seized in two countries, whose sources were known to be common or different (following police investigations and the dismantling of counterfeit factories). Intra-source and inter-source variation were evaluated through the computation of more than 7500 similarity scores. The profiling method could thus be validated and its performance assessed using two complementary approaches to measuring type I and type II error rates: binary classification and the computation of likelihood ratios. Very low error rates were measured across the four document types, demonstrating the validity and robustness of the method for linking documents to a common source or differentiating them. These results pave the way for an operational implementation of a systematic profiling process integrated into the previously developed forensic intelligence model.
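
The binary-classification view of the error-rate evaluation can be sketched numerically. The similarity scores below are simulated stand-ins (Beta-distributed, an assumption for illustration), not the study's actual document-feature scores:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins for similarity scores between document pairs (Beta
# distributions are an assumption for illustration): intra-source pairs
# should score high, inter-source pairs low.
intra = rng.beta(8, 2, size=4000)    # pairs with a known common source
inter = rng.beta(2, 8, size=4000)    # pairs with known different sources

def error_rates(intra, inter, threshold):
    """Binary classification: link a pair iff its score >= threshold.
    Type I  = false exclusion rate (common source, not linked);
    Type II = false inclusion rate (different sources, linked)."""
    return float(np.mean(intra < threshold)), float(np.mean(inter >= threshold))

t1, t2 = error_rates(intra, inter, threshold=0.5)
print(f"type I (false exclusion): {t1:.3f}, type II (false inclusion): {t2:.3f}")
```

A likelihood-ratio evaluation would additionally model the two score distributions and report the ratio of their densities at each observed score, rather than a hard link/no-link decision.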

Relevance:

100.00%

Publisher:

Abstract:

In the 1920s, Ronald Fisher developed the theory behind the p value and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that are often misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the difference observed, or one more extreme, given that the null is true. Another concern is the risk that a substantial proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
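
The link between the two theories can be made concrete with a small simulation: when the null hypothesis is true, p values are uniformly distributed, so rejecting whenever p falls below a chosen threshold commits a type I error at roughly that threshold's rate. A sketch with hypothetical normal data, not tied to any particular study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n_sims = 0.05, 2000

# Under a true null (both groups drawn from the same distribution), p values
# are uniform, so "reject when p < alpha" errs at rate alpha by construction.
rejections = 0
for _ in range(n_sims):
    a = rng.normal(0, 1, 30)
    b = rng.normal(0, 1, 30)
    rejections += stats.ttest_ind(a, b).pvalue < alpha

type_i_rate = rejections / n_sims
print(f"empirical type I error rate: {type_i_rate:.3f}")   # close to 0.05
```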

Relevance:

100.00%

Publisher:

Abstract:

Background: Postoperative cognitive dysfunction (POCD) occurs frequently after cardiac surgery. Some data suggest that inflammation plays a key role in the development of POCD. N-3 fatty acids have been shown to have a beneficial effect on inflammation. We hypothesised that perioperative n-3 enriched nutrition therapy would reduce the incidence of POCD in this group of patients. Methods: Randomized, double-blind, placebo-controlled trial in patients aged 65 or older undergoing elective cardiac surgery with cardiopulmonary bypass. 2 × 250 mL of placebo (Ensure Plus™, Abbott Nutrition) or n-3 enriched nutrition therapy (ProSure™, Abbott Nutrition) were administered for ten days, starting 5 days prior to surgery. Cognition was assessed preoperatively and 7 days after surgery with the Consortium to Establish a Registry for Alzheimer's Disease - Neuropsychological Assessment Battery (CERAD-NAB) [1]. Results: 16 patients were included. Mean age was 72 ± 5.3 for placebo and 75 ± 4.8 for ProSure™, respectively. CRP and IL-6 did not differ significantly between groups preoperatively or on postoperative days 1, 3, and 7. Preoperative CERAD total scores were 86 ± 10 and 81 ± 9 (p = n.s.) for placebo and ProSure™, respectively. Postoperative scores were 88 ± 12 and 77 ± 19 (p = n.s.). The change in score was not different between the two groups (placebo: +3 ± 5; ProSure™: -5 ± 11). Conclusion: In this very small sample, no effect of preoperatively started n-3 enriched nutritional supplements on inflammation or cognitive function was detected. However, there is a large likelihood of a type II error, and more patients need to be included to assess possible beneficial effects of this intervention in elderly patients undergoing elective cardiac surgery. 1 Chandler MJ, et al. Neurology. 2005;65:102-6.
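
The type II error concern in a trial this small can be quantified with a simulated power calculation. A sketch with hypothetical normally distributed outcomes (the 0.5 SD effect size is an assumption for illustration, not an estimate from this trial):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def simulated_power(n_per_group, effect_size, alpha=0.05, n_sims=4000):
    """Fraction of simulated two-arm trials whose t-test detects a true
    effect of the given size (in SD units); 1 - power = type II error rate."""
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect_size, 1.0, n_per_group)
        hits += stats.ttest_ind(a, b).pvalue < alpha
    return hits / n_sims

# With 8 patients per arm (roughly this pilot's size), even a moderate
# 0.5 SD effect is usually missed, i.e. the type II error risk is large.
p_small = simulated_power(8, 0.5)
p_large = simulated_power(64, 0.5)
print(f"power with n=8 per arm:  {p_small:.2f}")
print(f"power with n=64 per arm: {p_large:.2f}")
```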

Relevance:

100.00%

Publisher:

Abstract:

This study deals with the statistical properties of a randomization test applied to an ABAB design in cases where the desirable random assignment of the points of change in phase is not possible. In order to obtain information about each possible data division we carried out a conditional Monte Carlo simulation with 100,000 samples for each systematically chosen triplet. Robustness and power are studied under several experimental conditions: different autocorrelation levels and different effect sizes, as well as different phase lengths determined by the points of change. Type I error rates were distorted by the presence of autocorrelation for the majority of data divisions. Satisfactory Type II error rates were obtained only for large treatment effects. The relationship between the lengths of the four phases appeared to be an important factor for the robustness and the power of the randomization test.
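
A minimal version of such a randomization test can be sketched as follows; the data, change points, and minimum phase length of 3 are illustrative assumptions, not the study's actual simulation conditions:

```python
import numpy as np

rng = np.random.default_rng(3)

def randomization_test(data, change_points, min_len=3):
    """Randomization test for an ABAB design: the statistic at the actual
    phase-change triplet is referred to the distribution of the statistic
    over all admissible triplets (every phase at least min_len points).
    Statistic: mean of the B phases minus mean of the A phases."""
    n = len(data)

    def stat(t1, t2, t3):
        a = np.concatenate([data[:t1], data[t2:t3]])   # A phases
        b = np.concatenate([data[t1:t2], data[t3:]])   # B phases
        return b.mean() - a.mean()

    triplets = [(t1, t2, t3)
                for t1 in range(min_len, n)
                for t2 in range(t1 + min_len, n)
                for t3 in range(t2 + min_len, n - min_len + 1)]
    observed = stat(*change_points)
    dist = np.array([stat(*t) for t in triplets])
    return np.mean(dist >= observed)       # one-sided p value

# 20 simulated observations; the treatment (B) phases raise the level by 2 SD.
data = np.concatenate([rng.normal(0, 1, 5), rng.normal(2, 1, 5),
                       rng.normal(0, 1, 5), rng.normal(2, 1, 5)])
p = randomization_test(data, (5, 10, 15))
print(f"randomization p value: {p:.3f}")
```

Autocorrelated data can be fed through the same routine, which is how the robustness of the test's type I error rate to serial dependence would be probed.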

Relevance:

100.00%

Publisher:

Abstract:

PURPOSE: The Bonferroni correction adjusts probability (p) values to account for the increased risk of a type I error when making multiple statistical tests. The routine use of this correction has been criticised as deleterious to sound statistical judgment, as testing the wrong hypothesis, and as reducing the chance of a type I error only at the expense of increasing the chance of a type II error; yet it remains popular in ophthalmic research. The purpose of this article was to survey the use of the Bonferroni correction in research articles published in three optometric journals, viz. Ophthalmic & Physiological Optics, Optometry & Vision Science, and Clinical & Experimental Optometry, and to provide advice to authors contemplating multiple testing. RECENT FINDINGS: Some authors ignored the problem of multiple testing while others used the method uncritically, with no rationale or discussion. A variety of methods of correcting p values were employed, the Bonferroni method being the single most popular. Bonferroni was used in a variety of circumstances, most commonly to correct the experiment-wise error rate when using multiple 't' tests or as a post-hoc procedure to correct the family-wise error rate following analysis of variance (ANOVA). Some studies quoted adjusted p values incorrectly or gave an erroneous rationale. SUMMARY: Whether or not to use the Bonferroni correction depends on the circumstances of the study. It should not be used routinely, and should be considered only if: (1) a single test of the 'universal null hypothesis' (H0) that all tests are not significant is required, (2) it is imperative to avoid a type I error, and (3) a large number of tests are carried out without preplanned hypotheses.
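
The correction itself is simple to implement. A sketch with made-up p values; comparing each raw p to alpha/m and reporting adjusted values min(m·p, 1) are equivalent decisions:

```python
import numpy as np

def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction for m tests: either compare each raw p value
    to alpha/m, or equivalently report adjusted values min(m*p, 1)."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    adjusted = np.minimum(p * m, 1.0)
    reject = p < alpha / m
    return adjusted, reject

# Five made-up p values: only the smallest beats the corrected 0.05/5 = 0.01.
raw = [0.008, 0.02, 0.03, 0.20, 0.65]
adj, rej = bonferroni(raw)
print("adjusted p values:", adj.round(3))
print("reject null:      ", rej)
```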

Relevance:

100.00%

Publisher:

Abstract:

The necessity of elemental analysis techniques to solve forensic problems continues to expand as the samples collected from crime scenes grow in complexity. Laser ablation ICP-MS (LA-ICP-MS) has been shown to provide a high degree of discrimination between samples that originate from different sources. In the first part of this research, two laser ablation ICP-MS systems were compared, one using a nanosecond laser and the other a femtosecond laser source, for the forensic analysis of glass. The results showed that femtosecond LA-ICP-MS did not provide significant improvements in terms of accuracy, precision, and discrimination; however, it did provide lower detection limits. In addition, it was determined that even for femtosecond LA-ICP-MS an internal standard should be utilized to obtain accurate analytical results for glass analyses. In the second part, a method using laser-induced breakdown spectroscopy (LIBS) for the forensic analysis of glass was shown to provide excellent discrimination for a glass set consisting of 41 automotive fragments. The discrimination power was compared to two of the leading elemental analysis techniques, μXRF and LA-ICP-MS, and the results were similar; all methods generated >99% discrimination and the pairs found indistinguishable were similar. An extensive data analysis approach for LIBS glass analyses was developed to minimize Type I and II errors en route to a recommendation of 10 ratios to be used for glass comparisons. Finally, a LA-ICP-MS method for the qualitative analysis and discrimination of gel ink sources was developed and tested for a set of ink samples. In the first discrimination study, qualitative analysis was used to obtain 95.6% discrimination for a blind study consisting of 45 black gel ink samples provided by the United States Secret Service. A 0.4% false exclusion (Type I) error rate and a 3.9% false inclusion (Type II) error rate were obtained for this discrimination study.
In the second discrimination study, 99% discrimination power was achieved for a black gel ink pen set consisting of 24 self-collected samples. The two pairs found to be indistinguishable came from the same source of origin (the same manufacturer and type of pen, purchased in different locations). It was also found that gel ink from the same pen, regardless of age, was indistinguishable, as were gel ink pens (four pens) originating from the same pack.
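
A pairwise comparison criterion of the kind used in such discrimination studies can be sketched as follows. The match-interval rule and the simulated element ratios below are illustrative assumptions, not the dissertation's recommended 10 ratios:

```python
import numpy as np

rng = np.random.default_rng(4)

def indistinguishable(reps_a, reps_b, k=3):
    """Plus/minus k-standard-deviation match-interval rule (one common
    criterion in forensic glass comparisons): two samples are linked only
    if the difference of means lies within k*SD of each sample, for every
    measured ratio."""
    ma, sa = reps_a.mean(0), reps_a.std(0, ddof=1)
    mb, sb = reps_b.mean(0), reps_b.std(0, ddof=1)
    return (np.abs(ma - mb) <= k * sa).all() and (np.abs(ma - mb) <= k * sb).all()

# Hypothetical data: 3 replicate measurements of 4 element ratios per fragment.
same_source = [rng.normal([1.0, 0.5, 2.0, 0.1], 0.01, (3, 4)) for _ in range(2)]
diff_source = rng.normal([1.3, 0.4, 1.7, 0.3], 0.01, (3, 4))

print("same-source pair linked:     ", bool(indistinguishable(*same_source)))
print("different-source pair linked:", bool(indistinguishable(same_source[0], diff_source)))
```

Counting a false exclusion (Type I) each time a known same-source pair is not linked, and a false inclusion (Type II) each time a known different-source pair is linked, over all pairs in the set yields error rates of the kind reported above.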

Relevance:

100.00%

Publisher:

Abstract:

From a sociocultural perspective, individuals learn best from contextualized experiences. In preservice teacher education, contextualized experiences include authentic literacy experiences, which involve a real reader and writer and replicate real-life communication. To be prepared to teach well, preservice teachers need to gain literacy content knowledge and possess reading maturity. The purpose of this study was to examine the effect of authentic literacy experiences, as Book Buddies with Hispanic fourth graders, on preservice teachers' literacy content knowledge and reading maturity. The study was a pretest/posttest design conducted over 12 weeks. Participants, the focus of the study, were 43 elementary education majors taking the third of four required reading courses, in non-probabilistic convenience groups (n = 33 experimental, n = 10 comparison). The Survey of Preservice Teachers' Knowledge of Teaching and Technology (SPTKTT), specifically designed for preservice teachers majoring in elementary or early childhood education, and the Reading Maturity Survey (RMS) were used in this study. Preservice teachers chose either the experimental or comparison group based on the opportunity to earn extra credit points (experimental = 30 points, comparison = 15). After exchanging introductory letters, preservice teachers and Hispanic fourth graders each read four books. After reading each book, preservice teachers wrote letters to their student asking higher-order thinking questions. Preservice teachers received scanned copies of their student's unedited letters via email, which enabled them to see their student's authentic answers and writing levels. A series of analyses of covariance was used to determine whether there were significant differences in the dependent variables between the experimental and comparison groups. This quasi-experimental study tested two hypotheses.
Using the appropriate pretest scores as covariates to adjust the posttest means, the experimental and comparison groups were compared on the Literacy Content Knowledge (LCK) subcategory of the SPTKTT and on the RMS. No significant differences were found on the LCK dependent variable at the .05 level of significance, which may be due to a Type II error caused by the small sample size. Significant differences were found on the RMS at the .05 level of significance.
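
The ANCOVA step (posttest scores adjusted for pretest scores as a covariate) can be sketched as ordinary least squares with a group indicator. The scores below are hypothetical, using the same 33/10 group sizes; the 5-point true gain and score scale are assumptions for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical pretest/posttest scores with the study's 33/10 group sizes.
n_exp, n_comp = 33, 10
group = np.r_[np.ones(n_exp), np.zeros(n_comp)]          # 1 = experimental
pre = rng.normal(70, 10, n_exp + n_comp)
post = 0.8 * pre + 5.0 * group + rng.normal(0, 8, n_exp + n_comp)

# ANCOVA as OLS: post ~ intercept + pre (covariate) + group.
X = np.column_stack([np.ones_like(pre), pre, group])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
resid = post - X @ beta
df = len(post) - X.shape[1]
sigma2 = resid @ resid / df
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
t_group = beta[2] / se[2]
p_group = 2 * stats.t.sf(abs(t_group), df)               # adjusted group test

print(f"adjusted group difference: {beta[2]:.1f}, p = {p_group:.3f}")
```

With only 10 comparison participants, the standard error of the adjusted difference is large, which is exactly the Type II error risk the study notes.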

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The necessity of elemental analysis techniques to solve forensic problems continues to expand as the samples collected from crime scenes grow in complexity. Laser ablation ICP-MS (LA-ICP-MS) has been shown to provide a high degree of discrimination between samples that originate from different sources. In the first part of this research, two laser ablation ICP-MS systems were compared, one using a nanosecond laser and another a femtosecond laser source for the forensic analysis of glass. The results showed that femtosecond LA-ICP-MS did not provide significant improvements in terms of accuracy, precision and discrimination, however femtosecond LA-ICP-MS did provide lower detection limits. In addition, it was determined that even for femtosecond LA-ICP-MS an internal standard should be utilized to obtain accurate analytical results for glass analyses. In the second part, a method using laser induced breakdown spectroscopy (LIBS) for the forensic analysis of glass was shown to provide excellent discrimination for a glass set consisting of 41 automotive fragments. The discrimination power was compared to two of the leading elemental analysis techniques, µXRF and LA-ICP-MS, and the results were similar; all methods generated >99% discrimination and the pairs found indistinguishable were similar. An extensive data analysis approach for LIBS glass analyses was developed to minimize Type I and II errors en route to a recommendation of 10 ratios to be used for glass comparisons. Finally, a LA-ICP-MS method for the qualitative analysis and discrimination of gel ink sources was developed and tested for a set of ink samples. In the first discrimination study, qualitative analysis was used to obtain 95.6% discrimination for a blind study consisting of 45 black gel ink samples provided by the United States Secret Service. A 0.4% false exclusion (Type I) error rate and a 3.9% false inclusion (Type II) error rate was obtained for this discrimination study. 
In the second discrimination study, 99% discrimination power was achieved for a black gel ink pen set consisting of 24 self collected samples. The two pairs found to be indistinguishable came from the same source of origin (the same manufacturer and type of pen purchased in different locations). It was also found that gel ink from the same pen, regardless of the age, was indistinguishable as were gel ink pens (four pens) originating from the same pack.

Relevance:

100.00%

Publisher:

Abstract:

Alzheimer's disease (AD) is a neurodegenerative disorder characterized by a marked decline in cognition and memory function. Increasing evidence highlights the essential role of neuroinflammatory and immune-related molecules, including those produced at the brain barriers, in brain immune surveillance, cellular dysfunction and amyloid beta (Aβ) pathology in AD. Therefore, understanding the response at the brain barriers may unravel novel pathways of relevance for the pathophysiology of AD. Herein, we focused on the study of the choroid plexus (CP), which constitutes the blood-cerebrospinal fluid barrier, in aging and in AD. Specifically, we used the PDGFB-APPSwInd (J20) transgenic mouse model of AD, which presents early memory decline and progressive Aβ accumulation, and littermate age-matched wild-type (WT) mice, to characterize the CP transcriptome at 3, 5-6 and 11-12 months of age. The most striking observation was that the CP of J20 mice displayed an overall overexpression of type I interferon (IFN) response genes at all ages. Moreover, J20 mice presented a high expression of type II IFN genes in the CP at 3 months, which became lower than WT at 5-6 and 11-12 months. Importantly, along with a marked memory impairment and increased glial activation, J20 mice also presented a similar overexpression of type I IFN genes in the dorsal hippocampus at 3 months. Altogether, these findings provide new insights on a possible interplay between type I and II IFN responses in AD and point to IFNs as targets for modulation in cognitive decline.

Relevance:

100.00%

Publisher:

Abstract:

In this study, the mature domains of type I (CPB) and type II (CPA) cysteine proteinases (CPs) of Leishmania infantum were expressed, and their immunogenic properties were defined using sera from active and recovered cases of human visceral leishmaniasis and sera from infected dogs. Immunoblotting and ELISA analysis indicated that a freeze/thaw extract of parasite antigens showed similarly intensive recognition by sera from active human cases and from dogs, but lower recognition in recovered human individuals. The total IgG of actively infected human sera was higher than in recovered cases when rCPs were used as antigen. In contrast to dog sera, both active and recovered human cases showed higher recognition of rCPB than of rCPA. Furthermore, the asymptomatic dogs, in contrast to the symptomatic cases, exhibited specific lymphocyte proliferation to both crude antigens and rCPs.

Relevance:

100.00%

Publisher:

Abstract:

Self-dual doubly even linear binary error-correcting codes, often referred to as Type II codes, are closely related to many combinatorial structures such as 5-designs. Extremal codes are codes that have the largest possible minimum distance for a given length and dimension. The existence of an extremal (72,36,16) Type II code is still open. Previous results show that the automorphism group of a putative code C with the aforementioned properties has order 5 or an order dividing 24. In this work, we present a method, and the results of an exhaustive search, showing that such a code C cannot admit an automorphism group isomorphic to Z6. In addition, we present a so far unpublished construction of the extended Golay code due to P. Becker. We generalize the notion and provide an example of another Type II code that can be obtained in this fashion. Consequently, we relate Becker's construction to the construction of binary Type II codes from codes over GF(2^r) via the Gray map.
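
The defining Type II properties, self-duality and all codeword weights divisible by 4, are easy to verify computationally for small codes. A sketch checking them for the [8,4,4] extended Hamming code, the smallest Type II code (the putative (72,36,16) code is of course far beyond such brute-force enumeration):

```python
import itertools
import numpy as np

# Generator matrix of the [8,4,4] extended Hamming code (equivalently RM(1,3)),
# the smallest Type II code: self-dual with every codeword weight divisible by 4.
G = np.array([[1, 1, 1, 1, 1, 1, 1, 1],
              [0, 0, 0, 0, 1, 1, 1, 1],
              [0, 0, 1, 1, 0, 0, 1, 1],
              [0, 1, 0, 1, 0, 1, 0, 1]])

codewords = [np.mod(np.array(m) @ G, 2)
             for m in itertools.product([0, 1], repeat=4)]

doubly_even = all(int(c.sum()) % 4 == 0 for c in codewords)
self_dual = (G.shape[0] * 2 == G.shape[1]          # dimension is n/2 ...
             and not np.mod(G @ G.T, 2).any())     # ... and self-orthogonal
min_dist = min(int(c.sum()) for c in codewords if c.any())

print(f"doubly even: {doubly_even}, self-dual: {self_dual}, d = {min_dist}")
```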

Relevance:

100.00%

Publisher:

Abstract:

There is increasing interest in combining Phases II and III of clinical development into a single trial in which one of a small number of competing experimental treatments is ultimately selected and where a valid comparison is made between this treatment and the control treatment. Such a trial usually proceeds in stages, with the least promising experimental treatments dropped as soon as possible. In this paper we present a highly flexible design that uses adaptive group sequential methodology to monitor an order statistic. By using this approach, it is possible to design a trial which can have any number of stages, begins with any number of experimental treatments, and permits any number of these to continue at any stage. The test statistic used is based upon efficient scores, so the method can be easily applied to binary, ordinal, failure time, or normally distributed outcomes. The method is illustrated with an example, and simulations are conducted to investigate its type I error rate and power under a range of scenarios.

Relevance:

100.00%

Publisher:

Abstract:

Seamless phase II/III clinical trials in which an experimental treatment is selected at an interim analysis have been the focus of much recent research interest. Many of the methods proposed are based on the group sequential approach. This paper considers designs of this type in which the treatment selection can be based on short-term endpoint information for more patients than have primary endpoint data available. We show that in such a case, the familywise type I error rate may be inflated if previously proposed group sequential methods are used and the treatment selection rule is not specified in advance. A method is proposed to avoid this inflation by considering the treatment selection that maximises the conditional error given the data available at the interim analysis. A simulation study is reported that illustrates the type I error rate inflation and compares the power of the new approach with two other methods: a combination testing approach and a group sequential method that does not use the short-term endpoint data, both of which also strongly control the type I error rate. The new method is also illustrated through application to a study in Alzheimer's disease. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
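
The inflation mechanism can be illustrated with a small simulation: under the global null, selecting the best-performing arm at the interim and then testing it naively, ignoring the selection step, rejects far more often than the nominal level. This is a hypothetical sketch of the problem, not the paper's proposed conditional-error method:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def naive_select_then_test(k=3, n1=50, n2=50, alpha=0.05, n_sims=4000):
    """Under the global null, pick the best of k experimental arms at an
    interim look, add stage-2 data for it, and run a naive one-sided z-test
    against control.  Ignoring the selection inflates the type I error."""
    rejections = 0
    for _ in range(n_sims):
        stage1 = rng.normal(0, 1, (k, n1))           # k experimental arms
        best = np.argmax(stage1.mean(axis=1))        # interim selection
        treat = np.concatenate([stage1[best], rng.normal(0, 1, n2)])
        control = rng.normal(0, 1, n1 + n2)
        z = (treat.mean() - control.mean()) / np.sqrt(2 / (n1 + n2))
        rejections += z > stats.norm.ppf(1 - alpha)
    return rejections / n_sims

rate = naive_select_then_test()
print(f"naive type I error rate with interim selection: {rate:.3f} (nominal 0.05)")
```

Methods such as combination tests, or the conditional-error approach described above, are designed to bring this rate back under strong control.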

Relevance:

100.00%

Publisher:

Abstract:

Background: For most cytotoxic and biologic anti-cancer agents, the response rate of the drug is commonly assumed to be non-decreasing with increasing dose. However, an increasing dose does not always result in an appreciable increase in the response rate. This may especially be true at high doses for a biologic agent. Therefore, in a phase II trial the investigators may be interested in testing the anti-tumor activity of a drug at more than one (often two) doses, instead of only at the maximum tolerated dose (MTD). This way, when the lower dose appears equally effective, this dose can be recommended for further confirmatory testing in a phase III trial under potential long-term toxicity and cost considerations. A common approach to designing such a phase II trial has been to use an independent (e.g., Simon's two-stage) design at each dose, ignoring the prior knowledge about the ordering of the response probabilities at the different doses. However, failure to account for this ordering constraint in estimating the response probabilities may result in an inefficient design. In this dissertation, we developed extensions of Simon's optimal and minimax two-stage designs, including both frequentist and Bayesian methods, for two doses that assume ordered response rates between doses. Methods: Optimal and minimax two-stage designs are proposed for phase II clinical trials in settings where the true response rates at two dose levels are ordered. We borrow strength between doses using isotonic regression and control the joint and/or marginal error probabilities. Bayesian two-stage designs are also proposed under a stochastic ordering constraint. Results: Compared to Simon's designs, when controlling the power and type I error at the same levels, the proposed frequentist and Bayesian designs reduce the maximum and expected sample sizes. Most of the proposed designs also increase the probability of early termination when the true response rates are poor.
Conclusion: The proposed frequentist and Bayesian designs are superior to Simon's designs in terms of operating characteristics (expected sample size and probability of early termination when the response rates are poor). Thus, the proposed designs lead to more cost-efficient and ethical trials, and may consequently improve and expedite the drug discovery process. The proposed designs may be extended to designs of multiple-group trials and drug-combination trials.
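
The baseline being extended here, Simon's two-stage design, has operating characteristics that can be computed exactly from binomial probabilities. A sketch using one published optimal design for p0 = 0.10 vs p1 = 0.30 at alpha = beta = 0.10 (r1/n1 = 1/12, r/n = 5/35, treated here as an assumed example):

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def simon_oc(r1, n1, r, n, p):
    """Exact operating characteristics of a Simon two-stage design: stop
    (reject the drug) after stage 1 if <= r1 responses among n1 patients;
    otherwise enroll to n and declare activity if total responses exceed r.
    Returns (P(early termination), P(declare active), expected sample size)."""
    pet = sum(binom_pmf(x, n1, p) for x in range(r1 + 1))
    active = sum(binom_pmf(x, n1, p) *
                 sum(binom_pmf(y, n - n1, p)
                     for y in range(max(0, r - x + 1), n - n1 + 1))
                 for x in range(r1 + 1, n1 + 1))
    return pet, active, n1 + (1 - pet) * (n - n1)

pet0, alpha_hat, en0 = simon_oc(1, 12, 5, 35, 0.10)   # under p0: type I, E[N]
_, power, _ = simon_oc(1, 12, 5, 35, 0.30)            # under p1: power
print(f"alpha = {alpha_hat:.3f}, power = {power:.3f}, "
      f"PET(p0) = {pet0:.2f}, EN(p0) = {en0:.1f}")
```

The dissertation's designs share this two-stage skeleton but pool information across the two dose levels via the ordering constraint, shrinking the maximum and expected sample sizes relative to running this calculation independently at each dose.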