988 results for t-way testing
Abstract:
This report outlines current drug testing practices and the use of these practices for testing requirements.
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators select a threshold p value below which they reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine a critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that are often misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the observed difference, or one that is more extreme, given that the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
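As a small illustration of the two theories this abstract contrasts, the sketch below runs a two-sided one-sample z-test on hypothetical numbers (all values here are made up for the example): it reports both a Fisher-style p value and a Neyman-Pearson-style reject/accept decision at a preset Type I error level.

```python
from statistics import NormalDist

def z_test(sample_mean, mu0, sigma, n, alpha=0.05):
    """Two-sided one-sample z-test.

    Returns the test statistic, the p value (evidence against H0),
    and the reject/accept decision at Type I error level alpha."""
    std_norm = NormalDist()
    z = (sample_mean - mu0) / (sigma / n ** 0.5)
    # p value: probability, under H0, of a statistic at least this extreme
    p_value = 2 * (1 - std_norm.cdf(abs(z)))
    # hypothesis-testing decision: reject H0 iff z falls in the critical region
    z_crit = std_norm.inv_cdf(1 - alpha / 2)
    return z, p_value, abs(z) > z_crit

# hypothetical example: sample mean 10.5 from n = 100, H0 mean 10, sigma 2
z, p, reject = z_test(10.5, 10.0, 2.0, 100)
print(f"z = {z:.2f}, p = {p:.4f}, reject H0: {reject}")
```

Note that the p value and the critical-region decision are computed separately, mirroring the abstract's point that they come from distinct theories even though they often agree in practice.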
Abstract:
Histological subtyping and grading by malignancy are the cornerstones of the World Health Organization (WHO) classification of tumors of the central nervous system. They are intended to provide clinicians with guidance as to the expected course of disease and the choices of treatment to be made. Nonetheless, patients with histologically identical tumors may have very different outcomes, notably among patients with astrocytic and oligodendroglial gliomas of WHO grades II and III. In gliomas of adulthood, 3 molecular markers have undergone extensive study in recent years: 1p/19q chromosomal codeletion, O(6)-methylguanine methyltransferase (MGMT) promoter methylation, and mutations of isocitrate dehydrogenase (IDH) 1 and 2. However, the assessment of these molecular markers has so far not been implemented in clinical routine because of the lack of therapeutic implications. In fact, these markers were considered to be prognostic irrespective of whether patients were receiving radiotherapy (RT), chemotherapy, or both (1p/19q, IDH1/2), or of limited value because testing is too complex and no chemotherapy alternative to temozolomide was available (MGMT). In 2012, this situation changed: long-term follow-up of the Radiation Therapy Oncology Group 9402 and European Organisation for Research and Treatment of Cancer 26951 trials demonstrated an overall survival benefit from the addition to RT of chemotherapy with procarbazine/CCNU/vincristine, confined to patients with anaplastic oligodendroglial tumors with (vs without) 1p/19q codeletion. Furthermore, in elderly glioblastoma patients, the NOA-08 and Nordic trials of RT alone versus temozolomide alone demonstrated a profound impact of MGMT promoter methylation on outcome by therapy and thus established MGMT as a predictive biomarker in this patient population.
These recent results call for the routine implementation of 1p/19q and MGMT testing at least in subpopulations of malignant glioma patients and represent an encouraging step toward the development of personalized therapeutic approaches in neuro-oncology.
Abstract:
In this paper we report on the growth of thick films of magnetoresistive La2/3Sr1/3MnO3 by using spray and screen printing techniques on various substrates (Al2O3 and ZrO2). The growth conditions are explored in order to optimize the microstructure of the films. The films display a room-temperature magnetoresistance of 0.0012%/Oe in the 1 kOe field region. A magnetic sensor is described and tested.
Abstract:
Culverts are common means of conveying flow through the roadway system for small streams. In general, larger flows and road embankment heights entail the use of multibarrel (a.k.a. multi-box) culverts. Box culverts are generally designed to handle events with a 50-year return period, and therefore convey considerably lower flows much of the time. While there are no issues with conveying high flows, many multi-box culverts in Iowa pose a significant problem related to sedimentation. The highly erosive Iowa soils can easily cause some of the barrels to silt in soon after construction, becoming partially filled with sediment within a few years. Silting can considerably reduce the capacity of the culvert to handle larger flow events. Phase I of this Iowa Highway Research Board project (TR-545) led to an innovative solution for preventing sedimentation. The solution was comprehensively investigated through laboratory experiments and numerical modeling aimed at screening design alternatives and testing their hydraulic and sediment conveyance performance. Following this study phase, the Technical Advisory Committee suggested implementing the recommended sediment mitigation design at a field site. The site selected for implementation was a 3-box culvert crossing Willow Creek on IA Hwy 1W in Iowa City. The culvert was constructed in 1981 and the first cleanup was needed in 2000. Phase II of TR-545 entailed monitoring the site with and without the self-cleaning sedimentation structure in place (similar to the study conducted in the laboratory). The first monitoring stage (September 2010 to December 2012) was aimed at providing a baseline for the operation of the as-designed culvert. In order to support Phase II research, a cleanup of the IA Hwy 1W culvert was conducted in September 2011. Subsequently, a monitoring program was initiated to document the sedimentation produced by individual and multiple storms propagating through the culvert.
The first two years of monitoring showed the onset of sedimentation in the first spring following the cleanup. Sedimentation continued to increase throughout the monitoring program, following the depositional patterns observed in the laboratory tests and those documented in the pre-cleaning surveys. The second part of Phase II of the study was aimed at monitoring the constructed self-cleaning structure. Since its construction in December 2012, the culvert site has been continuously monitored through systematic observations. The evidence garnered in this phase of the study demonstrates the good performance of the self-cleaning structure in mitigating sediment deposition at culverts. Besides their beneficial role in sediment mitigation, the designed self-cleaning structures maintain a clean and clear area upstream of the culvert and keep a healthy flow through the central barrel, offering hydraulic conditions and aquatic habitat similar to those in the undisturbed stream reaches upstream and downstream of the culvert. It can be concluded that the proposed self-cleaning structural solution “streamlines” the area upstream of the culvert in a way that secures the safety of the culvert structure at high flows while producing much less disturbance to stream behavior than current construction approaches.
Abstract:
OBJECTIVES: To obtain information about the prevalence of, reasons for, and adequacy of HIV testing in the general population in Switzerland in 1992. DESIGN: Telephone survey (n = 2800). RESULTS: Some 47% of the sample had undergone an HIV test, performed through blood donation (24%), voluntary testing (17%), or both (6%). Of the sample, 46% considered themselves well or very well informed about the HIV test. Respondents reported unsystematic pre-test screening by doctors for the main HIV risks. People who had been in situations of potential exposure to risk were more likely to have had the test than others. Overall, 85% of those HIV-tested had a relevant, generally risk-related reason for having it performed. CONCLUSIONS: HIV testing is widespread in Switzerland. Testing is mostly performed for relevant reasons. Pre-test counselling is poor, and an opportunity for prevention is thus lost.
Abstract:
When researchers introduce a new test they have to demonstrate that it is valid, using unbiased designs and suitable statistical procedures. In this article we use Monte Carlo analyses to highlight how incorrect statistical procedures (i.e., stepwise regression, extreme scores analyses) or ignoring regression assumptions (e.g., heteroscedasticity) contribute to wrong validity estimates. Beyond these demonstrations, and as an example, we re-examined the results reported by Warwick, Nettelbeck, and Ward (2010) concerning the validity of the Ability Emotional Intelligence Measure (AEIM). Warwick et al. used the wrong statistical procedures to conclude that the AEIM was incrementally valid beyond intelligence and personality traits in predicting various outcomes. In our re-analysis, we found that the reliability-corrected multiple correlation of their measures with personality and intelligence was up to .69. Using robust statistical procedures and appropriate controls, we also found that the AEIM did not predict incremental variance in GPA, stress, loneliness, or well-being, demonstrating the importance of testing validity rather than merely looking for it.
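As a hypothetical illustration of the stepwise-selection pitfall the abstract describes (this is not the authors' re-analysis; all data below are simulated noise), the sketch shows how picking the single strongest of many pure-noise predictors — as one forward stepwise step does — manufactures apparent validity by chance alone.

```python
import random

def pearson_r(x, y):
    """Sample Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def best_noise_r2(n_obs=50, n_predictors=20, seed=0):
    """Select the single strongest of many pure-noise predictors, as a
    forward stepwise step would, and return its R^2 with the criterion.
    With enough candidates, chance alone yields a seemingly 'valid' one."""
    rng = random.Random(seed)
    y = [rng.gauss(0, 1) for _ in range(n_obs)]  # criterion: pure noise
    r2s = []
    for _ in range(n_predictors):
        x = [rng.gauss(0, 1) for _ in range(n_obs)]  # predictor: pure noise
        r2s.append(pearson_r(x, y) ** 2)
    return max(r2s)

print(f"best R^2 among 20 noise predictors: {best_noise_r2():.3f}")
```

Because the maximum of many chance correlations is systematically larger than any single one, selection-then-testing on the same sample inflates validity estimates, which is one reason the abstract calls stepwise regression an incorrect procedure here.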
Abstract:
The present study explores the statistical properties of a randomization test based on the random assignment of the intervention point in a two-phase (AB) single-case design. The focus is on randomization distributions constructed from the values of the test statistic for all possible random assignments and used to obtain p-values. The shape of those distributions is investigated for each specific data division defined by the moment at which the intervention is introduced. Another aim of the study was to test the detection of nonexistent effects (i.e., the production of false alarms) in autocorrelated data series, in which the assumption of exchangeability between observations may be untenable. In this way, it was possible to compare nominal and empirical Type I error rates in order to obtain evidence on the statistical validity of the randomization test for each individual data division. The results suggest that when either of the two phases has considerably fewer measurement times, Type I errors may be too probable and, hence, the decision-making process carried out by applied researchers may be jeopardized.
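A minimal sketch of such an AB randomization test, assuming the test statistic is the absolute difference between phase means (a common choice; the abstract does not fix the statistic or the minimum phase length used in the paper):

```python
def ab_randomization_test(series, actual_start, min_phase=3):
    """Randomization test for a two-phase (AB) single-case design.

    The randomization distribution is built from every admissible
    intervention point (each phase keeping at least `min_phase` points),
    and the p value is the proportion of assignments whose statistic is
    at least as extreme as the observed one."""
    def stat(start):
        a, b = series[:start], series[start:]
        return abs(sum(b) / len(b) - sum(a) / len(a))

    observed = stat(actual_start)
    starts = range(min_phase, len(series) - min_phase + 1)
    dist = [stat(s) for s in starts]
    p_value = sum(d >= observed for d in dist) / len(dist)
    return observed, p_value

# hypothetical series with a level shift at the actual intervention point
obs, p = ab_randomization_test([2, 3, 2, 3, 2, 8, 9, 8, 9, 8], actual_start=5)
print(f"observed statistic = {obs:.1f}, p = {p:.2f}")
```

With only five admissible intervention points, the smallest attainable p value is 1/5 = 0.20, which illustrates the abstract's concern: short or unbalanced phases leave very few data divisions, so the test's error rates can depart from their nominal levels.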
Abstract:
The present work focuses on the skew-symmetry index as a measure of social reciprocity. This index is based on the correspondence between the amount of behaviour that each individual addresses to its partners and what it receives from them in return. Although the skew-symmetry index enables researchers to describe social groups, statistical inferential tests are required. The main aim of the present study is to propose an overall statistical technique for testing symmetry in experimental conditions, calculating the skew-symmetry statistic (Φ) at group level. Sampling distributions for the skew-symmetry statistic have been estimated by means of a Monte Carlo simulation in order to allow researchers to make statistical decisions. Furthermore, this study will allow researchers to choose the optimal experimental conditions for carrying out their research, as the power of the statistical test has been estimated. This statistical test could be used in experimental social psychology studies in which researchers can control the group size and the number of interactions within dyads.
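A hedged sketch of this approach, assuming the skew-symmetry statistic is the share of the sociomatrix sum of squares carried by its skew-symmetric part K = (X − Xᵀ)/2 — one common formulation of such an index; the paper's exact Φ and its null model may differ:

```python
import random

def skew_symmetry_phi(matrix):
    """Skew-symmetry statistic: proportion of the off-diagonal sum of
    squares of the sociomatrix X carried by its skew-symmetric part
    K = (X - X') / 2. Ranges from 0 (full reciprocity) to 0.5."""
    n = len(matrix)
    ss_total = sum(matrix[i][j] ** 2
                   for i in range(n) for j in range(n) if i != j)
    ss_skew = sum(((matrix[i][j] - matrix[j][i]) / 2) ** 2
                  for i in range(n) for j in range(n) if i != j)
    return ss_skew / ss_total if ss_total else 0.0

def monte_carlo_p(matrix, n_interactions, n_sims=2000, seed=1):
    """Estimate the sampling distribution of phi under random dyadic
    interactions (each directed pair equally likely) and return the
    observed phi and the proportion of simulated values >= it."""
    rng = random.Random(seed)
    n = len(matrix)
    observed = skew_symmetry_phi(matrix)
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    count = 0
    for _ in range(n_sims):
        sim = [[0] * n for _ in range(n)]
        for _ in range(n_interactions):
            i, j = rng.choice(pairs)
            sim[i][j] += 1
        if skew_symmetry_phi(sim) >= observed:
            count += 1
    return observed, count / n_sims
```

Fixing the group size (the matrix dimension) and the number of interactions, as the abstract suggests, determines the Monte Carlo null distribution against which the observed statistic is compared.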
Abstract:
This research evaluated the concrete strength of two mixes that were used in the Polk County project NHS-500-1(3)--10-77 and were developed to meet a contract requirement of 900 psi third-point 28-day flexural strength. Two concrete mixes, the Proposed Mix and the Enhanced Mix, were tested for strength. Based on the experimental results, it was found that the addition of 50 lb of cementitious materials did not significantly increase concrete strength. The requirement of 900 psi 28-day third-point flexural strength (MOR-TPL) was not achieved by this addition of cementitious materials.
Abstract:
In the first part of the study, nine estimators of the first-order autoregressive parameter are reviewed and a new estimator is proposed. The relationships and discrepancies between the estimators are discussed in order to achieve a clear differentiation. In the second part of the study, the precision of autocorrelation estimation is examined. The performance of the ten lag-one autocorrelation estimators is compared in terms of mean square error (combining bias and variance) using data series generated by Monte Carlo simulation. The results show that no single estimator is optimal for all conditions, suggesting that the estimator ought to be chosen according to the sample size and to the information available on the possible direction of the serial dependence. Additionally, the probability of labelling an actually existing autocorrelation as statistically significant is explored using Monte Carlo sampling. The power estimates obtained are quite similar across the tests associated with the different estimators. These estimates highlight the small probability of detecting autocorrelation in series with fewer than 20 measurement times.
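A hedged sketch of the kind of Monte Carlo comparison described, using the conventional lag-one autocorrelation estimator (one of the estimators typically reviewed in this literature; the paper's ten estimators and simulation conditions are not reproduced here) on AR(1) series with an assumed parameter of 0.3:

```python
import random

def lag_one_autocorrelation(series):
    """Conventional lag-one autocorrelation estimator r1: lag-one
    cross-products about the sample mean over the total sum of squares."""
    n = len(series)
    mean = sum(series) / n
    num = sum((series[t] - mean) * (series[t + 1] - mean) for t in range(n - 1))
    den = sum((x - mean) ** 2 for x in series)
    return num / den

def ar1_series(phi, n, rng):
    """Generate an AR(1) series x_t = phi * x_{t-1} + e_t with N(0,1) noise."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0, 1)
        out.append(x)
    return out

# Monte Carlo check of small-sample behavior: with n = 10 and phi = 0.3,
# the average r1 falls well below the true value, illustrating why the
# power to detect autocorrelation is so low in short series.
rng = random.Random(42)
estimates = [lag_one_autocorrelation(ar1_series(0.3, 10, rng))
             for _ in range(5000)]
avg = sum(estimates) / len(estimates)
print(f"average r1 over 5000 short series: {avg:.3f}")
```

Repeating this loop for each candidate estimator and tabulating bias and variance (and hence mean square error) across sample sizes reproduces the structure of the comparison the abstract describes.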