831 results for progeny testing
Abstract:
This report outlines current drug testing practices and how these practices are used to meet testing requirements.
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that are often misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the difference observed, or one that is more extreme, given that the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
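The abstract's point that many significant results can be false positives can be illustrated with a short Monte Carlo sketch. All parameters below (share of true effects, effect size, sample size) are hypothetical choices for illustration, not values from the report:

```python
import random, statistics, math

random.seed(0)
n, n_sims = 100, 4000
prop_true = 0.1   # assumed: only 10% of tested hypotheses reflect a real effect
effect = 0.5      # assumed standardized effect size when an effect is real
z_crit = 1.96     # two-sided alpha = .05, normal approximation

false_pos = true_pos = 0
for _ in range(n_sims):
    real = random.random() < prop_true
    mu = effect if real else 0.0
    x = [random.gauss(mu, 1.0) for _ in range(n)]
    z = statistics.fmean(x) / (statistics.stdev(x) / math.sqrt(n))
    if abs(z) > z_crit:          # "statistically significant"
        if real:
            true_pos += 1
        else:
            false_pos += 1

# Share of significant results that are false positives: well above alpha,
# because most tested hypotheses were null to begin with.
share = false_pos / (false_pos + true_pos)
print(round(share, 2))
```

Even with the test's Type I error fixed at 5%, roughly a third of the "significant" results in this setup are false: the p value threshold controls the error rate per null hypothesis, not the proportion of false findings among significant ones.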
Abstract:
This newsletter from the Department of Public Health covers perinatal health care and statistics.
Abstract:
Histological subtyping and grading by malignancy are the cornerstones of the World Health Organization (WHO) classification of tumors of the central nervous system. They are intended to provide clinicians with guidance on the expected course of disease and on treatment choices. Nonetheless, patients with histologically identical tumors may have very different outcomes, notably among those with astrocytic and oligodendroglial gliomas of WHO grades II and III. In gliomas of adulthood, 3 molecular markers have undergone extensive study in recent years: 1p/19q chromosomal codeletion, O(6)-methylguanine methyltransferase (MGMT) promoter methylation, and mutations of isocitrate dehydrogenase (IDH) 1 and 2. However, the assessment of these molecular markers has so far not been implemented in routine clinical practice because of the lack of therapeutic implications. In fact, these markers were considered to be prognostic irrespective of whether patients were receiving radiotherapy (RT), chemotherapy, or both (1p/19q, IDH1/2), or of limited value because testing is too complex and no chemotherapy alternative to temozolomide was available (MGMT). In 2012, this situation changed: long-term follow-up of the Radiation Therapy Oncology Group 9402 and European Organisation for Research and Treatment of Cancer 26951 trials demonstrated an overall survival benefit from the addition to RT of chemotherapy with procarbazine/CCNU/vincristine confined to patients with anaplastic oligodendroglial tumors with (vs without) 1p/19q codeletion. Furthermore, in elderly glioblastoma patients, the NOA-08 and the Nordic trial of RT alone versus temozolomide alone demonstrated a profound impact of MGMT promoter methylation on outcome by therapy and thus established MGMT as a predictive biomarker in this patient population.
These recent results call for the routine implementation of 1p/19q and MGMT testing at least in subpopulations of malignant glioma patients and represent an encouraging step toward the development of personalized therapeutic approaches in neuro-oncology.
Abstract:
In this paper we report on the growth of thick films of magnetoresistive La2/3Sr1/3MnO3 by using spray and screen printing techniques on various substrates (Al2O3 and ZrO2). The growth conditions are explored in order to optimize the microstructure of the films. The films display a room-temperature magnetoresistance of 0.0012%/Oe in the 1 kOe field region. A magnetic sensor is described and tested.
Abstract:
OBJECTIVES: To obtain information about the prevalence of, reasons for, and adequacy of HIV testing in the general population in Switzerland in 1992. DESIGN: Telephone survey (n = 2800). RESULTS: Some 47% of the sample had undergone at least one HIV test, through blood donation (24%), voluntary testing (17%), or both (6%). Of the sample, 46% considered themselves well or very well informed about the HIV test. Patients reported unsystematic pre-test screening by doctors for the main HIV risks. People who had been in situations of potential exposure to risk were more likely to have had the test than others. Overall, 85% of those HIV-tested had a relevant, generally risk-related reason for having it performed. CONCLUSIONS: HIV testing is widespread in Switzerland. Testing is mostly performed for relevant reasons. Pre-test counselling is poor, and an opportunity for prevention is thus lost.
Abstract:
When researchers introduce a new test they have to demonstrate that it is valid, using unbiased designs and suitable statistical procedures. In this article we use Monte Carlo analyses to highlight how incorrect statistical procedures (i.e., stepwise regression, extreme scores analyses) or ignoring regression assumptions (e.g., heteroscedasticity) contribute to wrong validity estimates. Beyond these demonstrations, and as an example, we re-examined the results reported by Warwick, Nettelbeck, and Ward (2010) concerning the validity of the Ability Emotional Intelligence Measure (AEIM). Warwick et al. used the wrong statistical procedures to conclude that the AEIM was incrementally valid beyond intelligence and personality traits in predicting various outcomes. In our re-analysis, we found that the reliability-corrected multiple correlation of their measures with personality and intelligence was up to .69. Using robust statistical procedures and appropriate controls, we also found that the AEIM did not predict incremental variance in GPA, stress, loneliness, or well-being, demonstrating the importance of testing validity rather than merely looking for it.
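A minimal sketch of the inflation mechanism the abstract attributes to stepwise-style selection: when the "best" predictor is picked from several pure-noise candidates, its in-sample correlation clears a nominal .05 cutoff far more often than 5% of the time. The predictor count and sample size below are illustrative assumptions:

```python
import random

def corr(a, b):
    """Pearson correlation of two equal-length samples."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

random.seed(1)
n, k, sims = 50, 10, 1000
r_crit = 0.279  # approximate two-sided .05 cutoff for r when n = 50

hits = 0
for _ in range(sims):
    y = [random.gauss(0, 1) for _ in range(n)]
    # select the best of k predictors that are pure noise by construction
    best = max(abs(corr([random.gauss(0, 1) for _ in range(n)], y))
               for _ in range(k))
    hits += best > r_crit

rate = hits / sims  # roughly 1 - 0.95**k, far above the nominal .05
print(round(rate, 2))
```

Selecting before testing turns a 5% error rate into roughly a 40% chance of a spuriously "valid" predictor, which is the kind of bias a re-analysis with proper controls can expose.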
Abstract:
The present work focuses on the skew-symmetry index as a measure of social reciprocity. This index is based on the correspondence between the amount of behaviour that each individual addresses to its partners and what it receives from them in return. Although the skew-symmetry index enables researchers to describe social groups, statistical inferential tests are required. The main aim of the present study is to propose an overall statistical technique for testing symmetry in experimental conditions, calculating the skew-symmetry statistic (Φ) at group level. Sampling distributions for the skew-symmetry statistic have been estimated by means of a Monte Carlo simulation in order to allow researchers to make statistical decisions. Furthermore, this study will allow researchers to choose the optimal experimental conditions for carrying out their research, as the power of the statistical test has been estimated. This statistical test could be used in experimental social psychology studies in which researchers may control the group size and the number of interactions within dyads.
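One common formulation of the skew-symmetry statistic decomposes the group's sociomatrix X into symmetric and skew-symmetric parts, K = (X − Xᵀ)/2, and takes Φ = tr(KᵀK)/tr(XᵀX), which is 0 under complete reciprocity and 0.5 when all interactions are unidirectional. The sketch below uses a hypothetical 3-individual matrix, and the Monte Carlo reference distribution (randomly splitting each dyad's total between the two directions) is an assumed null model, not necessarily the one used in the study:

```python
import random

def phi(X):
    """Skew-symmetry statistic: tr(K'K) / tr(X'X) with K = (X - X')/2.
    0 = fully reciprocal group, 0.5 = fully unidirectional."""
    n = len(X)
    ss_k = sum(((X[i][j] - X[j][i]) / 2) ** 2
               for i in range(n) for j in range(n))
    ss_x = sum(X[i][j] ** 2 for i in range(n) for j in range(n))
    return ss_k / ss_x

# hypothetical sociomatrix: X[i][j] = interactions addressed by i to j
X = [[0, 8, 2],
     [1, 0, 5],
     [6, 4, 0]]
obs = phi(X)

# Monte Carlo reference: reallocate each dyad's total at random (p = .5)
random.seed(0)
n = len(X)
sims = []
for _ in range(2000):
    R = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            total = X[i][j] + X[j][i]
            a = sum(random.random() < 0.5 for _ in range(total))
            R[i][j], R[j][i] = a, total - a
    sims.append(phi(R))

# one-sided p value: how often random allocation is at least as skewed
p_value = sum(s >= obs for s in sims) / len(sims)
print(round(obs, 3), round(p_value, 3))
```

Because the reference distribution is generated by simulation rather than derived analytically, the same machinery also yields power estimates: repeat the procedure with data generated under a chosen degree of asymmetry and count rejections.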
Abstract:
This research evaluated the concrete strength of two mixes which were used in the Polk County project NHS-500-1(3)--10-77 and were developed to meet a contract requirement of 900 psi third-point 28-day flexural strength. Two concrete mixes, the Proposed Mix and the Enhanced Mix, were tested for strength. Based on the experimental results, it was found that the addition of 50 lb of cementitious materials did not significantly increase concrete strength, and this addition did not achieve the requirement of 900 psi 28-day third-point flexural strength (MOR-TPL).
Abstract:
In the first part of the study, nine estimators of the first-order autoregressive parameter are reviewed and a new estimator is proposed. The relationships and discrepancies between the estimators are discussed in order to achieve a clear differentiation. In the second part of the study, the precision of the autocorrelation estimates is studied. The performance of the ten lag-one autocorrelation estimators is compared in terms of Mean Square Error (combining bias and variance) using data series generated by Monte Carlo simulation. The results show that there is no single optimal estimator for all conditions, suggesting that the estimator ought to be chosen according to sample size and to the information available on the possible direction of the serial dependence. Additionally, the probability of labelling an actually existing autocorrelation as statistically significant is explored using Monte Carlo sampling. The power estimates obtained are quite similar among the tests associated with the different estimators. These estimates show that the probability of detecting autocorrelation in series with fewer than 20 measurement times is small.
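The small-sample difficulty the abstract describes can be seen with the conventional lag-one estimator alone: a short Monte Carlo run shows a marked negative bias when the series has only 10 measurement times. The AR(1) parameter and series length below are illustrative choices, not conditions from the study:

```python
import random

def lag1_autocorr(x):
    """Conventional lag-one estimator: lag-1 cross-products over the
    total sum of squares about the sample mean."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t + 1] - m) for t in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def ar1_series(n, rho, rng):
    """Generate an AR(1) series x_t = rho * x_{t-1} + e_t."""
    x, prev = [], rng.gauss(0, 1)
    for _ in range(n):
        prev = rho * prev + rng.gauss(0, 1)
        x.append(prev)
    return x

rng = random.Random(0)
rho, n, sims = 0.3, 10, 5000
est = [lag1_autocorr(ar1_series(n, rho, rng)) for _ in range(sims)]
mean_est = sum(est) / sims

# bias = E[r1] - rho: markedly negative for short series, which is one
# reason detecting autocorrelation with fewer than 20 points is hard
print(round(mean_est - rho, 2))
```

With the true parameter at 0.3, the estimator's average lands well below it at this series length, so a significance test based on it starts from a handicap, consistent with the low power the study reports for short series.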
Abstract:
The objective of this research project was to service load test a representative sample of old reinforced concrete bridges (some of them historic and some of them scheduled for demolition), with the results being used to create a database so the performance of similar bridges could be predicted. The types of bridges tested included two reinforced concrete open spandrel arches, two reinforced concrete filled spandrel arches, one reinforced concrete slab bridge, and one two-span reinforced concrete stringer bridge. The testing of each bridge consisted of applying a static load at various locations on the bridges and monitoring strains and deflections in critical members. The load was applied by means of a tandem axle dump truck with varying magnitudes of load. At each load increment, the truck was stopped at predetermined transverse and longitudinal locations and strain and deflection data were obtained. The strain data obtained were then evaluated in relation to the strain values predicted by traditional analytical procedures, and a carrying capacity of the bridges was determined based on the experimental data. The response of a majority of the bridges tested was considerably lower than that predicted by analysis. Thus, the safe load carrying capacities of the bridges were greater than those predicted by the analytical models, and in a few cases, the load carrying capacities were found to be three or four times greater than calculated values. However, the test results of one bridge were lower than those predicted by analysis and thus resulted in the analytical rating being reduced. The results of the testing verified that traditional analytical methods, in most instances, are conservative and that the safe load carrying capacities of a majority of the reinforced concrete bridges are considerably greater than what one would determine on the basis of analysis alone.
In extrapolating the results obtained from diagnostic load tests to levels greater than those placed on the bridge during the load test, care must be taken to ensure safe bridge performance at the higher load levels. To extrapolate the load test results from the bridges tested in this investigation, the method developed by Lichtenstein in NCHRP Project 12-28(13)A was used.