938 results for Test methods
Abstract:
Since the introduction of the rope-pump in Nicaragua in the 1990s, the dependence on wells in rural areas has grown steadily. However, little or no attention is paid to rope-pump well performance after installation. Due to financial constraints, groundwater resource monitoring using conventional testing methods is too costly and out of reach of rural municipalities. Nonetheless, there is widespread agreement that without a way to quantify changes in well performance over time, prioritizing regulatory actions is impossible. A manual pumping test method is presented which, at a fraction of the cost of a conventional pumping test, measures the specific capacity of rope-pump wells. The method requires only slight modifications to the well and reasonable limitations on well usage prior to testing. The pumping test was performed a minimum of 33 times in three wells over an eight-month period in a small rural community in Chontales, Nicaragua. The data were used to measure seasonal variations in specific capacity for three rope-pump wells completed in fractured crystalline basalt. Data collected from the tests were analyzed using four methods (equilibrium approximation, time-drawdown during pumping, time-drawdown during recovery, and time-drawdown during late-time recovery) to determine the best method of analysis. One conventional pumping test was performed to aid in evaluating the manual method. The equilibrium approximation can be performed in the field with only a calculator and is the most technologically appropriate method for analyzing the data. Results from this method overestimate specific capacity by 41% when compared with results from the conventional pumping test. The other analysis methods, requiring more sophisticated tools and higher-level interpretation skills, yielded results that agree to within 14% (pumping phase), 31% (recovery phase), and 133% (late-time recovery) of the conventional test productivity value. The wide variability in accuracy results principally from difficulties in achieving an equilibrated pumping level and from casing storage effects in the pumping/recovery data. Decreases in well productivity resulting from naturally occurring seasonal water-table drops varied from insignificant in two wells to 80% in the third. Despite practical and theoretical limitations of the method, the collected data may be useful for municipal institutions to track changes in well behavior, eventually developing a database for planning future groundwater development projects. Furthermore, the data could improve well users' ability to self-regulate well usage without expensive aquifer characterization.
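A minimal sketch (in Python) of the equilibrium-approximation arithmetic the abstract describes as computable in the field with a calculator: specific capacity as pumping rate divided by drawdown. The rate and water levels below are hypothetical, not values from the study.

```python
# Sketch of the equilibrium approximation: Sc = Q / s, where s is the
# drawdown from the static water level to the equilibrated pumping level.
# All numbers are hypothetical illustrations.

def specific_capacity(pump_rate_lpm: float,
                      static_level_m: float,
                      equilibrated_level_m: float) -> float:
    """Specific capacity in L/min per metre of drawdown."""
    drawdown_m = equilibrated_level_m - static_level_m
    if drawdown_m <= 0:
        raise ValueError("pumping level must be below the static level")
    return pump_rate_lpm / drawdown_m

# Example: pumping 20 L/min draws the level from 6.0 m down to 8.5 m.
print(specific_capacity(20.0, 6.0, 8.5))  # 8.0 L/min per metre of drawdown
```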
Abstract:
State standardized testing has always been a tool for measuring a school's performance and for evaluating school curriculum. However, with the school-of-choice legislation in 1992, the MEAP test became a measuring stick by which to grade schools and a major tool in attracting school-of-choice students. Now, declining enrollment and a state budget struggling to stay out of the red have made school-of-choice students more important than ever before; MEAP scores have become the deciding factor in some cases. For the past five years, the Hancock Middle School staff has been working hard to improve their students' MEAP scores in accordance with President Bush's "No Child Left Behind" legislation. In 2005, the school was awarded a grant that enabled staff to work for two years on writing and working toward school goals based on the improvement of MEAP scores in writing and math. As part of this effort, the school purchased an internet-based program designed to give students practice on state content standards. This study examined the results of efforts by Hancock Middle School to improve student scores in mathematics on the MEAP test through the use of an online program called "Study Island." In the past, the program was used to remediate students and as an end-of-year review, with an incentive for students who completed a certain number of objectives; it had also been used as a review before upcoming MEAP testing in the fall. All of these methods may have helped a few students perform at a higher level on their standardized test, but the question remained whether sustained use of the program in a classroom setting would increase understanding of concepts and MEAP performance for most students. This study addressed that question. Student MEAP scores and Study Island data from experimental and comparison groups were compared to understand how sustained classroom use of Study Island would affect student MEAP scores. In addition, these data were analyzed to determine whether Study Island results provide a good indicator of students' MEAP performance. The results suggest that there were limited benefits related to sustained use of Study Island, and they give some indications about the effectiveness of the mathematics curriculum at Hancock Middle School. These results and their implications for instruction are discussed.
Abstract:
A significant cost for foundations is the design and installation of piles when they are required due to poor ground conditions. Not only is it important that piles be designed properly, but also that the installation equipment and total cost be evaluated. To assist in the evaluation of piles, a number of methods have been developed. In this research, three of these methods were investigated: those developed by the Federal Highway Administration (FHWA), the US Army Corps of Engineers, and the American Petroleum Institute (API). The results from these methods were entered into the program GRLWEAP™ to assess pile drivability and to provide a standard basis for comparing the three methods. An additional element of this research was to develop EXCEL spreadsheets implementing the three methods. Currently, the Army Corps and API methods have no publicly available software and must be applied manually, which requires that data be read off figures and tables, a process that can introduce error into the prediction of pile capacities. Following their development, the EXCEL spreadsheets were validated against both manual calculations and existing data sets to ensure that their output is correct. To evaluate the three pile capacity methods, data were used from four project sites in North America. The data included site geotechnical data along with field-determined pile capacities. To achieve a standard comparison of the data, the pile capacities and geotechnical data from the three methods were entered into GRLWEAP™. The sites consisted of both cohesive and cohesionless soils: one site was primarily cohesive, one was primarily cohesionless, and the other two consisted of inter-bedded cohesive and cohesionless soils. Based on this limited data set, the results indicated that the US Army Corps of Engineers method compared most closely with the field test data, followed by the API method to a lesser degree. The DRIVEN program (the FHWA method's software implementation) compared favorably in cohesive soils but overpredicted capacity in cohesionless material.
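A hedged sketch of one plausible way to compare design methods against field-determined capacities: the predicted/measured ratio per site, where a ratio near 1.0 indicates close agreement. The capacities below are illustrative, not the project-site data, and this simple ratio metric is an assumption rather than the study's GRLWEAP™-based comparison procedure.

```python
# Illustrative comparison of predicted pile capacities (per design method)
# against hypothetical field-measured values, using the mean
# predicted/measured ratio as a rough agreement metric.

field_capacity_kn = {"site_A": 1800.0, "site_B": 2400.0}

predicted_kn = {
    "FHWA":     {"site_A": 2100.0, "site_B": 2900.0},
    "US_Corps": {"site_A": 1850.0, "site_B": 2450.0},
    "API":      {"site_A": 1700.0, "site_B": 2600.0},
}

for method, predictions in predicted_kn.items():
    ratios = [predictions[s] / field_capacity_kn[s] for s in field_capacity_kn]
    mean_ratio = sum(ratios) / len(ratios)
    print(f"{method}: mean predicted/measured ratio = {mean_ratio:.2f}")
```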
Abstract:
Complex human diseases are a major challenge for biological research. The goal of my research is to develop effective biostatistical methods in order to create more opportunities for the prevention and cure of human diseases. This dissertation proposes statistical methods that can be adapted to sequencing data in family-based designs and that account for joint effects as well as gene-gene and gene-environment interactions in GWA studies. The framework includes statistical methods for rare and common variant association studies. Although next-generation DNA sequencing technologies have made rare variant association studies feasible, the development of powerful statistical methods for such studies is still underway. Chapter 2 demonstrates two adaptive weighting methods for rare variant association studies based on family data for quantitative traits. The results show that both proposed methods are robust to population stratification, robust to the direction and magnitude of the effects of causal variants, and more powerful than methods using the weights suggested by Madsen and Browning [2009]. In Chapter 3, I extend the previously proposed test for Testing the effect of an Optimally Weighted combination of variants (TOW) [Sha et al., 2012] for unrelated individuals to TOW-F, a TOW for family-based designs. Simulation results show that TOW-F can control for population stratification in a wide range of population structures, including spatially structured populations, is robust to the direction of effect of causal variants, and is relatively robust to the percentage of neutral variants. For GWA studies, this dissertation presents a two-locus joint effect analysis and a two-stage approach accounting for gene-gene and gene-environment interactions. Chapter 4 proposes a novel two-stage approach that shows promise for identifying joint effects, especially under monotonic models. The proposed approach outperforms a single-marker method and a regular two-stage analysis based on the two-locus genotypic test. In Chapter 5, I propose a gene-based two-stage approach to identify gene-gene and gene-environment interactions in GWA studies, which can include rare variants. The two-stage approach is applied to the GAW17 dataset to identify the interaction between the KDR gene and smoking status.
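For readers unfamiliar with the Madsen and Browning [2009] weighting mentioned above, a minimal sketch of the generic weighted-sum idea: rare variants are up-weighted by the inverse of the estimated standard deviation of their allele frequency. The genotype data are illustrative, and this is the baseline weighting scheme, not the dissertation's adaptive-weighting methods.

```python
# Sketch of Madsen-Browning-style weights: w_j = 1 / sqrt(n * q_j * (1 - q_j)),
# with q_j a smoothed minor-allele frequency estimated in unaffected individuals.
import math

def madsen_browning_weights(genotypes_controls):
    """genotypes_controls: per-variant lists of 0/1/2 minor-allele counts
    in control individuals. Returns one weight per variant."""
    weights = []
    for g in genotypes_controls:
        n = len(g)
        q = (sum(g) + 1) / (2 * n + 2)  # smoothed minor-allele frequency
        weights.append(1.0 / math.sqrt(n * q * (1 - q)))
    return weights

def weighted_genetic_score(person_genotypes, weights):
    """Per-individual burden score: sum of weighted minor-allele counts."""
    return sum(w * g for w, g in zip(weights, person_genotypes))

controls = [[0, 0, 1, 0], [0, 1, 0, 0]]   # two variants, four controls
w = madsen_browning_weights(controls)
print(weighted_genetic_score([1, 0], w))   # score for one individual
```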
Abstract:
With the development of genotyping and next-generation sequencing technologies, multi-marker testing in genome-wide association studies and rare variant association studies has become an active research area in statistical genetics. This dissertation contains three methodologies for association studies that exploit different features of genetic data and demonstrates how to use those methods to test genetic association hypotheses. The methods can be categorized into three scenarios: 1) multi-marker testing for strong linkage disequilibrium regions, 2) multi-marker testing for family-based association studies, and 3) multi-marker testing for rare variant association studies. I also discuss the advantages of these methods and demonstrate their power through simulation studies and applications to real genetic data.
Abstract:
BACKGROUND: To determine the value of the distance-doubling visual acuity test in the diagnosis of nonorganic visual loss in a comparative observational case series. METHODS: Twenty-one consecutive patients with nonorganic visual acuity loss and 21 subjects with organic visual loss as controls were included. Best-corrected visual acuity was tested at the normal distance of 5 meters using Landolt Cs. Each patient was then repositioned and best-corrected visual acuity was tested with the same optotypes at double the distance via a mirror. RESULTS: Nonorganic visual acuity loss was identified in 21 of 21 patients. The sensitivity and specificity of the distance-doubling visual acuity test in functional visual loss were 100% (CI: 83%-100%) and 100% (CI: 82%-100%), respectively. CONCLUSION: The distance-doubling visual acuity test is widely used to detect nonorganic visual loss. Our results show that this test has high sensitivity and specificity for detecting nonorganic visual impairment.
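The 100% point estimates with lower confidence bounds around 83% are consistent with an exact interval for 21 successes out of 21; a sketch assuming a Clopper-Pearson interval (the abstract does not state which CI method was used):

```python
# Exact (Clopper-Pearson) confidence interval for a binomial proportion.
from scipy.stats import beta

def clopper_pearson(successes: int, n: int, alpha: float = 0.05):
    lower = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lower, upper

print(clopper_pearson(21, 21))  # roughly (0.84, 1.0), matching the ~83%-100% reported
```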
Abstract:
OBJECTIVES: The STAndards for Reporting studies of Diagnostic accuracy (STARD) for investigators and editors and the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) for reviewers and readers offer guidelines for the quality and reporting of test accuracy studies. These guidelines address and propose some solutions to two major threats to validity: spectrum bias and test review bias. STUDY DESIGN AND SETTING: Using a clinical example, we demonstrate that these solutions fail and propose an alternative solution that concomitantly addresses both sources of bias. We also derive formulas that prove the generality of our arguments. RESULTS: A logical extension of our ideas is to extend STARD item 23 by adding a requirement for multivariable statistical adjustment using information collected in QUADAS items 1, 2, and 12 and STARD items 3-5, 11, 15, and 18. CONCLUSION: We recommend reporting not only variation of diagnostic accuracy across subgroups (STARD item 23) but also the effects of the multivariable adjustments on test performance. We also suggest that the QUADAS be supplemented by an item addressing the appropriateness of statistical methods, in particular whether multivariable adjustments have been included in the analysis.
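A hedged sketch of the kind of multivariable adjustment the authors propose reporting: regressing disease status on the index-test result together with spectrum covariates, so the test's contribution is assessed net of case mix. The variable names and data are illustrative, not drawn from the clinical example in the paper.

```python
# Illustrative multivariable adjustment: logistic regression of disease
# status on the index-test result plus a spectrum covariate (age).
import statsmodels.api as sm
import pandas as pd

df = pd.DataFrame({
    "disease":     [1, 1, 0, 0, 1, 0, 1, 0],   # hypothetical reference-standard result
    "test_result": [1, 1, 0, 1, 1, 0, 0, 0],   # hypothetical index-test result
    "age":         [62, 70, 45, 58, 66, 40, 55, 61],
})

X = sm.add_constant(df[["test_result", "age"]])
model = sm.Logit(df["disease"], X).fit(disp=False)
print(model.params)  # log-odds contribution of the test, adjusted for age
```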
Abstract:
INTRODUCTION: The simple bedside method for sampling undiluted distal pulmonary edema fluid through a normal suction catheter (s-Cath) has been experimentally and clinically validated. However, there are no data comparing non-bronchoscopic bronchoalveolar lavage (mini-BAL) and s-Cath for assessing lung inflammation in acute hypoxaemic respiratory failure. We conducted a prospective study in two groups of patients, those with acute lung injury (ALI)/acute respiratory distress syndrome (ARDS) and those with acute cardiogenic lung edema (ACLE), to investigate the clinical feasibility of these techniques and to evaluate inflammation in both groups using undiluted samples obtained by s-Cath. To test the interchangeability of the two methods in the same patient for studying the inflammatory response, we further compared mini-BAL and s-Cath for agreement of protein concentration and percentage of polymorphonuclear cells (PMNs). METHODS: Mini-BAL and s-Cath sampling was assessed in 30 mechanically ventilated patients, 21 with ALI/ARDS and 9 with ACLE. To analyse agreement between the two sampling techniques, we considered only simultaneously collected mini-BAL and s-Cath paired samples. The protein concentration and PMN count comparisons were performed using undiluted samples. Bland-Altman plots were used to assess the mean bias and the limits of agreement between the two sampling techniques; comparison between groups was performed using the non-parametric Mann-Whitney U test; continuous variables were compared using the Student t-test, Wilcoxon signed rank test, analysis of variance, or Student-Newman-Keuls test; and categorical variables were compared using chi-square analysis or Fisher's exact test. RESULTS: Using protein content and PMN percentage as parameters, we identified substantial variation between the two sampling techniques. When the protein concentration in the lung was high, s-Cath was the more sensitive method; by contrast, as inflammation increased, both methods provided similar estimates of neutrophil percentages in the lung. The patients with ACLE showed an increased PMN count, suggesting that hydrostatic lung edema can be associated with a concomitant inflammatory process. CONCLUSIONS: There are significant differences between the s-Cath and mini-BAL sampling techniques, indicating that these procedures cannot be used interchangeably for studying the lung inflammatory response in patients with acute hypoxaemic lung injury.
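A minimal sketch of the Bland-Altman computation named in the methods: the mean bias and 95% limits of agreement for paired measurements. The paired protein values below are illustrative, not the study data.

```python
# Bland-Altman agreement: mean bias and 95% limits of agreement
# for paired measurements from two sampling techniques.
import statistics

mini_bal = [2.1, 3.4, 1.8, 4.0, 2.7]   # hypothetical protein values, g/L
s_cath   = [2.6, 3.9, 2.0, 4.8, 3.1]

diffs = [a - b for a, b in zip(s_cath, mini_bal)]
bias = statistics.mean(diffs)
sd = statistics.stdev(diffs)
print(f"bias = {bias:.2f}, limits of agreement = "
      f"({bias - 1.96 * sd:.2f}, {bias + 1.96 * sd:.2f})")
```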
Abstract:
There is no accepted way of measuring prothrombin time without delay for patients undergoing major surgery who are at risk of intraoperative dilution and consumption coagulopathy due to bleeding and volume replacement with crystalloids or colloids. In these situations, decisions to transfuse fresh frozen plasma and procoagulatory drugs have to rely on clinical judgment. Point-of-care (PoC) devices are considerably faster than the standard laboratory methods. In this study we assessed the accuracy of a PoC device measuring prothrombin time compared with the standard laboratory method. Patients undergoing major surgery and intensive care unit patients were included. PoC prothrombin time was measured by CoaguChek XS Plus (Roche Diagnostics, Switzerland). PoC and reference tests were performed independently and interpreted under blinded conditions. Using a cut-off prothrombin time of 50%, we calculated diagnostic accuracy measures, plotted a receiver operating characteristic (ROC) curve, and tested for equivalence between the two methods. PoC sensitivity and specificity were 95% (95% CI 77%, 100%) and 95% (95% CI 91%, 98%), respectively. The negative likelihood ratio was 0.05 (95% CI 0.01, 0.32); the positive likelihood ratio was 19.57 (95% CI 10.62, 36.06). The area under the ROC curve was 0.988. Equivalence between the two methods was confirmed. The CoaguChek XS Plus is a rapid and highly accurate test compared with the reference test. These findings suggest that PoC testing will be useful for monitoring intraoperative prothrombin time when coagulopathy is suspected, and it could lead to a more rational use of expensive and limited blood bank resources.
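A sketch of the standard 2×2 calculations behind the reported figures; the abstract does not give the cell counts, so the counts below are hypothetical values chosen to roughly reproduce the published sensitivity and specificity.

```python
# Standard diagnostic-accuracy measures from a 2x2 table.
def accuracy_measures(tp, fn, tn, fp):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "LR+": sens / (1 - spec),  # e.g. 0.95 / 0.05 = 19, near the reported 19.57
        "LR-": (1 - sens) / spec,  # e.g. 0.05 / 0.95 = 0.05, as reported
    }

print(accuracy_measures(tp=19, fn=1, tn=190, fp=10))  # hypothetical counts
```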
Abstract:
BACKGROUND: Complete investigation of thrombophilic or hemorrhagic clinical presentations is a time-, apparatus-, and cost-intensive process. Sensitive screening tests for characterizing the overall function of the hemostatic system, or defined parts of it, would be very useful. For this purpose, we are developing an electrochemical biosensor system that allows measurement of thrombin generation in whole blood as well as in plasma. METHODS: The measuring system consists of a single-use electrochemical sensor in the shape of a strip and a measuring unit connected to a personal computer, recording the electrical signal. Blood is added to a specific reagent mixture immobilized in dry form on the strip, including a coagulation activator (e.g., tissue factor or silica) and an electrogenic substrate specific to thrombin. RESULTS: Increasing thrombin concentrations gave standard curves with progressively increasing maximal current and decreasing time to reach the peak. Because the measurement was unaffected by color or turbidity, any type of blood sample could be analyzed: platelet-poor plasma, platelet-rich plasma, and whole blood. The test strips with the predried reagents were stable when stored for several months before testing. Analysis of the combined results obtained with different activators allowed discrimination between defects of the extrinsic, intrinsic, and common coagulation pathways. Activated protein C (APC) predried on the strips allowed identification of APC-resistance in plasma and whole blood samples. CONCLUSIONS: The biosensor system provides a new method for assessing thrombin generation in plasma or whole blood samples as small as 10 microL. The assay is easy to use, thus allowing it to be performed in a point-of-care setting.
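A minimal sketch of the two curve features the standard curves rely on, peak current and time-to-peak, extracted from a sampled current trace. The trace and the 30-second sampling interval are synthetic assumptions, not device output.

```python
# Extract peak current and time-to-peak from a sampled current trace.
current_na = [0.1, 0.4, 1.2, 2.8, 3.5, 3.1, 2.2, 1.4]  # nA, hypothetical, sampled every 30 s

peak = max(current_na)
time_to_peak_s = current_na.index(peak) * 30

print(f"peak current = {peak} nA, time to peak = {time_to_peak_s} s")
# Higher thrombin concentrations raise the peak and shorten the time-to-peak,
# which is what makes this pair of features usable as a standard curve.
```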
Abstract:
This methods paper outlines the overall design of a community-based multidisciplinary longitudinal study, with the intent of stimulating interest and communication from scientists and practitioners studying the role of physical activity in preventive medicine. In adults, lack of regular exercise is a major risk factor in the development of chronic degenerative diseases and a major contributor to obesity, and there is now evidence that many children are not sufficiently active to prevent early symptoms of chronic disease. The Lifestyle of Our Kids (LOOK) study investigates how early physical activity contributes to health and development, utilizing a longitudinal design and a cohort of eight hundred and thirty 7-8-year-old (grade 2) school children followed to age 11-12 years (grade 6), whose average family income is very close to the Australian average. We will test two hypotheses: (a) the quantity and quality of physical activity undertaken by primary school children will influence their psychological and physical health and development; (b) compared with existing practices in primary schools, a physical education program administered by visiting specialists will enhance health and development and lead to a more positive perception of physical activity. To test the first hypothesis we will monitor all children longitudinally over the 4 years. To test the second, we will involve an intervention group of 430 children who receive two 50-minute physical education classes every week from visiting specialists and a control group of 400 who continue their usual primary school physical education with their classroom teachers. At the end of grades 2, 4, and 6 we will measure several areas of health and development, including blood risk factors for chronic disease, cardiovascular structure and function, physical fitness, psychological characteristics and perceptions of physical activity, bone structure and strength, motor control, body composition, nutritional intake, influence of teachers and family, and academic performance.
Abstract:
PURPOSE: Resonance frequency analysis (RFA) offers the opportunity to monitor the osseointegration of an implant in a simple, noninvasive way. A better understanding of the relationship between RFA and parameters related to bone quality would therefore help clinicians improve diagnoses. In this study, a bone analog made from polyurethane foam was used to isolate the influences of bone density and cortical thickness on RFA. MATERIALS AND METHODS: Straumann standard implants were inserted in polyurethane foam blocks, and primary implant stability was measured with RFA. The blocks were composed of two superimposed layers with different densities: the top layer was dense to mimic cortical bone, whereas the bottom layer had a lower density to represent trabecular bone. Different densities for both layers and different thicknesses for the simulated cortical layer were tested, resulting in eight different block combinations. RFA was compared with two other mechanical evaluations of primary stability: removal torque and axial loading response. RESULTS: The primary stability measured with RFA did not correlate with the two other methods, but there was a significant correlation between removal torque and the axial loading response (P < .005). Statistical analysis revealed that each method was sensitive to different aspects of bone quality. RFA was the only method able to detect changes in both bone density and cortical thickness; however, changes in trabecular bone density were easier to distinguish with removal torque and axial loading than with RFA. CONCLUSIONS: This study shows that RFA, removal torque, and axial loading are sensitive to different aspects of the bone-implant interface. This explains the absence of correlation among the methods and indicates that no single standard procedure exists for the evaluation of primary stability.
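A hedged sketch of the correlation check reported between removal torque and axial loading response; the measurements below and the use of a Pearson correlation are illustrative assumptions (the abstract does not state which correlation statistic was used).

```python
# Correlation between two mechanical measures of primary stability.
from scipy.stats import pearsonr

removal_torque_ncm = [12.0, 18.5, 22.1, 30.4, 35.2, 40.8, 44.0, 52.3]  # hypothetical
axial_stiffness    = [310,  340,  395,  430,  470,  505,  540,  580]   # hypothetical

r, p_value = pearsonr(removal_torque_ncm, axial_stiffness)
print(f"r = {r:.3f}, p = {p_value:.4f}")  # the study reports significance at P < .005
```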
Abstract:
OBJECTIVES: To investigate the contribution of a real-time PCR assay for the detection of Treponema pallidum in various biological specimens, with the secondary objective of comparing its value according to HIV status. METHODS: Prospective cohort of incident syphilis cases from three Swiss hospitals (Geneva and Bern University Hospitals, Outpatient Clinic for Dermatology of Triemli, Zurich) diagnosed between January 2006 and September 2008. A case-control study was nested into the cohort. Biological specimens (blood, lesion swab or urine) were taken at diagnosis, together with clinical information, and analysed by real-time PCR targeting the T. pallidum 47 kDa gene. RESULTS: 126 specimens were collected from 74 patients with primary (n = 26), secondary (n = 40) and latent (n = 8) syphilis. In primary syphilis, sensitivity was 80% in lesion swabs, 28% in whole blood, 55% in serum and 29% in urine, whereas in secondary syphilis it was 20%, 36%, 47% and 44%, respectively. In secondary syphilis, plasma and cerebrospinal fluid were also tested and provided sensitivities of 100% and 50%, respectively. The global sensitivity of T. pallidum PCR (irrespective of the compartment tested) was 65% during primary syphilis, 53% during secondary syphilis and 0% during latent syphilis. No difference in serology or PCR results was observed among HIV-infected patients. Specificity was 100%. CONCLUSIONS: Syphilis PCR provides better sensitivity in lesion swabs from primary syphilis and displays only moderate sensitivity in blood from primary and secondary syphilis. HIV status did not modify the internal validity of PCR for the diagnosis of primary or secondary syphilis.