947 results for common method variance
Abstract:
Functional Magnetic Resonance Imaging (fMRI) is a non-invasive technique which is commonly used to quantify changes in blood oxygenation and flow coupled to neuronal activation. One of the primary goals of fMRI studies is to identify localized brain regions where neuronal activation levels vary between groups. Single-voxel t-tests have been commonly used to determine whether activation related to the protocol differs across groups. Due to the generally limited number of subjects within each study, accurate estimation of variance at each voxel is difficult. Thus, combining information across voxels in the statistical analysis of fMRI data is desirable in order to improve efficiency. Here we construct a hierarchical model and apply an Empirical Bayes framework to the analysis of group fMRI data, employing techniques used in high-throughput genomic studies. The key idea is to shrink residual variances by combining information across voxels, and subsequently to construct an improved test statistic in lieu of the classical t-statistic. This hierarchical model results in a shrinkage of voxel-wise residual sample variances towards a common value. The shrunken estimator for voxel-specific variance components in the group analyses outperforms the classical residual error estimator in terms of mean squared error. Moreover, the shrunken test statistic decreases the false positive rate when testing differences in brain contrast maps across a wide range of simulation studies. This methodology was also applied to experimental data regarding a cognitive activation task.
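As an illustration of the variance-shrinkage idea described above, here is a minimal sketch assuming the standard Empirical Bayes moderated-t form (a common prior variance s0^2 with d0 prior degrees of freedom); the abstract does not give the authors' exact hierarchical model, so the function and parameter names below are hypothetical.

import numpy as np

def moderated_t(group_a, group_b, d0, s0_sq):
    """group_a, group_b: activation maps of shape (n_subjects, n_voxels)."""
    na, nb = group_a.shape[0], group_b.shape[0]
    diff = group_a.mean(axis=0) - group_b.mean(axis=0)
    d = na + nb - 2  # residual degrees of freedom
    # Pooled residual variance per voxel
    s_sq = ((na - 1) * group_a.var(axis=0, ddof=1) +
            (nb - 1) * group_b.var(axis=0, ddof=1)) / d
    # Shrink voxel-wise variances towards the common prior value s0_sq
    s_sq_shrunk = (d0 * s0_sq + d * s_sq) / (d0 + d)
    se = np.sqrt(s_sq_shrunk * (1.0 / na + 1.0 / nb))
    return diff / se  # moderated t-statistic with d0 + d degrees of freedom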
Abstract:
Francisella tularensis, a small Gram-negative facultative intracellular bacterium, is the causative agent of tularaemia, a severe zoonotic disease transmitted to humans mostly by vectors such as ticks, flies and mosquitoes. The disease is endemic in many parts of the northern hemisphere. Among animals, the most affected species belong to rodents and lagomorphs, in particular hares. However, in recent years, many cases of tularaemia among small monkeys in zoos were reported. We have developed a real-time PCR that allows quantification of F. tularensis in tissue samples. Using this method, we identified the spleen and the kidney as the most heavily infected organs, containing up to 400 F. tularensis bacteria per simian host cell, in two common squirrel monkeys (Saimiri sciureus) from a zoo that died of tularaemia. In other organs such as the brain, F. tularensis was detected at much lower titres. The strain that caused the infection was identified as F. tularensis subsp. holarctica biovar I, which is susceptible to erythromycin. The high number of F. tularensis present in soft organs such as spleen, liver and kidney represents a high risk for persons handling such carcasses and explains the transmission of the disease to a pathologist during post-mortem analysis. Herein, we show that real-time PCR allows a reliable and rapid diagnosis of F. tularensis directly from tissue samples of infected animals, which is crucial in order to implement appropriate prophylactic measures, especially in cases where humans or other animals have been exposed to this highly contagious pathogen.
Abstract:
OBJECTIVE: To assess the effects of a single intravenous dose of butorphanol (0.1 mg kg(-1)) on the nociceptive withdrawal reflex (NWR) using threshold, suprathreshold and repeated subthreshold electrical stimuli in conscious horses. STUDY DESIGN: 'Unblinded', prospective experimental study. ANIMALS: Ten adult horses, five geldings and five mares, mean body mass 517 kg (range 487-569 kg). METHODS: The NWR was elicited using single transcutaneous electrical stimulation of the palmar digital nerve. Repeated stimulations were applied to evoke temporal summation. Surface electromyography was performed to record and quantify the responses of the common digital extensor muscle to stimulation, and behavioural reactions were scored. Before butorphanol administration and at fixed time points up to 2 hours after injection, baseline threshold intensities for NWR and temporal summation were defined and single suprathreshold stimulations applied. Friedman repeated-measures analysis of variance on ranks and the Wilcoxon signed-rank test were used, with the Student-Newman-Keuls method applied post-hoc. The level of significance (alpha) was set at 0.05. RESULTS: Butorphanol did not modify either the thresholds for NWR and temporal summation or the reaction scores, but the difference between suprathreshold and threshold reflex amplitudes was reduced when single stimulation was applied. Upon repeated stimulation after butorphanol administration, a significant decrease in the relative amplitude was calculated for both the 30-80 and the 80-200 millisecond intervals after each stimulus, and for the whole post-stimulation interval in the right thoracic limb. In the left thoracic limb a decrease in the relative amplitude was found only in the 30-80 millisecond epoch. CONCLUSION: Butorphanol at 0.1 mg kg(-1) has no direct action on spinal Aδ nociceptive activity but may have some supraspinal effects that reduce the gain of the nociceptive system. CLINICAL RELEVANCE: Butorphanol has minimal effect on sharp immediate Aδ-mediated pain but may alter spinal processing and decrease the delayed sensations of pain.
Abstract:
BACKGROUND: Over the last 4 years ADAMTS-13 measurement underwent dramatic progress with newer and simpler methods. AIMS: Blind evaluation of newer methods for their performance characteristics. DESIGN: The literature was searched for new methods and the authors invited to join the evaluation. Participants were provided with a set of 60 coded frozen plasmas that were prepared centrally by dilutions of one ADAMTS-13-deficient plasma (arbitrarily set at 0%) into one normal-pooled plasma (set at 100%). There were six different test plasmas ranging from 100% to 0%. Each plasma was tested 'blind' 10 times by each method and results expressed as percentage vs. the local and the common standard provided by the organizer. RESULTS: There were eight functional and three antigen assays. Linearity of observed-vs.-expected ADAMTS-13 levels assessed as r2 ranged from 0.931 to 0.998. Between-run reproducibility expressed as the (mean) CV for repeated measurements was below 10% for three methods, 10-15% for five methods and up to 20% for the remaining three. F-values (analysis of variance) calculated to assess the capacity to distinguish between ADAMTS-13 levels (the higher the F-value, the better the capacity) ranged from 3965 to 137. Between-method variability (CV) amounted to 24.8% when calculated vs. the local and to 20.5% when calculated vs. the common standard. Comparative analysis showed that functional assays employing modified von Willebrand factor peptides as substrate for ADAMTS-13 offer the best performance characteristics. CONCLUSIONS: New assays for ADAMTS-13 have the potential to make the investigation/management of patients with thrombotic microangiopathies much easier than in the past.
Abstract:
In 1998-2001 Finland suffered the most severe insect outbreak ever recorded there, covering over 500,000 hectares. The outbreak was caused by the common pine sawfly (Diprion pini L.) and has continued in the study area, Palokangas, ever since. To find a good method to monitor this type of outbreak, the purpose of this study was to examine the efficacy of multi-temporal ERS-2 and ENVISAT SAR imagery for estimating Scots pine (Pinus sylvestris L.) defoliation. Three methods were tested: unsupervised k-means clustering, supervised linear discriminant analysis (LDA) and logistic regression. In addition, I assessed whether harvested areas could be differentiated from the defoliated forest using the same methods. Two different speckle filters were used to determine the effect of filtering on the SAR imagery and the subsequent results. The logistic regression performed best, producing a classification accuracy of 81.6% (kappa 0.62) with two classes (no defoliation, >20% defoliation). With two classes, LDA accuracy was at best 77.7% (kappa 0.54) and k-means accuracy 72.8% (kappa 0.46). In general, the largest speckle filter, a 5 x 5 image window, performed best. When additional classes were added, the accuracy usually degraded step by step. The results were good, but because of the limitations of the study they should be confirmed with independent data before firm conclusions about their reliability can be drawn. The limitations include the small amount of field data and, thus, problems with accuracy assessment (no separate testing data), as well as the lack of meteorological data from the imaging dates.
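For orientation, the following sketch applies the three classifiers named above to a placeholder feature matrix of multi-temporal SAR backscatter values; the data, feature names and settings are purely illustrative and are not the processing chain used in the study.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, cohen_kappa_score

X = np.random.rand(200, 4)            # placeholder multi-temporal backscatter features
y = (X[:, 0] > 0.5).astype(int)       # placeholder labels: 0 = no defoliation, 1 = >20% defoliation

# Supervised methods: fit, predict, and report accuracy and kappa
for model in (LogisticRegression(), LinearDiscriminantAnalysis()):
    pred = model.fit(X, y).predict(X)
    print(type(model).__name__, accuracy_score(y, pred), cohen_kappa_score(y, pred))

# Unsupervised alternative: cluster, then assign class labels to clusters afterwards
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)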
Abstract:
Ensuring water is safe at source and point-of-use is important in areas of the world where drinking water is collected from communal supplies. This report describes a study in rural Mali to determine the appropriateness of the assumption, common among development organizations, that drinking water will remain safe at point-of-use if collected from a safe (improved) source. Water was collected from ten sources (borehole wells with hand pumps, and hand-dug wells) and forty-five households using water from each source type. Water quality was evaluated seasonally (quarterly) for levels of total coliform, E. coli, and turbidity. Microbial testing was done using the 3M Petrifilm™ method. Turbidity testing was done using a turbidity tube. Microbial testing results were analyzed using statistical tests including Kruskal-Wallis, Mann-Whitney, and analysis of variance. Results show that water from hand pumps did not contain total coliform or E. coli and had turbidity under 5 NTU, whereas water from dug wells had high levels of bacteria and turbidity. However, water at point-of-use (household) from hand pumps showed microbial contamination - at times indistinguishable from that of households using dug wells - indicating a decline in water quality from source to point-of-use. Chemical treatment at point-of-use is suggested as an appropriate solution for eliminating any post-source contamination. Additionally, it is recommended that future work be done to modify existing water development strategies to consider water quality at point-of-use.
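A minimal sketch of the non-parametric comparisons named above follows; the count arrays are placeholders for E. coli levels at source versus point-of-use, not the study's data.

from scipy import stats

source_counts = [0, 0, 0, 2, 0, 1]           # illustrative hand-pump source samples
household_counts = [35, 120, 8, 60, 15, 90]  # illustrative point-of-use samples

# Compare source vs point-of-use distributions
print(stats.mannwhitneyu(source_counts, household_counts, alternative="two-sided"))
print(stats.kruskal(source_counts, household_counts))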
Abstract:
Computed tomography (CT) has proved to be a valuable investigative tool for mummy research and is the method of choice for examining mummies. It allows for noninvasive insight, especially with virtual endoscopy, which reveals detailed information about the mummy's sex, age, constitution, injuries, health, and mummification techniques used. CT also supplies three-dimensional information about the scanned object. Mummification processes can be summarized as "artificial," when the procedure was performed on a body with the aim of preservation, or as "natural," when the body's natural environment resulted in preservation. The purpose of artificial mummification was to preserve that person's morphologic features by delaying or arresting the decay of the body. The ancient Egyptians are most famous for this. Their use of evisceration followed by desiccation with natron (a compound of sodium salts) to halt putrefaction and prevent rehydration was so effective that their embalmed bodies have survived for nearly 4500 years. First, the body was cleaned with a natron solution; then internal organs were removed through the cribriform plate and abdomen. The most important, and probably the most lengthy, phase was desiccation. After the body was dehydrated, the body cavities were rinsed and packed to restore the body's former shape. Finally, the body was wrapped. Animals were also mummified to provide food for the deceased, to accompany the deceased as pets, because they were seen as corporal manifestations of deities, and as votive offerings. Artificial mummification was performed on every continent, especially in South and Central America.
Abstract:
Complex human diseases are a major challenge for biological research. The goal of my research is to develop effective biostatistical methods in order to create more opportunities for the prevention and cure of human diseases. This dissertation proposes statistical methods that can be adapted to sequencing data in family-based designs, and that account for joint effects as well as gene-gene and gene-environment interactions in GWA studies. The framework includes statistical methods for rare and common variant association studies. Although next-generation DNA sequencing technologies have made rare variant association studies feasible, the development of powerful statistical methods for rare variant association studies is still underway. Chapter 2 demonstrates two adaptive weighting methods for rare variant association studies based on family data for quantitative traits. The results show that both proposed methods are robust to population stratification, robust to the direction and magnitude of the effects of causal variants, and more powerful than the methods using weights suggested by Madsen and Browning [2009]. In Chapter 3, I extended the previously proposed test for Testing the effect of an Optimally Weighted combination of variants (TOW) [Sha et al., 2012] for unrelated individuals to TOW-F, a version of TOW for family-based designs. Simulation results show that TOW-F can control for population stratification in a wide range of population structures, including spatially structured populations, is robust to the directions of effect of causal variants, and is relatively robust to the percentage of neutral variants. For GWA studies, this dissertation presents a two-locus joint effect analysis and a two-stage approach accounting for gene-gene and gene-environment interactions. Chapter 4 proposes a novel two-stage approach, which is promising for identifying joint effects, especially for monotonic models. The proposed approach outperforms a single-marker method and a regular two-stage analysis based on the two-locus genotypic test. In Chapter 5, I proposed a gene-based two-stage approach to identify gene-gene and gene-environment interactions in GWA studies which can include rare variants. The two-stage approach is applied to the GAW 17 dataset to identify the interaction between the KDR gene and smoking status.
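For context on the weighting schemes compared above, here is a minimal sketch of a Madsen-Browning-style weighted burden score, the baseline against which the proposed adaptive weights are evaluated; it is not the TOW-F statistic itself, and the function below is a hypothetical illustration.

import numpy as np

def weighted_burden(genotypes):
    """genotypes: ndarray of shape (n_individuals, n_variants) with minor-allele counts in {0, 1, 2}."""
    n = genotypes.shape[0]
    maf = genotypes.mean(axis=0) / 2.0
    # Madsen-Browning-style weights: rarer variants receive larger weights
    w = 1.0 / np.sqrt(n * maf * (1.0 - maf) + 1e-12)
    # Per-individual burden score, to be regressed on the quantitative trait
    return genotypes @ w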
Abstract:
Disturbances in power systems may lead to electromagnetic transient oscillations due to a mismatch between mechanical input power and electrical output power. Out-of-step conditions in power systems are common after disturbances in which the oscillations do not damp out and the system becomes unstable. Existing out-of-step detection methods are system-specific, as extensive off-line studies are required to set the relays. Most of the existing algorithms also require network reduction techniques in order to be applied to multi-machine power systems. To overcome these issues, this research applies Phasor Measurement Unit (PMU) data and Zubov's approximation stability boundary method, a modification of Lyapunov's direct method, to develop a novel out-of-step detection algorithm. The proposed out-of-step detection algorithm is tested on a Single Machine Infinite Bus system and the IEEE 3-machine 9-bus and IEEE 10-machine 39-bus systems. Simulation results show that the proposed algorithm is capable of detecting out-of-step conditions in multi-machine power systems without using network reduction techniques, and a comparative study with an existing blinder method demonstrates that the decision times are faster. The simulation case studies also demonstrate that the proposed algorithm does not depend on power system parameters; hence it avoids the extensive off-line system studies needed by other algorithms.
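The power mismatch referred to above is conventionally captured by the generator swing equation; the form below is the standard textbook model, given only for orientation and not as the exact formulation combined with Zubov's method in this work:

M \frac{d^2\delta}{dt^2} = P_m - P_e - D \frac{d\delta}{dt}

where \delta is the rotor angle, P_m the mechanical input power, P_e the electrical output power, M the inertia constant and D the damping coefficient; an out-of-step condition corresponds to \delta drifting without bound rather than settling after the disturbance.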
Abstract:
All optical systems that operate in or through the atmosphere suffer from turbulence-induced image blur. Both military and civilian surveillance, gun-sighting, and target identification systems are interested in terrestrial imaging over very long horizontal paths, but atmospheric turbulence can blur the resulting images beyond usefulness. My dissertation explores the performance of a multi-frame blind deconvolution technique applied under anisoplanatic conditions for both Gaussian and Poisson noise model assumptions. The technique is evaluated for use in reconstructing images of scenes corrupted by turbulence in long horizontal-path imaging scenarios and compared to other speckle imaging techniques. Performance is evaluated via the reconstruction of a common object from three sets of simulated turbulence-degraded imagery representing low, moderate and severe turbulence conditions. Each set consisted of 1000 simulated, turbulence-degraded images. The MSE performance of the estimator is evaluated as a function of the number of images and the number of Zernike polynomial terms used to characterize the point spread function. I compare the mean-square-error (MSE) performance of speckle imaging methods and a maximum-likelihood, multi-frame blind deconvolution (MFBD) method applied to long-path horizontal imaging scenarios. Both methods are used to reconstruct a scene from simulated imagery featuring anisoplanatic turbulence-induced aberrations. This comparison is performed over three sets of 1000 simulated images each for low, moderate and severe turbulence-induced image degradation. The comparison shows that speckle imaging techniques reduce the MSE by 46 percent, 42 percent and 47 percent on average for the low, moderate, and severe cases, respectively, using 15 input frames under daytime conditions and moderate frame rates. Similarly, the MFBD method provides 40 percent, 29 percent, and 36 percent improvements in MSE on average under the same conditions. The comparison is repeated under low-light conditions (less than 100 photons per pixel), where improvements of 39 percent, 29 percent and 27 percent are obtained using speckle imaging methods and 25 input frames, and of 38 percent, 34 percent and 33 percent, respectively, for the MFBD method and 150 input frames. The MFBD estimator is also applied to three sets of field data and the results are presented. Finally, a combined Bispectrum-MFBD hybrid estimator is proposed and investigated. This technique consistently provides a lower MSE and a smaller variance in the estimate under all three simulated turbulence conditions.
Abstract:
AIMS Common carotid artery intima-media thickness (CCIMT) is widely used as a surrogate marker of atherosclerosis, given its predictive association with cardiovascular disease (CVD). The interpretation of CCIMT values has been hampered, however, by the absence of reference values. We therefore aimed to establish reference intervals of CCIMT, obtained using probably the most accurate method at present (i.e. echotracking), to help interpretation of these measures. METHODS AND RESULTS We combined CCIMT data obtained by echotracking on 24 871 individuals (53% men; age range 15-101 years) from 24 research centres worldwide. Individuals without CVD, cardiovascular risk factors (CV-RFs), and BP-, lipid-, and/or glucose-lowering medication constituted a healthy sub-population (n = 4234) used to establish sex-specific equations for percentiles of CCIMT across age. With these equations, we generated CCIMT Z-scores in different reference sub-populations, thereby allowing for a standardized comparison between observed and predicted ('normal') values from individuals of the same age and sex. In the sub-population without CVD and treatment (n = 14 609), and in men and women, respectively, CCIMT Z-scores were independently associated with systolic blood pressure [standardized βs 0.19 (95% CI: 0.16-0.22) and 0.18 (0.15-0.21)], smoking [0.25 (0.19-0.31) and 0.11 (0.04-0.18)], diabetes [0.19 (0.05-0.33) and 0.19 (0.02-0.36)], total-to-HDL cholesterol ratio [0.07 (0.04-0.10) and 0.05 (0.02-0.09)], and body mass index [0.14 (0.12-0.17) and 0.07 (0.04-0.10)]. CONCLUSION We estimated age- and sex-specific percentiles of CCIMT in a healthy population and assessed the association of CV-RFs with CCIMT Z-scores, which enables comparison of IMT values for (patient) groups with different cardiovascular risk profiles, helping interpretation of such measures obtained both in research and clinical settings.
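The Z-score construction described above amounts to standardizing an observed CCIMT against the age- and sex-specific reference equations; a minimal sketch follows, in which the reference mean and SD functions are placeholders standing in for the paper's fitted percentile equations.

def ccimt_z(observed_imt, age, sex, reference_mean, reference_sd):
    """reference_mean, reference_sd: callables fitted on the healthy reference sub-population."""
    # Z-score: observed value standardized against the predicted 'normal' value for that age and sex
    return (observed_imt - reference_mean(age, sex)) / reference_sd(age, sex)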
Abstract:
An accurate and coherent chronological framework is essential for the interpretation of climatic and environmental records obtained from deep polar ice cores. Until now, one common ice core age scale had been developed based on an inverse dating method (Datice), combining glaciological modelling with absolute and stratigraphic markers between 4 ice cores covering the last 50 ka (thousands of years before present) (Lemieux-Dudon et al., 2010). In this paper, together with the companion paper of Veres et al. (2013), we present an extension of this work back to 800 ka for the NGRIP, TALDICE, EDML, Vostok and EDC ice cores using an improved version of the Datice tool. The AICC2012 (Antarctic Ice Core Chronology 2012) chronology includes numerous new gas and ice stratigraphic links as well as improved evaluation of background and associated variance scenarios. This paper concentrates on the long timescales between 120–800 ka. In this framework, new measurements of δ18Oatm over Marine Isotope Stage (MIS) 11–12 on EDC and a complete δ18Oatm record of the TALDICE ice cores permit us to derive additional orbital gas age constraints. The coherency of the different orbitally deduced ages (from δ18Oatm, δO2/N2 and air content) has been verified before implementation in AICC2012. The new chronology is now independent of other archives and shows only small differences, most of the time within the original uncertainty range calculated by Datice, when compared with the previous ice core reference age scale EDC3, the Dome F chronology, or using a comparison between speleothems and methane. For instance, the largest deviation between AICC2012 and EDC3 (5.4 ka) is obtained around MIS 12. Despite significant modifications of the chronological constraints around MIS 5, now independent of speleothem records in AICC2012, the date of Termination II is very close to the EDC3 one.
Abstract:
PURPOSE Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability there is a need for methods to match different PET/CT systems through elimination of this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph to another image of the same object as it would have been seen by a different tomograph. The proposed new method, termed Transconvolution, compensates for differing imaging properties of different tomographs and particularly aims at quantitative comparability of PET/CT in the context of multicenter trials. METHODS To solve the problem of image normalization, the theory of Transconvolution was mathematically established together with new methods to handle point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems allows a Transconvolution function to be determined that converts one image into the other. This function is calculated by convolving one point spread function with the inverse of the other point spread function which, when adhering to certain boundary conditions such as the use of linear acquisition and image reconstruction methods, is a numerically accessible operation. For reliable measurement of such point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating (68)Ge/(68)Ga filled spheres was developed. To iteratively determine and represent such point spread functions, exponential density functions in combination with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties to which the real PET systems were to be matched. A Hann window served as the modulation transfer function for the virtual PET. The Hann window's apodization properties suppressed high spatial frequencies above a certain critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were subsequently used by the novel Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the theoretically elaborated Transconvolution method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would be imaged by the virtual PET system. RESULTS The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The largest difference in measured activity concentration between the two PET systems, 18.2%, was found in spheres of 2 ml volume. Transconvolution reduced this difference to 1.6%. In addition to re-establishing comparability, the new method, with its parameterization of point spread functions, allowed a full characterization of the imaging properties of the examined tomographs.
CONCLUSIONS By matching different tomographs to a virtual standardized imaging system, Transconvolution provides a new, comprehensive method for cross-calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible, and defined partial volume effect.
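As a rough illustration of the core operation described above (deconvolving one system's point spread function and applying the target system's), here is a minimal Fourier-domain sketch; the regularization constant, the exponential/Gaussian PSF parameterization and the Hann-window virtual system of the paper are not reproduced, and the function below is a hypothetical simplification.

import numpy as np

def transconvolve(image_a, psf_a, psf_virtual, eps=1e-3):
    """All arrays share the same shape; PSFs are centred and normalized to sum to 1."""
    IMG = np.fft.fftn(image_a)
    H_a = np.fft.fftn(np.fft.ifftshift(psf_a))
    H_v = np.fft.fftn(np.fft.ifftshift(psf_virtual))
    # Undo scanner A's blur (with Wiener-style regularization), then apply the virtual system's blur
    H_t = H_v * np.conj(H_a) / (np.abs(H_a) ** 2 + eps)
    return np.real(np.fft.ifftn(IMG * H_t))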
Abstract:
To improve our understanding of the Asian monsoon system, we developed a hydroclimate reconstruction in a marginal monsoon shoulder region for the period prior to the industrial era. Here, we present the first moisture-sensitive tree-ring chronology, spanning 501 years, for the Dieshan Mountain area, a boundary region of the Asian summer monsoon in the northeastern Tibetan Plateau. This reconstruction was derived from 101 cores of 68 old-growth Chinese pine (Pinus tabulaeformis) trees. We introduce a Hilbert–Huang Transform (HHT) based standardization method to develop the tree-ring chronology, which has the advantage of excluding non-climatic disturbances in individual tree-ring series. Based on the reliable portion of the chronology, we reconstructed the annual (prior July to current June) precipitation history since 1637 for the Dieshan Mountain area and were able to explain 41.3% of the variance. The extremely dry years in this reconstruction were also found in historical documents and are associated with El Niño episodes. Dry periods were reconstructed for 1718–1725, 1766–1770 and 1920–1933, whereas 1782–1788 and 1979–1985 were wet periods. The spatial signatures of these events were supported by data from other marginal regions of the Asian summer monsoon. Over the past four centuries, out-of-phase relationships between hydroclimate variations in the Dieshan Mountain area and far western Mongolia were observed during the 1718–1725 and 1766–1770 dry periods and the 1979–1985 wet period.
Abstract:
A non-parametric method was developed and tested to compare the partial areas under two correlated Receiver Operating Characteristic (ROC) curves. Based on the theory of generalized U-statistics, mathematical formulas were derived for computing the ROC area and the variance and covariance between the portions of two ROC curves. A practical SAS application was also developed to facilitate the calculations. The accuracy of the non-parametric method was evaluated by comparing it to other methods. By applying our method to data from a published ROC analysis of CT images, we obtained results very close to theirs. A hypothetical example was used to demonstrate the effects of two crossed ROC curves: although the two ROC areas are the same, each portion of the area between the two ROC curves was found to be significantly different by the partial ROC curve analysis. For the computation of ROC curves on a large scale, such as for a logistic regression model, we applied our method to a breast cancer study with Medicare claims data. It yielded the same ROC area computation as the SAS Logistic procedure. Our method also provides an alternative to the global summary of ROC area comparison by directly comparing the true-positive rates for two regression models and by determining the range of false-positive values where the models differ.
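As a simple illustration of the partial-area quantity compared above, the sketch below integrates an empirical ROC curve over a chosen false-positive range; it does not reproduce the U-statistic variance/covariance derivation or the SAS application, and the function is only an assumed, illustrative implementation.

import numpy as np
from sklearn.metrics import roc_curve

def partial_auc(y_true, scores, fpr_range=(0.0, 0.2)):
    """Area under the ROC curve restricted to a false-positive-rate interval."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    lo, hi = fpr_range
    grid = np.linspace(lo, hi, 200)
    # Interpolate the true-positive rate over the chosen false-positive range and integrate
    return np.trapz(np.interp(grid, fpr, tpr), grid)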