939 results for measurement error models
Abstract:
Arterial spin labeling (ASL) is a technique for noninvasively measuring cerebral perfusion using magnetic resonance imaging. Clinical applications of ASL include functional activation studies, evaluation of the effect of pharmaceuticals on perfusion, and assessment of cerebrovascular disease, stroke, and brain tumors. The use of ASL in the clinic has been limited by poor image quality when large anatomic coverage is required and by the time required for data acquisition and processing. This research sought to address these difficulties by optimizing the ASL acquisition and processing schemes. To improve data acquisition, optimal acquisition parameters were determined through simulations, phantom studies, and in vivo measurements. The scan time for ASL data acquisition was limited to fifteen minutes to reduce potential subject motion. A processing scheme was implemented that rapidly produced regional cerebral blood flow (rCBF) maps with minimal user input. To provide a measure of the precision of the rCBF values produced by ASL, bootstrap analysis was performed on a representative data set. The bootstrap analysis of single gray and white matter voxels yielded coefficients of variation of 6.7% and 29%, respectively, implying that the calculated rCBF value is far more precise for gray matter than for white matter. Additionally, bootstrap analysis was performed to investigate the sensitivity of the rCBF data to the input parameters and to provide a quantitative comparison of several existing perfusion models. This study guided the selection of the optimum perfusion quantification model for further experiments. The optimized ASL acquisition and processing schemes were evaluated with two ASL acquisitions on each of five normal subjects. The gray-to-white matter rCBF ratios for nine of the ten acquisitions were within ±10% of 2.6, and none were statistically different from 2.6, the typical ratio produced by a variety of quantitative perfusion techniques. Overall, this work produced an ASL data acquisition and processing technique for quantitative perfusion and functional activation studies, while revealing the limitations of the technique through bootstrap analysis.
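As an illustration of the kind of bootstrap precision estimate described above (with invented per-repetition voxel signals and an arbitrary perfusion scaling, not the study's data or quantification model), a minimal sketch:

```python
# Sketch: bootstrap coefficient of variation (CV) for a single-voxel rCBF estimate.
# The signal values and the scaling to perfusion units are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-repetition ASL difference signals (control - label) for one voxel
voxel_signal = rng.normal(loc=1.0, scale=0.3, size=60)

def rcbf_estimate(samples):
    # Placeholder quantification: rCBF proportional to the mean difference signal
    return 60.0 * samples.mean()  # arbitrary scaling to ml/100g/min-like units

n_boot = 5000
boot = np.empty(n_boot)
for b in range(n_boot):
    resampled = rng.choice(voxel_signal, size=voxel_signal.size, replace=True)
    boot[b] = rcbf_estimate(resampled)

cv = boot.std(ddof=1) / boot.mean()
print(f"bootstrap CV of rCBF estimate: {cv:.1%}")
```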
Abstract:
Lovell and Rouse (LR) have recently proposed a modification of the standard DEA model that overcomes the infeasibility problem often encountered in computing super-efficiency. In the LR procedure, one appropriately scales up the observed input vector (scales down the output vector) of the relevant super-efficient firm, thereby usually creating its inefficient surrogate. An alternative procedure proposed in this paper uses the directional distance function introduced by Chambers, Chung, and Färe and the resulting Nerlove-Luenberger (NL) measure of super-efficiency. Because the directional distance function combines features of both an input-oriented and an output-oriented model, it generally leads to a more complete ranking of the observations than either of the oriented models. An added advantage of this approach is that the NL super-efficiency measure is unique and does not depend on any arbitrary choice of a scaling parameter. A data set on international airlines from Coelli, Perelman, and Griffel-Tatje (2002) is used in an illustrative empirical application.
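For illustration only, here is a minimal sketch of a directional-distance super-efficiency LP of the kind described above, assuming constant returns to scale, the direction g = (x_o, y_o), and a tiny made-up data set of four single-input, single-output firms; the paper's exact specification (e.g., returns-to-scale assumption and reporting convention for the NL measure) may differ.

```python
# Sketch of a Nerlove-Luenberger-style super-efficiency LP under CRS:
#   max beta  s.t.  sum_j lam_j x_j <= (1 - beta) x_o,  sum_j lam_j y_j >= (1 + beta) y_o,
# with the evaluated firm excluded from the reference set. Data are invented.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 4.0, 5.0]])   # inputs: m x n (1 input, 4 firms)
Y = np.array([[3.0, 4.0, 4.5, 5.0]])   # outputs: s x n

def nl_super_efficiency(o, X, Y):
    n = X.shape[1]
    peers = [j for j in range(n) if j != o]          # exclude the evaluated firm
    c = np.zeros(len(peers) + 1)
    c[-1] = -1.0                                      # maximize beta
    # input rows:  sum_j lam_j x_ij + beta * x_io <= x_io
    A_in = np.hstack([X[:, peers], X[:, [o]]])
    b_in = X[:, o]
    # output rows: -sum_j lam_j y_rj + beta * y_ro <= -y_ro
    A_out = np.hstack([-Y[:, peers], Y[:, [o]]])
    b_out = -Y[:, o]
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([b_in, b_out])
    bounds = [(0, None)] * len(peers) + [(None, None)]  # beta unrestricted in sign
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return -res.fun  # optimal beta; negative values indicate super-efficiency

for o in range(X.shape[1]):
    print(o, round(nl_super_efficiency(o, X, Y), 3))
```

Because the evaluated firm can always set all intensity weights to zero and choose a sufficiently negative beta, this LP is always feasible, which is the practical advantage over the oriented super-efficiency models noted above.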
Abstract:
This paper proposes asymptotically optimal tests for unstable parameter processes under the realistic circumstance that the researcher has little information about the unstable parameter process and the error distribution, and suggests conditions under which knowledge of those processes does not provide asymptotic power gains. I first derive a test under a known error distribution, which is asymptotically equivalent to LR tests for correctly identified unstable parameter processes under suitable conditions. The conditions are weak enough to cover a wide range of unstable processes, such as various types of structural breaks and time-varying parameter processes. The test is then extended to semiparametric models in which the underlying distribution is unknown and treated as an infinite-dimensional nuisance parameter. The semiparametric test is adaptive in the sense that its asymptotic power function is equivalent to the power envelope under a known error distribution.
Abstract:
Random Forests™ is reported to be one of the most accurate classification algorithms for complex data analysis. It shows excellent performance even when most predictors are noisy and the number of variables is much larger than the number of observations. In this thesis, Random Forests was applied to a large-scale lung cancer case-control study. A novel way of automatically selecting prognostic factors was proposed, and a synthetic positive control was used to validate the Random Forests method. Throughout this study we showed that Random Forests can deal with a large number of weak input variables without overfitting and can account for non-additive interactions between these input variables. Random Forests can also be used for variable selection without being adversely affected by collinearities. Random Forests can handle large-scale data sets without rigorous data preprocessing and has a robust variable importance ranking measure. We propose a novel variable selection method in the context of Random Forests that uses the data noise level as the cut-off value to determine the subset of important predictors. This new approach enhanced the ability of the Random Forests algorithm to automatically identify important predictors in complex data. The cut-off value can also be adjusted based on the results of the synthetic positive control experiments. When the data set had a high variables-to-observations ratio, Random Forests complemented the established logistic regression. This study suggests that Random Forests is recommended for such high-dimensional data: one can use Random Forests to select the important variables and then use logistic regression or Random Forests itself to estimate the effect sizes of the predictors and to classify new observations. We also found that the mean decrease in accuracy is a more reliable variable ranking measure than the mean decrease in Gini.
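As an illustration of the noise-level cutoff idea (not the thesis's actual code or data), the sketch below appends purely random probe variables to a synthetic data set, ranks permutation importances (the mean-decrease-in-accuracy analogue), and retains only predictors whose importance exceeds that of the best probe.

```python
# Sketch: Random Forests variable selection with a noise-level cutoff derived
# from synthetic probe variables. Data, probe count, and threshold rule are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)
X_probe = np.hstack([X, rng.normal(size=(X.shape[0], 10))])  # 10 pure-noise probes

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_probe, y)
imp = permutation_importance(rf, X_probe, y, n_repeats=20, random_state=0).importances_mean

noise_level = imp[20:].max()            # largest importance among the noise probes
selected = np.where(imp[:20] > noise_level)[0]
print("predictors retained above the noise level:", selected)
```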
Abstract:
Studies on the relationship between psychosocial determinants and HIV risk behaviors have produced little evidence to support hypotheses based on theoretical relationships. One limitation inherent in many articles in the literature is the method of measurement of the determinants and the analytic approach selected. To reduce the misclassification associated with unit scaling of measures specific to internalized homonegativity, I evaluated the psychometric properties of the Reactions to Homosexuality scale in a confirmatory factor analytic framework. In addition, I assessed the measurement invariance of the scale across racial/ethnic classifications in a sample of men who have sex with men. The resulting measure contained eight items loading on three first-order factors. Invariance assessment identified metric and partial strong invariance between racial/ethnic groups in the sample. Application of the updated measure to a structural model allowed for the exploration of direct and indirect effects of internalized homonegativity on unprotected anal intercourse. Pathways identified in the model show that drug and alcohol use at the last sexual encounter, the number of sexual partners in the previous three months, and sexual compulsivity all contribute directly to risk behavior. Internalized homonegativity reduced the likelihood of exposure to drugs, alcohol, or higher numbers of partners. For men who developed compulsive sexual behavior as a coping strategy for internalized homonegativity, there was an increase in the prevalence odds of risk behavior. In the final stage of the analysis, I conducted a latent profile analysis of the items in the updated Reactions to Homosexuality scale. This analysis identified five distinct profiles, which suggested that the construct was not homogeneous in samples of men who have sex with men. Lack of prior consideration of these distinct manifestations of internalized homonegativity may have contributed to the analytic difficulty in identifying a relationship between the trait and high-risk sexual practices.
Abstract:
Strategies are compared for the development of a linear regression model with stochastic (multivariate normal) regressor variables and the subsequent assessment of its predictive ability. Bias and mean squared error of four estimators of predictive performance are evaluated in simulated samples of 32 population correlation matrices. Models including all of the available predictors are compared with those obtained using selected subsets. The subset selection procedures investigated include two stopping rules, Cp and Sp, each combined with an 'all possible subsets' or 'forward selection' search over variables. The estimators of performance include parametric (MSEPm) and non-parametric (PRESS) assessments in the entire sample, and two data-splitting estimates restricted to a random or balanced (Snee's DUPLEX) 'validation' half sample. The simulations were performed as a designed experiment, with population correlation matrices representing a broad range of data structures. The techniques examined for subset selection do not generally result in improved predictions relative to the full model. Approaches using 'forward selection' result in slightly smaller prediction errors and less biased estimators of predictive accuracy than 'all possible subsets' approaches, but no differences are detected between the performances of Cp and Sp. In every case, prediction errors of models obtained by subset selection in either of the half splits exceed those obtained using all predictors and the entire sample. Only the random-split estimator is conditionally (on β) unbiased; however, MSEPm is unbiased on average and PRESS is nearly so in unselected (fixed-form) models. When subset selection techniques are used, MSEPm and PRESS always underestimate prediction errors, by as much as 27 percent (on average) in small samples. Despite their bias, the mean squared errors (MSE) of these estimators are at least 30 percent less than that of the unbiased random-split estimator. The DUPLEX split estimator suffers from large MSE as well as bias, and seems of little value in the context of stochastic regressor variables. To maximize predictive accuracy while retaining a reliable estimate of that accuracy, it is recommended that the entire sample be used for model development and that a leave-one-out statistic (e.g., PRESS) be used for assessment.
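A minimal sketch of the recommended leave-one-out PRESS statistic for a fixed-form model, computed with the hat-matrix shortcut on simulated multivariate-normal regressors; the design, coefficients, and noise level are invented for illustration.

```python
# Sketch: PRESS = sum_i (e_i / (1 - h_ii))^2 for a fixed linear model,
# computed without refitting the model n times.
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 4
X = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)   # stochastic regressors
beta = np.array([1.0, 0.5, 0.0, -0.5])
y = X @ beta + rng.normal(scale=1.0, size=n)

Xd = np.hstack([np.ones((n, 1)), X])               # design matrix with intercept
H = Xd @ np.linalg.solve(Xd.T @ Xd, Xd.T)          # hat matrix
resid = y - H @ y                                  # ordinary residuals
press = np.sum((resid / (1.0 - np.diag(H))) ** 2)  # leave-one-out prediction SS
print(f"PRESS = {press:.2f}, PRESS/n = {press / n:.3f}")
```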
Abstract:
Life expectancy has consistently increased over the last 150 years due to improvements in nutrition, medicine, and public health. Several studies found that in many developed countries life expectancy continued to rise following a nearly linear trend, contrary to the common belief that the rate of improvement would decelerate and that the trend would be better described by an S-shaped curve. Using samples of countries that exhibited a wide range of economic development levels, we explored the change in life expectancy over time by employing both nonlinear and linear models. We then examined whether there were any significant differences in estimates between linear models when an autocorrelated error structure was assumed. When the data did not have a sigmoidal shape, nonlinear growth models sometimes failed to provide meaningful parameter estimates; the existence of an inflection point and asymptotes in the growth models made them inflexible with life expectancy data. In the linear models, there was no significant difference in the life expectancy growth rate and future estimates between ordinary least squares (OLS) and generalized least squares (GLS). However, the generalized least squares model was more robust because the data involved time-series variables and the residuals were positively correlated.
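A short sketch of the OLS-versus-GLS comparison under an AR(1) error structure, using statsmodels' GLSAR on a simulated linear life-expectancy trend; the series, slope, and autocorrelation are invented, not the study's data.

```python
# Sketch: fit a linear trend by OLS and by AR(1) GLS and compare slopes and standard errors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
years = np.arange(1950, 2011)
e = np.zeros(years.size)
for t in range(1, years.size):                 # positively autocorrelated errors
    e[t] = 0.7 * e[t - 1] + rng.normal(scale=0.3)
life_exp = 68.0 + 0.2 * (years - years[0]) + e  # hypothetical linear trend

X = sm.add_constant(years - years[0])
ols = sm.OLS(life_exp, X).fit()
gls = sm.GLSAR(life_exp, X, rho=1).iterative_fit(maxiter=10)   # AR(1) GLS

print("OLS slope:", round(ols.params[1], 4), "SE:", round(ols.bse[1], 4))
print("GLS slope:", round(gls.params[1], 4), "SE:", round(gls.bse[1], 4))
```

The point estimates are typically similar, but the GLS standard errors account for the positive residual autocorrelation, which is the robustness argument made above.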
Abstract:
The influence of respiratory motion on patient anatomy poses a challenge to accurate radiation therapy, especially in lung cancer treatment. Modern radiation therapy planning uses models of tumor respiratory motion to account for target motion in targeting. The tumor motion model can be verified on a per-treatment-session basis with four-dimensional cone-beam computed tomography (4D-CBCT), which acquires an image set of the dynamic target throughout the respiratory cycle during the therapy session. 4D-CBCT is undersampled if the scan time is too short; however, a short scan time is desirable in clinical practice to reduce patient setup time. This dissertation presents the design and optimization of 4D-CBCT to reduce the impact of undersampling artifacts with short scan times. This work measures the impact of undersampling artifacts on the accuracy of target motion measurement under different sampling conditions and for various object sizes and motions. The results provide a minimum scan time such that the target tracking error is less than a specified tolerance. This work also presents new image reconstruction algorithms for reducing undersampling artifacts in undersampled datasets by taking advantage of the assumption that the relevant motion of interest is contained within a volume of interest (VOI). It is shown that the VOI-based reconstruction provides more accurate image intensity than standard reconstruction: in a study designed to simulate target motion, the VOI-based reconstruction produced 43% less least-squares error inside the VOI and 84% less error throughout the image. The VOI-based reconstruction approach can reduce acquisition time and improve image quality in 4D-CBCT.
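As a small illustration of the kind of error metric quoted above (not the dissertation's code), the sketch below computes a relative least-squares error restricted to a VOI mask on synthetic volumes.

```python
# Sketch: relative least-squares error inside a volume-of-interest (VOI) mask,
# evaluated on toy reference and reconstructed volumes.
import numpy as np

rng = np.random.default_rng(3)
reference = rng.random((32, 32, 32))                                  # "ground truth" volume
recon = reference + rng.normal(scale=0.05, size=reference.shape)      # noisy reconstruction

voi = np.zeros(reference.shape, dtype=bool)
voi[8:24, 8:24, 8:24] = True                                          # cubic VOI around the target

def rel_lsq_error(img, ref, mask):
    # sum of squared differences inside the mask, normalized by the reference energy
    return np.sum((img[mask] - ref[mask]) ** 2) / np.sum(ref[mask] ** 2)

print("relative LS error inside VOI:", rel_lsq_error(recon, reference, voi))
print("relative LS error, full image:", rel_lsq_error(recon, reference, np.ones(reference.shape, dtype=bool)))
```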
Abstract:
A limiting factor in the accuracy and precision of U/Pb zircon dates is accurate correction for initial disequilibrium in the 238U and 235U decay chains. The longest-lived, and therefore most abundant, intermediate daughter product in the 235U decay chain is 231Pa (T1/2 = 32.71 ka), and the partitioning behavior of Pa in zircon is not well constrained. Here we report high-precision thermal ionization mass spectrometry (TIMS) U-Pb zircon data from two samples from Ocean Drilling Program (ODP) Hole 735B, which show evidence for incorporation of excess 231Pa during zircon crystallization. The most precise analyses from the two samples have consistent Th-corrected 206Pb/238U dates with weighted means of 11.9325 ± 0.0039 Ma (n = 9) and 11.920 ± 0.011 Ma (n = 4), but distinctly older 207Pb/235U dates that vary from 12.330 ± 0.048 Ma to 12.140 ± 0.044 Ma and from 12.03 ± 0.24 Ma to 12.40 ± 0.27 Ma, respectively. If the excess 207Pb is due to variable initial excess 231Pa, the calculated initial (231Pa)/(235U) activity ratios for the two samples range from 5.6 ± 1.0 to 9.6 ± 1.1 and from 3.5 ± 5.2 to 11.4 ± 5.8. The data from the more precisely dated sample yield estimated DPa(zircon)/DU(zircon) values of 2.2-3.8 and 5.6-9.6, assuming a (231Pa)/(235U) of the melt equal to the global average of recently erupted mid-ocean ridge basaltic glasses or to secular equilibrium, respectively. High-precision ID-TIMS analyses from nine additional samples from Hole 735B and nearby Hole 1105A suggest similar partitioning. The lower range of DPa(zircon)/DU(zircon) is consistent with ion microprobe measurements of 231Pa in zircons from Holocene and Pleistocene rhyolitic eruptions (Schmitt, 2007, doi:10.2138/am.2007.2449; Schmitt, 2011, doi:10.1146/annurev-earth-040610-133330). The data suggest that 231Pa is preferentially incorporated during zircon crystallization over a range of magmatic compositions, and excess initial 231Pa may be more common in zircons than acknowledged. The degree of initial disequilibrium in the 235U decay chain suggested by the data from this study, and by other recent high-precision datasets, leads to resolvable discordance in high-precision dates of Cenozoic to Mesozoic zircons. Minor discordance in zircons of this age may therefore reflect initial excess 231Pa and does not require either inheritance or Pb loss.
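For context, a commonly used form of the initial-intermediate-daughter correction, written here by analogy with the standard 230Th correction for the 238U chain and not necessarily the authors' exact parameterization, relates the measured radiogenic 207Pb/235U to the age t and the initial (231Pa)/(235U) activity ratio:

```latex
% Sketch (assumed standard form, not quoted from the paper): lambda_235 and lambda_231 are
% the 235U and 231Pa decay constants; A_0 is the initial (231Pa)/(235U) activity ratio.
\frac{^{207}\mathrm{Pb}^{*}}{^{235}\mathrm{U}}
  = \left(e^{\lambda_{235} t} - 1\right)
  + \frac{\lambda_{235}}{\lambda_{231}}\left(A_{0} - 1\right)
```

With A_0 > 1 (excess initial 231Pa), the measured 207Pb/235U date is older than the true age, which is the sense of the discordance reported above.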
Abstract:
Geostrophic surface velocities can be derived from the gradients of the mean dynamic topography, the difference between the mean sea surface and the geoid. Therefore, independently observed mean dynamic topography data are valuable input parameters and constraints for ocean circulation models. For a successful fit to observational dynamic topography data, not only is the mean dynamic topography on the particular ocean model grid required, but also information about its inverse covariance matrix. The calculation of the mean dynamic topography from satellite-based gravity field models and altimetric sea surface height measurements, however, is not straightforward. For this purpose, we previously developed an integrated approach to combining these two different observation groups in a consistent way without using the common filter approaches (Becker et al. in J Geodyn 59(60):99-110, 2012, doi:10.1016/j.jog.2011.07.0069; Becker in Konsistente Kombination von Schwerefeld, Altimetrie und hydrographischen Daten zur Modellierung der dynamischen Ozeantopographie, 2012, http://nbn-resolving.de/nbn:de:hbz:5n-29199). Within this combination method, the full spectral range of the observations is considered. Further, it allows the direct determination of the normal equations (i.e., the inverse of the error covariance matrix) of the mean dynamic topography on arbitrary grids, which is one of the requirements for ocean data assimilation. In this paper, we report progress through the selection and improved processing of altimetric data sets. We focus on the preprocessing steps applied to along-track altimetry data from Jason-1 and Envisat to obtain a mean sea surface profile. During this procedure, a rigorous variance propagation is accomplished, so that, for the first time, the full covariance matrix of the mean sea surface is available. The combination of the mean profile and a combined GRACE/GOCE gravity field model yields a mean dynamic topography model for the North Atlantic Ocean that is characterized by a defined set of assumptions. We show that including the geodetically derived mean dynamic topography with its full error structure in a 3D stationary inverse ocean model improves modeled oceanographic features over previous estimates.
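As a toy illustration of the pointwise relationship and error propagation involved (the actual combination described above works with full normal equations over the observations' full spectral range), here is a minimal sketch assuming uncorrelated mean sea surface and geoid errors; all grids and covariances are invented.

```python
# Sketch: mean dynamic topography as MDT = MSS - geoid, with a simple
# covariance propagation and the corresponding normal-equation matrix.
import numpy as np

mss = np.array([45.2, 45.6, 46.1])        # mean sea surface heights, m (toy values)
geoid = np.array([44.9, 45.5, 45.7])      # geoid heights, m (toy values)
C_mss = 0.02**2 * np.eye(3)               # MSS error covariance, m^2
C_geoid = 0.03**2 * np.eye(3)             # geoid error covariance, m^2

mdt = mss - geoid                          # mean dynamic topography
C_mdt = C_mss + C_geoid                    # error propagation (independence assumed)
N_mdt = np.linalg.inv(C_mdt)               # normal-equation (inverse covariance) matrix
print(mdt)
print(np.diag(N_mdt))
```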
Abstract:
A portable Fourier transform spectrometer (FTS), model EM27/SUN, was deployed onboard the research vessel Polarstern to measure the column-average dry air mole fractions of carbon dioxide (XCO2) and methane (XCH4) by means of direct sunlight absorption spectrometry. We report on technical developments as well as on the data calibration and reduction measures required to achieve the targeted accuracy of fractions of a percent in retrieved XCO2 and XCH4 while operating the instrument under field conditions onboard the moving platform during a 6-week cruise on the Atlantic from Cape Town (South Africa, 34° S, 18° E; 5 March 2014) to Bremerhaven (Germany, 54° N, 19° E; 14 April 2014). We demonstrate that our solar tracker typically achieved a tracking precision of better than 0.05° toward the center of the sun throughout the ship cruise, which facilitates accurate XCO2 and XCH4 retrievals even under harsh ambient wind conditions. We define several quality filters that screen spectra, e.g., when the field of view was partially obstructed by ship structures or when the lines of sight crossed the ship exhaust plume. The measurements in clean oceanic air can be used to characterize a spurious air-mass dependency. After the campaign, deployment of the spectrometer alongside the TCCON (Total Carbon Column Observing Network) instrument at Karlsruhe, Germany, allowed for determining a calibration factor that makes the entire campaign record traceable to World Meteorological Organization (WMO) standards. Comparisons to observations of the GOSAT satellite and to concentration fields modeled by the European Centre for Medium-Range Weather Forecasts (ECMWF) Copernicus Atmosphere Monitoring Service (CAMS) demonstrate that the observational setup is well suited to provide validation opportunities above the ocean and along interhemispheric transects.
Abstract:
This dataset presents results from the DFG-funded Arctic-Turbulence-Experiment (ARCTEX-2006) performed by the University of Bayreuth on the island of Svalbard, Norway, during the winter/spring transition of 2006. From May 5 to May 19, 2006, turbulent flux and meteorological measurements were performed on the monitoring field near Ny-Ålesund, at 78°55'24'' N, 11°55'15'' E, Kongsfjord, Svalbard (Spitsbergen), Norway. The ARCTEX-2006 campaign site was located about 200 m southeast of the settlement on flat, snow-covered tundra, 11 m to 14 m above sea level. The permanent sites used for this study consisted of the 10 m meteorological tower of the Alfred Wegener Institute for Polar and Marine Research (AWI), the internationally standardized radiation measurement site of the Baseline Surface Radiation Network (BSRN), the radiosonde launch site, and the AWI tethered balloon launch sites. The temporary sites, set up by the University of Bayreuth, were a 6 m meteorological gradient tower, an eddy-flux measurement complex (EF), and a laser-scintillometer section (SLS). A quality assessment and data correction was applied to detect and eliminate specific measurement errors common in a high-arctic landscape. In addition, the quality-checked sensible heat flux measurements are compared with bulk aerodynamic formulas that are widely used in atmosphere-ocean/land-ice models for polar regions, as described in Ebert and Curry (1993, doi:10.1029/93JC00656) and Launiainen and Cheng (1995). These parameterization approaches allow easy estimation of the turbulent surface fluxes from routine meteorological measurements. The data show: - the role of the intermittency of the turbulent atmospheric fluctuations of momentum and scalars, - the existence of a disturbed vertical temperature profile (sharp inversion layer) close to the surface, - the relevance of possible free-convection events for snow or ice melt in the Arctic spring at Svalbard, and - the relevance of meso-scale atmospheric circulation patterns and air-mass advection for the near-surface turbulent heat exchange in the Arctic spring at Svalbard. Recommendations and improvements regarding the interpretation of eddy-flux and laser-scintillometer data, as well as the arrangement of the instrumentation under distinct polar exchange conditions and (extreme) weather situations, could be derived.
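A minimal sketch of a generic bulk aerodynamic estimate of the sensible heat flux, H = rho * cp * C_H * U * (T_s - T_a), with illustrative values for the transfer coefficient and meteorological inputs; this is not the campaign's data and not the exact Ebert and Curry (1993) or Launiainen and Cheng (1995) formulation, which include stability corrections.

```python
# Sketch: bulk aerodynamic sensible heat flux from routine meteorological variables.
rho = 1.35      # air density over a cold surface, kg m-3 (assumed)
cp = 1004.0     # specific heat of air at constant pressure, J kg-1 K-1
C_H = 1.3e-3    # bulk transfer coefficient for heat (assumed, stability-dependent)
U = 4.0         # wind speed at measurement height, m s-1 (illustrative)
T_s = -8.0      # snow surface temperature, deg C (illustrative)
T_a = -5.0      # air temperature at measurement height, deg C (illustrative)

H = rho * cp * C_H * U * (T_s - T_a)   # W m-2; negative = downward flux (stable case)
print(f"bulk sensible heat flux: {H:.1f} W m-2")
```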
Abstract:
This study focuses on the present-day surface elevation of the Greenland and Antarctic ice sheets. Based on 3 years of CryoSat-2 data acquisition, we derived new digital elevation models (DEMs) as well as elevation change maps and volume change estimates for both ice sheets. Here we present the new DEMs and their corresponding error maps. The accuracy of the derived DEMs for Greenland and Antarctica is similar to that of previous DEMs obtained by satellite-based laser and radar altimeters. Comparisons with ICESat data show that 80% of the CryoSat-2 DEMs have an uncertainty of less than 3 m ± 15 m. The surface elevation change rates between January 2011 and January 2014 are presented for both ice sheets. We compared our results to elevation change rates obtained from ICESat data covering the time period from 2003 to 2009. The comparison reveals that in West Antarctica the volume loss has increased by a factor of 3. It also shows an anomalous thickening in Dronning Maud Land, East Antarctica, which represents a known large-scale accumulation event. This anomaly partly compensates for the observed increased volume loss of the Antarctic Peninsula and West Antarctica. For Greenland we find a volume loss increased by a factor of 2.5 compared to the ICESat period, with large negative elevation changes concentrated at the west and southeast coasts. The combined volume change of Greenland and Antarctica for the observation period is estimated to be -503 ± 107 km³/yr, with Greenland contributing nearly 75% of the total volume change at -375 ± 24 km³/yr.
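As a small illustration of how gridded elevation-change rates translate into a volume change rate (toy numbers, not the CryoSat-2 grids or processing), a minimal sketch:

```python
# Sketch: volume change rate from a map of surface-elevation change rates,
# summing dh/dt over equal-area grid cells.
import numpy as np

dhdt = np.array([[-0.10, -0.05],
                 [ 0.02, -0.20]])        # elevation change rate per cell, m/yr (toy)
cell_area = 25e6                         # grid-cell area, m^2 (5 km x 5 km, assumed)

dvdt_km3 = (dhdt * cell_area).sum() / 1e9  # volume change rate, km^3/yr
print(f"volume change rate: {dvdt_km3:.4f} km^3/yr")
```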
Abstract:
A continuous age model for the brief climate excursion at the Paleocene-Eocene boundary has been constructed by assuming a constant flux of extraterrestrial 3He (3He[ET]) to the seafloor. 3He[ET] measurements from ODP Site 690 provide quantitative evidence for the rapid onset (
Abstract:
We present new high-resolution N isotope records from the Gulf of Tehuantepec and the Nicaragua Basin spanning the last 50-70 ka. The Tehuantepec site is situated within the core of the north subtropical denitrification zone, while the Nicaragua site is at the southern boundary. The δ15N record from Nicaragua shows an 'Antarctic' timing similar to denitrification changes observed off Peru-Chile but is radically different from the northern records. We attribute this to the leakage of isotopically heavy nitrate from the South Pacific oxygen minimum zone (OMZ) into the Nicaragua Basin. The Nicaragua record leads the other eastern tropical North Pacific (ETNP) records by about 1000 years because denitrification peaks in the eastern tropical South Pacific (ETSP) before denitrification starts to increase in the Northern Hemisphere OMZ, i.e., during warming episodes in Antarctica. We find that the influence of the heavy nitrate leakage from the ETSP is still noticeable, although attenuated, in the Gulf of Tehuantepec record, particularly at the end of the Heinrich events, and tends to alter the recording of millennial-timescale denitrification changes in the ETNP. This implies (1) that sedimentary δ15N records from the southern parts of the ETNP cannot be used straightforwardly as a proxy for local denitrification and (2) that denitrification history in the ETNP, like in the Arabian Sea, is synchronous with Greenland temperature changes. These observations reinforce the conclusion that on millennial timescales during the last ice age, denitrification in the ETNP is strongly influenced by climatic variations that originated in the high-latitude North Atlantic region, while commensurate changes in Southern Ocean hydrography more directly, and slightly earlier, affected oxygen concentrations in the ETSP. Furthermore, the δ15N records imply ongoing physical communication across the equator in the shallow subsurface continuously over the last 50-70 ka.