952 results for MULTIVARIATE CALIBRATION
Abstract:
Many multivariate methods that are apparently distinct can be linked by introducing one or more parameters in their definition. Methods that can be linked in this way are correspondence analysis, unweighted or weighted logratio analysis (the latter also known as "spectral mapping"), nonsymmetric correspondence analysis, principal component analysis (with and without logarithmic transformation of the data) and multidimensional scaling. In this presentation I will show how several of these methods, which are frequently used in compositional data analysis, may be linked through parametrizations such as power transformations, linear transformations and convex linear combinations. Since the methods of interest here all lead to visual maps of data, a "movie" can be made in which the linking parameter is allowed to vary in small steps: the results are recalculated "frame by frame" and one can see the smooth change from one method to another. Several of these "movies" will be shown, giving a deeper insight into the similarities and differences between these methods.
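As a rough illustration of the linking idea, the sketch below uses the Box-Cox power transform (x^a - 1)/a, which tends to log x as a -> 0, so recomputing a double-centred SVD map "frame by frame" while a shrinks morphs a PCA-style map into an (unweighted) logratio map. The function name and the centring choices are assumptions for the sketch, not the presentation's exact parametrization.

```python
# One "movie" frame: power-transform the composition, double-centre, and take
# the leading SVD coordinates; as alpha -> 0 this approaches logratio analysis.
import numpy as np

def frame(X, alpha):
    P = X / X.sum()                                            # relative data
    B = (P**alpha - 1.0) / alpha if alpha > 0 else np.log(P)   # Box-Cox transform
    B = B - B.mean(0) - B.mean(1)[:, None] + B.mean()          # double-centre
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U[:, :2] * s[:2]                                    # 2-D map coordinates

X = np.random.rand(30, 5) + 0.1
for a in np.linspace(1.0, 0.01, 5):   # vary the linking parameter in small steps
    coords = frame(X, a)              # recalculate and plot each frame
```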
Abstract:
Catadioptric sensors are combinations of mirrors and lenses designed to obtain a wide field of view. In this paper we propose a new sensor that has omnidirectional viewing ability and also provides depth information about its nearby surroundings. The sensor is based on a conventional camera coupled with a laser emitter and two hyperbolic mirrors. The mathematical formulation and precise specifications of the intrinsic and extrinsic parameters of the sensor are discussed. Our approach overcomes limitations of existing omnidirectional sensors and may eventually lead to reduced production costs.
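For intuition about how such sensors are modelled, here is a hedged sketch of the unified sphere model (Geyer and Daniilidis) for single-viewpoint catadioptric projection, which covers hyperbolic mirrors; it is a generic textbook model, not the paper's specific two-mirror formulation, and the mirror parameter xi and the intrinsics are assumed values.

```python
# Generic catadioptric projection: project onto a unit sphere centred at the
# mirror focus, then apply a perspective projection shifted by xi.
import numpy as np

def unified_projection(Xw, xi=0.9, f=400.0, cx=320.0, cy=240.0):
    Xs = Xw / np.linalg.norm(Xw)      # point on the unit viewing sphere
    x, y, z = Xs
    u = f * x / (z + xi) + cx         # perspective from a centre offset by xi
    v = f * y / (z + xi) + cy
    return np.array([u, v])

print(unified_projection(np.array([0.5, -0.2, 2.0])))
```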
Abstract:
A compositional time series is obtained when a compositional data vector is observed at different points in time. Inherently, then, a compositional time series is a multivariate time series with important constraints on the variables observed at any instance in time. Although this type of data frequently occurs in situations of real practical interest, a trawl through the statistical literature reveals that research in the field is very much in its infancy and that many theoretical and empirical issues still remain to be addressed. Any appropriate statistical methodology for the analysis of compositional time series must take into account the constraints, which are not allowed for by the usual statistical techniques available for analysing multivariate time series. One general approach to analysing compositional time series consists in the application of an initial transform to break the positive and unit-sum constraints, followed by the analysis of the transformed time series using multivariate ARIMA models. In this paper we discuss the use of the additive log-ratio, centred log-ratio and isometric log-ratio transforms. We also present results from an empirical study designed to explore how the selection of the initial transform affects subsequent multivariate ARIMA modelling as well as the quality of the forecasts.
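The three transforms named above are easy to state concretely; a minimal sketch follows, using one standard ilr basis among the several possible choices:

```python
# The additive, centred and isometric log-ratio transforms of one composition;
# a multivariate ARIMA model would then be fitted to the transformed series.
import numpy as np

def alr(x):                      # log(x_i / x_D), i = 1..D-1
    return np.log(x[:-1] / x[-1])

def clr(x):                      # log(x_i / geometric mean of x)
    return np.log(x / np.exp(np.mean(np.log(x))))

def ilr(x):                      # clr followed by an orthonormal (Helmert-type) basis
    D = len(x)
    H = np.linalg.qr(np.vstack([np.ones(D), np.eye(D)[: D - 1]]).T)[0][:, 1:]
    return clr(x) @ H

x = np.array([0.2, 0.5, 0.3])    # positive components summing to one
print(alr(x), clr(x), ilr(x))
```

Breaking the unit-sum constraint this way is what allows the usual unconstrained multivariate ARIMA machinery to be applied.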
Abstract:
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with the generalized additive modeling (GAM) and the empirical logit approaches, and how to select covariates for the calibration model. The performance of the two-part calibration model was compared with that of its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in about a threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model. Moreover, the extent of adjustment for error is influenced by the number and forms of the covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model.
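A minimal sketch of the two-step structure described above, assuming hypothetical column names (ffq, age, recall) and a lognormal amount model; the real calibration additionally involves the variance-mean, GAM and covariate-selection checks:

```python
# Part 1: probability that a 24-hour recall is non-zero (logistic regression).
# Part 2: log amount given consumption (linear regression).
# Calibrated intake = P(consumption) * E[amount | consumption].
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def two_part_calibration(df):
    df = df.copy()
    df["consumed"] = (df["recall"] > 0).astype(int)
    part1 = smf.glm("consumed ~ ffq + age", data=df,
                    family=sm.families.Binomial()).fit()
    consumers = df[df["recall"] > 0].copy()
    consumers["logamt"] = np.log(consumers["recall"])
    part2 = smf.ols("logamt ~ ffq + age", data=consumers).fit()
    amount = np.exp(part2.predict(df) + part2.mse_resid / 2)   # lognormal mean
    return part1.predict(df) * amount                          # calibrated intake

rng = np.random.default_rng(0)
df = pd.DataFrame({"ffq": rng.gamma(2, 50, 500), "age": rng.integers(35, 75, 500)})
df["recall"] = np.where(rng.random(500) < 0.4, 0.0, rng.gamma(2, 40, 500))
print(two_part_calibration(df).head())
```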
Abstract:
BACKGROUND: Prevention of cardiovascular disease (CVD) at the individual level should rely on the assessment of absolute risk using population-specific risk tables. OBJECTIVE: To compare the predictive accuracy of the original and the calibrated SCORE functions regarding 10-year cardiovascular risk in Switzerland. DESIGN: Cross-sectional, population-based study (5773 participants aged 35-74 years). METHODS: The SCORE equation for low-risk countries was calibrated based on the Swiss CVD mortality rates and on the CVD risk factor levels of the study sample. The predicted number of CVD deaths after a 10-year period was computed from the original and the calibrated equations and from the observed cardiovascular mortality for 2003. RESULTS: According to the original and calibrated functions, 16.3% and 15.8% of men and 8.2% and 8.9% of women, respectively, had a 10-year CVD risk ≥5%. The concordance correlation coefficient between the two functions was 0.951 for men and 0.948 for women (both P<0.001). Both risk functions adequately predicted the 10-year cumulative number of CVD deaths: in men, 71 (original) and 74 (calibrated) deaths versus 73 deaths based on the observed CVD mortality rates; in women, 44 (original), 45 (calibrated) and 45 (observed), respectively. Compared with the original function, the calibrated function classified more women and fewer men as high-risk. Moreover, the calibrated function gave better risk estimates among participants aged over 65 years. CONCLUSION: The original SCORE function adequately predicts CVD death in Switzerland, particularly for individuals aged less than 65 years. The calibrated function provides more reliable estimates for older individuals.
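One common recalibration step (not necessarily the exact SCORE procedure) is to rescale the baseline survival so that the mean predicted 10-year risk matches the observed national mortality; a sketch under that assumption, with a hypothetical linear predictor:

```python
# Solve for the baseline survival S0 such that mean(1 - S0^exp(lp)) equals the
# observed 10-year CVD mortality, by bisection (risk is decreasing in S0).
import numpy as np

def recalibrate(lp, observed_risk):
    lo, hi = 1e-6, 1.0 - 1e-6
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if np.mean(1.0 - mid ** np.exp(lp)) > observed_risk:
            lo = mid            # predicted risk too high: raise baseline survival
        else:
            hi = mid
    return 0.5 * (lo + hi)

lp = np.random.default_rng(1).normal(0.0, 0.8, 5773)  # hypothetical b'(x - xbar)
print(recalibrate(lp, observed_risk=0.05))            # recalibrated S0
```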
Abstract:
New zircon U-Pb ages are proposed for late Early and Middle Triassic volcanic ash layers from the Luolou and Baifeng formations (northwestern Guangxi, South China). These ages are based on analyses of single, thermally annealed and chemically abraded zircons. Calibration with ammonoid ages indicates a 250.6 +/- 0.5 Ma age for the early Spathian Tirolites/Columbites beds, a 248.1 +/- 0.4 Ma age for the late Spathian Neopopanoceras haugi Zone, a 246.9 +/- 0.4 Ma age for the early middle Anisian Acrochordiceras hyatti Zone, and a 244.6 +/- 0.5 Ma age for the late middle Anisian Balatonites shoshonensis Zone. The new dates and previously published U-Pb ages indicate a duration of ca. 3 my for the Spathian, and minimal durations of 4.5 +/- 0.6 my for the Early Triassic and of 6.6 +0.7/-0.9 my for the Anisian. The new Spathian dates are in better agreement with a 252.6 +/- 0.2 Ma age than with a 251.4 +/- 0.3 Ma age for the Permian-Triassic boundary. These dates also highlight the extremely uneven durations of the four Early Triassic substages (Griesbachian, Dienerian, Smithian, and Spathian), of which the Spathian alone exceeds half of the duration of the entire Early Triassic. The simplistic assumption of equal duration for the four Early Triassic subdivisions is no longer tenable for the reconstruction of recovery patterns following the end-Permian mass extinction.
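The duration arithmetic follows from differencing the boundary ages, with independent uncertainties combined in quadrature (the published uncertainties may also fold in systematic terms); under that assumption:

\[
\Delta t = t_{\text{old}} - t_{\text{young}}, \qquad
\sigma_{\Delta t} = \sqrt{\sigma_{\text{old}}^{2} + \sigma_{\text{young}}^{2}},
\]
\[
(252.6 \pm 0.2)\,\text{Ma} - (248.1 \pm 0.4)\,\text{Ma} \approx 4.5 \pm 0.45\,\text{my},
\]

a minimal duration, since the younger date lies within the Spathian rather than at its top, consistent with the minimal Early Triassic duration quoted above.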
Abstract:
Measurement of the three-dimensional (3D) knee joint angle outside a laboratory is of benefit in clinical examination and therapeutic treatment comparison. Although several motion capture devices exist, there is a need for an ambulatory system that could be used in routine practice. To date, inertial measurement units (IMUs) have proven suitable for unconstrained measurement of knee joint differential orientation. Nevertheless, this differential orientation should be converted into three reliable and clinically interpretable angles. Thus, the aim of this study was to propose a new calibration procedure adapted to the joint coordinate system (JCS) that requires only IMU data. The repeatability of the calibration procedure, as well as the errors in the measurement of the 3D knee angle during gait in comparison to a reference system, were assessed on eight healthy subjects. The new procedure, relying on active and passive movements, showed high repeatability of the mean values (offset < 1 degree) and angular patterns (SD < 0.3 degrees and CMC > 0.9). In comparison to the reference system, this functional procedure showed high precision (SD < 2 degrees and CC > 0.75) and moderate accuracy (between 4.0 degrees and 8.1 degrees) for the three knee angles. The combination of the inertial-based system with the functional calibration procedure proposed here results in a promising tool for the measurement of 3D knee joint angle. Moreover, this method could be adapted to measure other complex joints, such as the ankle or elbow.
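The core computation, converting two sensor orientations into three clinically interpretable angles, can be sketched as follows; the ZXY Euler sequence stands in for a knee JCS (flexion/extension, ab/adduction, internal/external rotation), and the paper's functional calibration would first align each sensor frame with its segment axes:

```python
# Differential orientation between thigh- and shank-mounted IMUs, decomposed
# into a JCS-like angle triplet (degrees).
import numpy as np
from scipy.spatial.transform import Rotation as R

def knee_angles(q_thigh, q_shank):
    r_rel = R.from_quat(q_thigh).inv() * R.from_quat(q_shank)
    return r_rel.as_euler("ZXY", degrees=True)

q_thigh = R.from_euler("z", 10, degrees=True).as_quat()   # (x, y, z, w)
q_shank = R.from_euler("z", 55, degrees=True).as_quat()
print(knee_angles(q_thigh, q_shank))                      # ~45 deg flexion
```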
Abstract:
A validation study has been performed using the Soil and Water Assessment Tool (SWAT) model with data collected for the Upper Maquoketa River Watershed (UMRW), which drains over 16,000 ha in northeast Iowa. This validation assessment builds on a previous study with nested modeling for the UMRW that required both the Agricultural Policy EXtender (APEX) model and SWAT. In the nested modeling approach, edge-of-field flows and pollutant load estimates were generated for manure application fields with APEX and were then routed to the watershed outlet in SWAT, along with the flows and pollutant loadings estimated for the rest of the watershed. In the current study, the entire UMRW cropland area was simulated in SWAT, which required translating the APEX subareas into SWAT hydrologic response units (HRUs). Calibration and validation of the SWAT output were performed by comparing predicted flow and NO3-N loadings with corresponding in-stream measurements at the watershed outlet from 1999 to 2001. Annual stream flows measured at the watershed outlet were greatly under-predicted when precipitation data collected within the watershed during the 1999-2001 period were used to drive SWAT. Selection of alternative climate data resulted in greatly improved average annual stream flow predictions, and also relatively strong r2 values of 0.73 and 0.72 for the predicted average monthly flows and NO3-N loads, respectively. The impact of the alternative precipitation data shows that as average annual precipitation increases by 19%, the relative change in average annual streamflow is about 55%. In summary, the results of this study show that SWAT can replicate measured trends for this watershed and that climate inputs are very important for validating SWAT and other water quality models.
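The r2 statistic quoted above, and the Nash-Sutcliffe efficiency commonly reported next to it in SWAT studies, are straightforward to compute; a sketch with hypothetical monthly values:

```python
# Goodness-of-fit between measured and SWAT-predicted monthly series.
import numpy as np

def r_squared(obs, sim):
    return np.corrcoef(obs, sim)[0, 1] ** 2

def nash_sutcliffe(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([12.0, 30.0, 55.0, 41.0, 18.0, 9.0])  # measured monthly flow
sim = np.array([10.0, 26.0, 60.0, 38.0, 20.0, 7.0])  # predicted monthly flow
print(r_squared(obs, sim), nash_sutcliffe(obs, sim))
```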
Abstract:
This paper points out an empirical puzzle that arises when an RBC economy with a job matching function is used to model unemployment. The standard model can generate sufficiently large cyclical fluctuations in unemployment, or a sufficiently small response of unemployment to labor market policies, but it cannot do both. Variable search and separation, finite UI benefit duration, efficiency wages, and capital all fail to resolve this puzzle. However, both sticky wages and match-specific productivity shocks help the model reproduce the stylized facts: both make the firm's flow of surplus more procyclical, thus making hiring more procyclical too.
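The matching function at the heart of such models is typically Cobb-Douglas; a sketch with assumed (not paper-calibrated) parameter values shows how the job-finding rate co-moves with vacancies:

```python
# Matches m = mu * u^alpha * v^(1-alpha); f = m/u is the job-finding rate and
# q = m/v the vacancy-filling rate, so hiring rises with vacancy postings.
def matching(u, v, mu=0.4, alpha=0.5):
    m = mu * u**alpha * v**(1 - alpha)
    return m / u, m / v

for v in (0.02, 0.04, 0.08):
    f, q = matching(u=0.06, v=v)
    print(f"v={v:.2f}: job-finding={f:.3f}, vacancy-filling={q:.3f}")
```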
Abstract:
The human brainstem is a densely packed, complex, but highly organised structure. It not only serves as a conduit for long-projecting axons conveying motor and sensory information, but is also the location of multiple primary nuclei that control or modulate a vast array of functions, including homeostasis, consciousness, locomotion, and reflexive and emotive behaviours. Despite its importance, both for understanding normal brain function and for understanding neurodegenerative processes, it remains a sparsely studied structure in the neuroimaging literature. In part, this is due to the difficulties of imaging the internal architecture of the brainstem in vivo in a reliable and repeatable fashion. A modified multivariate mixture of Gaussians (mmMoG) was applied to the problem of multichannel tissue segmentation. By using quantitative magnetisation transfer and proton density maps acquired at 3 T with 0.8 mm isotropic resolution, tissue probability maps for four distinct tissue classes within the human brainstem were created. These were compared against an ex vivo fixated human brain, imaged at 0.5 mm, with excellent anatomical correspondence. The probability maps were used within SPM8 to create accurate individual-subject segmentations, which were then used for further quantitative analysis. As an example, brainstem asymmetries were assessed across 34 right-handed individuals using voxel-based morphometry (VBM) and tensor-based morphometry (TBM), demonstrating highly significant differences within localised regions that corresponded to motor and vocalisation networks. This method may have important implications for future research into MRI biomarkers of preclinical neurodegenerative diseases such as Parkinson's disease.
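As a stand-in for the mmMoG idea (the actual model is a modified mixture within the SPM8 framework), a stock Gaussian mixture over two quantitative channels already conveys how per-voxel tissue probability maps arise:

```python
# Fit a 4-class Gaussian mixture to paired (MT, PD) voxel intensities and read
# the posterior class memberships as tissue probability maps.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
means = np.array([[0.2, 0.8], [0.5, 0.6], [0.7, 0.3], [0.9, 0.9]])  # fake classes
X = np.vstack([m + 0.05 * rng.standard_normal((500, 2)) for m in means])

gmm = GaussianMixture(n_components=4, covariance_type="full").fit(X)
prob_maps = gmm.predict_proba(X)   # shape: (n_voxels, 4 tissue classes)
print(prob_maps.shape)
```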
Abstract:
We consider the application of normal-theory methods to the estimation and testing of a general type of multivariate regression model with errors-in-variables, in the case where various data sets are merged into a single analysis and the observable variables may deviate from normality. The various samples to be merged can differ in the set of observable variables available. We show that there is a convenient way to parameterize the model so that, despite the possible non-normality of the data, normal-theory methods yield correct inferences for the parameters of interest and for the goodness-of-fit test. The theory described encompasses both the functional and structural model cases, and can be implemented using standard software for structural equation models, such as LISREL, EQS and LISCOMP, among others. An illustration with Monte Carlo data is presented.
Abstract:
Using a suitable Hull and White type formula, we develop a methodology to obtain a second-order approximation to the implied volatility for very short maturities. Using this approximation, we accurately calibrate the full set of parameters of the Heston model. One of the reasons that makes our calibration for short maturities so accurate is that we also take into account the term structure for large maturities. We may say that the calibration is not "memoryless", in the sense that the option's behavior far away from maturity does influence calibration when the option gets close to expiration. Our results provide a way to perform a quick calibration of a closed-form approximation to vanilla options that can then be used to price exotic derivatives. The methodology is simple, accurate, fast, and requires minimal computational cost.
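The calibration loop itself is a small least-squares problem once a closed-form approximation is in hand; the quadratic-in-log-moneyness `approx_iv` below is only a placeholder for the paper's second-order expansion, which is not reproduced here, and the smile data are hypothetical:

```python
# Fit (sigma0, rho, nu) to short-maturity implied vols with a closed-form
# approximation, as a stand-in for the paper's Hull-White-type expansion.
import numpy as np
from scipy.optimize import least_squares

def approx_iv(k, sigma0, rho, nu):
    return sigma0 + 0.5 * rho * nu * k + nu**2 * k**2 / (6.0 * max(sigma0, 1e-8))

def calibrate(k, market_iv):
    resid = lambda p: approx_iv(k, *p) - market_iv
    fit = least_squares(resid, x0=[0.2, -0.5, 0.5],
                        bounds=([1e-4, -0.999, 1e-4], [2.0, 0.999, 5.0]))
    return fit.x

k = np.linspace(-0.1, 0.1, 9)               # log-moneyness grid
market_iv = 0.21 - 0.15 * k + 0.4 * k**2    # hypothetical smile
print(calibrate(k, market_iv))              # sigma0, rho, nu
```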
Abstract:
Standard methods for the analysis of linear latent variable models often rely on the assumption that the vector of observed variables is normally distributed. This normality assumption (NA) plays a crucial role in assessing optimality of estimates, in computing standard errors, and in designing an asymptotic chi-square goodness-of-fit test. The asymptotic validity of NA inferences when the data deviate from normality has been called asymptotic robustness. In the present paper we extend previous work on asymptotic robustness to a general context of multi-sample analysis of linear latent variable models, with a latent component of the model allowed to be fixed across (hypothetical) sample replications, and with the asymptotic covariance matrix of the sample moments not necessarily finite. We will show that, under certain conditions, the matrix $\Gamma$ of asymptotic variances of the analyzed sample moments can be substituted by a matrix $\Omega$ that is a function only of the cross-product moments of the observed variables. The main advantage of this is that inferences based on $\Omega$ are readily available in standard software for covariance structure analysis, and do not require computing sample fourth-order moments. An illustration with simulated data in the context of regression with errors in variables will be presented.
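For concreteness, one standard normal-theory form of the substitution (notation assumed here, not taken verbatim from the paper): with $s = \mathrm{vech}(S)$ the vector of sample moments and $\sqrt{n}\,(s - \sigma(\theta)) \xrightarrow{d} N(0, \Gamma)$, the matrix

\[
\Omega \;=\; 2\, D_p^{+} (\Sigma \otimes \Sigma)\, D_p^{+\prime}
\]

(with $D_p^{+}$ the Moore-Penrose inverse of the duplication matrix) depends only on the second-order cross-product moments $\Sigma$, so standard errors and tests based on $\Omega$ avoid sample fourth-order moments entirely.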