884 results for Multivariate T Components


Relevance:

100.00%

Abstract:

We propose an alternative approach to obtaining a permanent equilibrium exchange rate (PEER), based on an unobserved components (UC) model. This approach offers a number of advantages over the conventional cointegration-based PEER. Firstly, we do not rely on the prerequisite that cointegration has to be found between the real exchange rate and macroeconomic fundamentals to obtain non-spurious long-run relationships and the PEER. Secondly, the impact that the permanent and transitory components of the macroeconomic fundamentals have on the real exchange rate can be modelled separately in the UC model. This is important for variables where the long- and short-run effects may drive the real exchange rate in opposite directions, such as the relative government expenditure ratio. We also demonstrate that our proposed exchange rate models have good out-of-sample forecasting properties. Our approach would be a useful technique for central banks to estimate the equilibrium exchange rate and to forecast the long-run movements of the exchange rate.
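
As a rough illustration of the decomposition described above, the sketch below fits a univariate unobserved-components model with statsmodels and reads the smoothed level off as a PEER-style permanent component. The series q is synthetic placeholder data, and the paper's actual model, which links the components to macroeconomic fundamentals, is considerably richer.

```python
import numpy as np
import statsmodels.api as sm

# Placeholder series standing in for a log real exchange rate.
rng = np.random.default_rng(0)
q = np.cumsum(0.01 * rng.standard_normal(200)) + 0.2 * np.sin(np.linspace(0, 12, 200))

# Local level (random walk) + stochastic cycle: the level plays the role of
# the permanent component (the PEER), the cycle the transitory component.
mod = sm.tsa.UnobservedComponents(q, level="local level",
                                  cycle=True, stochastic_cycle=True)
res = mod.fit(disp=False)

peer = res.level.smoothed  # smoothed permanent component
gap = res.cycle.smoothed   # smoothed transitory deviation from the PEER
```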

Relevance:

60.00%

Abstract:

Normal mixture models are being increasingly used to model the distributions of a wide variety of random phenomena and to cluster sets of continuous multivariate data. However, for a set of data containing a group or groups of observations with longer-than-normal tails or atypical observations, the use of normal components may unduly affect the fit of the mixture model. In this paper, we consider a more robust approach by modelling the data by a mixture of t distributions. The use of the ECM algorithm to fit this t mixture model is described, and examples of its use are given in the context of clustering multivariate data in the presence of atypical observations in the form of background noise.
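
A minimal sketch of the kind of ECM fit described, assuming the degrees of freedom nu are held fixed per component (the full algorithm also estimates nu). The weights u, driven by Mahalanobis distance, downweight atypical points, which is what makes the t mixture robust.

```python
import numpy as np
from scipy.stats import multivariate_t

def fit_t_mixture(X, k, nu=4.0, n_iter=200, seed=0):
    """ECM for a k-component mixture of multivariate t distributions
    (degrees of freedom nu held fixed for brevity)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    pi = np.full(k, 1.0 / k)
    mu = X[rng.choice(n, size=k, replace=False)].copy()
    Sigma = np.array([np.cov(X.T) for _ in range(k)])
    for _ in range(n_iter):
        # E-step: responsibilities tau (posterior component probabilities).
        log_tau = np.column_stack(
            [np.log(pi[j]) + multivariate_t(mu[j], Sigma[j], df=nu).logpdf(X)
             for j in range(k)])
        log_tau -= log_tau.max(axis=1, keepdims=True)
        tau = np.exp(log_tau)
        tau /= tau.sum(axis=1, keepdims=True)
        # CM-steps: weights u = (nu + p) / (nu + d2) shrink the influence of
        # points far from a component's centre.
        for j in range(k):
            diff = X - mu[j]
            d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(Sigma[j]), diff)
            u = (nu + p) / (nu + d2)
            w = tau[:, j] * u
            mu[j] = w @ X / w.sum()
            diff = X - mu[j]
            Sigma[j] = (w[:, None] * diff).T @ diff / tau[:, j].sum()
        pi = tau.mean(axis=0)
    return pi, mu, Sigma, tau
```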

Relevance:

60.00%

Abstract:

An ongoing controversy in Amazonian palaeoecology is the manner in which Amazonian rainforest communities have responded to environmental change over the last glacial–interglacial cycle. Much of this controversy results from an inability to identify the floristic heterogeneity exhibited by rainforest communities within fossil pollen records. We apply multivariate (Principal Components Analysis) and classification (Unweighted Pair Group with Arithmetic Mean Agglomerative Classification) techniques to floral-biometric, modern pollen trap and lake sediment pollen data situated within different rainforest communities in the tropical lowlands of Amazonian Bolivia. Modern pollen rain analyses from artificial pollen traps show that evergreen terra firme (well-drained), evergreen terra firme liana, evergreen seasonally inundated, and evergreen riparian rainforests may be readily differentiated, floristically and palynologically. Analogue matching techniques, based on Euclidean distance measures, are employed to compare these pollen signatures with surface sediment pollen assemblages from five lakes: Laguna Bella Vista, Laguna Chaplin, and Laguna Huachi situated within the Madeira-Tapajós moist forest ecoregion, and Laguna Isirere and Laguna Loma Suarez, which are situated within forest patches in the Beni savanna ecoregion. The same numerical techniques are used to compare rainforest pollen trap signatures with the fossil pollen record of Laguna Chaplin.
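
For readers unfamiliar with the named techniques: UPGMA is average-linkage agglomerative clustering, and analogue matching under Euclidean distance amounts to nearest-neighbour comparison of pollen assemblages. A generic sketch with hypothetical arrays (traps, lakes), not the authors' data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import cdist
from sklearn.decomposition import PCA

# traps: (n_traps, n_taxa) pollen percentages from modern pollen traps;
# lakes: (n_lakes, n_taxa) surface-sediment assemblages. Placeholder data.
traps = np.random.default_rng(1).random((20, 30)) * 100
lakes = np.random.default_rng(2).random((5, 30)) * 100

scores = PCA(n_components=2).fit_transform(traps)    # ordination of trap samples
groups = fcluster(linkage(traps, method="average"),  # UPGMA = average linkage
                  t=4, criterion="maxclust")

# Analogue matching: for each lake, the trap (hence forest type) whose
# assemblage is closest in Euclidean distance.
best_analogue = cdist(lakes, traps).argmin(axis=1)
```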

Relevance:

60.00%

Abstract:

Background: Using univariate and multivariate variance components linkage analysis methods, we studied possible genotype × age interaction in cardiovascular phenotypes related to the aging process, using data from the Framingham Heart Study.

Results: We found evidence for genotype × age interaction for fasting glucose and systolic blood pressure.

Conclusions: There is polygenic genotype × age interaction for fasting glucose and systolic blood pressure, and quantitative trait locus × age interaction for a linkage signal for systolic blood pressure phenotypes located on chromosome 17 at 67 cM.
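
In the standard variance-components framework, genotype × age interaction of this kind is usually parameterized through age-dependent genetic parameters. A common formulation (a sketch of the general approach, not necessarily the authors' exact model) writes the genetic covariance between relatives i and j measured at ages a_i and a_j as

```latex
\operatorname{Cov}(g_i, g_j) \;=\; 2\phi_{ij}\,\sigma_g(a_i)\,\sigma_g(a_j)\,\rho_g(a_i, a_j)
```

where \phi_{ij} is the kinship coefficient. Interaction is indicated when the genetic standard deviation \sigma_g varies with age or when the genetic correlation \rho_g(a_i, a_j) falls below 1; the no-interaction null model constrains \sigma_g to be constant and \rho_g \equiv 1.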

Relevance:

40.00%

Abstract:

In this article, we evaluate the performance of the T2 chart based on principal components (PC chart) and of the simultaneous univariate control charts based on the original variables (SU X̄ charts) or on the principal components (SUPC charts). The main reason to consider the PC chart is dimensionality reduction. However, depending on the disturbance and on the way the original variables are related, the chart is very slow to signal, except when all variables are negatively correlated and the principal component is wisely selected. Comparing the SU X̄, SUPC and T2 charts, we conclude that the SU X̄ charts (SUPC charts) have better overall performance when the variables are positively (negatively) correlated. We also derive the expression for the power of two S2 charts designed for monitoring the covariance matrix. These joint S2 charts are, in the majority of cases, more efficient than the generalized variance |S| chart.
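
As a sketch of how a PC-based T2 statistic can be computed (a generic construction, not the authors' exact charts): project each observation onto the first k in-control principal components and sum the squared, variance-scaled scores; with known parameters, the control limit is a chi-square quantile.

```python
import numpy as np
from scipy.stats import chi2

def pc_t2(X, mu0, Sigma0, k, alpha=0.005):
    """T2-type statistic from the first k principal components of the
    in-control covariance Sigma0, plus its upper control limit."""
    vals, vecs = np.linalg.eigh(Sigma0)
    order = np.argsort(vals)[::-1]        # eigenvalues, largest first
    vals, vecs = vals[order], vecs[:, order]
    scores = (X - mu0) @ vecs[:, :k]      # PC scores of each observation
    t2 = (scores**2 / vals[:k]).sum(axis=1)
    return t2, chi2.ppf(1 - alpha, df=k)  # UCL for known parameters
```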

Relevance:

40.00%

Abstract:

Two contrasting multivariate statistical methods, viz. principal components analysis (PCA) and cluster analysis, were applied to the study of neuropathological variation between cases of Alzheimer's disease (AD). To compare the two methods, 78 cases of AD were analyzed, each characterised by measurements of 47 neuropathological variables. Both methods of analysis revealed significant variation between AD cases, related primarily to differences in the distribution and abundance of senile plaques (SP) and neurofibrillary tangles (NFT) in the brain. Cluster analysis classified the majority of AD cases into five groups which could represent subtypes of AD. However, PCA suggested that variation between cases was more continuous, with no distinct subtypes. Hence, PCA may be a more appropriate method than cluster analysis for studying neuropathological variation between AD cases.
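
A generic way to probe the same question (whether between-case variation is clustered or continuous) is to compare cluster assignments against an ordination of the cases: silhouette values near zero suggest the "clusters" are slices of a continuum. This is an illustrative sketch with placeholder data and an arbitrary linkage choice, not the authors' analysis.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score

# cases: (n_cases, n_variables) neuropathological measurements (placeholder).
cases = np.random.default_rng(0).random((78, 47))

labels = fcluster(linkage(cases, method="ward"), t=5, criterion="maxclust")
scores = PCA(n_components=2).fit_transform(cases)  # ordination for inspection

# Near-zero silhouette: the five "groups" may just partition a continuum.
print(silhouette_score(cases, labels))
```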

Relevance:

30.00%

Abstract:

In this work, pyrolysis-molecular beam mass spectrometry analysis coupled with principal components analysis and ¹³C-labeled tetramethylammonium hydroxide thermochemolysis were used to study lignin oxidation, depolymerization, and demethylation of spruce wood treated by biomimetic oxidative systems. Neat Fenton and chelator-mediated Fenton reaction (CMFR) systems as well as cellulosic enzyme treatments were used to mimic the nonenzymatic process involved in wood brown-rot biodegradation. The results suggest that compared with enzymatic processes, Fenton-based treatment more readily opens the structure of the lignocellulosic matrix, freeing cellulose fibrils from the matrix. The results demonstrate that, under the current treatment conditions, Fenton and CMFR treatment cause limited demethoxylation of lignin in the insoluble wood residue. However, analysis of a water-extractable fraction revealed considerable soluble lignin residue structures that had undergone side chain oxidation as well as demethoxylation upon CMFR treatment. This research has implications for our understanding of nonenzymatic degradation of wood and the diffusion of CMFR agents in the wood cell wall during fungal degradation processes.

Relevance:

30.00%

Abstract:

As part of a large ongoing project, the Memory, Attention and Problem Solving (MAPS) study, we investigated whether genetic variability explains some of the variance in psychophysiological correlates of brain function, namely the P3 and SW components of event-related potentials (ERPs). These ERP measures are fine-grained time recordings of brain processes and, because they reflect fundamental cognitive processing, provide a unique window on the millisecond-to-millisecond transactions taking place in the human brain at the cognitive level. The extent to which the variance in P3 and SW components is influenced by genetic factors was examined in 350 identical and nonidentical twin pairs aged 16 years. ERPs were recorded from 15 scalp electrodes during the performance of a visuospatial delayed response task that engages working memory. Multivariate genetic analyses using Mx were used to estimate genetic and environmental influences on individual differences in brain functioning and to identify putative genetic factors common to the ERP measures and psychometric IQ. For each of the ERP measures, correlation among electrode sites was high, a spatial pattern was evident, and a large part of the genetic variation in the ERPs appeared to be mediated by a common genetic factor. Moderate within-pair concordance in MZ pairs was found for all ERP measures, with higher correlations for P3 than SW, and the MZ twin pair correlations were approximately twice the DZ correlations, suggesting a genetic influence. Correlations between ERP measures and psychometric IQ were found and, although moderately low, were evident across electrode sites. The analyses show that the ERP components P3 and SW are promising phenotypes of the neuroelectrical activity of the brain and have the potential to be used in linkage and association analysis in the search for QTLs influencing cognitive function.
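
The "MZ correlations roughly twice DZ correlations" pattern maps onto the classical Falconer decomposition, which the full Mx model-fitting generalizes. As a quick back-of-the-envelope version:

```latex
h^2 \approx 2\,(r_{MZ} - r_{DZ}), \qquad
c^2 \approx 2\,r_{DZ} - r_{MZ}, \qquad
e^2 \approx 1 - r_{MZ}
```

Here h^2 is the additive genetic share of variance, c^2 the shared-environment share and e^2 the unique-environment share; r_MZ ≈ 2 r_DZ implies c^2 ≈ 0, consistent with the genetic interpretation above.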

Relevance:

30.00%

Abstract:

Fault detection and isolation (FDI) are important steps in the monitoring and supervision of industrial processes. Biological wastewater treatment (WWT) plants are difficult to model, and hence to monitor, because of the complexity of the biological reactions and because plant influent and disturbances are highly variable and/or unmeasured. Multivariate statistical models have been developed for a wide variety of situations over the past few decades, proving successful in many applications. In this paper we develop a new monitoring algorithm based on Principal Components Analysis (PCA). It can be seen equivalently as making Multiscale PCA (MSPCA) adaptive, or as a multiscale decomposition of adaptive PCA. Adaptive Multiscale PCA (AdMSPCA) exploits the changing multivariate relationships between variables at different time-scales. Adaptation of scale PCA models over time permits them to follow the evolution of the process, inputs or disturbances. Performance of AdMSPCA and adaptive PCA on a real WWT data set is compared and contrasted. The most significant difference observed was the ability of AdMSPCA to adapt to a much wider range of changes. This was mainly due to the flexibility afforded by allowing each scale model to adapt whenever it did not signal an abnormal event at that scale. Relative detection speeds were examined only summarily, but seemed to depend on the characteristics of the faults/disturbances. The results of the algorithms were similar for sudden changes, but AdMSPCA appeared more sensitive to slower changes.
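
A static (non-adaptive) sketch of the multiscale idea, assuming PyWavelets: each variable is wavelet-decomposed, a PCA model is fitted per scale, and a T2-type statistic is monitored at each scale. AdMSPCA additionally updates each scale's model recursively whenever that scale shows no abnormality, which this sketch omits.

```python
import numpy as np
import pywt

def multiscale_t2(X, wavelet="db4", level=3, n_pc=2):
    """Per-scale Hotelling-type T2 series from a wavelet decomposition of
    each variable (static MSPCA; AdMSPCA's adaptive updating is omitted)."""
    n, m = X.shape
    per_var = [pywt.wavedec(X[:, j], wavelet, level=level) for j in range(m)]
    t2 = {}
    for s in range(level + 1):
        # Stack the scale-s coefficients of all variables into one matrix.
        C = np.column_stack([per_var[j][s] for j in range(m)])
        C = C - C.mean(axis=0)
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        scores = C @ Vt[:n_pc].T
        var = S[:n_pc] ** 2 / (len(C) - 1)
        t2[s] = (scores**2 / var).sum(axis=1)  # one T2 series per scale
    return t2
```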

Relevance:

30.00%

Abstract:

In the current context of serious climate change, where an increased frequency of extreme events can lengthen the periods prone to high-intensity forest fires, the National Forest Authority regularly implements, in several Portuguese forest areas, a set of measures to control the amount of available fuel mass (PNDFCI, 2008). In the present work we present a preliminary analysis of the consequences of prescribed fire measures, used to control fuel mass, for soil recovery, in particular the soil's water retention capacity, organic matter content, pH and iron content. This work is part of a larger study (Meira-Castro, 2009a; Meira-Castro, 2009b). Given the established practice for data collection, embodied in multidimensional matrices of n columns (variables under analysis) by p rows (areas sampled at different depths), and the quantitative nature of the data in this study, we chose a methodological approach based on multivariate statistical analysis, in particular Principal Component Analysis (PCA) (Góis, 2004). The experiments were carried out on a soil cover over a natural site of andalusitic schist in Gramelas, Caminha, NW Portugal, which had remained free of prescribed burning for four years and was subjected to prescribed fire in March 2008. Soil samples were collected from five different plots at six different time periods. The adopted methodology allowed us to identify the most relevant relational structures within the n variables, within the p samples, and in both sets simultaneously (Garcia-Pereira, 1990). Consequently, in addition to the traditional PCA outputs, we analyzed the influence of both sampling depth and geomorphological environment on the behavior of all variables involved.

Relevance:

30.00%

Abstract:

Univariate statistical control charts, such as the Shewhart chart, do not satisfy the requirements for process monitoring on a high-volume automated fuel cell manufacturing line, because of the number of variables that require monitoring. With many univariate charts running in parallel on a high-volume process, elevated false alarm rates become a problem. Multivariate statistical methods are discussed as an alternative for process monitoring and control. The research presented was conducted on a manufacturing line which evaluates the performance of a fuel cell. It has three stages of production assembly that contribute to the final product performance. Product performance is assessed by power and energy measurements taken at various time points throughout the discharge testing of the fuel cell. The multivariate techniques identified in the literature review are evaluated using individual and batch observations. Techniques using multivariate control charts based on Hotelling's T2 are compared to other multivariate methods, such as Principal Components Analysis (PCA); the latter was identified as the most suitable method. Control charts such as scores, T2 and DModX charts are constructed from the PCA model. Diagnostic procedures using contribution plots, for out-of-control points detected with these control charts, are also discussed; these plots enable the investigator to perform root cause analysis. Multivariate batch techniques are compared to the individual observations typically seen on continuous processes. Recommendations for the introduction of multivariate techniques appropriate for most high-volume processes are also covered.
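
A sketch of the contribution-plot idea for a PCA residual (DModX-style) alarm, under generic assumptions rather than this study's exact charts: the squared residual of each variable, after projecting a new observation onto the PCA model, shows which variables drive an out-of-control signal.

```python
import numpy as np

def spe_contributions(x_new, mean, loadings):
    """Per-variable contributions to the squared prediction error (SPE) of a
    new observation, given the in-control mean and the PCA loading matrix
    (columns = retained principal components)."""
    xc = x_new - mean
    x_hat = loadings @ (loadings.T @ xc)  # projection onto the PCA model
    resid = xc - x_hat
    return resid**2   # large entries point at the suspect variables
```

Analogous per-variable decompositions exist for T2; plotting these terms is what lets the investigator trace an alarm back to a root cause.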

Relevance:

30.00%

Abstract:

Background: Researchers often seek methods for selecting homogeneous groups of animals in experimental studies, since homogeneity is an indispensable prerequisite for the randomization of treatments. The lack of robust methods that comply with statistical and biological principles leads researchers to use empirical or subjective methods, which can influence their results. Objective: To develop a multivariate statistical model for the selection of a homogeneous group of animals for experimental research, and to produce a computational package implementing it. Methods: The set of echocardiographic data of 115 male Wistar rats with supravalvular aortic stenosis (AoS) was used as an example for model development. Initially, the data were standardized to make them dimensionless. Then, the covariance matrix of the set was submitted to principal components analysis (PCA), aiming to reduce the parametric space while retaining the relevant variability. That technique established a new Cartesian system into which the animals were allocated, and finally a confidence region (ellipsoid) was built for the profile of the animals' homogeneous responses. Animals located inside the ellipsoid were considered as belonging to the homogeneous batch; those outside were considered spurious. Results: The PCA established eight descriptive axes that accounted for 88.71% of the accumulated variance of the data set. The allocation of the animals in the new system and the construction of the confidence region identified six spurious animals, leaving a homogeneous batch of 109 animals. Conclusion: The biometric criterion presented proved effective, because it considers the animal as a whole, jointly analyzing all measured parameters, while keeping the discard rate small.
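
A minimal sketch of the selection pipeline as described, with the confidence region taken to be a chi-square ellipsoid in the retained PC space (the paper's exact construction may differ):

```python
import numpy as np
from scipy.stats import chi2

def homogeneous_batch(X, var_keep=0.8871, alpha=0.95):
    """Standardize, run PCA, keep enough axes to cover var_keep of the
    variance, and flag animals outside the chi-square confidence ellipsoid."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # dimensionless data
    vals, vecs = np.linalg.eigh(np.cov(Z.T))
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), var_keep)) + 1
    scores = Z @ vecs[:, :k]
    d2 = (scores**2 / vals[:k]).sum(axis=1)  # squared Mahalanobis distance
    return d2 <= chi2.ppf(alpha, df=k)       # True = inside the ellipsoid
```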

Relevance:

30.00%

Abstract:

Introduction: The Charlson index (Charlson, 1987) is a commonly used comorbidity index in outcome studies. Still, the use of different weights makes its calculation cumbersome, while the sum of its components (comorbidities) is easier to compute. In this study, we assessed the effects of 1) the Charlson index adapted for the Swiss population and 2) the sum of its components (number of comorbidities, maximum 15) on a) in-hospital death and b) cost of hospitalization. Methods: Anonymized data were obtained from the administrative database of the department of internal medicine of the Lausanne University Hospital (CHUV). All hospitalizations of adult (>=18 years) patients occurring between 2003 and 2011 were included. For each hospitalization, the Charlson index and the number of comorbidities were calculated. Analyses were conducted using Stata. Results: Data from 32,741 hospitalizations occurring between 2003 and 2011 were analyzed. On bivariate analysis, both the Charlson index and the number of comorbidities were significantly and positively associated with in-hospital death. Conversely, multivariate adjustment for age, gender and calendar year using Cox regression showed that the association was no longer significant for the number of comorbidities (table). On bivariate analysis, hospitalization costs increased both with the Charlson index and with the number of comorbidities, but the increase was much steeper for the number of comorbidities (figure). Robust regression, after adjusting for age, gender, calendar year and duration of hospital stay, showed that an increase of one comorbidity led to an average increase in hospital costs of 321 CHF (95% CI: 272 to 370), while an increase of one point in the Charlson index led to a decrease in hospital costs of 49 CHF (95% CI: 31 to 67). Conclusion: The Charlson index is better than the number of comorbidities at predicting in-hospital death. Conversely, the number of comorbidities is more strongly associated with increased hospital costs.

Relevance:

30.00%

Abstract:

This paper addresses the application of PCA to categorical data prior to diagnosing a patient data set with a Case-Based Reasoning (CBR) system. The difficulty is that standard PCA techniques are designed for numerical attributes, whereas our medical data set contains many categorical attributes, so alternative methods such as RS-PCA are required. We therefore propose to hybridize RS-PCA (Regular Simplex PCA) with a simple CBR. Results show that the hybrid system produces diagnoses of the medical data set similar to those obtained with the original attributes. These results are promising, since they allow diagnosis with less computational effort and memory storage.
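
The regular-simplex idea can be sketched as follows: each categorical attribute's k levels are mapped to the vertices of a regular simplex, so every pair of levels is equidistant, after which ordinary PCA applies; the PC scores then serve as case features for CBR retrieval. This is a generic rendering of the idea, not necessarily the authors' exact construction.

```python
import numpy as np

def simplex_encode(column):
    """Map a categorical column's k levels to the vertices of a regular
    (k-1)-simplex (a centred one-hot code), making all levels equidistant."""
    levels, idx = np.unique(np.asarray(column), return_inverse=True)
    k = len(levels)
    V = np.eye(k) - np.ones((k, k)) / k
    return V[idx]

def rs_pca_scores(columns, n_pc=2):
    """PCA scores of simplex-encoded categorical data; these scores would
    feed the CBR's similarity-based retrieval."""
    X = np.column_stack([simplex_encode(c) for c in columns])
    X = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :n_pc] * S[:n_pc]
```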