921 results for Multivariate Equations
Abstract:
This paper proposes a contemporaneous-threshold multivariate smooth transition autoregressive (C-MSTAR) model in which the regime weights depend on the ex ante probabilities that latent regime-specific variables exceed certain threshold values. A key feature of the model is that the transition function depends on all the parameters of the model as well as on the data. Since the mixing weights are also a function of the regime-specific innovation covariance matrix, the model can account for contemporaneous regime-specific co-movements of the variables. The stability and distributional properties of the proposed model are discussed, as well as issues of estimation, testing and forecasting. The practical usefulness of the C-MSTAR model is illustrated by examining the relationship between US stock prices and interest rates.
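The mixing-weight idea can be made concrete with a minimal two-regime sketch, in which the weight on regime 1 is the ex ante probability that a latent AR(1) variable with Gaussian innovations exceeds a threshold. All names and parameter values here are hypothetical illustrations, not the paper's estimated model:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def regime_weight(y_lag, phi, threshold, sigma):
    """Ex ante probability that the latent regime variable
    y* = phi * y_lag + e, with e ~ N(0, sigma^2), exceeds `threshold`;
    this probability serves as the mixing weight on regime 1."""
    return 1.0 - norm_cdf((threshold - phi * y_lag) / sigma)

def cmstar_forecast(y_lag, p):
    """One-step forecast: a weighted average of two AR(1) regimes,
    with the weight recomputed from the data at each point."""
    w = regime_weight(y_lag, p["phi1"], p["r"], p["sigma1"])
    return w * p["phi1"] * y_lag + (1 - w) * p["phi2"] * y_lag

# hypothetical persistent regime 1 and mean-reverting regime 2
params = {"phi1": 0.9, "phi2": 0.2, "r": 0.0, "sigma1": 1.0}
```

Because the weight depends on the model parameters and the data (and, in the full model, on the regime-specific innovation covariances), the transition between regimes is smooth rather than abrupt.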
Stabilized Petrov-Galerkin methods for the convection-diffusion-reaction and the Helmholtz equations
Abstract:
We present two new stabilized high-resolution numerical methods for the convection–diffusion–reaction (CDR) and the Helmholtz equations, respectively. The work embarks upon an a priori analysis of some consistency recovery procedures for stabilization methods belonging to the Petrov–Galerkin framework. It was found that the use of some standard practices (e.g. M-matrix theory) for the design of essentially non-oscillatory numerical methods is not feasible when consistency recovery methods are employed. Hence, with respect to convective stabilization, such recovery methods are not preferred. Next, we present the design of a high-resolution Petrov–Galerkin (HRPG) method for the 1D CDR problem. The problem is studied from a fresh point of view, including practical implications for the formulation of the maximum principle, M-matrix theory, monotonicity and total variation diminishing (TVD) finite volume schemes. The method follows earlier approaches that may be viewed as an upwinding operator plus a discontinuity-capturing operator. Some remarks are then made on the extension of the HRPG method to multiple dimensions. Next, we present a new numerical scheme for the Helmholtz equation that yields quasi-exact solutions. The focus is on approximating the solution to the Helmholtz equation in the interior of the domain using compact stencils. Piecewise linear/bilinear polynomial interpolation is considered on a structured mesh/grid. The only a priori requirement is a mesh/grid resolution of at least eight elements per wavelength. No stabilization parameters are involved in the definition of the scheme. The scheme consists of taking the average of the equation stencils obtained by the standard Galerkin finite element method and the classical finite difference method. Dispersion analyses in 1D and 2D illustrate the quasi-exact properties of this scheme. Finally, some remarks are made on the extension of the scheme to unstructured meshes by designing a method within the Petrov–Galerkin framework.
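The averaging construction can be sketched in 1D. For u'' + k²u = 0 on a uniform mesh with linear elements, averaging the consistent-mass Galerkin stencil (mass weights 1/6, 4/6, 1/6) with the lumped central-difference stencil (0, 1, 0) gives mass weights (1/12, 10/12, 1/12), and the resulting dispersion relation can be checked directly (a sketch under these assumptions, not the paper's code):

```python
from math import acos, pi

def numerical_wavenumber(k, h):
    """Numerical wavenumber of the averaged stencil for u'' + k^2 u = 0.
    Substituting u_j = exp(i * kt * j * h) into the averaged stencil yields
    cos(kt * h) = (1 - 5*(kh)^2/12) / (1 + (kh)^2/12)."""
    kh2 = (k * h) ** 2
    return acos((1 - 5 * kh2 / 12) / (1 + kh2 / 12)) / h

k = 2 * pi              # one wavelength per unit length
h = 1 / 8               # eight elements per wavelength (the stated minimum)
rel_err = abs(numerical_wavenumber(k, h) - k) / k

# pure central differences for comparison: cos(kt*h) = 1 - (kh)^2 / 2
rel_err_fd = abs(acos(1 - (k * h) ** 2 / 2) / h - k) / k
```

Even at the coarsest admissible resolution the averaged stencil's phase error is well below a tenth of a percent, more than an order of magnitude smaller than that of either parent scheme alone.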
Abstract:
This study presents classification criteria for two-class Cannabis seedlings. As the cultivation of drug-type cannabis is forbidden in Switzerland, law enforcement authorities regularly ask laboratories to determine the chemotype of cannabis plants from seized material in order to ascertain whether the plantation is legal. In this study, the classification analysis is based on data obtained from the relative proportions of three major leaf compounds measured by gas chromatography interfaced with mass spectrometry (GC-MS). The aim is to discriminate between drug-type (illegal) and fiber-type (legal) cannabis at an early stage of growth. A Bayesian procedure is proposed: a Bayes factor is computed and classification is performed on the basis of the decision maker's specifications (i.e. prior probability distributions on cannabis type and consequences of classification measured by losses). Classification rates are computed with two statistical models and the results are compared. A sensitivity analysis is then performed to assess the robustness of the classification criteria.
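A minimal sketch of the Bayes-factor decision rule, assuming (hypothetically) Gaussian class-conditional models for a single compound proportion; the means, standard deviations and losses below are illustrative placeholders, not the paper's fitted models:

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def classify(x, prior_drug=0.5, loss_miss=1.0, loss_false_alarm=1.0):
    """Bayesian two-class decision: declare 'drug' type when the posterior
    odds exceed the ratio of the two misclassification losses."""
    # hypothetical class-conditional models for one compound proportion
    bf = normal_pdf(x, 0.75, 0.10) / normal_pdf(x, 0.25, 0.10)
    posterior_odds = bf * prior_drug / (1 - prior_drug)
    return "drug" if posterior_odds > loss_false_alarm / loss_miss else "fiber"
```

Raising `loss_false_alarm` relative to `loss_miss` makes the rule more conservative about declaring a plantation illegal, which is exactly the kind of decision-maker specification the procedure exposes.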
Abstract:
We study preconditioning techniques for discontinuous Galerkin discretizations of isotropic linear elasticity problems in primal (displacement) formulation. We propose subspace correction methods based on a splitting of the vector valued piecewise linear discontinuous finite element space, that are optimal with respect to the mesh size and the Lamé parameters. The pure displacement, the mixed and the traction free problems are discussed in detail. We present a convergence analysis of the proposed preconditioners and include numerical examples that validate the theory and assess the performance of the preconditioners.
Abstract:
Neutrality tests in quantitative genetics provide a statistical framework for the detection of selection on polygenic traits in wild populations. However, the existing method based on comparisons of divergence at neutral markers and quantitative traits (Qst-Fst) suffers from several limitations that hinder a clear interpretation of the results with typical empirical designs. In this article, we propose a multivariate extension of this neutrality test based on empirical estimates of the among-populations (D) and within-populations (G) covariance matrices by MANOVA. A simple pattern is expected under neutrality: D = 2Fst/(1 - Fst) G, so that neutrality implies both proportionality of the two matrices and a specific value of the proportionality coefficient. This pattern is tested using Flury's framework for matrix comparison [common principal-component (CPC) analysis], a well-known tool in G matrix evolution studies. We show the importance of using a Bartlett adjustment of the test for the small sample sizes typically found in empirical studies. We propose a dual test: (i) that the proportionality coefficient is not different from its neutral expectation [2Fst/(1 - Fst)] and (ii) that the MANOVA estimates of the mean square matrices between and among populations are proportional. These two tests combined provide a more stringent test for neutrality than the classic Qst-Fst comparison and avoid several statistical problems. Extensive simulations of realistic empirical designs suggest that these tests correctly detect the expected pattern under neutrality and have enough power to efficiently detect mild to strong selection (homogeneous, heterogeneous, or mixed) when it is occurring on a set of traits. This method also provides a rigorous and quantitative framework for disentangling the effects of different selection regimes and of drift on the evolution of the G matrix. We discuss practical requirements for the proper application of our test in empirical studies and potential extensions.
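The neutral expectation can be sketched numerically: given G and Fst, the among-populations matrix predicted under neutrality is c·G with c = 2Fst/(1 - Fst). The least-squares estimate of c below is a crude stand-in for the CPC/Flury likelihood machinery the paper actually uses, and the matrix is hypothetical:

```python
def neutral_expectation(G, fst):
    """Under neutrality, D = c * G with c = 2*Fst / (1 - Fst);
    returns (c, the expected among-populations matrix)."""
    c = 2 * fst / (1 - fst)
    return c, [[c * g for g in row] for row in G]

def proportionality_coefficient(D, G):
    """Least-squares estimate of c in D ~ c*G (a crude stand-in for the
    CPC/Flury likelihood machinery used for the actual test)."""
    num = sum(d * g for rd, rg in zip(D, G) for d, g in zip(rd, rg))
    den = sum(g * g for row in G for g in row)
    return num / den

# a hypothetical 2-trait within-populations covariance matrix
G = [[1.0, 0.3], [0.3, 0.5]]
c, D_neutral = neutral_expectation(G, fst=0.2)   # c = 0.5
```

The dual test then asks both whether the observed D is proportional to G at all, and whether the estimated coefficient matches this neutral value.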
Abstract:
A regression analysis using a linked file of all Swiss births and perinatal deaths in 1979-1981 showed a significant relation between birthweight and canton. Sex of infant and multiplicity of birth were also significant. For live births, marital and socio-economic status of mother and father relate to birthweight. Logistic regressions brought out relationships between the risk of stillbirth and occupation of father, nationality and marital status of mother, apart from birthweight. For live births, only sex and (weakly) marital status and rank of the child were influential after correction for birthweight.
Abstract:
Attrition in longitudinal studies can lead to biased results. The study is motivated by the unexpected observation that alcohol consumption decreased despite increased availability, which may be due to sample attrition of heavy drinkers. Several imputation methods have been proposed, but rarely compared in longitudinal studies of alcohol consumption. The imputation of consumption-level measurements is computationally particularly challenging because alcohol consumption is a semi-continuous variable (dichotomous drinking status and continuous volume among drinkers) and the data in the continuous part are non-normal. Data come from a longitudinal study in Denmark with four waves (2003-2006) and 1771 individuals at baseline. Five techniques for missing data are compared: last value carried forward (LVCF) as a single imputation method, and Hotdeck, Heckman modelling, multivariate imputation by chained equations (MICE), and a Bayesian approach as multiple imputation methods. Predictive mean matching was used to account for non-normality: instead of imputing regression estimates, "real" observed values from similar cases are imputed. The methods were also compared by means of a simulated dataset. The simulation showed that the Bayesian approach yielded the least biased estimates. The finding of no increase in consumption levels despite higher availability remained unaltered. Copyright (C) 2011 John Wiley & Sons, Ltd.
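The predictive-mean-matching step can be sketched as follows; the donor pairs are hypothetical, and the actual implementation inside MICE is more elaborate:

```python
import random

def pmm_impute(y_pred_missing, donors, k=3):
    """Predictive mean matching: rather than imputing the regression
    prediction itself, draw an observed value from the k donors whose
    predicted means are closest to the missing case's prediction.
    `donors` is a list of (predicted_mean, observed_value) pairs."""
    ranked = sorted(donors, key=lambda d: abs(d[0] - y_pred_missing))
    return random.choice([obs for _, obs in ranked[:k]])

# hypothetical donors: model predictions paired with the actually
# observed (non-normal) consumption volumes of complete cases
donors = [(1.0, 5.0), (1.1, 6.0), (4.0, 40.0), (5.0, 50.0)]
imputed = pmm_impute(1.05, donors, k=2)   # one of the two nearest donors
```

Because only genuinely observed values are ever imputed, the method preserves the skewness and the semi-continuous shape of the consumption distribution that a plain regression imputation would smooth away.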
Abstract:
Synchronization behavior of electroencephalographic (EEG) signals is important for decoding information processing in the human brain. Modern multichannel EEG allows a transition from traditional measurements of synchronization in pairs of EEG signals to whole-brain synchronization maps. The latter can be based on bivariate measures (BM) via averaging over pair-wise values or, alternatively, on multivariate measures (MM), which directly ascribe a single value to the synchronization in a group. In order to compare BM versus MM, we applied nine different estimators to simulated multivariate time series with known parameters and to real EEGs. We found widespread correlations between BM and MM, which were almost frequency-independent for all the measures except coherence. The analysis of the behavior of synchronization measures in simulated settings with variable coupling strength, connection probability, and parameter mismatch showed that some of them, including the S-estimator, S-Renyi, omega, and coherence, are more sensitive to linear interdependences, while others, like mutual information and phase locking value, are more responsive to nonlinear effects. One must consider these properties, together with the fact that MM are computationally less expensive and therefore more efficient for large-scale data sets than BM, when choosing a synchronization measure for EEG analysis.
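The phase locking value, one of the bivariate estimators mentioned, and its pairwise-average route to a single whole-group number can be sketched on synthetic phases (not EEG data):

```python
from cmath import exp as cexp
from math import pi

def plv(phases_a, phases_b):
    """Phase locking value of two phase series:
    |time average of exp(i*(phi_a - phi_b))|; 1 means perfect locking."""
    s = sum(cexp(1j * (a - b)) for a, b in zip(phases_a, phases_b))
    return abs(s) / len(phases_a)

def mean_pairwise_plv(channels):
    """The bivariate (BM) route to a whole-group synchronization number:
    average the PLV over all channel pairs."""
    m = len(channels)
    pairs = [(i, j) for i in range(m) for j in range(i + 1, m)]
    return sum(plv(channels[i], channels[j]) for i, j in pairs) / len(pairs)

# synthetic phases: two locked 10 Hz channels and one detuned channel
t = [0.01 * n for n in range(1000)]
ch1 = [2 * pi * 10.0 * x for x in t]
ch2 = [2 * pi * 10.0 * x + 0.5 for x in t]    # constant phase lag
ch3 = [2 * pi * 10.7 * x for x in t]          # drifts relative to ch1
```

A true multivariate measure would instead assign one value to the whole group directly (e.g. from the eigenvalue spectrum of the inter-channel correlation matrix), at lower computational cost than the O(m²) pairwise loop above.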
Abstract:
In order to obtain a high-resolution Pleistocene stratigraphy, eleven continuously cored boreholes, 100 to 220 m deep, were drilled in the northern part of the Po Plain by Regione Lombardia in the last five years. Quantitative provenance analysis (QPA; Weltje and von Eynatten, 2004) of Pleistocene sands was carried out using multivariate statistical analysis (principal component analysis, PCA, and similarity analysis) on an integrated data set, including high-resolution bulk petrography and heavy-mineral analyses of Pleistocene sands and of 250 major and minor modern rivers draining the southern flank of the Alps from west to east (Garzanti et al., 2004, 2006). Prior to the onset of major Alpine glaciations, metamorphic and quartzofeldspathic detritus from the Western and Central Alps was carried from the axial belt to the Po basin by a trunk river running longitudinally, parallel to the Southalpine belt (Vezzoli and Garzanti, 2008). This scenario changed rapidly during marine isotope stage 22 (0.87 Ma), with the onset of the first major Pleistocene glaciation in the Alps (Muttoni et al., 2003). PCA and similarity analysis of core samples show that the longitudinal trunk river was at this time shifted southward by the rapid southward and westward progradation of transverse alluvial river systems fed from the Central and Southern Alps. Sediments were transported southward by braided river systems, while glacial sediments carried by Alpine valley glaciers invaded the alluvial plain.
Keywords: detrital modes; modern sands; provenance; principal component analysis; similarity; Canberra distance; palaeodrainage
Abstract:
Many multivariate methods that are apparently distinct can be linked by introducing one or more parameters in their definition. Methods that can be linked in this way are correspondence analysis, unweighted or weighted logratio analysis (the latter also known as "spectral mapping"), nonsymmetric correspondence analysis, principal component analysis (with and without logarithmic transformation of the data) and multidimensional scaling. In this presentation I will show how several of these methods, which are frequently used in compositional data analysis, may be linked through parametrizations such as power transformations, linear transformations and convex linear combinations. Since the methods of interest here all lead to visual maps of data, a "movie" can be made where the linking parameter is allowed to vary in small steps: the results are recalculated "frame by frame" and one can see the smooth change from one method to another. Several of these "movies" will be shown, giving a deeper insight into the similarities and differences between these methods.
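One such linking parametrization is the power (Box-Cox) transformation, which tends to the logarithm as its parameter goes to zero, so an analysis of raw values morphs smoothly into a logarithmic (log-ratio style) analysis. A small numerical check of this limit:

```python
from math import log

def power_transform(x, alpha):
    """Box-Cox power transformation (x^alpha - 1)/alpha; as alpha -> 0
    it tends to log(x), which is what lets a "movie" slide smoothly
    between an untransformed and a log-ratio analysis."""
    return (x ** alpha - 1) / alpha if alpha != 0 else log(x)

# "frames" of the movie: the transform of 2.0 sliding toward log(2)
frames = [power_transform(2.0, a) for a in (1.0, 0.5, 0.1, 0.01, 0.001)]
```

Each frame of such a movie recomputes the map with a slightly smaller exponent, so the visual transition between the methods is continuous.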
Abstract:
BACKGROUND: Chronic kidney disease (CKD) represents an increasing health burden. We present the population-based prevalence of CKD and compare the CKD Epidemiology Collaboration (CKD-EPI) and Modification of Diet in Renal Disease (MDRD) equations for estimating the glomerular filtration rate, using the revised CKD classification with three albuminuria classes. We also explore factors associated with CKD. METHODS: The Swiss population-based, cross-sectional CoLaus study conducted in Lausanne (2003-2006) included 2810 men and 3111 women aged 35-75. CKD prevalence was assessed using the CKD-EPI and MDRD equations, with albuminuria estimated by the albumin-to-creatinine ratio in spot morning urine. Multivariate logistic regression was used to analyse determinants of CKD. RESULTS: The prevalence [95% confidence interval (CI)] of all-stage CKD was 10.0% (9.2-10.8%) with CKD-EPI and 13.8% (12.9-14.6%) with MDRD. Using the revised CKD classification, the prevalence of the low-, medium-, high- and very-high-risk groups was 90.0, 8.46, 1.18 and 0.35% with CKD-EPI, respectively. With MDRD, the corresponding values were 86.24, 11.86, 1.55 and 0.35%. Using the revised classification, CKD-EPI systematically reclassified people into a lower risk category than MDRD. Age and obesity were more strongly associated with CKD in men [odds ratio (95% CI): 2.23 (1.95; 2.56) per 10 years and 3.05 (2.08; 4.47), respectively] than in women [1.46 (1.29; 1.65) and 1.78 (1.30; 2.44), respectively]. Hypertension, type 2 diabetes, serum homocysteine and uric acid were positively and independently associated with CKD in both men and women. CONCLUSIONS: One in 10 adults suffers from CKD in the population of Lausanne. CKD-EPI systematically reclassifies people into a lower CKD risk category than MDRD. Serum homocysteine and uric acid levels are associated with CKD independently of classical risk factors such as age, hypertension and diabetes.
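For reference, the two creatinine-based equations compared here can be sketched in their commonly cited forms (race coefficients omitted for brevity; serum creatinine in mg/dL, result in mL/min/1.73 m²):

```python
def egfr_mdrd(scr, age, female):
    """IDMS-traceable 4-variable MDRD study equation (mL/min/1.73 m^2);
    serum creatinine `scr` in mg/dL; race coefficient omitted."""
    e = 175.0 * scr ** -1.154 * age ** -0.203
    return e * 0.742 if female else e

def egfr_ckd_epi(scr, age, female):
    """CKD-EPI 2009 creatinine equation; race coefficient omitted."""
    kappa, alpha = (0.7, -0.329) if female else (0.9, -0.411)
    r = scr / kappa
    e = 141.0 * min(r, 1.0) ** alpha * max(r, 1.0) ** -1.209 * 0.993 ** age
    return e * 1.018 if female else e

# a hypothetical 60-year-old man with serum creatinine 1.2 mg/dL:
# CKD-EPI returns a higher eGFR than MDRD, i.e. a lower risk category,
# consistent with the reclassification direction reported above
m = egfr_mdrd(1.2, 60, female=False)
c = egfr_ckd_epi(1.2, 60, female=False)
```

The two-slope treatment of creatinine in CKD-EPI (via `kappa` and the min/max split) is what raises its estimates near the normal range relative to MDRD.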
Abstract:
A compositional time series is obtained when a compositional data vector is observed at different points in time. Inherently, then, a compositional time series is a multivariate time series with important constraints on the variables observed at any instance in time. Although this type of data frequently occurs in situations of real practical interest, a trawl through the statistical literature reveals that research in the field is very much in its infancy and that many theoretical and empirical issues still remain to be addressed. Any appropriate statistical methodology for the analysis of compositional time series must take into account the constraints, which are not allowed for by the usual statistical techniques available for analysing multivariate time series. One general approach to analysing compositional time series consists in the application of an initial transform to break the positive and unit-sum constraints, followed by the analysis of the transformed time series using multivariate ARIMA models. In this paper we discuss the use of the additive log-ratio, centred log-ratio and isometric log-ratio transforms. We also present results from an empirical study designed to explore how the selection of the initial transform affects subsequent multivariate ARIMA modelling, as well as the quality of the forecasts.
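The three transforms can be sketched for a single compositional vector; one common pivot basis is used for the isometric log-ratio (a sketch of the standard definitions, not the paper's code):

```python
from math import exp, log, sqrt

def clr(x):
    """Centred log-ratio: log of each part over the geometric mean;
    the resulting coordinates sum to zero."""
    g = exp(sum(log(v) for v in x) / len(x))
    return [log(v / g) for v in x]

def alr(x):
    """Additive log-ratio: log of each part over the last part,
    giving one fewer coordinate than parts."""
    return [log(v / x[-1]) for v in x[:-1]]

def ilr(x):
    """Isometric log-ratio in one common pivot basis:
    sqrt(i/(i+1)) * log(geometric mean of the first i parts / part i+1)."""
    out = []
    for i in range(1, len(x)):
        g = exp(sum(log(v) for v in x[:i]) / i)
        out.append(sqrt(i / (i + 1)) * log(g / x[i]))
    return out

comp = [0.2, 0.3, 0.5]   # a composition summing to 1
```

Applying one of these to each time point breaks the unit-sum constraint, after which ordinary multivariate ARIMA machinery applies to the transformed series; the ilr is an isometry, so it preserves the clr geometry while removing the singular coordinate.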
Abstract:
A dimensional analysis of the classical equations related to the dynamics of vector-borne infections is presented. A formal notation is provided to complete the expressions for Ross' threshold theorem, Macdonald's basic reproduction "rate" and sporozoite "rate", Garrett-Jones' vectorial capacity and the Dietz-Molineaux-Thomas force of infection, as originally proposed by these authors.
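The classical expressions themselves can be sketched as follows (symbols: m vectors per host, a bites per vector per day, p daily vector survival, n extrinsic incubation period in days, b and c the two transmission efficiencies, r the human recovery rate; the parameter values below are illustrative only):

```python
from math import log

def vectorial_capacity(m, a, p, n):
    """Garrett-Jones vectorial capacity C = m * a^2 * p^n / (-ln p):
    the daily rate of potentially infective contacts arising from one
    infectious host, with 1/(-ln p) the expected infective vector life."""
    return m * a ** 2 * p ** n / (-log(p))

def basic_reproduction_number(m, a, b, c, p, n, r):
    """Macdonald's R0 = m * a^2 * b * c * p^n / (r * (-ln p)):
    vectorial capacity scaled by the transmission efficiencies b, c
    and the mean duration of human infectiousness 1/r."""
    return vectorial_capacity(m, a, p, n) * b * c / r

C = vectorial_capacity(m=10, a=0.3, p=0.9, n=10)
R0 = basic_reproduction_number(m=10, a=0.3, b=0.5, c=0.5, p=0.9, n=10, r=0.01)
```

Writing the formulas this way makes the dimensional bookkeeping explicit: a² carries (bites/day)², p^n/(-ln p) carries days, and dividing by r converts the daily contact rate into contacts per infection.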