924 results for Multivariate statistics
Abstract:
The optimization of the organic modifier concentration in micellar electrokinetic capillary chromatography (MECC) has been achieved by a uniform design and iterative optimization method, originally developed for optimizing the mobile-phase composition in high performance liquid chromatography. According to the proposed method, the uniform design technique is applied to design the starting experiments, which reduces the number of experiments compared with traditional simultaneous methods such as the orthogonal design. The hierarchical chromatographic response function has been modified to evaluate the separation quality of a chromatogram in MECC. An iterative procedure is adopted to search for the optimal concentration of organic modifiers, improving the accuracy of retention prediction and the quality of the chromatogram. The validity of the optimization method has been demonstrated by the separation of 31 aromatic compounds in MECC. (C) 2000 John Wiley & Sons, Inc.
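As an aside, a minimal Python sketch of the general scheme described above, assuming a hypothetical response_fn standing in for the modified hierarchical chromatographic response function: starting experiments are spread over the factor space with a rough uniform-design-style construction, and the search region is iteratively re-centred and shrunk around the best run.

```python
# Minimal sketch (not the paper's code): uniform-design starting experiments
# followed by iterative shrinking of the search region around the best run.
import numpy as np

def uniform_design(levels, n_runs, seed=0):
    """Spread n_runs points evenly over the factor ranges (a rough
    good-lattice-point style construction, not a published UD table)."""
    rng = np.random.default_rng(seed)
    k = len(levels)
    base = (np.arange(1, n_runs + 1)[:, None]
            * rng.permutation(np.arange(1, k + 1))) % n_runs
    u = (base + 0.5) / n_runs                      # points in (0, 1)
    lo = np.array([l[0] for l in levels])
    hi = np.array([l[1] for l in levels])
    return lo + u * (hi - lo)

def optimize(response_fn, levels, n_runs=8, n_iter=5, shrink=0.5):
    """Score each design point with response_fn (a stand-in for the modified
    hierarchical chromatographic response function) and re-centre."""
    levels = [list(l) for l in levels]
    best_x, best_y = None, -np.inf
    for _ in range(n_iter):
        X = uniform_design(levels, n_runs)
        y = np.array([response_fn(x) for x in X])
        if y.max() > best_y:
            best_y, best_x = y.max(), X[y.argmax()]
        for j, (lo, hi) in enumerate(levels):      # shrink around best point
            half = (hi - lo) * shrink / 2
            levels[j] = [max(lo, best_x[j] - half), min(hi, best_x[j] + half)]
    return best_x, best_y
```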
Abstract:
Multivariate classification methods were used to evaluate data on the concentrations of eight metals in human senile lenses measured by atomic absorption spectrometry. Principal components analysis and hierarchical clustering separated senile cataract lenses, nuclei from cataract lenses, and normal lenses into three classes on the basis of the eight elements. Stepwise discriminant analysis was applied to give discriminant functions with five selected variables. Results provided by the linear learning machine method were also satisfactory; the k-nearest neighbour method was less useful.
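A hypothetical scikit-learn sketch of this kind of pipeline, with simulated stand-ins for the (n_lenses × 8) concentration matrix X and class labels y; SequentialFeatureSelector is used here as a stand-in for classical stepwise discriminant analysis.

```python
# Hypothetical pipeline on simulated stand-ins for the data.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 8))                 # eight element concentrations
y = np.repeat([0, 1, 2], 30)                 # cataract lens / nucleus / normal

Xs = StandardScaler().fit_transform(X)

scores = PCA(n_components=2).fit_transform(Xs)           # unsupervised view
clusters = AgglomerativeClustering(n_clusters=3).fit_predict(Xs)

# Forward selection of five variables, approximating stepwise
# discriminant analysis.
lda = LinearDiscriminantAnalysis()
sel = SequentialFeatureSelector(lda, n_features_to_select=5).fit(Xs, y)
lda.fit(Xs[:, sel.get_support()], y)

knn = KNeighborsClassifier(n_neighbors=3).fit(Xs, y)     # k-NN comparison
```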
Abstract:
Humans recognize optical reflectance properties of surfaces such as metal, plastic, or paper from a single image without knowledge of illumination. We develop a machine vision system to perform similar recognition tasks automatically. Reflectance estimation under unknown, arbitrary illumination proves highly underconstrained due to the variety of potential illumination distributions and surface reflectance properties. We have found that the spatial structure of real-world illumination possesses some of the statistical regularities observed in the natural image statistics literature. A human or computer vision system may be able to exploit this prior information to determine the most likely surface reflectance given an observed image. We develop an algorithm for reflectance classification under unknown real-world illumination, which learns relationships between surface reflectance and certain features (statistics) computed from a single observed image. We also develop an automatic feature selection method.
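Purely as an illustration of the feature-based approach, a sketch in which invented summary statistics (moments of band-pass-filtered log intensities) feed a generic classifier; the actual features and classifier in the paper may differ, and the training images below are random stand-ins.

```python
# Invented features, for illustration only: moments of band-pass-filtered
# log intensities as inputs to a generic classifier.
import numpy as np
from scipy import ndimage
from sklearn.svm import SVC

def reflectance_features(img):
    """Mean, spread, skewness and kurtosis of Laplacian-of-Gaussian bands."""
    x = np.log(img + 1e-6)
    feats = []
    for scale in (1, 2, 4):
        band = ndimage.gaussian_laplace(x, sigma=scale)
        mu, sd = band.mean(), band.std()
        feats += [mu, sd,
                  ((band - mu) ** 3).mean() / (sd ** 3 + 1e-12),
                  ((band - mu) ** 4).mean() / (sd ** 4 + 1e-12)]
    return np.array(feats)

# Random stand-ins for labelled training images of surfaces.
rng = np.random.default_rng(0)
imgs = rng.random((10, 64, 64))
labels = rng.integers(0, 2, 10)
clf = SVC().fit(np.stack([reflectance_features(im) for im in imgs]), labels)
```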
Abstract:
Janet Taylor, Ross D King, Thomas Altmann and Oliver Fiehn (2002). Application of metabolomics to plant genotype discrimination using statistics and machine learning. 1st European Conference on Computational Biology (ECCB). (published as a journal supplement in Bioinformatics 18: S241-S248).
Abstract:
Wireless sensor networks have recently emerged as enablers of important applications such as environmental, chemical and nuclear sensing systems. Such applications have sophisticated spatial-temporal semantics that set them apart from traditional wireless networks. For example, the computation of temperature averaged over the sensor field must take into account local densities. This is crucial since otherwise the estimated average temperature can be biased by over-sampling areas where many more sensors exist. Thus, we envision that a fundamental service that a wireless sensor network should provide is that of estimating local densities. In this paper, we propose a lightweight probabilistic density inference protocol, which we call DIP, that allows each sensor node to implicitly estimate its neighborhood size without the explicit exchange of node identifiers required by existing density discovery schemes. The theoretical basis of DIP is a probabilistic analysis that gives the relationship between the number of sensor nodes contending in the neighborhood of a node and the level of contention measured by that node. Extensive simulations confirm the premise of DIP: it can provide statistically reliable and accurate estimates of local density at a very low energy cost and constant running time. We demonstrate how applications could be built on top of our DIP-based service by computing density-unbiased statistics from estimated local densities.
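A toy sketch of the kind of inversion such a protocol can rest on, under the assumed model (not necessarily DIP's exact analysis) that each of n neighbours transmits independently in a slot with known probability p, so a listener observes a busy slot with probability 1 − (1 − p)^n.

```python
# Assumed contention model: each of n neighbours transmits in a slot with
# known probability p, so P(slot busy) = 1 - (1 - p)**n; a node estimates
# n by inverting the fraction of busy slots it observes.
import numpy as np

def estimate_density(busy_fraction, p):
    """Invert the contention model; busy_fraction must lie in (0, 1)."""
    return np.log(1.0 - busy_fraction) / np.log(1.0 - p)

# Simulated check: 40 neighbours, p = 0.05, 2000 observation slots.
rng = np.random.default_rng(1)
n, p, slots = 40, 0.05, 2000
busy = (rng.random((slots, n)) < p).any(axis=1).mean()
print(estimate_density(busy, p))        # close to 40 on average
```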
Abstract:
Under natural viewing conditions, small movements of the eye, head, and body prevent the maintenance of a steady direction of gaze. It is known that stimuli tend to fade when they are stabilized on the retina for several seconds. However, it is unclear whether the physiological motion of the retinal image serves a visual purpose during the brief periods of natural visual fixation. This study examines the impact of fixational instability on the statistics of the visual input to the retina and on the structure of neural activity in the early visual system. We show that fixational instability introduces a component in the retinal input signals that, in the presence of natural images, lacks spatial correlations. This component strongly influences neural activity in a model of the LGN. It decorrelates cell responses even if the contrast sensitivity functions of simulated cells are not perfectly tuned to counterbalance the power-law spectrum of natural images. A decorrelation of neural activity at the early stages of the visual system has been proposed to be beneficial for discarding statistical redundancies in the input signals. The results of this study suggest that fixational instability might contribute to establishing efficient representations of natural stimuli.
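A toy numerical illustration of the decorrelation claim, with all parameters invented: jittering an image that has a 1/f amplitude spectrum and differencing successive frames suppresses low spatial frequencies, i.e. whitens the jitter-induced input component.

```python
# All parameters here are invented. A "natural" image with a 1/f amplitude
# spectrum is jittered by small random shifts; differencing successive
# frames suppresses low spatial frequencies in the jitter-induced component.
import numpy as np

rng = np.random.default_rng(2)
N = 256
fx = np.fft.fftfreq(N)[:, None]
fy = np.fft.fftfreq(N)[None, :]
f = np.sqrt(fx ** 2 + fy ** 2)
f[0, 0] = 1.0                                     # avoid division by zero at DC
img = np.fft.ifft2(np.fft.fft2(rng.normal(size=(N, N))) / f).real

def jitter(im, rng, sigma=1.5):
    """Shift the image by a small random number of pixels (fixational step)."""
    dx, dy = rng.normal(scale=sigma, size=2).round().astype(int)
    return np.roll(np.roll(im, dx, axis=0), dy, axis=1)

frames = np.stack([jitter(img, rng) for _ in range(20)])
diff = np.diff(frames, axis=0)                    # jitter-induced input component

P_img = np.abs(np.fft.fft2(img)) ** 2
P_diff = (np.abs(np.fft.fft2(diff, axes=(1, 2))) ** 2).mean(axis=0)

low = f < 0.05                                    # low-spatial-frequency mask
print(P_img[low].mean() / P_img[~low].mean())     # large: 1/f image
print(P_diff[low].mean() / P_diff[~low].mean())   # much smaller: flatter spectrum
```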
Abstract:
Background: We conducted a survival analysis of all the confirmed cases of adult tuberculosis (TB) patients treated in Cork City, Ireland. The aim of this study was to estimate survival time (ST), including the median time of survival, and to assess the association and impact of covariates (TB risk factors) on event status and ST. The outcome of the survival analysis is reported in this paper. Methods: We used a retrospective cohort study design to review data on 647 bacteriologically confirmed TB patients from the medical records of two teaching hospitals; mean age was 49 years (range 18–112). We collected information on potential risk factors for all confirmed cases of TB treated between 2008 and 2012. For the survival analysis, the outcome of interest was 'treatment failure' or 'death' (whichever came first). A univariate descriptive analysis was conducted using a non-parametric procedure, the Kaplan-Meier (KM) method, to estimate overall survival (OS), while the Cox proportional hazards model was used for the multivariate analysis to determine possible associations of predictor variables and to obtain adjusted hazard ratios. The p value was set at <0.05 and the log-likelihood ratio test at >0.10. Data were analysed using SPSS version 15.0. Results: There was no significant difference in the survival curves of male and female patients (log-rank statistic = 0.194, df = 1, p = 0.66) or among different age groups (log-rank statistic = 1.337, df = 3, p = 0.72). The mean overall survival (OS) was 209 days (95% CI: 92–346) while the median was 51 days (95% CI: 35.7–66). The mean ST for women was 385 days (95% CI: 76.6–694) and for men was 69 days (95% CI: 48.8–88.5). Multivariate Cox regression showed that patients with a history of drug misuse had 2.2 times the hazard of those without. Smokers and alcohol drinkers had a hazard ratio of 1.8, patients born in a country of high endemicity (BICHE) had a hazard ratio of 6.3, and HIV co-infection carried a hazard ratio of 1.2. Conclusion: There was no significant difference in the survival curves between male and female patients or among age groups. Women had a longer ST than men, but men had a higher hazard rate than women. Anti-TNF and immunosuppressive medication and diabetes were found to be associated with longer ST, while alcohol, smoking, RICHE, and BICHE were associated with shorter ST.
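A sketch of the reported workflow (Kaplan-Meier estimate, log-rank test, Cox proportional hazards) using the lifelines package; the columns and data below are invented placeholders for the study's variables.

```python
# Sketch with the lifelines package; the columns are invented placeholders
# for the study's variables, and the data are simulated.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(3)
n = 200
df = pd.DataFrame({
    "sex": rng.integers(0, 2, n),
    "drug_misuse": rng.integers(0, 2, n),
    "smoker": rng.integers(0, 2, n),
})
hazard = (1 / 200.0) * np.exp(0.8 * df["drug_misuse"] + 0.6 * df["smoker"])
time = rng.exponential(1.0 / hazard)
df["event"] = (time < 365).astype(int)       # failure/death observed in follow-up
df["time"] = np.minimum(time, 365)           # administrative censoring at 1 year

kmf = KaplanMeierFitter().fit(df["time"], df["event"])   # overall survival
print(kmf.median_survival_time_)

men, women = df[df["sex"] == 0], df[df["sex"] == 1]
print(logrank_test(men["time"], women["time"],
                   men["event"], women["event"]).p_value)

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()                          # adjusted hazard ratios
```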
Abstract:
In this paper, we examine exchange rates in Vietnam's transitional economy. Evidence of long-run equilibrium is established in most cases through a single cointegrating vector among the endogenous variables that determine the real exchange rates. This supports relative PPP, in which the error-correction terms (ECT) of the system can be combined linearly into a stationary process, reducing deviation from PPP in the long run. Restricted coefficient vectors ß' = (1, 1, -1) for the real exchange rates of the currencies in question are not rejected. This evidence on relative PPP adds to the findings of many researchers, including Flre et al. (1999), Lee (1999), Johnson (1990), Culver and Papell (1999), and Cuddington and Liang (2001). Instead of testing different time series against a common base currency, we use different base currencies (USD, GBP, JPY and EUR). By doing so, we ask whether the theory posits significant differences against any one base currency. We have found consensus, given inevitable technical differences, even with a smaller data sample for EUR. Speeds of convergence to PPP and of adjustment are faster than results from other studies of developed economies, using both observed and bootstrapped HL measures. Perhaps a better explanation is the adjustment from the hyperinflation period, after which the theory indicates that the adjusting process actually accelerates. We observe that deviation appears to have been large in the early stages of the reform, mostly overvaluation. Over time, its correction took place, leading significant deviations to gradually disappear.
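A sketch of the Johansen cointegration test underlying such a relative-PPP claim, on simulated stand-ins for the log nominal rate s and log price levels p and p*; ordering the system as (s, p*, p) makes the restriction ß' = (1, 1, -1) correspond to stationarity of the real rate s + p* - p (sign convention assumed here).

```python
# Simulated stand-ins: log price levels p, p* are random walks and the real
# rate q = s + p* - p is stationary AR(1), so PPP holds by construction.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(4)
T = 400
p = np.cumsum(rng.normal(0.002, 0.01, T))        # domestic log prices, I(1)
p_star = np.cumsum(rng.normal(0.001, 0.01, T))   # foreign log prices, I(1)
q = np.zeros(T)
for t in range(1, T):
    q[t] = 0.9 * q[t - 1] + rng.normal(0, 0.02)  # stationary real rate
s = p - p_star + q                               # nominal rate consistent with PPP

data = np.column_stack([s, p_star, p])           # ordering matches (1, 1, -1)
res = coint_johansen(data, det_order=0, k_ar_diff=2)
print(res.lr1)   # trace statistics: expect one cointegrating relation
print(res.cvt)   # trace critical values (90/95/99%)
```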
Abstract:
This article examines the behavior of equity trading volume and volatility for the individual firms composing the Standard & Poor's 100 composite index. Using multivariate spectral methods, we find that fractionally integrated processes best describe the long-run temporal dependencies in both series. Consistent with a stylized mixture-of-distributions hypothesis model in which the aggregate "news"-arrival process possesses long-memory characteristics, the long-run hyperbolic decay rates appear to be common across each volume-volatility pair.
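A minimal sketch of one standard way to estimate the fractional-integration order d of such a series, the Geweke-Porter-Hudak log-periodogram regression (the paper's multivariate spectral methods are more elaborate); the input below is white noise, so the estimate should be near zero.

```python
# GPH log-periodogram regression: the spectrum of an I(d) series behaves as
# |2 sin(lambda/2)|**(-2d) near zero, so d is the OLS slope of log I(lambda_j)
# on -2 log(2 sin(lambda_j / 2)) over the first m Fourier frequencies.
import numpy as np

def gph_estimate(x, m=None):
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = m or int(n ** 0.5)                        # common bandwidth choice
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    y = np.log(I)
    z = -2 * np.log(2 * np.sin(freqs / 2))
    zc = z - z.mean()
    return zc @ (y - y.mean()) / (zc @ zc)        # OLS slope = d

# White noise is I(0), so the estimate should be near zero.
print(gph_estimate(np.random.default_rng(5).normal(size=2048)))
```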
Abstract:
Assuming that daily spot exchange rates follow a martingale process, we derive the implied time series process for the vector of 30-day forward rate forecast errors using weekly data. The conditional second moment matrix of this vector is modelled as a multivariate generalized ARCH process. The estimated model is used to test the hypothesis that the risk premium is a linear function of the conditional variances and covariances, as suggested by the standard asset pricing theory literature. Little support is found for this theory; instead, lagged changes in the forward rate appear to be correlated with the 'risk premium'. © 1990.
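A greatly simplified, univariate two-step stand-in for such a test (the paper estimates the mean and the multivariate variance process jointly): fit a GARCH(1,1) to stand-in forecast errors with the arch package, then regress the errors on the fitted conditional variance to check for a linear risk premium.

```python
# Univariate two-step stand-in (the paper estimates the mean and the
# multivariate variance process jointly): GARCH(1,1) on stand-in forecast
# errors, then an OLS check that the mean is linear in the fitted variance.
import numpy as np
import statsmodels.api as sm
from arch import arch_model

rng = np.random.default_rng(6)
e = rng.standard_t(df=8, size=1000) * 0.5   # stand-in weekly forecast errors

garch = arch_model(e, mean="Constant", vol="GARCH", p=1, q=1).fit(disp="off")
h = garch.conditional_volatility ** 2       # fitted conditional variances

ols = sm.OLS(e, sm.add_constant(h)).fit()
print(ols.params, ols.pvalues)              # slope ~ 0 here: no premium built in
```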
Abstract:
We discuss a general approach to dynamic sparsity modeling in multivariate time series analysis. Time-varying parameters are linked to latent processes that are thresholded to induce zero values adaptively, providing natural mechanisms for dynamic variable inclusion/selection. We discuss Bayesian model specification, analysis and prediction in dynamic regressions, time-varying vector autoregressions, and multivariate volatility models using latent thresholding. Application to a topical macroeconomic time series problem illustrates some of the benefits of the approach in terms of statistical and economic interpretations as well as improved predictions. Supplementary materials for this article are available online. © 2013 Copyright Taylor and Francis Group, LLC.
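A toy illustration of the latent thresholding mechanism, with invented notation and parameters: a time-varying coefficient follows a latent AR(1) process and enters the regression only when its magnitude exceeds a threshold d, yielding dynamic sparsity.

```python
# Invented notation and parameters: beta_t follows a latent AR(1) process and
# enters the regression only when |beta_t| exceeds the threshold d.
import numpy as np

rng = np.random.default_rng(7)
T, d = 500, 0.3
beta = np.zeros(T)
for t in range(1, T):
    beta[t] = 0.99 * beta[t - 1] + rng.normal(0, 0.05)

b = np.where(np.abs(beta) > d, beta, 0.0)   # thresholded, sparse coefficient
x = rng.normal(size=T)
y = b * x + rng.normal(0, 0.1, size=T)      # dynamic regression with sparsity

print((b == 0).mean())   # fraction of time the predictor is excluded
```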