970 results for Multivariate data
Abstract:
We propose a multivariate approach to the study of geographic species distribution which does not require absence data. Building on Hutchinson's concept of the ecological niche, this factor analysis compares, in the multidimensional space of ecological variables, the distribution of the localities where the focal species was observed to a reference set describing the whole study area. The first factor extracted maximizes the marginality of the focal species, defined as the ecological distance between the species optimum and the mean habitat within the reference area. The other factors maximize the specialization of this focal species, defined as the ratio of the ecological variance in mean habitat to that observed for the focal species. Eigenvectors and eigenvalues are readily interpreted and can be used to build habitat-suitability maps. This approach is recommended in situations where absence data are not available (many data banks), unreliable (most cryptic or rare species), or meaningless (invaders). We provide an illustration and validation of the method for the alpine ibex, a species reintroduced in Switzerland which presumably has not yet recolonized its entire range.
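The two quantities at the heart of this analysis can be illustrated in a few lines. Below is a minimal sketch on synthetic presence data, assuming ecological variables standardized over the reference area; the 1.96 scaling (so that a species whose optimum lies about two global standard deviations from the mean habitat scores near 1) follows common ENFA practice, and all arrays are hypothetical.

```python
# Minimal sketch of the marginality and specialization quantities
# defined in the abstract, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
reference = rng.normal(size=(10_000, 4))   # all cells of the study area
presence = 0.5 * reference[:500] + 0.8     # localities where the species occurs

# Marginality: distance between the species optimum (mean of presence
# cells) and the mean habitat, scaled by the global standard deviation.
m = (presence.mean(axis=0) - reference.mean(axis=0)) / reference.std(axis=0)
marginality = np.linalg.norm(m) / 1.96     # global marginality (ENFA convention)

# Specialization per variable: global variance over species variance.
specialization = reference.var(axis=0) / presence.var(axis=0)
print(marginality, specialization)
```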
Abstract:
In this paper we design and develop several filtering strategies for the analysis of data generated by a resonant bar gravitational wave (GW) antenna, with the goal of assessing the presence (or absence) therein of long-duration monochromatic GW signals, as well as the amplitude and frequency of any such signals, within the sensitivity band of the detector. Such signals are most likely generated by the fast rotation of slightly asymmetric spinning stars. We develop practical procedures, together with a study of their statistical properties, which will provide us with useful information on the performance of each technique. The selection of candidate events will then be established according to threshold-crossing probabilities, based on the Neyman-Pearson criterion. In particular, it will be shown that our approach, based on phase estimation, presents a better signal-to-noise ratio than does pure spectral analysis, the most common approach.
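As a point of reference for the spectral-analysis baseline mentioned above, here is a hedged sketch of a Neyman-Pearson threshold test on a periodogram; the sampling rate, signal parameters, and white-Gaussian-noise assumption are illustrative, not properties of the actual resonant-bar detector.

```python
# Hedged sketch: detect a long-duration monochromatic signal by
# thresholding the periodogram at a Neyman-Pearson level.
import numpy as np

fs, T = 1000.0, 10.0                            # sample rate (Hz), duration (s); assumed
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(1)
x = 0.2 * np.sin(2 * np.pi * 317.0 * t) + rng.normal(size=t.size)

power = np.abs(np.fft.rfft(x)) ** 2 / t.size    # one-sided periodogram
# Robust noise-power estimate: median of an exponential = mean * ln 2.
noise = np.median(power) / np.log(2)

p_fa = 1e-3                                     # per-bin false-alarm probability
threshold = -np.log(p_fa) * noise               # NP threshold for exponential bins

freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(freqs[power > threshold])                 # candidate frequencies (~317 Hz)
```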
Abstract:
BACKGROUND AND PURPOSE: There are few data on long-term clinical results and tolerance of brachytherapy in anal canal cancer. We present one of the largest retrospective analyses of anal canal cancers treated with external beam radiotherapy with/without (±) chemotherapy followed by a brachytherapy boost. MATERIALS AND METHODS: We performed a retrospective analysis of clinical results in terms of efficacy and toxicity. The impact of different clinical and therapeutic variables on these outcomes was studied. RESULTS: From May 1992 to December 2009, 209 patients received brachytherapy after external beam radiotherapy ± chemotherapy. Of these patients, 163 were stage II or stage IIIA (UICC 2002) and 58 were N1-3. According to age, ECOG performance status (PS), and comorbidities, patients received either radiotherapy alone (58/209) or radiochemotherapy (151/209). The median follow-up was 72.8 months. The 5- and 10-year local control rates were 78.6 and 73.9 %, respectively. Overall, severe acute and late G3-4 reactions (NCI-CTC scale v. 4.0) occurred in 11.2 and 6.3 % of patients, respectively. Univariate analysis showed the statistical impact of the pelvic treatment volume (p = 0.046) and of the total dose (p = 0.02) on the risk of severe acute and late toxicities, respectively. Only six patients required permanent colostomy because of severe late anorectal toxicities. CONCLUSION: After a long follow-up time, brachytherapy showed an acceptable toxicity profile and high local control rates in patients with anal canal cancer.
Abstract:
BACKGROUND: Globally, Africans and African Americans experience a disproportionate burden of type 2 diabetes compared to other race and ethnic groups. The aim of the study was to examine the association of plasma glucose with indices of glucose metabolism in young adults of African origin from 5 different countries. METHODS: We identified participants from the Modeling the Epidemiologic Transition Study, an international study of weight change and cardiovascular disease (CVD) risk in five populations of African origin: USA (US), Jamaica, Ghana, South Africa, and Seychelles. For the current study, we included 667 participants (34.8 ± 6.3 years), with measures of plasma glucose, insulin, leptin, and adiponectin, as well as moderate and vigorous physical activity (MVPA, minutes/day [min/day]), daily sedentary time (min/day), anthropometrics, and body composition. RESULTS: Body mass index (BMI) ranged from 22.1 to 29.6 kg/m² among the 282 men and from 25.8 to 34.8 kg/m² among the 385 women. MVPA ranged from 26.2 to 47.1 min/day in men and from 14.3 to 27.3 min/day in women, and correlated with adiposity (BMI, waist size, and % body fat) only among US males after controlling for age. Plasma glucose ranged from 4.6 ± 0.8 mmol/L in the South African men to 5.8 mmol/L in the US men, while the overall prevalence of diabetes was very low, except in the US men and women (6.7 and 12 %, respectively). Using multivariate linear regression, glucose was associated with BMI, age, sex, smoking, hypertension, and daily sedentary time, but not with daily MVPA. CONCLUSION: Obesity, metabolic risk, and other potential determinants vary significantly between populations at differing stages of the epidemiologic transition, requiring tailored public health policies to address local population characteristics.
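The regression in the RESULTS section is a standard multivariate linear model; the sketch below re-creates its structure on synthetic data. The variable names and distributions are placeholders, not the METS data.

```python
# Illustrative multivariate linear regression of glucose on the
# covariates named in the abstract, fit on a synthetic data frame.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 667
df = pd.DataFrame({
    "glucose": rng.normal(5.2, 0.8, n),        # mmol/L
    "bmi": rng.normal(28, 5, n),               # kg/m^2
    "age": rng.normal(35, 6, n),               # years
    "sex": rng.integers(0, 2, n),              # 0 = male, 1 = female
    "smoker": rng.integers(0, 2, n),
    "hypertension": rng.integers(0, 2, n),
    "sedentary_min": rng.normal(700, 120, n),  # min/day
})
model = smf.ols(
    "glucose ~ bmi + age + C(sex) + C(smoker) + C(hypertension) + sedentary_min",
    data=df,
).fit()
print(model.summary())
```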
Abstract:
An important aspect of immune monitoring for vaccine development, clinical trials, and research is the detection, measurement, and comparison of antigen-specific T-cells from subject samples under different conditions. Antigen-specific T-cells compose a very small fraction of total T-cells. Developments in cytometry technology over the past five years have enabled the measurement of single cells in a multivariate and high-throughput manner. This growth in both the dimensionality and quantity of data continues to pose a challenge for the effective identification and visualization of rare cell subsets, such as antigen-specific T-cells. Dimension reduction and feature extraction play a pivotal role in both identifying and visualizing cell populations of interest in large, multi-dimensional cytometry datasets. However, the automated identification and visualization of rare, high-dimensional cell subsets remains challenging. Here we demonstrate how a systematic and integrated approach combining targeted feature extraction with dimension reduction can be used to identify and visualize biological differences in rare, antigen-specific cell populations. By using OpenCyto to perform semi-automated gating and feature extraction of flow cytometry data, followed by dimensionality reduction with t-SNE, we are able to identify polyfunctional subpopulations of antigen-specific T-cells and visualize treatment-specific differences between them.
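The dimension-reduction step of this pipeline can be sketched independently of the gating (which the paper performs in OpenCyto, an R package). The per-cell marker matrix below is synthetic, with a small artificial "rare" subset standing in for antigen-specific cells.

```python
# Sketch of the t-SNE step only: embed per-cell marker intensities
# after gating/feature extraction; data here are synthetic.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(3)
bulk = rng.normal(0, 1, size=(4800, 12))      # background T-cells
rare = rng.normal(2.5, 0.5, size=(200, 12))   # rare "antigen-specific" subset
cells = np.vstack([bulk, rare])               # 5000 cells x 12 markers

embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(cells)
print(embedding.shape)  # (5000, 2); the rare subset separates as a distinct island
```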
Abstract:
Within the special geometry of the simplex, the sample space of compositional data, compositional orthonormal coordinates allow the application of any multivariate statistical approach. The search for meaningful coordinates has suggested balances (between two groups of parts)—based on a sequential binary partition of a D-part composition—and a representation in the form of a CoDa-dendrogram. Projected samples are represented in a dendrogram-like graph showing: (a) the way of grouping parts; (b) the explanatory role of subcompositions generated in the partition process; (c) the decomposition of the variance; (d) the center and quantiles of each balance. The representation is useful for the interpretation of balances and to describe the sample in a single diagram independently of the number of parts. Also, samples of two or more populations, as well as several samples from the same population, can be represented in the same graph, as long as they have the same parts registered. The approach is illustrated with an example of food consumption in Europe.
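A single balance coordinate is simple to compute from its standard definition; here is a minimal sketch for one step of a sequential binary partition, with a hypothetical 5-part composition.

```python
# One balance coordinate between two groups of parts, as used in the
# CoDa-dendrogram: b = sqrt(r*s/(r+s)) * ln(gmean(num)/gmean(den)).
import numpy as np
from scipy.stats import gmean

x = np.array([0.40, 0.25, 0.20, 0.10, 0.05])  # a 5-part composition (sums to 1)
num, den = x[:2], x[2:]                       # one sequential binary partition step
r, s = len(num), len(den)

balance = np.sqrt(r * s / (r + s)) * np.log(gmean(num) / gmean(den))
print(balance)
```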
Abstract:
This thesis develops a comprehensive and flexible statistical framework for the analysis and detection of space, time, and space-time clusters of environmental point data. The developed clustering methods were applied to both simulated datasets and real-world environmental phenomena; however, only the cases of forest fires in the Canton of Ticino (Switzerland) and in Portugal are expounded in this document. Environmental phenomena can typically be modelled as stochastic point processes where each event, e.g. the forest fire ignition point, is characterised by its spatial location and occurrence in time. Additionally, information such as burned area, ignition causes, land use, and topographic, climatic and meteorological features can also be used to characterise the studied phenomenon. The space-time pattern characterisation thereby represents a powerful tool to understand the distribution and behaviour of the events and their correlation with underlying processes, for instance socio-economic, environmental and meteorological factors. Consequently, we propose a methodology based on the adaptation and application of statistical and fractal point process measures for both global (e.g. the Morisita index, the box-counting fractal method, the multifractal formalism and Ripley's K-function) and local (e.g. scan statistics) analysis. Many measures describing the space-time distribution of environmental phenomena have been proposed in a wide variety of disciplines; nevertheless, most of these measures are of global character and do not consider complex spatial constraints, high variability and the multivariate nature of the events. Therefore, we propose a statistical framework that takes into account the complexities of the geographical space where phenomena take place, by introducing the Validity Domain concept and carrying out clustering analyses on data with different constrained geographical spaces, hence assessing the relative degree of clustering of the real distribution. Moreover, exclusively for the forest fire case, this research proposes two new methodologies: one for defining and mapping the Wildland-Urban Interface (WUI), described as the interaction zone between burnable vegetation and anthropogenic infrastructures, and one for predicting fire ignition susceptibility. In this regard, the main objective of this thesis was to carry out basic statistical/geospatial research with a strong applied component, to analyse and describe complex phenomena as well as to overcome unsolved methodological problems in the characterisation of space-time patterns, in particular of forest fire occurrences. Thus, this thesis provides a response to the increasing demand for both environmental monitoring and management tools for the assessment of natural and anthropogenic hazards and risks, sustainable development, retrospective success analysis, etc. The major contributions of this work were presented at national and international conferences and published in 5 scientific journals. National and international collaborations were also established and successfully accomplished.
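Of the global measures listed above (Morisita index, box counting, multifractal formalism, Ripley's K-function), Ripley's K is the most compact to illustrate. Below is a hedged, naive sketch on synthetic points in a unit square, without the edge corrections or Validity Domain constraints the thesis develops.

```python
# Naive Ripley's K estimate for a point pattern in the unit square;
# edge effects are ignored and coordinates are synthetic.
import numpy as np

rng = np.random.default_rng(4)
pts = rng.uniform(size=(300, 2))   # event locations (e.g. ignition points)
area, n = 1.0, len(pts)

def ripley_k(points, r):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)    # exclude self-pairs
    return area * (d < r).sum() / (n * (n - 1))

for r in (0.05, 0.10, 0.20):
    # Under complete spatial randomness, K(r) ~ pi * r**2.
    print(r, ripley_k(pts, r), np.pi * r**2)
```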
Abstract:
The electroencephalogram (EEG) is one of the most widely used techniques for studying the brain. It records the electrical signals produced in the human cortex through electrodes placed on the head. The technique, however, has some limitations when recording; the main one is known as artifacts, unwanted signals that mix with the EEG signals. The aim of this master's thesis is to present three new artifact-cleaning methods that can be applied to EEG. They are based on the Multivariate Empirical Mode Decomposition, a new signal-processing technique. The proposed cleaning methods are applied to simulated EEG data containing artifacts (eye blinks), and once the cleaning procedures have been applied, the results are compared with blink-free EEG data to assess the improvement. Two of the three proposed cleaning methods are then applied to real EEG data. The conclusions drawn from this work are that two of the proposed cleaning procedures can be used to preprocess real data in order to remove eye blinks.
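Multivariate EMD itself is not available in the common Python packages, but the underlying idea—decompose the signal into intrinsic mode functions (IMFs), then reconstruct it without the modes carrying the blink—can be sketched with plain univariate EMD from the PyEMD package as a stand-in. The synthetic signal and the "drop the slowest IMFs" heuristic are illustrative assumptions, not the thesis's procedures.

```python
# Sketch of EMD-based blink removal on a synthetic one-channel signal,
# as a univariate stand-in for the multivariate (MEMD) variant.
import numpy as np
from PyEMD import EMD

fs = 250.0
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t)                  # 10 Hz alpha-like activity
blink = 3.0 * np.exp(-((t - 2.0) ** 2) / 0.01)    # slow, large eye-blink bump
signal = eeg + blink

imfs = EMD().emd(signal)
# Blinks are low-frequency and high-amplitude, so heuristically drop
# the slowest modes (here the last two rows, including the residue).
cleaned = imfs[:-2].sum(axis=0)
print(imfs.shape, np.corrcoef(cleaned, eeg)[0, 1])
```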
Abstract:
Recent years have produced great advances in instrumentation technology. The amount of available data has been increasing due to the simplicity, speed and accuracy of current spectroscopic instruments. Most of these data are, however, meaningless without a proper analysis. This has been one of the reasons for the growing success of multivariate handling of such data. Industrial data are commonly not designed data; in other words, there is no exact experimental design; rather, the data have been collected as a routine procedure during an industrial process. This places certain demands on the multivariate modeling, as the selection of samples and variables can have an enormous effect. Common approaches in the modeling of industrial data are PCA (principal component analysis) and PLS (projection to latent structures, or partial least squares), but there are also other methods that should be considered. The more advanced methods include multi-block modeling and nonlinear modeling. In this thesis it is shown that the results of data analysis vary according to the modeling approach used, making the selection of the modeling approach dependent on the purpose of the model. If the model is intended to provide accurate predictions, the approach should differ from the case where the purpose of modeling is mostly to obtain information about the variables and the process. For industrial applicability it is essential that the methods are robust and sufficiently simple to apply. In this way the methods and the results can be compared and an approach selected that is suitable for the intended purpose. Data analysis methods are compared in this thesis using data from different fields of industry. In the first two papers, the multi-block method is considered for data originating from the oil and fertilizer industries. The results are compared to those from PLS and priority PLS. The third paper considers the applicability of multivariate models to process control for a reactive crystallization process. In the fourth paper, nonlinear modeling is examined with a data set from the oil industry. The response has a nonlinear relation to the descriptor matrix, and the results are compared between linear modeling, polynomial PLS and nonlinear modeling using nonlinear score vectors.
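The two workhorse methods named above are easy to contrast in a few lines. This is a hedged sketch on synthetic "undesigned" process data, not any of the thesis's industrial datasets: PCA to summarize structure, PLS to build a predictive model.

```python
# PCA for exploratory structure vs. PLS for prediction, on synthetic
# process data (200 samples, 15 variables, 3 informative).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 15))
y = X[:, :3] @ np.array([0.8, -0.5, 0.3]) + rng.normal(0, 0.1, 200)

pca = PCA(n_components=3).fit(X)
print("explained variance:", pca.explained_variance_ratio_)

pls = PLSRegression(n_components=3).fit(X, y)
print("R^2:", pls.score(X, y))
```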
Abstract:
The optimization of the anaerobic degradation of the azo dye Remazol golden yellow RNL was performed according to multivariate experimental designs: a 2² full-factorial design and a central composite design (CCD). The CCD revealed that the best incubation conditions (90% color removal) for the degradation of the azo dye (50 mg L⁻¹) were achieved with 350 mg L⁻¹ of yeast extract and 45 mL of anaerobic supernatant (free cell extract) produced from the incubation of 650 mg L⁻¹ of anaerobic microorganisms and 250 mg L⁻¹ of glucose. A first-order kinetics model best fit the experimental data (k = 0.0837 h⁻¹, R² = 0.9263).
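The reported first-order fit corresponds to C(t) = C₀·e^(−kt). The sketch below fits that model to a hypothetical decolorization time course generated to be consistent with the reported k; the data points themselves are not from the paper.

```python
# Fit first-order decolorization kinetics C(t) = C0 * exp(-k*t)
# to a hypothetical time course.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 4, 8, 12, 24, 36, 48.0])             # time (h)
c = np.array([50, 35.8, 25.6, 18.3, 6.7, 2.5, 0.9])   # dye remaining (mg/L)

def first_order(t, c0, k):
    return c0 * np.exp(-k * t)

(c0, k), _ = curve_fit(first_order, t, c, p0=(50, 0.08))
residuals = c - first_order(t, c0, k)
r2 = 1 - (residuals ** 2).sum() / ((c - c.mean()) ** 2).sum()
print(k, r2)  # with these synthetic points, k lands near the reported 0.0837 1/h
```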
Abstract:
The aim of the present work was to provide a faster, simpler, and less expensive way to analyze the sulfur content of diesel samples than the standard methods currently used. Thus, samples of diesel fuel with sulfur concentrations ranging from 400 to 2500 mg kg⁻¹ were analyzed by two methodologies: X-ray fluorescence, according to ASTM D4294, and Fourier transform infrared spectrometry (FTIR). The spectral data obtained from FTIR were used to build multivariate calibration models by partial least squares (PLS). Four models were built in three different ways: 1) a model using the full spectrum (665 to 4000 cm⁻¹), 2) two models using specific spectral regions, and 3) a model with variables selected by the classic stepwise method of variable selection. The model obtained by stepwise variable selection and the model built with the spectral regions between 665 and 856 cm⁻¹ and between 1145 and 2717 cm⁻¹ showed the best results in the determination of sulfur content.
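Building such a region-restricted PLS calibration is mechanically simple; the sketch below masks the two wavenumber windows named above before fitting, with synthetic spectra standing in for the FTIR data and for the reference sulfur values.

```python
# Region-restricted PLS calibration: keep only the 665-856 and
# 1145-2717 cm^-1 windows before fitting. Data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
wavenumbers = np.linspace(665, 4000, 1738)           # cm^-1 axis (assumed spacing)
spectra = rng.normal(size=(60, wavenumbers.size))    # 60 diesel samples
sulfur = rng.uniform(400, 2500, 60)                  # mg/kg reference values

mask = ((wavenumbers >= 665) & (wavenumbers <= 856)) | \
       ((wavenumbers >= 1145) & (wavenumbers <= 2717))
X = spectra[:, mask]

pls = PLSRegression(n_components=5).fit(X, sulfur)
print(pls.predict(X[:3]).ravel())
```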
Abstract:
In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR) with applications to asset pricing models. We focus on departures from the assumption of i.i.d. errors, at univariate and multivariate levels, with Gaussian and non-Gaussian (including Student t) errors. The univariate tests studied extend existing exact procedures by allowing for unspecified parameters in the error distributions (e.g., the degrees of freedom in the case of the Student t distribution). The multivariate tests are based on properly standardized multivariate residuals to ensure invariance to MLR coefficients and error covariances. We consider tests for serial correlation, tests for multivariate GARCH, and sign-type tests against general dependencies and asymmetries. The procedures proposed provide exact versions of those applied in Shanken (1990), which consist in combining univariate specification tests. Specifically, we combine tests across equations using the MC test procedure to avoid Bonferroni-type bounds. Since non-Gaussian based tests are not pivotal, we apply the “maximized MC” (MMC) test method [Dufour (2002)], where the MC p-value for the tested hypothesis (which depends on nuisance parameters) is maximized (with respect to these nuisance parameters) to control the test’s significance level. The tests proposed are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995. Our empirical results reveal the following. Whereas univariate exact tests indicate significant serial correlation, asymmetries and GARCH in some equations, such effects are much less prevalent once error cross-equation covariances are accounted for. In addition, significant departures from the i.i.d. hypothesis are less evident once we allow for non-Gaussian errors.
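The Monte Carlo test logic these procedures build on is compact enough to sketch: simulate the statistic under the null and use the rank of the observed value as an exact p-value. The toy statistic below (first-order residual autocorrelation) is an assumption for illustration; the MMC extension would additionally maximize this p-value over nuisance parameters.

```python
# Exact Monte Carlo p-value: rank of the observed statistic among
# statistics simulated under the null hypothesis.
import numpy as np

rng = np.random.default_rng(7)

def mc_pvalue(observed_stat, simulate_stat, n_rep=999):
    sims = np.array([simulate_stat() for _ in range(n_rep)])
    return (1 + (sims >= observed_stat).sum()) / (n_rep + 1)

# Toy example: test for zero first-order serial correlation in residuals.
resid = rng.normal(size=120)
obs = abs(np.corrcoef(resid[:-1], resid[1:])[0, 1])

def sim():
    e = rng.normal(size=120)                # draw residuals under the i.i.d. null
    return abs(np.corrcoef(e[:-1], e[1:])[0, 1])

print(mc_pvalue(obs, sim))
```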
Abstract:
We study the problem of testing the error distribution in a multivariate linear regression (MLR) model. The tests are functions of appropriately standardized multivariate least squares residuals whose distribution is invariant to the unknown cross-equation error covariance matrix. Empirical multivariate skewness and kurtosis criteria are then compared to simulation-based estimates of their expected value under the hypothesized distribution. Special cases considered include testing multivariate normal, Student t, normal mixture, and stable error models. In the Gaussian case, finite-sample versions of the standard multivariate skewness and kurtosis tests are derived. To do this, we exploit simple, double and multi-stage Monte Carlo test methods. For non-Gaussian distribution families involving nuisance parameters, confidence sets are derived for the nuisance parameters and the error distribution. The procedures considered are evaluated in a small simulation experiment. Finally, the tests are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995.
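In their standard formulation, the empirical multivariate skewness and kurtosis criteria are Mardia's statistics; a minimal sketch on synthetic standardized residuals follows. In the paper these values would be compared to simulation-based null estimates rather than used directly.

```python
# Mardia's multivariate skewness and kurtosis on standardized residuals.
import numpy as np

rng = np.random.default_rng(8)
U = rng.normal(size=(200, 3))            # standardized residuals, n x k

n = len(U)
Z = U - U.mean(axis=0)
S_inv = np.linalg.inv(Z.T @ Z / n)
D = Z @ S_inv @ Z.T                      # matrix of Mahalanobis cross-products

skewness = (D ** 3).sum() / n ** 2       # Mardia's b_{1,k}
kurtosis = (np.diag(D) ** 2).mean()      # Mardia's b_{2,k}
print(skewness, kurtosis)                # compare against simulated null values
```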
Abstract:
In this paper, we propose exact inference procedures for asset pricing models that can be formulated in the framework of a multivariate linear regression (CAPM), allowing for stable error distributions. The normality assumption on the distribution of stock returns is usually rejected in empirical studies, due to excess kurtosis and asymmetry. To model such data, we propose a comprehensive statistical approach which allows for alternative, possibly asymmetric, heavy-tailed distributions without the use of large-sample approximations. The methods suggested are based on Monte Carlo test techniques. Goodness-of-fit tests are formally incorporated to ensure that the error distributions considered are empirically sustainable, and from these, exact confidence sets for the unknown tail area and asymmetry parameters of the stable error distribution are derived. Tests for the efficiency of the market portfolio (zero intercepts) which explicitly allow for the presence of (unknown) nuisance parameters in the stable error distribution are derived. The methods proposed are applied to monthly returns on 12 portfolios of the New York Stock Exchange over the period 1926-1995 (five-year subperiods). We find that stable, possibly skewed, distributions provide a statistically significant improvement in goodness-of-fit and lead to fewer rejections of the efficiency hypothesis.
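The building blocks here—drawing from an asymmetric stable law and measuring goodness-of-fit—can be sketched with scipy's levy_stable. The tail and skewness parameters below are illustrative, and the asymptotic KS p-value shown is exactly what the paper's Monte Carlo machinery would replace with an exact, simulation-based one.

```python
# Draw returns from an asymmetric stable law and compute a KS
# goodness-of-fit statistic against the hypothesized distribution.
import numpy as np
from scipy.stats import levy_stable, kstest

alpha, beta = 1.8, -0.2    # tail index and skewness parameters (illustrative)
returns = levy_stable.rvs(alpha, beta, size=500, random_state=9)

# In an exact MC test, the null distribution of this statistic would be
# tabulated by re-simulating under the null, not taken asymptotically.
stat, p_asym = kstest(returns, levy_stable(alpha, beta).cdf)
print(stat, p_asym)
```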
Abstract:
Multivariate lifetime data arise in various forms, including recurrent event data, when individuals are followed to observe the sequence of occurrences of a certain type of event, and correlated lifetimes, when an individual is followed for the occurrence of two or more types of events, or when distinct individuals have dependent event times. In most studies there are covariates such as treatments, group indicators, individual characteristics, or environmental conditions, whose relationship to lifetime is of interest. This leads to a consideration of regression models. The well-known Cox proportional hazards model and its variations, using the marginal hazard functions employed for the analysis of multivariate survival data in the literature, are not sufficient to explain the complete dependence structure of a pair of lifetimes on the covariate vector. Motivated by this, in Chapter 2 we introduce a bivariate proportional hazards model using the vector hazard function of Johnson and Kotz (1975), in which the covariates under study have different effects on the two components of the vector hazard function. The proposed model is useful in real-life situations for studying the dependence structure of a pair of lifetimes on the covariate vector. The well-known partial likelihood approach is used for the estimation of the parameter vectors. We then introduce a bivariate proportional hazards model for gap times of recurrent events in Chapter 3. The model incorporates both marginal and joint dependence of the distribution of gap times on the covariate vector. In many fields of application, the mean residual life function is considered a more informative concept than the hazard function. Motivated by this, in Chapter 4 we consider a new semi-parametric model, the bivariate proportional mean residual life model, to assess the relationship between mean residual life and covariates for gap times of recurrent events. The counting process approach is used for the inference procedures on the gap times of recurrent events. In many survival studies, the distribution of lifetime may depend on the distribution of censoring time. In Chapter 5, we introduce a proportional hazards model for duration times and develop inference procedures under dependent (informative) censoring. In Chapter 6, we introduce a bivariate proportional hazards model for competing risks data under right censoring. The asymptotic properties of the estimators of the parameters of the different models developed in the previous chapters are studied. The proposed models were applied to various real-life situations.
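As a baseline for the extensions the thesis develops, here is a hedged sketch of the classical Cox proportional hazards fit with covariates, using the lifelines package on synthetic single-event data; the bivariate and mean-residual-life models of Chapters 2-6 are not implemented in standard packages.

```python
# Classical Cox proportional hazards fit on synthetic lifetime data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(10)
n = 300
df = pd.DataFrame({
    "duration": rng.exponential(10, n),   # observed time
    "event": rng.integers(0, 2, n),       # 1 = event observed, 0 = censored
    "treatment": rng.integers(0, 2, n),
    "age": rng.normal(50, 8, n),
})
cph = CoxPHFitter().fit(df, duration_col="duration", event_col="event")
cph.print_summary()                       # hazard ratios for treatment and age
```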