954 results for Instrumental variable regression
Abstract:
In this paper, reanalysis fields from the ECMWF have been statistically downscaled to predict surface moisture flux and daily precipitation from large-scale atmospheric fields at two observatories (Zaragoza and Tortosa, Ebro Valley, Spain) during the 1961-2001 period. Three types of downscaling models have been built: (i) analogues, (ii) analogues followed by random forests and (iii) analogues followed by multiple linear regression. The inputs consist of predictor fields taken from the ERA-40 reanalysis. The predicted fields are precipitation and surface moisture flux as measured at the two observatories. To reduce the dimensionality of the problem, the ERA-40 fields have been decomposed using empirical orthogonal functions. The available daily data have been divided into two parts: a training period (1961-1996), used to find a group of about 300 analogues and build the downscaling models, and a test period (1997-2001), where the models' performance has been assessed on independent data. In the case of surface moisture flux, the models based on analogues followed by random forests do not clearly outperform those built on analogues plus multiple linear regression, while simple averages calculated from the nearest analogues found in the training period yielded only slightly worse results. In the case of precipitation, the three types of model performed equally well. These results suggest that most of the models' downscaling capability can be attributed to the analogue-calculation stage.
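The two-stage scheme this abstract describes (analogue search in a truncated EOF space, then a regressor fitted on the analogue days) can be sketched roughly as follows. This is a minimal illustration with simulated stand-in data; the number of EOFs, the number of analogues and all variable names are assumptions, not the authors' configuration.

```python
# Minimal sketch of analogue downscaling followed by a random forest.
# Simulated stand-ins replace the ERA-40 predictor fields and the station
# precipitation series; the EOF truncation and analogue count are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor

def downscale_day(x_day, pcs_train, y_train, n_analogues=300):
    """Predict a station variable for one day from its large-scale state."""
    # Distance to every training day in the truncated EOF (PC) space.
    dists = np.linalg.norm(pcs_train - x_day, axis=1)
    idx = np.argsort(dists)[:n_analogues]           # nearest analogue days
    analogue_mean = y_train[idx].mean()             # stage (i): plain analogue average
    # Stage (ii): random forest fitted only on the analogue days.
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(pcs_train[idx], y_train[idx])
    return analogue_mean, rf.predict(x_day[None, :])[0]

rng = np.random.default_rng(0)
fields_train = rng.normal(size=(13000, 500))        # stand-in for 1961-1996 predictor fields
fields_test = rng.normal(size=(1800, 500))          # stand-in for 1997-2001
precip_train = rng.gamma(1.0, 2.0, size=13000)      # stand-in for observed precipitation

# Dimensionality reduction of the large-scale fields via EOFs (PCA).
pca = PCA(n_components=20).fit(fields_train)
pcs_train, pcs_test = pca.transform(fields_train), pca.transform(fields_test)

mean_pred, rf_pred = downscale_day(pcs_test[0], pcs_train, precip_train)
```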
Abstract:
We analyzed catches per unit of effort (CPUE) from the Japanese longline fishery for bigeye tuna (Thunnus obesus) in the central and eastern Pacific Ocean (EPO) with regression tree methods. Regression trees have not previously been used to estimate time series of abundance indices from CPUE data. The "optimally sized" tree had 139 parameters; year, month, latitude, and longitude interacted to affect bigeye CPUE. The trend in tree-based abundance indices for the EPO was similar to trends estimated from a generalized linear model and from an empirical model that combines oceanographic data with information on the distribution of fish relative to environmental conditions. The regression tree was more parsimonious and would be easier to implement than the other two models, but the tree provided no information about the mechanisms that caused bigeye CPUEs to vary in time and space. Bigeye CPUEs increased sharply during the mid-1980's and were more variable at the northern and southern edges of the fishing grounds. Both of these results can be explained by changes in actual abundance and changes in catchability. Results from a regression tree that was fitted to a subset of the data indicated that, in the EPO, bigeye are about equally catchable with regular and deep longlines. This is not consistent with observations that bigeye are more abundant at depth and indicates that classification by gear type (regular or deep longline) may not provide a good measure of capture depth. A simulated annealing algorithm was used to summarize the tree-based results by partitioning the fishing grounds into regions where trends in bigeye CPUE were similar. Simulated annealing can be useful for designing spatial strata in future sampling programs.
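A regression tree of the kind described, with CPUE responding to interactions of year, month, latitude and longitude, can be sketched as below. The data frame, the tree-size limit and the way the year effect is summarised are illustrative assumptions, not the fitted model from the paper.

```python
# Sketch: regression tree relating CPUE to year, month, latitude and longitude,
# then a crude year index from the fitted tree. Simulated stand-in data only.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "year": rng.integers(1975, 1993, n),
    "month": rng.integers(1, 13, n),
    "lat": rng.uniform(-20.0, 20.0, n),
    "lon": rng.uniform(-150.0, -80.0, n),
})
# Stand-in CPUE with spatial and temporal structure plus noise.
df["cpue"] = (0.05 * (df["year"] - 1975) + 0.3 * np.cos(np.radians(df["lat"]))
              + rng.normal(0, 0.2, n))

tree = DecisionTreeRegressor(max_leaf_nodes=139, min_samples_leaf=20)
tree.fit(df[["year", "month", "lat", "lon"]], df["cpue"])

# Year effect as a rough abundance index: mean prediction by year over the
# observed spatial and seasonal covariates.
df["pred"] = tree.predict(df[["year", "month", "lat", "lon"]])
print(df.groupby("year")["pred"].mean().head())
```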
Abstract:
We develop a convex relaxation of maximum a posteriori estimation of a mixture of regression models. Although our relaxation involves a semidefinite matrix variable, we reformulate the problem to eliminate the need for general semidefinite programming. In particular, we provide two reformulations that admit fast algorithms. The first is a max-min spectral reformulation exploiting quasi-Newton descent. The second is a min-min reformulation consisting of fast alternating steps of closed-form updates. We evaluate the methods against Expectation-Maximization in a real problem of motion segmentation from video data.
Abstract:
We demonstrate how a prior assumption of smoothness can be used to enhance the reconstruction of free energy profiles from multiple umbrella sampling simulations using the Bayesian Gaussian process regression approach. The method we derive allows the concurrent use of histograms and free energy gradients and can easily be extended to include further data. In Part I we review the necessary theory and test the method for one collective variable. We demonstrate improved performance with respect to the weighted histogram analysis method and obtain meaningful error bars without any significant additional computation. In Part II we consider the case of multiple collective variables and compare to a reconstruction using least squares fitting of radial basis functions. We find substantial improvements in the regimes of spatially sparse data or short sampling trajectories. A software implementation is made available on www.libatoms.org.
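As a rough illustration of the smoothing step for a single collective variable, the sketch below fits a Gaussian process to noisy per-window free energy estimates and returns a smooth profile with error bars. It omits the paper's combination of histogram and gradient data; the kernel, noise level and placeholder data are assumptions.

```python
# Sketch: GP regression of a 1-D free energy profile from noisy window estimates.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
s = np.linspace(0.0, 1.0, 15)[:, None]                           # umbrella window centres
f_noisy = 40 * (s.ravel() - 0.5) ** 2 + rng.normal(0, 0.5, 15)   # stand-in estimates

kernel = 1.0 * RBF(length_scale=0.2) + WhiteKernel(noise_level=0.25)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(s, f_noisy)

s_fine = np.linspace(0.0, 1.0, 200)[:, None]
f_mean, f_std = gp.predict(s_fine, return_std=True)              # smooth profile and error bars
```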
Abstract:
The Gaussian process latent variable model (GP-LVM) has been identified as an effective probabilistic approach for dimensionality reduction because it can obtain a low-dimensional manifold of a data set in an unsupervised fashion. However, the GP-LVM is insufficient for supervised learning tasks (e.g., classification and regression) because it ignores the class label information during dimensionality reduction. In this paper, a supervised GP-LVM is developed for supervised learning tasks, and the maximum a posteriori algorithm is introduced to estimate the positions of all samples in the latent variable space. We present experimental evidence suggesting that the supervised GP-LVM is able to use the class label information effectively, and thus it consistently outperforms the GP-LVM and the discriminative extension of the GP-LVM. A comparison with supervised classification methods, such as Gaussian process classification and support vector machines, is also given to illustrate the advantage of the proposed method.
Abstract:
In chemistry, variable selection is a key step in the chemical analysis of multi-component samples and in quantitative structure-activity/property relationship (QSAR/QSPR) studies. In this study, comparisons between different methods were performed. These include three classical methods (forward selection, backward elimination and stepwise regression), orthogonal descriptors, leaps-and-bounds regression and a genetic algorithm. Thirty-five nitrobenzenes were taken as the data set. From these structures, quantum chemical parameters, topological indices and an indicator variable were extracted as descriptors for the comparison of variable-selection methods. Interesting results were obtained. (C) 2001 Elsevier Science B.V. All rights reserved.
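One of the classical procedures compared, forward selection, can be sketched as follows. The adjusted-R² stopping rule, the simulated descriptors and all names are illustrative assumptions rather than the study's actual descriptors.

```python
# Sketch of forward selection by adjusted R^2, one of the classical
# variable-selection methods compared. Simulated stand-in descriptors.
import numpy as np
import statsmodels.api as sm

def forward_select(X, y, max_vars=5):
    chosen, remaining = [], list(range(X.shape[1]))
    best_adj_r2 = -np.inf
    while remaining and len(chosen) < max_vars:
        scores = [(sm.OLS(y, sm.add_constant(X[:, chosen + [j]])).fit().rsquared_adj, j)
                  for j in remaining]
        adj_r2, j_best = max(scores)
        if adj_r2 <= best_adj_r2:        # stop when no candidate improves the fit
            break
        best_adj_r2 = adj_r2
        chosen.append(j_best)
        remaining.remove(j_best)
    return chosen

rng = np.random.default_rng(3)
X = rng.normal(size=(35, 10))            # 35 compounds, 10 candidate descriptors
y = X[:, 1] - 2 * X[:, 4] + rng.normal(0, 0.3, 35)
print(forward_select(X, y))              # should recover descriptors 1 and 4
```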
Abstract:
In this paper, a comparison of orthogonal descriptors and leaps-and-bounds regression analysis is performed. For the nitrobenzene data set used in this study, the results obtained using orthogonal descriptors are better than those obtained using leaps-and-bounds regression. Leaps-and-bounds regression can be used effectively for variable selection in quantitative structure-activity/property relationship (QSAR/QSPR) studies, and orthogonalisation of descriptors is likewise a good method of variable selection for QSAR/QSPR studies.
Abstract:
In this paper, we introduce the method of leaps-and-bounds regression, which can be used to select variables quickly and obtain the best regression models containing one variable, two variables, three variables, and so on. The results obtained with leaps-and-bounds regression were compared with those achieved by stepwise regression, leading to the conclusion that leaps-and-bounds regression is an effective method.
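Leaps-and-bounds regression returns the best model of each size without fitting every candidate; for a small descriptor pool, the same output can be illustrated by brute-force enumeration. The sketch below is only that brute-force counterpart, not the branch-and-bound pruning itself, and the data are simulated.

```python
# Sketch: best subset of each size by exhaustive search. Leaps-and-bounds finds
# the same winners far more efficiently by pruning subsets that cannot improve
# on the current best; only the brute-force counterpart is shown here.
from itertools import combinations
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 6))
y = 1.5 * X[:, 0] - X[:, 3] + rng.normal(0, 0.5, 40)

for k in range(1, X.shape[1] + 1):
    r2, cols = max(
        (sm.OLS(y, sm.add_constant(X[:, list(c)])).fit().rsquared, c)
        for c in combinations(range(X.shape[1]), k)
    )
    print(k, cols, round(r2, 3))          # best k-variable model and its R^2
```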
Abstract:
This paper provides a root-n consistent, asymptotically normal weighted least squares estimator of the coefficients in a truncated regression model. The distribution of the errors is unknown and permits general forms of unknown heteroskedasticity. Also provided is an instrumental variables based two-stage least squares estimator for this model, which can be used when some regressors are endogenous, mismeasured, or otherwise correlated with the errors. A simulation study indicates that the new estimators perform well in finite samples. Our limiting distribution theory includes a new asymptotic trimming result addressing the boundary bias in first-stage density estimation without knowledge of the support boundary. © 2007 Cambridge University Press.
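For the instrumental-variables component in its most basic form, a textbook two-stage least squares sketch is shown below. It ignores the truncation, weighting and trimming that define the paper's estimator; the simulated variables are illustrative only.

```python
# Sketch of plain two-stage least squares (2SLS): project the endogenous
# regressor onto the instrument, then regress the outcome on the projection.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 1000
z = rng.normal(size=n)                     # instrument
u = rng.normal(size=n)                     # unobserved confounder
x = 0.8 * z + u + rng.normal(0, 0.5, n)    # endogenous regressor
y = 2.0 * x + 3.0 * u + rng.normal(0, 0.5, n)

x_hat = sm.OLS(x, sm.add_constant(z)).fit().fittedvalues   # stage 1
stage2 = sm.OLS(y, sm.add_constant(x_hat)).fit()           # stage 2
print(stage2.params)                       # slope should be close to the true 2.0
```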
Abstract:
We consider the problem of variable selection in regression modeling in high-dimensional spaces where there is known structure among the covariates. This is an unconventional variable selection problem for two reasons: (1) the dimension of the covariate space is comparable to, and often much larger than, the number of subjects in the study, and (2) the covariate space is highly structured, and in some cases it is desirable to incorporate this structural information into the model-building process. We approach this problem through the Bayesian variable selection framework, where we assume that the covariates lie on an undirected graph and formulate an Ising prior on the model space for incorporating structural information. Certain computational and statistical problems arise that are unique to such high-dimensional, structured settings, the most interesting being the phenomenon of phase transitions. We propose theoretical and computational schemes to mitigate these problems. We illustrate our methods on two different graph structures: the linear chain and the regular graph of degree k. Finally, we use our methods to study a specific application in genomics: the modeling of transcription factor binding sites in DNA sequences. © 2010 American Statistical Association.
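A stripped-down Metropolis sampler conveys how an Ising prior over a covariate graph favours models whose selected variables are neighbours. The BIC stand-in for the marginal likelihood, the linear-chain graph and the prior strengths below are simplified assumptions, not the authors' specification.

```python
# Sketch: Metropolis search over inclusion vectors with an Ising prior on a
# linear-chain covariate graph. BIC approximates the marginal likelihood.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n, p = 80, 12
X = rng.normal(size=(n, p))
y = X[:, 3] + X[:, 4] + X[:, 5] + rng.normal(0, 0.5, n)    # neighbouring true signals
edges = [(i, i + 1) for i in range(p - 1)]                  # linear chain graph

def log_post(gamma, a=-1.0, b=1.5):
    cols = np.flatnonzero(gamma)
    design = sm.add_constant(X[:, cols]) if cols.size else np.ones((n, 1))
    bic = sm.OLS(y, design).fit().bic
    ising = a * gamma.sum() + b * sum(gamma[i] * gamma[j] for i, j in edges)
    return -0.5 * bic + ising

gamma = np.zeros(p, dtype=int)
lp = log_post(gamma)
for _ in range(3000):                      # single-flip Metropolis updates
    j = rng.integers(p)
    prop = gamma.copy()
    prop[j] ^= 1
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        gamma, lp = prop, lp_prop
print(np.flatnonzero(gamma))               # indices of the covariates finally selected
```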
Abstract:
This paper studies the multiplicity-correction effect of standard Bayesian variable-selection priors in linear regression. Our first goal is to clarify when, and how, multiplicity correction happens automatically in Bayesian analysis, and to distinguish this correction from the Bayesian Ockham's-razor effect. Our second goal is to contrast empirical-Bayes and fully Bayesian approaches to variable selection through examples, theoretical results and simulations. Considerable differences between the two approaches are found. In particular, we prove a theorem that characterizes a surprising asymptotic discrepancy between fully Bayes and empirical Bayes. This discrepancy arises from a different source than the failure to account for hyperparameter uncertainty in the empirical-Bayes estimate. Indeed, even at the extreme, when the empirical-Bayes estimate converges asymptotically to the true variable-inclusion probability, the potential for a serious difference remains. © Institute of Mathematical Statistics, 2010.
Abstract:
C17 polyacetylenes are a group of bioactive compounds present in carrots which have recently gained scientific attention due to their cytotoxicity against cancer cells. In common with many bioactive compounds, their levels may be influenced by thermal processes, such as boiling or water immersion. This study investigated the effect of a number of water-immersion time/temperature combinations on the concentrations of these compounds and attempted to model the changes. Carrot samples were thermally treated by heating in water at temperatures from 50 to 100 °C with holding times of 2 to 60 min. Following heating, levels of falcarinol (FaOH), falcarindiol (FaDOH), falcarindiol-3-acetate (FaDOAc) and Hunter colour parameters (L*, a*, b*) were determined. FaOH, FaDOH and FaDOAc levels were significantly reduced at lower temperatures (50–60 °C). In contrast, samples heated at temperatures from 70 to 100 °C exhibited higher levels of polyacetylenes (p < 0.05) than raw, unprocessed samples. Regression modelling was used to describe the effects of temperature and holding time on the levels of the variables measured. Temperature treatment and holding time were found to significantly affect the polyacetylene content of carrot disks. The fitted models were significant (p < 0.05), with high coefficients of determination (R²).
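The regression modelling described, relating polyacetylene level to heating temperature and holding time, could take the form of a quadratic response surface. The sketch below uses simulated placeholder data, not the measured carrot values, and the model terms are an assumption.

```python
# Sketch: quadratic response-surface regression of a polyacetylene level on
# heating temperature and holding time. Placeholder data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
temp = rng.uniform(50, 100, 60)            # heating temperature (degrees C)
time = rng.uniform(2, 60, 60)              # holding time (minutes)
# Stand-in FaOH response with curvature in temperature and a time effect.
faoh = 100 - 0.03 * (temp - 75) ** 2 + 5 * np.log(time) + rng.normal(0, 3, 60)
df = pd.DataFrame({"faoh": faoh, "temp": temp, "time": time})

model = smf.ols("faoh ~ temp + time + I(temp**2) + I(time**2) + temp:time",
                data=df).fit()
print(round(model.rsquared, 3))
print(model.pvalues)
```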
Abstract:
Nitrogen Dioxide (NO2) is known to act as an environmental trigger for many respiratory illnesses. As a pollutant it is difficult to map accurately, as concentrations can vary greatly over small distances. In this study three geostatistical techniques were compared, producing maps of NO2 concentrations in the United Kingdom (UK). The primary data source for each technique was NO2 point data, generated from background automatic monitoring and background diffusion tubes, which are analysed by different laboratories on behalf of local councils and authorities in the UK. The techniques used were simple kriging (SK), ordinary kriging (OK) and simple kriging with a locally varying mean (SKlm). SK and OK make use of the primary variable only. SKlm differs in that it utilises additional data to inform prediction, and hence potentially reduces uncertainty. The secondary data source was oxides of nitrogen (NOx) derived from dispersion modelling outputs, at 1 km × 1 km resolution for the UK. These data were used to define the locally varying mean in SKlm, using two regression approaches: (i) global regression (GR) and (ii) geographically weighted regression (GWR). Based upon summary statistics and cross-validation prediction errors, SKlm using GWR-derived local means produced the most accurate predictions. Therefore, using GWR to inform SKlm was beneficial in this study.
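The locally varying mean that SKlm requires can be supplied by geographically weighted regression, i.e. kernel-weighted least squares of NO2 on the modelled NOx at each prediction location. The sketch below shows only that step with simulated data; the bandwidth and coordinates are assumptions, and the subsequent simple kriging of the residuals is omitted.

```python
# Sketch: geographically weighted regression (GWR) of NO2 on modelled NOx to
# obtain a locally varying mean. Simple kriging of the residuals (the SKlm
# step) is not shown. All data and the bandwidth are illustrative.
import numpy as np

rng = np.random.default_rng(8)
n = 300
coords = rng.uniform(0, 100, size=(n, 2))          # monitoring-site locations (km)
nox = rng.uniform(5, 80, n)                        # dispersion-model NOx at the sites
no2 = 4 + 0.5 * nox + 0.02 * coords[:, 0] + rng.normal(0, 2, n)

def gwr_mean(target_xy, nox_at_target, bandwidth=15.0):
    """Kernel-weighted least squares estimate of the NO2 mean at one location."""
    d = np.linalg.norm(coords - target_xy, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)        # Gaussian distance weights
    X = np.column_stack([np.ones(n), nox])
    Xw = X * w[:, None]
    beta = np.linalg.solve(Xw.T @ X, Xw.T @ no2)   # weighted normal equations
    return beta[0] + beta[1] * nox_at_target

# Locally varying mean at an unsampled grid cell; SKlm would then krige the
# residuals (observed NO2 minus this mean) and add them back.
print(gwr_mean(np.array([50.0, 50.0]), nox_at_target=30.0))
```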
Abstract:
BACKGROUND: PET/CT scanning can determine suitability for curative therapy and inform decision making when considering radical therapy in patients with non-small cell lung cancer (NSCLC). Metastases to central mediastinal lymph nodes (N2) may alter such management decisions. We report a 2 year retrospective series assessing N2 lymph node staging accuracy with PET/CT compared to pathological analysis at surgery.
METHODS: Patients with NSCLC attending our centre (excluding those who had induction chemotherapy) who had staging PET/CT scans and pathological nodal sampling between June 2006 and June 2008 were analysed. For each lymph node assessed pathologically, the corresponding PET/CT status was determined. 64 patients with 200 N2 lymph nodes were analysed.
RESULTS: Sensitivity of PET/CT scans for identifying involved N2 lymph nodes was 39%, specificity 96% and overall accuracy 90%. For individual lymph node analysis, logistic regression demonstrated a significant linear association between PET/CT sensitivity and time from scanning to surgery (p=0.031) but not for specificity or accuracy. Scans performed <9 weeks before pathological sampling were significantly more sensitive (64% <9 weeks, 0% ≥9 weeks, p=0.013) and more accurate (94% <9 weeks, 81% ≥9 weeks, p=0.007). Differences in specificity were not seen (97% <9 weeks, 91% ≥9 weeks, p=0.228); no significant difference in specificity was found at any time point.
CONCLUSIONS: We recommend that if a PET/CT scan is older than 9 weeks, and management would be altered by the presence of N2 nodes, re-staging of the mediastinum should be undertaken.
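The node-level logistic regression relating PET/CT sensitivity to the scan-to-surgery interval can be sketched as below. The simulated data and the coding of a "correct call" are illustrative assumptions, not the study data.

```python
# Sketch: logistic regression of whether PET/CT correctly identified an involved
# N2 node as a function of weeks from scan to surgery. Simulated stand-in data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n_nodes = 200
weeks = rng.uniform(1, 20, n_nodes)                    # scan-to-surgery interval
p_correct = 1 / (1 + np.exp(-(2.0 - 0.25 * weeks)))    # detection decays with delay
correct = rng.binomial(1, p_correct)

model = sm.Logit(correct, sm.add_constant(weeks)).fit(disp=0)
print(model.params)                                    # negative slope: older scans less sensitive
print(model.pvalues)
```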