965 results for REGRESSION APPROACH
Abstract:
Radiotherapy is one of the main treatments used against cancer. It uses radiation to destroy cancerous cells while trying, at the same time, to minimize damage to healthy tissues. Planning a radiotherapy treatment is patient-dependent, resulting in a lengthy trial-and-error procedure until a treatment complying as closely as possible with the medical prescription is found. Intensity Modulated Radiation Therapy (IMRT) is a radiation treatment technique that achieves a high degree of conformity between the prescribed dose and the area to be treated, limiting the dose absorbed by healthy tissues. Nevertheless, it is still not possible to completely eliminate the treatments' potential side-effects. In this retrospective study we use clinical data from patients with head-and-neck cancer treated at the Portuguese Institute of Oncology of Coimbra and explore the possibility of classifying new, untreated patients according to their probability of xerostomia 12 months after the beginning of IMRT treatment, using a logistic regression approach. The results show that the classifier has high discriminative ability in predicting the binary response "at risk for xerostomia at 12 months".
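A minimal sketch of the classification step this abstract describes, a logistic regression predicting a binary "at risk for xerostomia" outcome. The data, the single dosimetric predictor (a mean dose in Gy) and its effect size are simulated stand-ins, not the study's actual clinical variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
mean_dose = rng.uniform(10, 50, n)                        # hypothetical dose (Gy)
# Simulated ground truth: xerostomia risk rises smoothly with dose
p_true = 1.0 / (1.0 + np.exp(-0.15 * (mean_dose - 30)))
y = rng.binomial(1, p_true)                               # simulated 12-month outcome

clf = LogisticRegression().fit(mean_dose.reshape(-1, 1), y)
proba = clf.predict_proba(mean_dose.reshape(-1, 1))[:, 1]  # predicted risk
auc = roc_auc_score(y, proba)                              # discriminative ability
```

The AUC computed on the predicted probabilities is one way to quantify the "high discriminative ability" the abstract reports.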
Abstract:
This article focuses on business risk management in the insurance industry. A methodology is proposed for estimating the profit loss caused by each customer in the portfolio due to policy cancellation. Using data from a European insurance company, customer behaviour over time is analyzed in order to estimate the probability of policy cancellation and the resulting potential profit loss. Customers may hold contracts in up to two different lines of business: motor insurance and other diverse insurance (such as home contents, life or accident insurance). Implications for understanding customer cancellation behaviour as the core of business risk management are outlined.
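A hypothetical illustration of the profit-loss idea sketched in this abstract: each customer's expected loss is the estimated cancellation probability times the profit at stake, summed over the portfolio. All customers, probabilities and profit figures are invented.

```python
# Invented portfolio: probabilities and profits are illustrative only.
customers = [
    {"id": 1, "lines": ["motor"], "p_cancel": 0.10, "profit": 300.0},
    {"id": 2, "lines": ["motor", "diverse"], "p_cancel": 0.25, "profit": 450.0},
]

# Expected profit loss per customer = P(cancellation) x profit at stake
for c in customers:
    c["expected_loss"] = c["p_cancel"] * c["profit"]

# Portfolio-level exposure: 0.10*300 + 0.25*450 = 30.0 + 112.5 = 142.5
portfolio_loss = sum(c["expected_loss"] for c in customers)
```

In the paper the cancellation probabilities come from a model of customer behaviour over time; here they are fixed numbers so the arithmetic is visible.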
Abstract:
Two speed management policies were implemented in the metropolitan area of Barcelona with the aim of reducing air pollution concentration levels. In 2008, the maximum speed limit was reduced to 80 km/h and, in 2009, a variable speed system was introduced on some metropolitan motorways. This paper evaluates whether these policies have been successful in promoting cleaner air, not only in terms of mean pollutant levels but also during high- and low-pollution episodes. We use a quantile regression approach for fixed-effects panel data. We find that the variable speed system improves air quality for the two pollutants considered here, being most effective when nitrogen oxide levels are not too low and when particulate matter concentrations are below extremely high levels. However, reducing the maximum speed limit from 120/100 km/h to 80 km/h has no effect, or even a slightly increasing effect, on the two pollutants, depending on the pollution scenario.
Abstract:
Parameter estimates from commonly used multivariable parametric survival regression models do not directly quantify differences in years of life expectancy. Gaussian linear regression models give results in terms of absolute mean differences, but are not appropriate for modeling life expectancy, because in many situations time to death has a negatively skewed distribution. A regression approach using a skew-normal distribution is an alternative to parametric survival models for modeling life expectancy, because parameter estimates can be interpreted in terms of survival time differences while allowing for skewness of the distribution. In this paper we show how to use skew-normal regression so that censored and left-truncated observations are accounted for. We then model differences in life expectancy using data from the Swiss National Cohort Study and from official life expectancy estimates, and compare the results with those derived from commonly used survival regression models. We conclude that a censored skew-normal survival regression approach for left-truncated observations can be used to model differences in life expectancy across covariates of interest.
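A sketch of the skew-normal idea on complete (uncensored) data; the censoring and left-truncation handling that is the paper's actual contribution is omitted here. Ages at death are simulated with a negative skew, and the fitted model's mean plays the role of a model-based life expectancy.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Simulated ages at death with the negative skew described in the abstract
ages = stats.skewnorm.rvs(a=-4, loc=85, scale=12, size=2000, random_state=rng)

# Maximum-likelihood fit of the skew-normal shape, location and scale
a_hat, loc_hat, scale_hat = stats.skewnorm.fit(ages)
# Model-based mean survival time, interpretable as a life expectancy
mean_hat = stats.skewnorm.mean(a_hat, loc_hat, scale_hat)
```

In a regression version, the location parameter would depend on covariates, so coefficient differences translate directly into survival-time differences.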
Abstract:
In this paper, by employing the threshold regression method, we estimate the average tariff equivalent of fixed costs for the use of a free trade agreement (FTA) among all existing FTAs in the world. It is estimated to be 3.2%. This global estimate serves as a reference rate in the evaluation of each FTA’s fixed costs.
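A toy sketch in the spirit of the threshold-regression method mentioned above: grid-search the break point that minimises the residual sum of squares of a two-regime mean model. The variables, the regimes and the true threshold are invented and unrelated to the paper's 3.2% estimate.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
x = rng.uniform(0, 10, n)        # hypothetical threshold variable
tau_true = 3.0                   # invented true break point
# Outcome jumps from one regime mean to another at the threshold
y = np.where(x < tau_true, 1.0, 3.0) + rng.normal(0, 0.3, n)

def ssr(tau):
    """Residual sum of squares of a two-regime mean model split at tau."""
    lo, hi = y[x < tau], y[x >= tau]
    return ((lo - lo.mean()) ** 2).sum() + ((hi - hi.mean()) ** 2).sum()

grid = np.linspace(0.5, 9.5, 181)
tau_hat = grid[np.argmin([ssr(t) for t in grid])]  # estimated threshold
```

Real threshold regression adds inference on the break point and regime slopes; this only shows the core least-squares grid search.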
Abstract:
The authors discuss the effects that economic crises have on the global market shares of tourism destinations, through a series of potential transmission mechanisms based on the main determinants of economic competitiveness identified in the previous literature, using a non-linear approach. Specifically, a Markov-switching regression approach is used to estimate the effect of two basic transmission mechanisms: reductions in internal and external tourism demand and falling investment.
Abstract:
This study analyzes the impact of individual characteristics as well as occupation and industry on male wage inequality in nine European countries. Unlike previous studies, we consider regression models for five inequality measures and employ the recentered influence function regression method proposed by Firpo et al. (2009) to test directly the influence of covariates on inequality. We conclude that there is heterogeneity in the effects of covariates on inequality across countries and throughout the wage distribution. Heterogeneity among countries is more evident for education and experience, whereas occupation and industry characteristics, as well as holding a supervisory position, show more similar effects. Our results are compatible with skill-biased technological change, the rapid rise in the integration of trade and financial markets, and explanations related to the increase in the remuneration packages of top executives.
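A sketch of the recentered-influence-function (RIF) regression idea cited above, for one inequality measure: the RIF of the variance at a point y is (y - mean)^2, and an OLS of this RIF on covariates approximates each covariate's effect on dispersion. Wages and the education effect here are simulated, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4000
educ = rng.integers(0, 2, n)                   # hypothetical: 1 = higher education
# Simulated log wages: education raises both the mean and the dispersion
y = 2.0 + 0.4 * educ + rng.normal(0, 1, n) * (1 + 0.5 * educ)

# RIF of the variance evaluated at each observation
rif_var = (y - y.mean()) ** 2

# OLS of the RIF on the covariate: slope ~ effect of education on the variance
X = np.column_stack([np.ones(n), educ])
beta = np.linalg.lstsq(X, rif_var, rcond=None)[0]
educ_effect_on_variance = beta[1]              # positive: education widens dispersion
```

The same recipe with the RIF of a quantile, the Gini, or another measure gives the other four inequality regressions the abstract refers to.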
Abstract:
We propose a 3D-2D image registration method that relates image features of 2D projection images to the transformation parameters of the 3D image by nonlinear regression, and compare it with a conventional registration method based on iterative optimization. For evaluation, simulated X-ray images (digitally reconstructed radiographs, DRRs) were generated from coronary artery tree models derived from 3D CTA scans. Registration of nine vessel trees was performed, and alignment quality was measured by the mean target registration error (mTRE). The regression approach was shown to be slightly less accurate, but much more robust, than the iterative optimization approach.
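A sketch of the regression idea above: learn a direct mapping from projection-image features to 3D transformation parameters, here with a small neural network on synthetic data. The features, the three-parameter transformation and all scales are invented stand-ins for the paper's actual descriptors.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n = 500
params = rng.uniform(-10, 10, (n, 3))     # e.g. three translation offsets in mm
# Stand-in "image features": a smooth nonlinear function of the parameters
W = rng.normal(size=(3, 8)) * 0.05
features = np.tanh(params @ W)

# Regress transformation parameters directly from the features
reg = MLPRegressor(hidden_layer_sizes=(64,), max_iter=3000, random_state=0)
reg.fit(features, params)
pred = reg.predict(features)

# Analogue of the mean target registration error (mTRE) on the training set
mtre = np.sqrt(((pred - params) ** 2).sum(axis=1)).mean()
```

The appeal measured in the paper is robustness: a learned regressor gives a one-shot estimate with no iterative optimization to get trapped, at some cost in accuracy.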
Abstract:
A novel sparse kernel density estimator is derived based on a regression approach, which selects a very small subset of significant kernels by means of the D-optimality experimental design criterion using an orthogonal forward selection procedure. The weights of the resulting sparse kernel model are calculated using the multiplicative nonnegative quadratic programming algorithm. The proposed method is computationally attractive, in comparison with many existing kernel density estimation algorithms. Our numerical results also show that the proposed method compares favourably with other existing methods, in terms of both test accuracy and model sparsity, for constructing kernel density estimates.
Abstract:
An automatic algorithm is derived for constructing kernel density estimates based on a regression approach that directly optimizes generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. Local regularization is incorporated into the density construction process to further enforce sparsity. Examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with comparable accuracy to that of the full sample Parzen window density estimate.
Abstract:
This paper presents an efficient construction algorithm for obtaining sparse kernel density estimates based on a regression approach that directly optimizes model generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. A local regularization method is incorporated naturally into the density construction process to further enforce sparsity. An additional advantage of the proposed algorithm is that it is fully automatic and the user is not required to specify any criterion to terminate the density construction procedure. This is in contrast to an existing state-of-the-art kernel density estimation method using the support vector machine (SVM), where the user is required to specify some critical algorithm parameter. Several examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with comparable accuracy to that of the full sample optimized Parzen window density estimate. Our experimental results also demonstrate that the proposed algorithm compares favorably with the SVM method, in terms of both test accuracy and sparsity, for constructing kernel density estimates.
Abstract:
This paper derives an efficient algorithm for constructing sparse kernel density (SKD) estimates. The algorithm first selects a very small subset of significant kernels using an orthogonal forward regression (OFR) procedure based on the D-optimality experimental design criterion. The weights of the resulting sparse kernel model are then calculated using a modified multiplicative nonnegative quadratic programming algorithm. Unlike most SKD estimators, the proposed D-optimality regression approach is an unsupervised construction algorithm and does not require an empirical desired response for the kernel selection task. The strength of the D-optimality OFR lies in the fact that the algorithm automatically selects a small subset of the most significant kernels related to the largest eigenvalues of the kernel design matrix, which accounts for most of the energy of the kernel training data, and this also guarantees the most accurate kernel weight estimate. The proposed method is also computationally attractive, in comparison with many existing SKD construction algorithms. Extensive numerical investigation demonstrates the ability of this regression-based approach to efficiently construct a very sparse kernel density estimate with excellent test accuracy, and our results show that the proposed method compares favourably with other existing sparse methods, in terms of test accuracy, model sparsity and complexity, for constructing kernel density estimates.
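A simplified sketch of the sparse-kernel-density idea running through the four abstracts above: approximate the full-sample Parzen window estimate with a small, fixed set of kernels whose nonnegative weights are fitted by nonnegative least squares. This stands in for (and greatly simplifies) the OFR, D-optimality and MNQP machinery the papers describe; data, bandwidth and the number of kernels are invented.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
data = rng.normal(0, 1, 400)
grid = np.linspace(-4, 4, 200)
h = 0.4                                    # kernel bandwidth

def gauss(x, c):
    """Gaussian kernel of bandwidth h centred at c."""
    return np.exp(-0.5 * ((x - c) / h) ** 2) / (h * np.sqrt(2 * np.pi))

# Full-sample Parzen window estimate evaluated on the grid (400 kernels)
parzen = np.mean([gauss(grid, c) for c in data], axis=0)

# Sparse model: only 15 kernel centres, nonnegative weights fitted to the curve
centres = np.linspace(-3, 3, 15)
Phi = np.column_stack([gauss(grid, c) for c in centres])
w, _ = nnls(Phi, parzen)
sparse = Phi @ w

max_err = np.abs(sparse - parzen).max()    # sparse model tracks the full estimate
```

The papers' algorithms instead select the centres greedily from the data and regularize the weights; the point illustrated here is only that a handful of kernels can reproduce a 400-kernel Parzen estimate closely.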
Abstract:
Economic performance increasingly relies on the global economic environment due to the growing importance of trade and financial links among countries. The literature on growth spillovers documents various gains obtained from this interaction. This work analyzes the possible effects of a potential economic growth downturn in China, Germany and the United States on the growth of other economies. We use a global autoregressive regression approach to assess interdependence among countries. Two types of phenomena are simulated. The first is a one-time shock that hits these economies; our simulations use a large shock of -2.5 standard deviations, a figure very similar to what was seen in the 2008 crisis. The second experiment simulates the effect of a hypothetical downturn of the aforementioned economies. Our results suggest that the United States plays the role of a global economy, affecting countries across the globe, whereas Germany and China play a regional role.
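A toy illustration of the shock experiment described above: a one-off -2.5 standard-deviation shock to one economy propagates to the others through a spillover matrix. The countries, coefficients and one-round propagation are invented for arithmetic clarity; the paper's global autoregressive model is far richer.

```python
import numpy as np

countries = ["US", "DE", "CN", "ROW"]
# Hypothetical one-period spillover coefficients: row = affected, column = origin
A = np.array([
    [0.0, 0.1, 0.1, 0.05],   # US
    [0.3, 0.0, 0.1, 0.05],   # DE
    [0.3, 0.1, 0.0, 0.05],   # CN
    [0.4, 0.2, 0.2, 0.0],    # rest of world
])

shock = np.array([-2.5, 0.0, 0.0, 0.0])  # -2.5 s.d. shock to the US

# Growth impact on each economy after one round of transmission
first_round = A @ shock   # e.g. ROW: 0.4 * (-2.5) = -1.0
```

Iterating the multiplication (A @ first_round, and so on) would trace out the dynamic propagation that the paper's simulations capture.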
Abstract:
Background mortality is an essential component of any forest growth and yield model. Forecasts of mortality contribute largely to the variability and accuracy of model predictions at the tree, stand and forest level. In the present study, I implement and evaluate state-of-the-art techniques to increase the accuracy of individual-tree mortality models, similar to those used in many of the current variants of the Forest Vegetation Simulator, using data from North Idaho and Montana. The first technique addresses methods to correct for the bias induced by the measurement error typically present in competition variables. The second implements survival regression and evaluates its performance against the traditional logistic regression approach. I selected the regression calibration (RC) algorithm as a good candidate for addressing the measurement error problem. Two logistic regression models were fitted for each species, one ignoring the measurement error (the "naïve" approach) and the other applying RC. The models fitted with RC outperformed the naïve models in terms of discrimination when the competition variable was found to be statistically significant. The effect of RC was more obvious where the measurement error variance was large and for more shade-intolerant species. The process of model fitting and variable selection revealed that the past emphasis on DBH as a predictor variable for mortality, while producing models with strong metrics of fit, may make models less generalizable. The evaluation of the error variance estimator developed by Stage and Wykoff (1998), which is core to the implementation of RC, under different spatial patterns and diameter distributions revealed that the Stage and Wykoff estimate notably overestimated the true variance in all simulated stands except those that are clustered. The results show a systematic bias even when all the assumptions made by the authors are met. I argue that this is the result of the Poisson-based estimate ignoring the overlapping area of potential plots around a tree. The effects of the variance estimate, especially in the application phase, justify future efforts to improve its accuracy. The second technique implemented and evaluated is a survival regression model that accounts for the time-dependent nature of variables, such as diameter and competition variables, and the interval-censored nature of data collected from remeasured plots. The performance of the model is compared with the traditional logistic regression model as a tool to predict individual-tree mortality. Validation of both approaches shows that the survival regression approach discriminates better between dead and live trees for all species. In conclusion, I showed that the proposed techniques do increase the accuracy of individual-tree mortality models and are a promising first step towards the next generation of background mortality models. I have also identified the next steps to undertake in order to advance mortality models further.
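A sketch of the regression-calibration (RC) idea central to the abstract above: replace the error-prone covariate W = X + U by its best linear predictor E[X | W] before fitting the model, which undoes the attenuation that measurement error causes. All data are simulated; in practice the reliability ratio comes from replicate measurements or an external variance estimate (such as Stage and Wykoff's), while here the simulated truth is used for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000
x = rng.normal(0, 1, n)            # true competition index (unobserved)
w = x + rng.normal(0, 0.8, n)      # observed index, with measurement error

# RC step: E[X | W] = mu_w + lambda * (W - mu_w), lambda = var(X) / var(W).
# lambda (the reliability ratio) is taken from the simulated truth here.
lam = x.var() / w.var()
x_rc = w.mean() + lam * (w - w.mean())

# A linear stand-in for the mortality model, with true slope 2.0
y = 2.0 * x + rng.normal(0, 0.5, n)
naive_slope = np.polyfit(w, y, 1)[0]    # attenuated toward zero
rc_slope = np.polyfit(x_rc, y, 1)[0]    # approximately de-attenuated
```

The same correction applied inside a logistic mortality model is what produced the improved discrimination the study reports.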