881 results for Regression-based decomposition.
Abstract:
We consider the issue of performing accurate small-sample likelihood-based inference in beta regression models, which are useful for modelling continuous proportions that are affected by independent variables. We derive small-sample adjustments to the likelihood ratio statistic in this class of models. The adjusted statistics can be easily implemented from standard statistical software. We present Monte Carlo simulations showing that inference based on the adjusted statistics we propose is much more reliable than that based on the usual likelihood ratio statistic. A real data example is presented.
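The abstract above does not give the paper's specific adjustment, but the general idea of a Bartlett-type correction to the likelihood ratio statistic can be sketched as follows. The function name, the illustrative log-likelihoods, and the correction constant `c` are all assumptions for illustration; the paper derives a model-specific constant for beta regression.

```python
import numpy as np
from scipy.stats import chi2

def adjusted_lr_test(loglik_full, loglik_null, df, c, n):
    """Generic Bartlett-type scaling of the likelihood ratio statistic,
    LR* = LR / (1 + c/n), referred to a chi-squared(df) distribution.
    The constant c is model-specific; it is assumed known here."""
    lr = 2.0 * (loglik_full - loglik_null)
    lr_adj = lr / (1.0 + c / n)
    return lr_adj, float(chi2.sf(lr_adj, df))

# hypothetical log-likelihoods from a small-sample fit (n = 30)
lr_adj, p_value = adjusted_lr_test(-42.1, -45.0, df=1, c=1.5, n=30)
```

In small samples the scaling shrinks the statistic, making the test less liberal, which is the direction of the reliability gains the abstract reports.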
Abstract:
We present simple matrix formulae for corrected score statistics in symmetric nonlinear regression models. The corrected score statistics follow a chi-squared distribution more closely than does the classical score statistic. Our simulation results indicate that the corrected score tests display smaller size distortions than the original score test. We also compare the sizes and powers of the corrected score tests with those of bootstrap-based score tests.
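For readers unfamiliar with score tests, the classical (uncorrected) statistic is easy to show in its simplest one-parameter case; the paper's contribution is matrix corrections for symmetric nonlinear regressions, which are not reproduced here.

```python
import numpy as np
from scipy.stats import chi2

def score_test_proportion(x, p0):
    """Classical score (Rao) statistic for H0: p = p0 with Bernoulli data:
    S = (sum(x) - n*p0)^2 / (n*p0*(1 - p0)), asymptotically chi-squared(1)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    s = (x.sum() - n * p0) ** 2 / (n * p0 * (1.0 - p0))
    return float(s), float(chi2.sf(s, df=1))

s, p = score_test_proportion([1, 0, 1, 1, 0, 1, 1, 1], 0.5)  # s = 2.0
```

The corrected versions the paper proposes modify such statistics so that their null distribution is closer to chi-squared in finite samples.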
Abstract:
The purpose of this article is to present a new method for predicting the response variable of an observation in a new cluster under a multilevel logistic regression. The central idea is based on the empirical best estimator for the random effect. Two estimation methods for the multilevel model are compared: penalized quasi-likelihood and Gauss-Hermite quadrature. The performance of the multilevel logistic model in predicting the probability for an observation in a new cluster, in comparison with the usual logistic model, is examined through simulations and an application.
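The Gauss-Hermite route mentioned above integrates the random effect out of each cluster's likelihood numerically. A minimal sketch for a random-intercept logistic model follows; the function name, data, and node count are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def cluster_marginal_loglik(y, eta_fixed, sigma, n_nodes=20):
    """Marginal log-likelihood of one cluster in a random-intercept
    logistic model: the intercept b ~ N(0, sigma^2) is integrated out
    by Gauss-Hermite quadrature."""
    nodes, weights = hermgauss(n_nodes)
    b = np.sqrt(2.0) * sigma * nodes          # change of variables for N(0, sigma^2)
    lik = np.empty(n_nodes)
    for k in range(n_nodes):
        p = 1.0 / (1.0 + np.exp(-(eta_fixed + b[k])))   # conditional probabilities
        lik[k] = np.prod(np.where(y == 1, p, 1.0 - p))  # conditional likelihood
    return float(np.log(np.sum(weights * lik) / np.sqrt(np.pi)))

# hypothetical cluster: three binary outcomes and their fixed-effect predictors
ll = cluster_marginal_loglik(np.array([1, 0, 1]), np.array([0.2, -0.1, 0.4]), sigma=0.8)
```

Summing such terms over clusters gives the marginal likelihood to maximize; penalized quasi-likelihood avoids the integral by approximating it instead.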
Abstract:
We consider a Bayesian approach to the nonlinear regression model in which the normal distribution on the error term is replaced by skewed distributions that account for both skewness and heavy tails, or for skewness alone. The data considered in this paper are repeated measurements taken over time on a set of individuals. Such multiple observations on the same individual generally produce serially correlated outcomes, so our model additionally allows for correlation between observations made on the same individual. We illustrate the procedure using a data set on the growth curves of a clinical measurement in a group of pregnant women from an obstetrics clinic in Santiago, Chile. Parameter estimation and prediction were carried out using appropriate posterior simulation schemes based on Markov chain Monte Carlo methods. Besides the deviance information criterion (DIC) and the conditional predictive ordinate (CPO), we suggest the use of proper scoring rules based on the posterior predictive distribution for comparing models. For our data set, all these criteria chose the skew-t model as the best model for the errors. The DIC and CPO criteria are also validated, for the model proposed here, through a simulation study. We conclude that the DIC criterion is not trustworthy for this kind of complex model.
Abstract:
Birnbaum-Saunders (BS) models have been widely applied in material fatigue studies and reliability analyses to relate the total time until failure to some type of cumulative damage. In many problems in the medical field, such as chronic cardiac diseases and different types of cancer, cumulative damage caused by several risk factors may lead to degradation that results in a fatigue process. In these cases, BS models can be suitable for describing the propagation lifetime. However, since the cumulative damage is assumed to be normally distributed in the BS distribution, the parameter estimates from this model can be sensitive to outlying observations. In order to attenuate this influence, we present BS models in which a Student-t distribution is assumed for the cumulative damage. In particular, we show that maximum likelihood estimation in Student-t log-BS models attributes smaller weights to outlying observations, which produces robust parameter estimates. Some inferential results are also presented. In addition, a diagnostic analysis is derived based on local influence and on deviance component and martingale-type residuals. Finally, a motivating example from the medical field is analyzed using log-BS regression models. Since the parameter estimates appear to be very sensitive to outlying and influential observations, the Student-t log-BS regression model should attenuate such influences. The model checking methodologies developed in this paper are used to compare the fitted models.
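The downweighting mechanism the abstract describes is standard for Student-t maximum likelihood and can be shown directly: each observation receives weight (nu + 1) / (nu + r^2) in the estimating equations, where r is its standardized residual. The sketch below uses a simple location model, not the full log-BS likelihood.

```python
import numpy as np

def student_t_weights(residuals, nu):
    """Observation weights implied by Student-t maximum likelihood:
    w_i = (nu + 1) / (nu + r_i^2) for standardized residuals r_i,
    so large residuals (outliers) are automatically downweighted."""
    r = np.asarray(residuals, dtype=float)
    return (nu + 1.0) / (nu + r ** 2)

w = student_t_weights([0.0, 1.0, 5.0], nu=4)   # outlier at r = 5 gets weight ~0.17
```

Under normal errors every observation effectively gets weight one, which is why the normal-based BS fit is sensitive to outliers while the Student-t fit is robust.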
Abstract:
This report presents an algorithm for locating the cut points for, and separating, vertically attached traffic signs in Sweden. The algorithm provides several digital image processing features: a binary image, which represents the visual object and its complex rectangular background with ones and zeros respectively; improved cross-correlation, which measures the similarity of 2D objects and filters traffic sign candidates; simplified shape decomposition, which smooths the contour of the visual object iteratively in order to reduce white noise; flipping point detection, which locates black noise candidates; and a chasm filling algorithm, which eliminates black noise, determines the final cut points, and separates originally attached traffic signs into individual ones. At each step, the intermediate results as well as the efficiency in practice are presented to show the advantages and disadvantages of the developed algorithm. The report concentrates on contour-based recognition of Swedish traffic signs; the general shapes covered are the upward triangle, downward triangle, circle, rectangle, and octagon. Finally, a demonstration program is presented to show how the algorithm works in a real-time environment.
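The cross-correlation filtering step can be illustrated with a plain Pearson-style similarity between two binary masks; this is a simplified stand-in under assumed equal-size inputs, not the report's "improved" variant.

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """Similarity between two equal-size binary masks after mean-centering:
    1.0 for identical shapes, -1.0 for exact complements, near 0 for
    unrelated shapes. Used here to score template/candidate matches."""
    a = np.asarray(a, dtype=float); a = a - a.mean()
    b = np.asarray(b, dtype=float); b = b - b.mean()
    denom = np.sqrt(np.sum(a * a) * np.sum(b * b))
    return float(np.sum(a * b) / denom) if denom > 0 else 0.0

cross = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])       # toy binarized shape
score_same = normalized_cross_correlation(cross, cross)          # identical: 1.0
score_complement = normalized_cross_correlation(cross, 1 - cross)  # complement: -1.0
```

Thresholding such scores against templates of the five general shapes is one way candidates could be kept or discarded before the decomposition stage.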
Abstract:
Building on the definition of linear cointegration first provided in Granger (1981), which it nests, this paper presents a definition of nonlinear cointegration and introduces a nonlinear cointegrated economic system. Our main focus is a residual-based test of no nonlinear cointegration against nonlinear cointegration, suited to detecting a stochastic trend in nonlinear autoregression models. We construct the cointegrating regression with smooth transition components taken from the smooth transition autoregression model. Some properties of the estimation procedure for the cointegrating regression are analyzed and discussed, including the description of the transition variable. An autoregression of order one is fitted to the estimated residuals for the residual-based test, from which the test statistic is obtained. Critical values and the asymptotic distribution of the test statistic for different cointegrating regressions and sample sizes are derived by Monte Carlo simulation. The proposed methods and models are illustrated with an empirical example, comparing the results with the linear cointegration application in Hamilton (1994). We conclude that there is nonlinear cointegration in our system.
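The residual-based testing logic can be sketched in its linear (Engle-Granger style) form: fit the cointegrating regression by OLS, fit an AR(1) to the residuals, and examine the t-ratio of (rho - 1). The paper's version adds smooth transition components to the regression, and in either case critical values must come from simulation; everything below is an illustrative assumption.

```python
import numpy as np

def residual_based_stat(y, x):
    """Residual-based statistic: OLS cointegrating regression of y on x,
    then the t-ratio of (rho - 1) from an AR(1) fitted to the residuals.
    Strongly negative values point toward cointegration."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta                              # estimated residuals
    u_lag, u_cur = u[:-1], u[1:]
    rho = (u_lag @ u_cur) / (u_lag @ u_lag)       # AR(1) coefficient
    eps = u_cur - rho * u_lag
    se = np.sqrt((eps @ eps) / (len(eps) - 1) / (u_lag @ u_lag))
    return float((rho - 1.0) / se)

# hypothetical illustration: y and x share a common stochastic trend
rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(400))
y = 2.0 * x + rng.standard_normal(400)
stat = residual_based_stat(y, x)   # strongly negative under cointegration
```

Because the statistic's null distribution is nonstandard, comparing it to simulated critical values, as the paper does, is essential.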
Abstract:
In this paper, we study the factors influencing National Telecom Business Volume using 2008 data published in the China Statistical Yearbook. We illustrate the procedure of modeling National Telecom Business Volume on the following eight variables: GDP, Consumption Level, Total Retail Sales of Social Consumer Goods, Renovation Investment, Local Telephone Exchange Capacity, Mobile Telephone Exchange Capacity, Mobile Phone End Users, and Local Telephone End Users. Tests for heteroscedasticity and multicollinearity are included in the model evaluation. We also use the AIC and BIC criteria to select independent variables, identify the factors entering the optimal regression model for telecom business volume, and describe the relation between the independent variables and the dependent variable. Based on the final results, we propose several recommendations on how to improve telecommunication services and promote economic development.
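With only eight candidate regressors, AIC/BIC variable selection can be done by exhaustive enumeration. The sketch below, under assumed simulated data and illustrative names, shows the mechanics on four candidates; it is not the paper's fitted model.

```python
import numpy as np
from itertools import combinations

def ols_ic(y, X):
    """Gaussian AIC and BIC of an OLS fit; the error variance is profiled
    out and constants common to all models are dropped."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    n, k = X.shape
    loglik = -0.5 * n * (np.log(2.0 * np.pi * rss / n) + 1.0)
    return 2.0 * k - 2.0 * loglik, k * np.log(n) - 2.0 * loglik

def best_subset_bic(y, X_full, names):
    """Exhaustive subset search keeping the lowest BIC; with eight
    candidates this is only 255 fits."""
    n = len(y)
    best_bic, best_vars = np.inf, ()
    for r in range(1, len(names) + 1):
        for idx in combinations(range(len(names)), r):
            X = np.column_stack([np.ones(n), X_full[:, idx]])
            _, bic = ols_ic(y, X)
            if bic < best_bic:
                best_bic, best_vars = bic, tuple(names[j] for j in idx)
    return best_bic, best_vars

# hypothetical data: only two of four candidates truly matter
rng = np.random.default_rng(1)
X_full = rng.standard_normal((200, 4))
y = 3.0 * X_full[:, 0] + 2.0 * X_full[:, 2] + 0.5 * rng.standard_normal(200)
bic, chosen = best_subset_bic(y, X_full, ["x0", "x1", "x2", "x3"])
```

Swapping the BIC for the AIC in the comparison gives the AIC-selected model; the two criteria can disagree, BIC penalizing model size more heavily for n > 7.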
Abstract:
Background: British government policy for older people focuses on a vision of active ageing and independent living. In the face of diminishing personal capacities, the use of appropriate home-based technology (HBT) devices could potentially meet a wide range of needs and consequently improve many aspects of older people's quality of life, such as physical health, psychosocial well-being, social relationships, and the physical or living environment. This study aimed to examine the use of HBT devices and the correlation between use of such devices and quality of life among older people living in extra-care housing (ECH). Methods: A structured questionnaire was administered for this study. Using purposive sampling, 160 older people living in extra-care housing schemes were selected from 23 schemes in England. A face-to-face interview was conducted in each participant's living unit. Quality of life was measured with the SEIQoL-Adapted and the CASP-19. Results: Although most basic appliances and emergency call systems were used in the living units, communally provided facilities such as personal computers, washing machines, and assisted bathing equipment in the schemes were not well utilised. Multiple regression analysis adjusted for confounders including age, sex, marital status, living arrangement and mobility indicated a coefficient of 1.17 (95% CI 0.05 to 2.29; p = 0.04) for the SEIQoL-Adapted and 2.83 (95% CI 1.17 to 4.50; p = 0.001) for the CASP-19. Conclusions: The findings of the present study will be of value to those developing new forms of specialised housing for older people with functional limitations and, in particular, to guiding investments in technological aids. The results also indicate that the home is an essential site for developing residential technologies.
Abstract:
Background: Evidence-based practice (EBP) is emphasized to increase the quality of care and patient safety. EBP is often described as a process consisting of distinct activities, including formulating questions, searching for information, compiling the appraised information, implementing evidence, and evaluating the resulting practice. To increase registered nurses' (RNs') practice of EBP, variables associated with such activities need to be explored. The aim of the study was to examine individual and organizational factors associated with EBP activities among RNs two years post graduation. Methods: A cross-sectional design based on a national sample of RNs was used. Data were collected in 2007 from a cohort of RNs included in the Swedish Longitudinal Analyses of Nursing Education/Employment study. The sample consisted of 1256 RNs (response rate 76%), of whom 987 worked in healthcare at the time of data collection. Data were self-reported and collected through annual postal surveys. EBP activities were measured using six single items, along with instruments measuring individual and work-related variables. Data were analyzed using logistic regression models. Results: Associated factors were identified for all six EBP activities. Capability beliefs regarding EBP were a significant factor for all six activities (OR = 2.6 - 7.3). Working in the care of older people was associated with practicing four of the activities to a high extent (OR = 1.7 - 2.2). Supportive leadership and high collective efficacy were associated with practicing three activities (OR = 1.4 - 2.0). Conclusions: To be successful in enhancing EBP among newly graduated RNs, strategies need to incorporate both individually and organizationally directed factors.
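The odds ratios reported above are exponentiated logistic regression coefficients. A minimal sketch of that pipeline, on assumed simulated data (a binary EBP activity against a hypothetical capability-beliefs score), follows; it is not the study's model.

```python
import numpy as np

def logistic_fit(X, y, n_iter=25):
    """Newton-Raphson (IRLS) fit of a logistic regression; exp(beta)
    gives the odds ratio (OR) for each predictor."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))          # fitted probabilities
        W = p * (1.0 - p)                            # IRLS weights
        beta = beta + np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return beta

# hypothetical data: outcome generated with a true slope of 1.0 (OR ~ e)
rng = np.random.default_rng(0)
n = 500
score = rng.standard_normal(n)
X = np.column_stack([np.ones(n), score])
p_true = 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * score)))
y = (rng.random(n) < p_true).astype(float)
odds_ratio = float(np.exp(logistic_fit(X, y)[1]))
```

An OR above 1 means higher values of the predictor raise the odds of practicing the activity, which is how ranges like OR = 2.6 - 7.3 in the results are read.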
Abstract:
Generalized linear mixed models are flexible tools for modeling non-normal data and are useful for accommodating overdispersion in Poisson regression models with random effects. Their main difficulty resides in parameter estimation, because there is no analytic solution for the maximization of the marginal likelihood. Many methods have been proposed for this purpose, and many of them are implemented in software packages. The purpose of this study is to compare the performance of three different statistical principles (marginal likelihood, extended likelihood, and Bayesian analysis) via simulation studies. Real data on contact wrestling are used for illustration.
Abstract:
This dissertation addresses the problem of making inference under weak identification in instrumental variables regression models. More specifically, we are interested in one-sided hypothesis testing for the coefficient of the endogenous variable when the instruments are weak. The focus is on conditional tests based on the likelihood ratio, score, and Wald statistics. Theoretical and numerical work shows that the conditional t-test based on the two-stage least squares (2SLS) estimator performs well even when instruments are weakly correlated with the endogenous variable. The conditional approach corrects its size uniformly, and when the population F-statistic is as small as two, its power is near the power envelopes for similar and non-similar tests. This finding is surprising considering the bad performance of two-sided conditional t-tests found in Andrews, Moreira and Stock (2007). Given this counterintuitive result, we propose novel two-sided t-tests which are approximately unbiased and can perform as well as the conditional likelihood ratio (CLR) test of Moreira (2003).
Abstract:
The literature has emphasized that absorptive capacity (AC) leads to performance, but its influence on projects is still unclear. Additionally, project success is not well understood in the literature, and AC can be an important mechanism to explain it. Therefore, the purpose of this study is to investigate the effect of absorptive capacity on project performance in the construction industry of São Paulo State. We study this influence through the potential and realized absorptive capacity proposed by Zahra and George (2002). To achieve this goal, we combine qualitative and quantitative research. The qualitative phase is based on 15 interviews with project managers in different sectors, conducted to understand the main constructs and support the subsequent quantitative phase; content analysis was the technique used to analyze these interviews. In the quantitative phase, we collected 157 survey responses from project managers in the construction sector. Confirmatory factor analysis and hierarchical linear regression were the techniques used to analyze the data. Our findings suggest that realized absorptive capacity has a positive influence on performance, but potential absorptive capacity and the interaction effects do not. Moreover, planning and monitoring have a positive impact on budget, schedule, and customer satisfaction, while risk coping capacity has a positive impact on business success. In academic terms, this research enables a better understanding of the importance of absorptive capacity in the construction industry and confirms that applying knowledge in processes and routines enhances performance. For management, absorptive capacity enables improvements in internal capabilities, reflected in increased project management efficiency.
Indeed, when a company manages project practices efficiently it enhances business and project performance; however, it first needs to improve its internal abilities to enrich processes and routines through relevant knowledge.
Abstract:
This paper considers two-sided tests for the parameter of an endogenous variable in an instrumental variable (IV) model with heteroskedastic and autocorrelated errors. We develop the finite-sample theory of weighted-average power (WAP) tests with normal errors and a known long-run variance. We introduce two weights which are invariant to orthogonal transformations of the instruments; e.g., changing the order in which the instruments appear. While tests using the MM1 weight can be severely biased, optimal tests based on the MM2 weight are naturally two-sided when errors are homoskedastic. We propose two boundary conditions that yield two-sided tests whether errors are homoskedastic or not. The locally unbiased (LU) condition is related to the power around the null hypothesis and is a weaker requirement than unbiasedness. The strongly unbiased (SU) condition is more restrictive than LU, but the associated WAP tests are easier to implement. Several tests are SU in finite samples or asymptotically, including tests robust to weak IV (such as the Anderson-Rubin, score, conditional quasi-likelihood ratio, and I. Andrews' (2015) PI-CLC tests) and two-sided tests which are optimal when the sample size is large and instruments are strong. We refer to the WAP-SU tests based on our weights as MM1-SU and MM2-SU tests. Dropping the restrictive assumptions of normality and known variance, the theory is shown to remain valid at the cost of asymptotic approximations. The MM2-SU test is optimal under the strong IV asymptotics, and outperforms other existing tests under the weak IV asymptotics.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)