713 results for mistimed covariates
Abstract:
Cross-cultural studies on eating behaviors and related constructs can identify cultural and social factors that contribute to eating disorder symptomatology. Eating disorders (EDs) are a major cause for concern in the U.S., and recent studies in Colombia have shown growing rates among its female population. In addition, cosmetic surgery procedures have been increasing rapidly in both the U.S. and Colombia, and preliminary research suggests a positive relation between disordered eating and endorsement of plastic surgery. In samples of college women from Colombia and the U.S., we investigated patterns of association between disordered eating variables and cosmetic surgery acceptance. Our approach used separate analyses for the various subcomponents of disordered eating (to determine their unique associations with cosmetic surgery acceptance) while adjusting for potentially relevant covariates and examining cross-cultural patterns. Participants were students at an urban, public college in the U.S. (n=163) and an urban, private college in Colombia (n=179). Overall, our findings suggested that participants from Colombia with greater disordered eating were more likely to endorse cosmetic surgery for social reasons, while those from the U.S. were more likely to consider undergoing cosmetic surgery for personal reasons. The differing findings between the two samples may be due to cultural and social factors, which we delineate. These findings also have potential implications for presurgical counseling of cosmetic surgery candidates.
Abstract:
Using a unique neighborhood crime dataset for Bogotá in 2011, this study takes a spatial econometric approach to examine the role of socioeconomic and agglomeration variables in explaining the variance of crime. It considers two types of crime: violent crime, represented by homicides, and property crime, represented by residential burglaries. These two types of crime are measured with non-standard crime statistics, constructed as the area incidence of each crime in the neighborhood. The existence of crime hotspots in Bogotá has been shown in most of the literature, and using these non-standard crime statistics at the neighborhood level some hotspots arise again, validating the use of a spatial approach for these new crime statistics. The final specification includes socioeconomic, agglomeration, land-use and visual-aspect variables, which are included in a SARAR model and estimated by the procedure devised by Kelejian and Prucha (2009). The resulting coefficients and marginal effects show the relevance of these crime hotspots, consistent with most previous studies. Socioeconomic variables are significant and show the importance of age and education. Agglomeration variables are also significant, so more densely populated areas are correlated with more crime. Interestingly, the two types of crime do not share the same significant covariates. Education and the young male population carry different signs for homicides and residential burglaries. Inequality matters for homicides, while higher real estate valuation matters for residential burglaries. Finally, density positively impacts both types of crime.
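A SARAR(1,1) specification of the kind estimated here combines a spatial lag of the dependent variable with spatially autocorrelated errors; a minimal sketch, in illustrative notation rather than the paper's own, is

\[ y = \lambda W y + X\beta + u, \qquad u = \rho W u + \varepsilon, \]

where y stacks the neighborhood crime incidences, W is a spatial weight matrix encoding neighborhood adjacency, X holds the covariates, and ε is an idiosyncratic disturbance. The Kelejian-Prucha procedure estimates λ and ρ by generalized moments and instrumental variables rather than maximum likelihood, which keeps estimation feasible for a large number of neighborhoods.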
Abstract:
We analyze the determinants of subjective returns to higher education in Colombia. The information on expectations was collected in categories, motivating the use of interval regression and ordered probit approaches to model the relationship between beliefs and measures of ability, conditioning on individual, school and regional covariates. The results suggest that there are considerable differences in the size of the expected returns across population groups and a strong dominance of college over technical education. Gender gaps disappear in college education, but girls tend to believe that professional wages are more concentrated in higher income categories than boys do. Finally, it seems that Colombian students overestimate the pecuniary returns to education.
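For context, interval regression treats the reported category as a bracket on a latent continuous belief; a minimal sketch in generic notation, not the paper's, is

\[ y_i^* = x_i'\beta + \varepsilon_i, \quad \varepsilon_i \sim N(0, \sigma^2), \qquad L_i = \Phi\!\left(\frac{b_{k(i)} - x_i'\beta}{\sigma}\right) - \Phi\!\left(\frac{a_{k(i)} - x_i'\beta}{\sigma}\right), \]

where [a_{k(i)}, b_{k(i)}) is the income bracket chosen by respondent i. The ordered probit replaces the known bracket bounds with estimated cutpoints and normalizes σ to one.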
Abstract:
The decomposition methods available when the data are fully observed are not valid when the variable of interest is censored. This may explain the scarcity of this type of exercise for duration variables, which are usually observed under censoring. This paper proposes an Oaxaca-Blinder-type method for decomposing differences in means in the context of censored data. The validity of the method rests on the identification and estimation of the joint distribution of the duration variable and a set of covariates. In addition, a more general method is proposed that allows the decomposition of other functionals of interest, such as the median or the Gini coefficient, based on the specification of the conditional distribution function of the duration variable given a set of covariates. Monte Carlo experiments are carried out to assess the performance of these methods. Finally, the proposed methods are applied to analyze gender gaps in several features of unemployment duration in Spain, such as mean duration, the probability of long-term unemployment, and the Gini coefficient. The results indicate that factors other than observable characteristics such as human capital or household structure play a primary role in explaining these gaps.
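For reference, the classical Oaxaca-Blinder decomposition of a mean gap under linear models, which the paper extends to censored durations, splits the gap into a characteristics part and a coefficients part:

\[ \bar{Y}_A - \bar{Y}_B = (\bar{X}_A - \bar{X}_B)'\hat{\beta}_B + \bar{X}_A'(\hat{\beta}_A - \hat{\beta}_B). \]

The first term is the portion explained by differences in observable covariates; the second is the unexplained portion attributed to differences in returns. Censoring prevents direct estimation of the group means, which is why the proposed method works through the joint distribution of the duration and the covariates instead.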
Abstract:
We propose and estimate a financial distress model that explicitly accounts for the interactions or spill-over effects between financial institutions, through the use of a spatial contiguity matrix built from financial network data on interbank transactions. This setup of the financial distress model allows for the empirical validation of the importance of network externalities in determining financial distress, in addition to institution-specific and macroeconomic covariates. The relevance of this specification is that it simultaneously incorporates micro-prudential factors (Basel II) as well as macro-prudential and systemic factors (Basel III) as determinants of financial distress. Results indicate that network externalities are an important determinant of the financial health of a financial institution. The parameter that measures the effect of network externalities is both economically and statistically significant, and its inclusion as a risk factor reduces the importance of firm-specific variables such as the size or degree of leverage of the financial institution. In addition, we analyze the policy implications of the network factor model for capital requirements and deposit insurance pricing.
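A minimal sketch of such a specification, in illustrative notation not taken from the paper: let d be a vector of distress measures for the institutions and W a row-normalized matrix built from interbank exposures; then

\[ d = \rho W d + X\beta + Z\gamma + \varepsilon, \]

where X collects institution-specific covariates (size, leverage), Z macroeconomic covariates, and ρ is the network-externality parameter whose significance is tested.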
Abstract:
Understanding the effect of habitat fragmentation is a fundamental yet complicated aim of many ecological studies. The Beni savanna is a naturally fragmented forest habitat, where forest islands exhibit variation in resources and threats. To understand how the availability of resources and threats affects the use of forest islands by parrots, we applied occupancy modeling to quantify use and detection probabilities for 12 parrot species on 60 forest islands. The presence of urucuri (Attalea phalerata) and macaw (Acrocomia aculeata) palms, the number of tree cavities on the islands, and the presence of selective logging and fire were included as covariates associated with the availability of resources and threats. The model-selection analysis indicated that both resource and threat variables explained the use of forest islands by parrots. For most species, the best models confirmed predictions. The number of cavities was positively associated with the use of forest islands by 11 species. The area of the island and the presence of macaw palm showed a positive association with the probability of use by seven and five species, respectively, while selective logging and fire showed a negative association with five and six species, respectively. The Blue-throated Macaw (Ara glaucogularis), a critically endangered parrot species endemic to our study area, was the only species that showed a negative association with both threats. Monitoring continues to be essential to evaluate conservation and management actions for parrot populations. Understanding how species use this naturally fragmented habitat will help determine which fragments should be preserved and which conservation actions are needed.
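A minimal sketch of the single-season occupancy formulation, with illustrative covariate names: the probability ψ_i that parrots use island i and the probability p_ij of detecting them on visit j are modeled on the logit scale,

\[ \mathrm{logit}(\psi_i) = \beta_0 + \beta_1\,\mathrm{cavities}_i + \beta_2\,\mathrm{logging}_i + \cdots, \]

and the likelihood for an island with no detections mixes genuine absence of use with use that went undetected, \( (1-\psi_i) + \psi_i \prod_j (1 - p_{ij}) \), which is what separates use from detection.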
Abstract:
In this paper we consider the estimation of population size from one-source capture–recapture data, that is, a list in which individuals can potentially be found repeatedly and where the question is how many individuals are missed by the list. As a typical example, we provide data from a 2001 drug user study in Bangkok where the list consists of drug users who repeatedly contact treatment institutions. Drug users with 1, 2, 3, ... contacts occur, but drug users with zero contacts are not present, requiring the size of this group to be estimated. Statistically, these data can be considered as stemming from a zero-truncated count distribution. We revisit an estimator for the population size suggested by Zelterman that is known to be robust under potential unobserved heterogeneity. We demonstrate that the Zelterman estimator can be viewed as a maximum likelihood estimator for a locally truncated Poisson likelihood which is equivalent to a binomial likelihood. This result allows the extension of the Zelterman estimator by means of logistic regression to include observed heterogeneity in the form of covariates. We also review an estimator proposed by Chao and explain why we are not able to obtain similar results for this estimator. The Zelterman estimator is applied in two case studies, the first a drug user study from Bangkok, the second an illegal immigrant study in the Netherlands. Our results suggest the new estimator should be used, in particular, if substantial unobserved heterogeneity is present.
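For reference, the Zelterman estimator depends only on the frequencies f_1 and f_2 of individuals observed exactly once and exactly twice; with n observed individuals,

\[ \hat{\lambda} = \frac{2 f_2}{f_1}, \qquad \hat{N} = \frac{n}{1 - e^{-\hat{\lambda}}}. \]

Because λ̂ uses only the low counts, the estimator is insensitive to the behavior of frequently recaptured individuals, which is the source of its robustness to unobserved heterogeneity.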
Abstract:
None of the current surveillance streams monitoring the presence of scrapie in Great Britain provide a comprehensive and unbiased estimate of the prevalence of the disease at the holding level. Previous work to estimate the under-ascertainment adjusted prevalence of scrapie in Great Britain applied multiple-list capture–recapture methods. The enforcement of new control measures on scrapie-affected holdings in 2004 has stopped the overlapping between surveillance sources and, hence, the application of multiple-list capture–recapture models. Alternative methods, still within the capture–recapture methodology but relying on repeated entries in one single list, have been suggested for these situations. In this article, we apply one-list capture–recapture approaches to data held on the Scrapie Notifications Database to estimate the undetected population of scrapie-affected holdings with clinical disease in Great Britain for the years 2002, 2003, and 2004. To do so, we develop a new diagnostic tool for the indication of heterogeneity as well as a new understanding of the Zelterman estimator and Chao's lower-bound estimator to account for potential unobserved heterogeneity. We demonstrate that the Zelterman estimator can be viewed as a maximum likelihood estimator for a special, locally truncated Poisson likelihood equivalent to a binomial likelihood. This understanding allows the extension of the Zelterman approach by means of logistic regression to include observed heterogeneity in the form of covariates, in the case studied here the holding size and country of origin. Our results confirm the presence of substantial unobserved heterogeneity, supporting the application of our two estimators. The total scrapie-affected holding population in Great Britain is around 300 holdings per year. None of the covariates appear to inform the model significantly.
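The binomial equivalence that licenses the covariate extension can be sketched as follows. Truncating the Poisson locally to the counts 1 and 2 gives

\[ P(Y=2 \mid Y \in \{1,2\}) = \frac{\lambda/2}{1 + \lambda/2}, \qquad \mathrm{logit}\, P(Y=2 \mid Y \in \{1,2\}) = \log(\lambda/2), \]

so setting \( \log(\lambda_i/2) = x_i'\beta \) reduces to a logistic regression of "entered twice" against "entered once" on the covariates x_i, and the population size is then estimated by the Horvitz-Thompson-type sum \( \hat{N} = \sum_{i=1}^{n} \{1 - e^{-\hat{\lambda}_i}\}^{-1} \) over the n observed holdings.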
Abstract:
Rationalizing non-participation as a resource deficiency in the household, this paper identifies strategies for milk-market development in the Ethiopian highlands. The additional amounts of covariates required for positive marketable surplus ('distances-to-market') are computed from a model in which production and sales are correlated, sales are left-censored at some unobserved thresholds, production efficiencies are heterogeneous, and the data are in the form of a panel. Incorporating these features into the modeling exercise is important because they are fundamental to the data-generating environment. There are four reasons. First, because production and sales decisions are enacted within the same household, both decisions are affected by the same exogenous shocks, and production and sales are therefore likely to be correlated. Second, because selling involves time and time is arguably the most important resource available to a subsistence household, the minimum sales amount is not zero but, rather, some unobserved threshold that lies beyond zero. Third, the potential existence of heterogeneous abilities in management, ones that lie latent from the econometrician's perspective, suggests that production efficiencies should be permitted to vary across households. Fourth, we observe a single set of households during multiple visits in a single production year. The results convey clearly that institutional and production innovations alone are insufficient to encourage participation. Market-precipitating innovation requires complementary inputs, especially improvements in human capital and reductions in risk. Copyright (c) 2008 John Wiley & Sons, Ltd.
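A minimal sketch of the sales equation with an unobserved censoring threshold, in illustrative notation rather than the paper's: observed sales s_i are positive only when latent desired sales clear a household-specific threshold,

\[ s_i^* = x_i'\beta + u_i, \qquad s_i = \begin{cases} s_i^* & \text{if } s_i^* > \tau_i, \\ 0 & \text{otherwise}, \end{cases} \]

with τ_i > 0 unobserved and the error u_i allowed to correlate with the production-equation disturbance, which is what generates the production-sales correlation the authors emphasize.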
Abstract:
Most statistical methodology for phase III clinical trials focuses on the comparison of a single experimental treatment with a control. An increasing desire to reduce the time before regulatory approval of a new drug is sought has led to the development of two-stage or sequential designs for trials that combine the definitive analysis associated with phase III with the treatment selection element of a phase II study. In this paper we consider a trial in which the most promising of a number of experimental treatments is selected at the first interim analysis. This considerably reduces the computational load associated with the construction of stopping boundaries compared to the approach proposed by Follman, Proschan and Geller (Biometrics 1994; 50: 325-336). The computational requirement does not exceed that for the sequential comparison of a single experimental treatment with a control. Existing methods are extended in two ways. First, the use of the efficient score as a test statistic makes the analysis of binary, normal or failure-time data, as well as adjustment for covariates or stratification, straightforward. Second, the question of trial power is also considered, enabling the determination of the sample size required to give specified power. Copyright © 2003 John Wiley & Sons, Ltd.
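A minimal sketch of the efficient-score formulation, assuming the standard sequential-analysis setup rather than the paper's exact notation: for a parameter θ measuring treatment benefit, each interim look k yields an efficient score statistic Z_k with Fisher information V_k, and approximately

\[ Z_k \sim N(\theta V_k,\; V_k), \]

so the same stopping boundaries on the (Z, V) path apply whether Z is computed from binary, normal or failure-time data, with covariate adjustment and stratification entering through the score itself.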
Abstract:
Background: Meta-analyses based on individual patient data (IPD) are regarded as the gold standard for systematic reviews. However, the methods used for analysing and presenting results from IPD meta-analyses have received little discussion. Methods: We review 44 IPD meta-analyses published during the years 1999-2001. We summarize whether they obtained all the data they sought, what types of approaches were used in the analysis, including assumptions of common or random effects, and how they examined the effects of covariates. Results: Twenty-four of the 44 analyses focused on time-to-event outcomes, and most analyses (28) estimated treatment effects within each trial and then combined the results assuming a common treatment effect across trials. Three analyses failed to stratify by trial, analysing the data as if they came from a single mega-trial. Only nine analyses used random-effects methods. Covariate-treatment interactions were generally investigated by subgrouping patients. Seven of the meta-analyses included data from less than 80% of the randomized patients sought but did not address the resulting potential biases. Conclusions: Although IPD meta-analyses have many advantages in assessing the effects of health care, there are several aspects that could be further developed to make fuller use of the potential of these time-consuming projects. In particular, IPD could be used to investigate more fully the influence of covariates on heterogeneity of treatment effects, both within and between trials. The impact of heterogeneity, and the use of random effects, are seldom discussed. There is thus considerable scope for enhancing the methods of analysis and presentation of IPD meta-analysis.
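The common-effect, stratified-by-trial analysis that most of the reviewed meta-analyses used amounts to an inverse-variance combination of within-trial estimates:

\[ \hat{\theta} = \frac{\sum_i w_i \hat{\theta}_i}{\sum_i w_i}, \qquad w_i = \frac{1}{\widehat{\mathrm{Var}}(\hat{\theta}_i)}, \qquad \mathrm{Var}(\hat{\theta}) = \frac{1}{\sum_i w_i}, \]

which assumes one common treatment effect across trials; random-effects methods instead add a between-trial variance component to each weight's denominator.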
Abstract:
In this paper, we apply one-list capture-recapture models to estimate the number of scrapie-affected holdings in Great Britain. We applied this technique to the Compulsory Scrapie Flocks Scheme dataset, which gathers the cases from all the surveillance sources monitoring the presence of scrapie in Great Britain: the abattoir survey, the fallen stock survey and the statutory reporting of clinical cases. Consequently, the estimates of prevalence obtained from this scheme should be comprehensive and cover all the different presentations of the disease captured individually by the surveillance sources. Two estimators were applied under the one-list approach: the Zelterman estimator and Chao's lower bound estimator. Our results could only inform with confidence the scrapie-affected holding population with clinical disease; this was around 350 holdings in Great Britain for the period under study, April 2005-April 2006. Our models allowed stratification by surveillance source and the input of covariate information, namely holding size and country of origin. None of the covariates appear to inform the model significantly. Crown Copyright (C) 2008 Published by Elsevier B.V. All rights reserved.
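For reference, Chao's lower-bound estimator, the second estimator applied here, also uses only the frequencies of holdings entering the list once (f_1) and twice (f_2):

\[ \hat{N} = n + \frac{f_1^2}{2 f_2}, \]

which remains a valid lower bound for the total population size under arbitrary unobserved heterogeneity in the holdings' detection rates.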
Abstract:
Objectives: To assess the potential source of variation that the surgeon may add to patient outcome in a clinical trial of surgical procedures. Methods: Two large (n = 1380) parallel multicentre randomized surgical trials were undertaken to compare laparoscopically assisted hysterectomy with conventional methods of abdominal and vaginal hysterectomy, involving 43 surgeons. The primary end point of the trial was the occurrence of at least one major complication. Patients were nested within surgeons, giving the data set a hierarchical structure. A total of 10% of patients had at least one major complication, that is, a sparse binary outcome variable. A mixed logistic regression model (with logit link function) was used to model the probability of a major complication, with surgeon fitted as a random effect. Models were fitted using the method of maximum likelihood in SAS. Results: There were many convergence problems. These were resolved using a variety of approaches, including treating all effects as fixed for the initial model building, modelling the variance of a parameter on a logarithmic scale, and centring of continuous covariates. The initial model-building process indicated no significant 'type of operation' by surgeon interaction effect in either trial; the 'type of operation' term was highly significant in the abdominal trial, and the 'surgeon' term was not significant in either trial. Conclusions: The analysis did not find a surgeon effect, but it is difficult to conclude that there was not a difference between surgeons. The statistical test may have lacked sufficient power, and the variance estimates were small with large standard errors, indicating that the precision of the variance estimates may be questionable.
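A minimal sketch of the model described, in illustrative notation: for patient i treated by surgeon j,

\[ \mathrm{logit}\, P(\text{complication}_{ij} = 1) = x_{ij}'\beta + u_j, \qquad u_j \sim N(0, \sigma_u^2), \]

where x_ij holds the fixed effects (including type of operation) and σ_u² captures between-surgeon heterogeneity; the imprecision of the estimate of σ_u² is what limits the conclusions about a surgeon effect.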
Abstract:
1. Wildlife managers often require estimates of abundance. Direct methods of estimation are often impractical, especially in closed-forest environments, so indirect methods such as dung or nest surveys are increasingly popular. 2. Dung and nest surveys typically have three elements: surveys to estimate abundance of the dung or nests; experiments to estimate the production (defecation or nest construction) rate; and experiments to estimate the decay or disappearance rate. The last of these is usually the most problematic, and was the subject of this study. 3. The design of experiments to allow robust estimation of mean time to decay was addressed. In most studies to date, dung or nests have been monitored until they disappear. Instead, we advocate that fresh dung or nests are located, with a single follow-up visit to establish whether the dung or nest is still present or has decayed. 4. Logistic regression was used to estimate probability of decay as a function of time, and possibly of other covariates. Mean time to decay was estimated from this function. 5. Synthesis and applications. Effective management of mammal populations usually requires reliable abundance estimates. The difficulty in estimating abundance of mammals in forest environments has increasingly led to the use of indirect survey methods, in which abundance of sign, usually dung (e.g. deer, antelope and elephants) or nests (e.g. apes), is estimated. Given estimated rates of sign production and decay, sign abundance estimates can be converted to estimates of animal abundance. Decay rates typically vary according to season, weather, habitat, diet and many other factors, making reliable estimation of mean time to decay of signs present at the time of the survey problematic. We emphasize the need for retrospective rather than prospective rates, propose a strategy for survey design, and provide analysis methods for estimating retrospective rates.
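A minimal sketch of the advocated analysis, in generic notation: with a single follow-up visit, each sign contributes a Bernoulli observation (decayed or not) at its elapsed time t, the decay probability is modeled by logistic regression, and mean time to decay follows by integrating the survival function:

\[ \mathrm{logit}\, P(\text{decayed by } t) = \beta_0 + \beta_1 t + \gamma' z, \qquad \widehat{E[T]} = \int_0^\infty \bigl(1 - \hat{p}(t)\bigr)\, dt, \]

where z holds optional covariates such as season or habitat and \( \hat{p}(t) \) is the fitted probability of decay by elapsed time t.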