67 results for Meta-Frontier
in CentAUR: Central Archive University of Reading - UK
Abstract:
Based on a large dataset from eight Asian economies, we test the impact of post-crisis regulatory reforms on the performance of depository institutions in countries at different levels of financial development. We allow for technological heterogeneity and estimate a set of country-level stochastic cost frontiers followed by a deterministic bootstrapped meta-frontier to evaluate cost efficiency and cost technology. Our results support the view that liberalization policies have a positive impact on bank performance, while the reverse is true for prudential regulation policies. The removal of activity restrictions, bank privatization and foreign bank entry have a positive and significant impact on technological progress and cost efficiency. In contrast, prudential policies, which aim to protect the banking sector from excessive risk-taking, tend to adversely affect banks' cost efficiency but not cost technology.
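A minimal sketch of the deterministic meta-frontier step described above, assuming the country-level cost-frontier coefficients have already been estimated: the meta-frontier is chosen to lie on or below every country frontier at every observation while minimising the total gap, posed here as a linear programme. The data, variable names and the use of scipy are illustrative, not the authors' code or dataset.

```python
# Sketch of the deterministic meta-frontier step: given country-level cost-frontier
# fits, find meta-frontier coefficients beta* such that the meta cost frontier lies
# on or below every country frontier at every observation, minimising the total gap.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Illustrative data: X holds a constant plus log input prices/outputs,
# country_fit holds each observation's fitted log cost from its own frontier.
n, k = 200, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta_country = rng.normal(size=k)                    # stand-in country coefficients
country_fit = X @ beta_country + 0.2 + 0.1 * rng.random(n)

# minimise sum_i (country_fit_i - x_i' beta*)  subject to  x_i' beta* <= country_fit_i
res = linprog(c=-X.sum(axis=0), A_ub=X, b_ub=country_fit,
              bounds=[(None, None)] * k, method="highs")
beta_meta = res.x

# Log gap between meta-frontier cost and country-frontier cost (should be <= 0)
gap = X @ beta_meta - country_fit
print("meta-frontier coefficients:", np.round(beta_meta, 3))
print("max constraint violation:", gap.max())
```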
Abstract:
Following the 1997 crisis, banking sector reforms in Asia have been characterised by the emphasis on prudential regulation, associated with increased financial liberalisation. Using a panel data set of commercial banks from eight major Asian economies over the period 2001-2010, this study explores how the coexistence of liberalisation and prudential regulation affects banks' cost characteristics. Given the heterogeneity of technologies across countries, we use a stochastic frontier approach followed by the estimation of a deterministic meta-frontier to provide 'true' estimates of bank cost efficiency measures. Our results show that the liberalisation of bank interest rates and the increase in foreign banks' presence have had a positive and significant impact on technological progress and cost efficiency. On the other hand, we find that prudential regulation might adversely affect bank cost performance. When designing an optimal regulatory framework, policy makers should combine policies that foster financial stability without hindering financial intermediation.
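For the country-level first stage referred to in both abstracts above, the sketch below fits a normal/half-normal stochastic cost frontier by maximum likelihood on synthetic data. It illustrates the composed-error idea (noise plus non-negative inefficiency that raises cost), not the papers' actual translog specification or variables.

```python
# Sketch of a normal / half-normal stochastic cost frontier: ln C = x'b + v + u,
# v ~ N(0, s_v^2) noise, u >= 0 half-normal inefficiency, fitted by maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # constant + two log regressors
beta_true = np.array([1.0, 0.6, 0.3])
v = rng.normal(scale=0.2, size=n)
u = np.abs(rng.normal(scale=0.3, size=n))                    # inefficiency raises cost
y = X @ beta_true + v + u                                    # log cost

def neg_loglik(params):
    beta, ln_sv, ln_su = params[:3], params[3], params[4]
    sv, su = np.exp(ln_sv), np.exp(ln_su)
    sigma = np.sqrt(sv**2 + su**2)
    lam = su / sv
    eps = y - X @ beta
    # density of the composed error for a cost frontier
    ll = (np.log(2) - np.log(sigma)
          + norm.logpdf(eps / sigma)
          + norm.logcdf(lam * eps / sigma))
    return -ll.sum()

start = np.concatenate([np.linalg.lstsq(X, y, rcond=None)[0], [np.log(0.1), np.log(0.1)]])
fit = minimize(neg_loglik, start, method="BFGS")
print("beta:", np.round(fit.x[:3], 3), " sigma_v, sigma_u:", np.round(np.exp(fit.x[3:]), 3))
```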
Abstract:
The paper considers meta-analysis of diagnostic studies that use a continuous score for classification of study participants into healthy or diseased groups. Classification is often done on the basis of a threshold or cut-off value, which might vary between studies. Consequently, conventional meta-analysis methodology focusing solely on separate analysis of sensitivity and specificity might be confounded by a potentially unknown variation of the cut-off value. To cope with this phenomenon it is suggested to use, instead, an overall estimate of the misclassification error previously suggested and used as Youden's index; furthermore, it is argued that this index is less prone to between-study variation of cut-off values. A simple Mantel-Haenszel estimator as a summary measure of the overall misclassification error is suggested, which adjusts for a potential study effect. The measure of the misclassification error based on Youden's index is advantageous in that it easily allows an extension to a likelihood approach, which is then able to cope with unobserved heterogeneity via a nonparametric mixture model. All methods are illustrated with an example of a diagnostic meta-analysis on duplex Doppler ultrasound, with angiography as the standard, for stroke prevention.
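The sketch below illustrates the quantities this abstract works with: a per-study Youden index and an overall misclassification error, summarised across studies with Mantel-Haenszel-style sample-size weights. The counts are hypothetical, and the particular weighting (the usual MH risk-difference weights) is an assumption rather than necessarily the paper's exact estimator.

```python
# Per-study Youden index J = sensitivity + specificity - 1, misclassification error
# 1 - J, and a Mantel-Haenszel-style weighted summary across studies.
import numpy as np

# Hypothetical 2x2 counts per study: TP, FN (diseased), TN, FP (healthy)
studies = np.array([
    [45,  5, 80, 20],
    [30, 10, 60, 15],
    [70, 15, 90, 10],
])
tp, fn, tn, fp = studies.T
n_dis, n_hea = tp + fn, tn + fp

sens = tp / n_dis
spec = tn / n_hea
youden = sens + spec - 1            # per-study Youden index
miscls = 1 - youden                 # per-study overall misclassification error

w = n_dis * n_hea / (n_dis + n_hea)           # MH-style sample-size weights (assumption)
summary = np.sum(w * miscls) / np.sum(w)
print("per-study Youden:", np.round(youden, 3))
print("weighted summary misclassification error:", round(summary, 3))
```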
Abstract:
Technical efficiency is estimated and examined for a cross-section of Australian dairy farms using various frontier methodologies: Bayesian and classical stochastic frontiers, and Data Envelopment Analysis. The results indicate technical inefficiency is present in the sample data. Also identified are statistical differences between the point estimates of technical efficiency generated by the various methodologies. However, the ranking of farm-level technical efficiency is statistically invariant to the estimation technique employed. Finally, when confidence/credible intervals of technical efficiency are compared, significant overlap is found for many of the farms' intervals for all frontier methods employed. The results indicate that the choice of estimation methodology may matter, but the explanatory power of all frontier methods is significantly weaker when interval estimates of technical efficiency are examined.
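A small sketch of the rank-invariance check mentioned above: the Spearman correlation between the technical-efficiency scores two methods assign to the same farms. The scores are synthetic placeholders, not the study's estimates.

```python
# Rank-invariance check: Spearman correlation between efficiency scores
# produced by two estimation methods on the same farms (placeholder data).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
true_eff = rng.uniform(0.6, 1.0, size=50)                        # latent efficiency
eff_sfa = np.clip(true_eff + rng.normal(0, 0.03, 50), 0, 1)      # e.g. stochastic frontier
eff_dea = np.clip(true_eff + rng.normal(0, 0.05, 50), 0, 1)      # e.g. DEA

rho, pval = spearmanr(eff_sfa, eff_dea)
print(f"Spearman rank correlation: {rho:.3f} (p = {pval:.4f})")
```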
Abstract:
In this paper, the yield increases resulting from the cultivation of Bt cotton in Maharashtra, India, are analysed. The study relies on commercial farm, rather than trial, data and is among the first of its kind to be based on real farm and market conditions. Findings show that since its commercial release in 2002, Bt cotton has had a significant positive impact on yields and on the economic performance of cotton growers in Maharashtra. This difference remains even after controlling for different soil and insecticide inputs in the production of Bt cotton. There is also significant spatial and temporal variation in this 'benefit', and much depends upon where production is taking place and on the season.
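A minimal sketch of the kind of yield regression described, with a Bt-adoption dummy and soil and insecticide controls, using the statsmodels formula API on fabricated data; the variable names and coefficients are purely illustrative, not the study's farm survey.

```python
# Yield regression with a Bt-adoption dummy and input controls, in the spirit of
# "controlling for soil and insecticide inputs"; all data below are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "bt": rng.integers(0, 2, n),                  # 1 if Bt cotton grown
    "insecticide": rng.gamma(2.0, 1.5, n),        # sprays per season
    "soil_quality": rng.normal(0, 1, n),          # standardised soil index
})
df["log_yield"] = (6.0 + 0.3 * df.bt + 0.05 * df.insecticide
                   + 0.1 * df.soil_quality + rng.normal(0, 0.2, n))

model = smf.ols("log_yield ~ bt + insecticide + soil_quality", data=df).fit()
print(model.summary().tables[1])
```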
Abstract:
Background: The objective was to evaluate the efficacy and tolerability of donepezil (5 and 10 mg/day) compared with placebo in alleviating manifestations of mild to moderate Alzheimer's disease (AD). Method: A systematic review of individual patient data from Phase II and III double-blind, randomised, placebo-controlled studies of up to 24 weeks and completed by 20 December 1999. The main outcome measures were the ADAS-cog, the CIBIC-plus, and reports of adverse events. Results: A total of 2376 patients from ten trials were randomised to either donepezil 5 mg/day (n = 821), 10 mg/day (n = 662) or placebo (n = 893). Cognitive performance was better in patients receiving donepezil than in patients receiving placebo. At 12 weeks the differences in ADAS-cog scores were 5 mg/day-placebo: -2.1 [95% confidence interval (CI), -2.6 to -1.6; p < 0.001], 10 mg/day-placebo: -2.5 (-3.1 to -2.0; p < 0.001). The corresponding results at 24 weeks were -2.0 (-2.7 to -1.3; p < 0.001) and -3.1 (-3.9 to -2.4; p < 0.001). The difference between the 5 and 10 mg/day doses was significant at 24 weeks (p = 0.005). The odds ratios (OR) of improvement on the CIBIC-plus at 12 weeks were: 5 mg/day-placebo 1.8 (1.5 to 2.1; p < 0.001), 10 mg/day-placebo 1.9 (1.5 to 2.4; p < 0.001). The corresponding values at 24 weeks were 1.9 (1.5 to 2.4; p = 0.001) and 2.1 (1.6 to 2.8; p < 0.001). Donepezil was well tolerated; adverse events were cholinergic in nature and generally of mild severity and brief in duration. Conclusion: Donepezil (5 and 10 mg/day) provides meaningful benefits in alleviating deficits in cognitive and clinician-rated global function in AD patients relative to placebo. Increased improvements in cognition were indicated for the higher dose. Copyright © 2004 John Wiley & Sons, Ltd.
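As a small arithmetic check on figures of this kind, the standard error and z-statistic implied by a reported difference and its 95% confidence interval can be reconstructed directly; the sketch below does this for the 12-week 5 mg/day-placebo ADAS-cog difference quoted above.

```python
# Recover the standard error and z-statistic of the reported 12-week ADAS-cog
# difference (5 mg/day vs placebo) from its 95% confidence interval.
from scipy.stats import norm

estimate, lower, upper = -2.1, -2.6, -1.6
se = (upper - lower) / (2 * 1.96)        # the width of a 95% CI is 2 * 1.96 * SE
z = estimate / se
p = 2 * norm.sf(abs(z))
print(f"SE = {se:.3f}, z = {z:.2f}, two-sided p = {p:.2g}")   # consistent with p < 0.001
```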
Abstract:
Background: Meta-analyses based on individual patient data (IPD) are regarded as the gold standard for systematic reviews. However, the methods used for analysing and presenting results from IPD meta-analyses have received little discussion. Methods: We review 44 IPD meta-analyses published during the years 1999–2001. We summarize whether they obtained all the data they sought, what types of approaches were used in the analysis, including assumptions of common or random effects, and how they examined the effects of covariates. Results: Twenty-four out of 44 analyses focused on time-to-event outcomes, and most analyses (28) estimated treatment effects within each trial and then combined the results assuming a common treatment effect across trials. Three analyses failed to stratify by trial, analysing the data as if they came from a single mega-trial. Only nine analyses used random effects methods. Covariate-treatment interactions were generally investigated by subgrouping patients. Seven of the meta-analyses included data from less than 80% of the randomized patients sought, but did not address the resulting potential biases. Conclusions: Although IPD meta-analyses have many advantages in assessing the effects of health care, there are several aspects that could be further developed to make fuller use of the potential of these time-consuming projects. In particular, IPD could be used to investigate more fully the influence of covariates on heterogeneity of treatment effects, both within and between trials. The impact of heterogeneity, or the use of random effects, is seldom discussed. There is thus considerable scope for enhancing the methods of analysis and presentation of IPD meta-analysis.
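A minimal sketch of the two-stage approach most of the reviewed analyses used: estimate a treatment effect within each trial, then combine the estimates assuming a common effect with inverse-variance weights, with Cochran's Q as the usual heterogeneity check. The effects and standard errors are placeholders.

```python
# Two-stage IPD meta-analysis sketch: per-trial effects pooled under a common-effect
# assumption by inverse-variance weighting, plus Cochran's Q for heterogeneity.
import numpy as np

effects = np.array([-0.20, -0.35, -0.10, -0.28])   # e.g. per-trial log hazard ratios
ses     = np.array([ 0.10,  0.15,  0.12,  0.20])   # their standard errors

w = 1 / ses**2
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
ci = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

q = np.sum(w * (effects - pooled) ** 2)            # Cochran's Q statistic
print(f"pooled effect = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), Q = {q:.2f}")
```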
Abstract:
We consider the case of a multicenter trial in which the center-specific sample sizes are potentially small. Under homogeneity, the conventional procedure is to pool information using a weighted estimator where the weights are the inverses of the estimated center-specific variances. Whereas this procedure is efficient under conventional asymptotics (e.g. center-specific sample sizes become large, number of centers fixed), it is commonly believed that the efficiency of this estimator also holds under meta-analytic asymptotics (e.g. center-specific sample sizes bounded, potentially small, and number of centers large). In this contribution we demonstrate that this estimator fails to be efficient. In fact, it shows a persistent bias as the number of centers increases, showing that it is not meta-consistent. In addition, we show that the Cochran and Mantel-Haenszel weighted estimators are meta-consistent and, in more generality, provide conditions on the weights such that the associated weighted estimator is meta-consistent.
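The simulation sketch below contrasts the two weighting schemes in a many-small-centres setting with a binary outcome: inverse estimated-variance weights versus Mantel-Haenszel (sample-size) weights for a risk difference. It is an illustration of the phenomenon only; the contribution's formal setting and conditions are more general, and the continuity correction used here is a practical convenience, not part of the paper.

```python
# Compare inverse estimated-variance weights with Mantel-Haenszel (sample-size)
# weights for centre-specific risk differences when centres are small and numerous.
import numpy as np

rng = np.random.default_rng(4)
K, n = 20000, 5                        # many centres, small per-arm size
p0, p1 = 0.2, 0.4                      # true risks; true risk difference 0.20

x1 = rng.binomial(n, p1, K)            # events in the treatment arm
x0 = rng.binomial(n, p0, K)            # events in the control arm
rd = x1 / n - x0 / n                   # centre-specific risk differences

# Continuity-corrected proportions used only inside the variance estimate,
# so every estimated variance is strictly positive.
q1, q0 = (x1 + 0.5) / (n + 1), (x0 + 0.5) / (n + 1)
var_hat = q1 * (1 - q1) / n + q0 * (1 - q0) / n

w_iv = 1 / var_hat                     # inverse estimated-variance weights
w_mh = np.full(K, n * n / (n + n))     # Mantel-Haenszel weights n1*n0/(n1+n0)

print("true risk difference:          0.20")
print("inverse-variance weighted RD: ", round(float(np.sum(w_iv * rd) / np.sum(w_iv)), 4))
print("Mantel-Haenszel weighted RD:  ", round(float(np.sum(w_mh * rd) / np.sum(w_mh)), 4))
```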
Abstract:
OBJECTIVES: This contribution provides a unifying concept for meta-analysis integrating the handling of unobserved heterogeneity, study covariates, publication bias and study quality. It is important to consider these issues simultaneously to avoid the occurrence of artifacts, and a method for doing so is suggested here. METHODS: The approach is based upon the meta-likelihood in combination with a general linear nonparametric mixed model, which lays the ground for all inferential conclusions suggested here. RESULTS: The concept is illustrated with a meta-analysis investigating the relationship of hormone replacement therapy and breast cancer. The phenomenon of interest has been investigated in many studies for a considerable time and different results were reported. In 1992 a meta-analysis by Sillero-Arenas et al. concluded a small, but significant overall effect of 1.06 on the relative risk scale. Using the meta-likelihood approach it is demonstrated here that this meta-analysis is affected by considerable unobserved heterogeneity. Furthermore, it is shown that new methods are available to model this heterogeneity successfully. It is further argued that available study covariates should be included to explain this heterogeneity in the meta-analysis at hand. CONCLUSIONS: The topic of HRT and breast cancer has again very recently become an issue of public debate, when results of a large trial investigating the health effects of hormone replacement therapy were published, indicating an increased risk for breast cancer (risk ratio of 1.26). Using an adequate regression model in the previously published meta-analysis, an adjusted effect estimate of 1.14 can be given, which is considerably higher than the one published in the meta-analysis of Sillero-Arenas et al. In summary, it is hoped that the method suggested here contributes further to good meta-analytic practice in public health and clinical disciplines.
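A compact sketch of modelling unobserved heterogeneity in study-level effects with a finite mixture fitted by EM, in the spirit of the nonparametric mixture approach mentioned above; the effect sizes, standard errors and the choice of two components are placeholders, not the HRT and breast-cancer data.

```python
# Finite-mixture model for unobserved heterogeneity in study effects:
# y_i ~ sum_k p_k * N(theta_k, s_i^2) with s_i known, fitted by EM.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
y = np.concatenate([rng.normal(0.00, 0.05, 20), rng.normal(0.25, 0.05, 15)])  # log RRs
s = np.full(y.size, 0.08)                                                     # known SEs

K = 2
theta = np.quantile(y, [0.25, 0.75])    # component means, crude start
p = np.full(K, 1 / K)                   # mixing weights

for _ in range(200):
    # E-step: responsibility of component k for study i
    dens = p * norm.pdf(y[:, None], loc=theta, scale=s[:, None])
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: precision-weighted component means and updated mixing weights
    w = r / s[:, None] ** 2
    theta = (w * y[:, None]).sum(axis=0) / w.sum(axis=0)
    p = r.mean(axis=0)

print("component log-RR means:", np.round(theta, 3), " weights:", np.round(p, 3))
```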
Abstract:
We focus on the comparison of three statistical models used to estimate the treatment effect in meta-analysis when individually pooled data are available. The models are two conventional models, namely a multi-level model and a model based upon an approximate likelihood, and a newly developed profile likelihood model, which might be viewed as an extension of the Mantel-Haenszel approach. To exemplify these methods, we use results from a meta-analysis of 22 trials to prevent respiratory tract infections. We show that by using the multi-level approach, in the case of baseline heterogeneity, the number of clusters or components is considerably over-estimated. The approximate and profile likelihood methods showed nearly the same pattern for the treatment effect distribution. To provide more evidence, two simulation studies were carried out. The profile likelihood can be considered a clear alternative to the approximate likelihood model. In the case of strong baseline heterogeneity, the profile likelihood method shows superior behaviour compared with the multi-level model. Copyright (C) 2006 John Wiley & Sons, Ltd.
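The sketch below illustrates the profile-likelihood idea in its simplest form: a common log odds ratio across 2x2 trials, with each trial's baseline treated as a nuisance parameter and profiled out numerically. The counts are placeholders, and the model is far simpler than the paper's, which works with individually pooled data and allows a treatment-effect distribution.

```python
# Profile likelihood for a common log odds ratio psi across trials, with each
# trial's baseline log-odds profiled out numerically. Counts are placeholders.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import expit

# Per-trial counts: events/size in the control arm, events/size in the treatment arm
trials = np.array([
    [12, 60, 7, 58],
    [20, 80, 11, 75],
    [9, 40, 5, 42],
])

def trial_negloglik(alpha, psi, x0, n0, x1, n1):
    """Binomial negative log-likelihood of one trial given baseline alpha and effect psi."""
    p0, p1 = expit(alpha), expit(alpha + psi)
    return -(x0 * np.log(p0) + (n0 - x0) * np.log(1 - p0)
             + x1 * np.log(p1) + (n1 - x1) * np.log(1 - p1))

def profile_negloglik(psi):
    """Sum over trials, minimising over each trial's baseline for the given psi."""
    total = 0.0
    for x0, n0, x1, n1 in trials:
        res = minimize_scalar(trial_negloglik, bounds=(-10, 10), method="bounded",
                              args=(psi, x0, n0, x1, n1))
        total += res.fun
    return total

grid = np.linspace(-2, 2, 201)
values = np.array([profile_negloglik(psi) for psi in grid])
psi_hat = grid[values.argmin()]
print(f"profile-likelihood estimate of the common log odds ratio: {psi_hat:.2f}")
```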