11 results for Sample selection
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
A Bayesian method of estimating multivariate sample selection models is introduced and applied to the estimation of a demand system for food in the UK to account for censoring arising from infrequency of purchase. We show how it is possible to impose identifying restrictions on the sample selection equations and that, unlike in a maximum likelihood framework, the imposition of adding up at both latent and observed levels is straightforward. Our results emphasise the role played by low incomes and socio-economic circumstances in leading to poor diets and also indicate that the presence of children in a household has a negative impact on dietary quality.
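The paper's full multivariate system is not reproduced here, but the core Bayesian device for handling censored purchases, data augmentation of the unobserved latent demands, can be sketched for a single censored equation. The following is a minimal Tobit-type illustration with simulated data and hypothetical variable names, not the authors' model or code.

```python
# Minimal sketch: Bayesian data augmentation for one censored purchase
# equation (Tobit-type). Illustration only; the paper estimates a full
# multivariate sample selection system with adding-up restrictions.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

# Simulated data: latent expenditure y* = X @ beta + e, observed y = max(y*, 0)
n, k = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta_true = np.array([0.5, 1.0, -0.8])
y_latent = X @ beta_true + rng.normal(scale=1.0, size=n)
y = np.maximum(y_latent, 0.0)
censored = y == 0.0

# Gibbs sampler with a flat prior on beta and an inverse-gamma prior on sigma^2
n_draws = 2000
beta = np.zeros(k)
sigma2 = 1.0
draws = np.empty((n_draws, k))
z = y.copy()                                   # augmented latent variable
XtX_inv = np.linalg.inv(X.T @ X)

for s in range(n_draws):
    # 1. Impute latent y* for censored observations from a truncated normal
    mu_c = X[censored] @ beta
    upper = (0.0 - mu_c) / np.sqrt(sigma2)     # truncated above at zero
    z[censored] = truncnorm.rvs(-np.inf, upper, loc=mu_c,
                                scale=np.sqrt(sigma2), random_state=rng)

    # 2. Draw beta | z, sigma2 from its conditional normal posterior
    beta_hat = XtX_inv @ X.T @ z
    beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)

    # 3. Draw sigma2 | z, beta from its conditional inverse-gamma posterior
    resid = z - X @ beta
    sigma2 = 1.0 / rng.gamma(shape=(n + 2) / 2.0,
                             scale=2.0 / (resid @ resid + 2.0))
    draws[s] = beta

print("posterior means:", draws[500:].mean(axis=0))   # discard burn-in
```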
Abstract:
In recent issues of this Journal a debate has raged concerning the appropriate nature of academic research in the Asia Pacific region. While we support the desire for both rigor and regional relevance in this research, we wish to demonstrate a strong commonality between the performance of large Asian firms and that of others from Europe and North America. This prompts us to question the need for a new theory of the MNE based on the experience of Asian firms. Like their counterparts elsewhere, the large Asian firms mostly operate on an intra-regional basis. While the literature has assumed that the path to success for Asian firms is globalization, we show that the data supporting this view are confined to a handful of unrepresentative case studies. We also present a bibliometric analysis which shows an overwhelming case study sample selection bias in academic studies towards this small number of unrepresentative cases.
Abstract:
Providing homeowners with real-time feedback on their electricity consumption through a dedicated display device has been shown to reduce consumption by approximately 6-10%. However, recent advances in smart grid technology have enabled larger sample sizes and more representative sample selection and recruitment methods for display trials. By analyzing these factors using data from current studies, this paper argues that a realistic, large-scale conservation effect from feedback is in the range of 3-5%. Subsequent analysis shows that providing real-time feedback may not be a cost-effective strategy for reducing carbon emissions in Australia, but that it may yield additional benefits such as customer retention and peak-load shifting.
Abstract:
In Sub-Saharan Africa (SSA) the technological advances of the Green Revolution (GR) have not been very successful. However, the efforts being made to re-introduce the revolution call for more socio-economic research into the adoption and the effects of the new technologies. This paper investigates the effects of GR technology adoption on poverty among households in Ghana. Maximum likelihood estimation of a poverty model within the framework of Heckman's two-stage method of correcting for sample selection was employed. Technology adoption was found to have positive effects in reducing poverty. Other factors that reduce poverty include education, credit, durable assets, living in the forest belt and living in the south of the country. Technology adoption itself was facilitated by education, credit, non-farm income and household labour supply, as well as by living in urban centres. Technology adoption can clearly be encouraged by increasing the levels of complementary inputs such as credit, extension services and infrastructure. Above all, the fundamental problems of illiteracy, inequality and lack of effective markets must be addressed through increasing the levels of formal and non-formal education, equitable distribution of the 'national cake' and a more pragmatic management of the ongoing Structural Adjustment Programme.
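Heckman's two-stage correction named in the abstract follows a standard recipe: a first-stage probit for the selection (here, adoption) decision, construction of the inverse Mills ratio, and a second-stage outcome regression that includes the ratio as an extra regressor. The sketch below uses simulated data, a continuous outcome and hypothetical variable names; it is not the paper's specification, which estimates the poverty model by maximum likelihood.

```python
# A minimal sketch of Heckman's two-step sample selection correction with
# simulated data (variable names are illustrative, not the paper's dataset).
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 2000

# Selection equation: adopt = 1{ w @ gamma + u > 0 }
w = sm.add_constant(rng.normal(size=(n, 2)))
gamma = np.array([0.2, 0.8, -0.5])
# Outcome equation: y = x @ beta + e, observed only when adopt == 1.
# The second column of w is excluded from x (exclusion restriction).
x = sm.add_constant(w[:, 1:2])
beta = np.array([1.0, 0.6])
u, e = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n).T
adopt = (w @ gamma + u > 0).astype(int)
y = x @ beta + e

# Step 1: probit for the selection (adoption) decision
probit = sm.Probit(adopt, w).fit(disp=0)
xb = w @ probit.params
mills = norm.pdf(xb) / norm.cdf(xb)            # inverse Mills ratio

# Step 2: outcome regression on the selected sample, adding the Mills ratio
sel = adopt == 1
x2 = np.column_stack([x[sel], mills[sel]])
ols = sm.OLS(y[sel], x2).fit()
print(ols.params)   # the last coefficient picks up the selection correction
```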
Abstract:
Purpose – Price indices for commercial real estate markets are difficult to construct because assets are heterogeneous, they are spatially dispersed and they are infrequently traded. Appraisal-based indices are one response to these problems, but may understate volatility or fail to capture turning points in a timely manner. This paper estimates "transaction linked indices" for major European markets to see whether these offer a different perspective on market performance. The paper aims to discuss these issues.
Design/methodology/approach – The assessed value method is used to construct the indices. This has been recently applied to commercial real estate datasets in the USA and UK. The underlying data comprise appraisals and sale prices for assets monitored by Investment Property Databank (IPD). The indices are compared to appraisal-based series for the countries concerned for Q4 2001 to Q4 2012.
Findings – Transaction linked indices show stronger growth and sharper declines over the course of the cycle, but they do not notably lead their appraisal-based counterparts. They are typically two to four times more volatile.
Research limitations/implications – Only country-level indicators can be constructed in many cases owing to low trading volumes in the period studied, and this same issue prevented sample selection bias from being analysed in depth.
Originality/value – Discussion of the utility of transaction-based price indicators is extended to European commercial real estate markets. The indicators offer alternative estimates of real estate market volatility that may be useful in asset allocation and risk modelling, including in a regulatory context.
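One common formulation of the assessed value method, taken here as an assumption rather than from the paper itself, regresses the log sale price of each traded asset on the log of its preceding appraisal plus period dummies, and reads the index off the period coefficients. The sketch below illustrates that regression with simulated data and illustrative names; it is not IPD's production methodology.

```python
# Illustrative sketch of an assessed-value-style index: regress log sale
# price on the log of the preceding appraisal plus quarter dummies, then
# exponentiate the quarter coefficients to form the index (simplified).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_sales, n_quarters = 800, 12
df = pd.DataFrame({
    "log_appraisal": rng.normal(16.0, 0.8, n_sales),
    "quarter": rng.integers(0, n_quarters, n_sales),
})
true_index = np.cumsum(rng.normal(0.01, 0.03, n_quarters))
df["log_price"] = (0.2 + 0.98 * df["log_appraisal"]
                   + true_index[df["quarter"]]
                   + rng.normal(0, 0.1, n_sales))

# Quarter dummies (base quarter dropped) plus the appraisal regressor
X = pd.get_dummies(df["quarter"], prefix="q", drop_first=True, dtype=float)
X.insert(0, "log_appraisal", df["log_appraisal"])
X = sm.add_constant(X)
fit = sm.OLS(df["log_price"], X).fit()

# Index level for each quarter relative to the base quarter
index = np.exp([0.0] + [fit.params[f"q_{t}"] for t in range(1, n_quarters)])
print(np.round(index, 3))
```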
Abstract:
The steadily accumulating literature on technical efficiency in fisheries attests to the importance of efficiency as an indicator of fleet condition and as an object of management concern. In this paper, we extend previous work by presenting a Bayesian hierarchical approach that yields both efficiency estimates and, as a byproduct of the estimation algorithm, probabilistic rankings of the relative technical efficiencies of fishing boats. The estimation algorithm is based on recent advances in Markov Chain Monte Carlo (MCMC) methods—Gibbs sampling, in particular—which have not been widely used in fisheries economics. We apply the method to a sample of 10,865 boat trips in the US Pacific hake (or whiting) fishery during 1987–2003. We uncover systematic differences between efficiency rankings based on sample mean efficiency estimates and those that exploit the full posterior distributions of boat efficiencies to estimate the probability that a given boat has the highest true mean efficiency.
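The distinction the abstract draws, ranking boats by posterior mean efficiency versus ranking them by the posterior probability of being the most efficient, can be computed directly from MCMC output. The sketch below uses simulated posterior draws and arbitrary dimensions, not the paper's Gibbs sampler output.

```python
# Sketch: ranking units by posterior mean efficiency vs. by the posterior
# probability of having the highest efficiency, using simulated MCMC draws.
import numpy as np

rng = np.random.default_rng(3)
n_boats, n_draws = 8, 5000

# Pretend these are posterior draws of each boat's technical efficiency
true_eff = rng.uniform(0.6, 0.95, n_boats)
draws = np.clip(true_eff + rng.normal(0, 0.05, (n_draws, n_boats)), 0, 1)

# Ranking 1: by posterior mean efficiency
mean_rank = np.argsort(-draws.mean(axis=0))

# Ranking 2: by P(boat b is the most efficient), estimated as the share of
# posterior draws in which boat b attains the maximum
p_best = np.bincount(draws.argmax(axis=1), minlength=n_boats) / n_draws
prob_rank = np.argsort(-p_best)

print("rank by posterior mean   :", mean_rank)
print("rank by P(most efficient):", prob_rank)
print("P(most efficient)        :", np.round(p_best, 3))
```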
Abstract:
A number of authors have proposed clinical trial designs involving the comparison of several experimental treatments with a control treatment in two or more stages. At the end of the first stage, the most promising experimental treatment is selected, and all other experimental treatments are dropped from the trial. Provided it is good enough, the selected experimental treatment is then compared with the control treatment in one or more subsequent stages. The analysis of data from such a trial is problematic because of the treatment selection and the possibility of stopping at interim analyses. These aspects lead to bias in the maximum-likelihood estimate of the advantage of the selected experimental treatment over the control and to inaccurate coverage for the associated confidence interval. In this paper, we evaluate the bias of the maximum-likelihood estimate and propose a bias-adjusted estimate. We also propose an approach to the construction of a confidence region for the vector of advantages of the experimental treatments over the control based on an ordering of the sample space. These regions are shown to have accurate coverage, although they are also shown to be necessarily unbounded. Confidence intervals for the advantage of the selected treatment are obtained from the confidence regions and are shown to have more accurate coverage than the standard confidence interval based upon the maximum-likelihood estimate and its asymptotic standard error.
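The selection bias being corrected is easy to reproduce by simulation: estimate several treatment advantages, keep the apparent best at an interim look, and the naive estimate of the selected arm's advantage is biased upward. The sketch below uses arbitrary illustrative parameters and does not implement the paper's bias-adjusted estimator or confidence regions.

```python
# Sketch: Monte Carlo illustration of selection bias when the best-looking
# experimental arm is chosen at an interim analysis (illustrative parameters).
import numpy as np

rng = np.random.default_rng(4)
n_arms, n_per_arm, n_trials = 4, 50, 20000
true_effects = np.array([0.1, 0.1, 0.1, 0.1])    # identical true advantages

selected_estimates = np.empty(n_trials)
for t in range(n_trials):
    # Estimated advantage of each experimental arm over control
    estimates = rng.normal(true_effects, 1.0 / np.sqrt(n_per_arm))
    selected_estimates[t] = estimates.max()      # keep the apparent best arm

print("true advantage of any arm          :", true_effects[0])
print("mean naive estimate after selection:", round(selected_estimates.mean(), 3))
# The naive estimate overstates the selected arm's true advantage, which is
# what a bias-adjusted estimator is designed to correct.
```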
Abstract:
Most statistical methodology for phase III clinical trials focuses on the comparison of a single experimental treatment with a control. An increasing desire to reduce the time before regulatory approval of a new drug is sought has led to the development of two-stage or sequential designs for trials that combine the definitive analysis associated with phase III with the treatment selection element of a phase II study. In this paper we consider a trial in which the most promising of a number of experimental treatments is selected at the first interim analysis. This considerably reduces the computational load associated with the construction of stopping boundaries compared to the approach proposed by Follmann, Proschan and Geller (Biometrics 1994; 50: 325-336). The computational requirement does not exceed that for the sequential comparison of a single experimental treatment with a control. Existing methods are extended in two ways. First, the use of the efficient score as a test statistic makes the analysis of binary, normal or failure-time data, as well as adjustment for covariates or stratification, straightforward. Second, the question of trial power is also considered, enabling the determination of the sample size required to give specified power. Copyright © 2003 John Wiley & Sons, Ltd.
Abstract:
This paper examines the selectivity and market timing performance of a sample of 21 UK property funds over the period Q3 1977 through Q2 1987. The main finding is that there is evidence of some superior selectivity performance on the part of UK property funds, but that few funds are able to successfully time the market.
Abstract:
This study examines differences in net selling price for residential real estate across male and female agents. A sample of 2,020 home sales transactions from Fulton County, Georgia is analyzed in a two-stage least squares, geospatial autoregressive corrected, semi-log hedonic model to test for gender and gender selection effects. Although agent gender seems to play a role in naïve models, its role becomes inconclusive as variables controlling for the possible price and time-on-market expectations of buyers and sellers are introduced to the models. Clear differences in real estate sales prices, time on market, and agent incomes across genders are unlikely to be due to differences in negotiation performance between genders or to the mix of genders in a two-agent negotiation. The evidence suggests an interesting alternative to agent performance: buyers and sellers with different reservation price and time-on-market expectations, such as those selling foreclosure homes, tend to select agents along gender lines.
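A drastically simplified version of the pricing equation, a semi-log hedonic with an agent-gender indicator and no instruments or spatial correction, conveys the basic structure being estimated. Variable names and data in the sketch below are illustrative, not the Fulton County sample.

```python
# Sketch: a semi-log hedonic price regression with an agent-gender dummy,
# using simulated data. The paper's model adds two-stage least squares and
# a spatial autoregressive correction.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 2020
sqft = rng.normal(1800, 400, n)
age = rng.integers(0, 60, n)
female_agent = rng.integers(0, 2, n)
# Simulated log price with no true gender effect
log_price = (11.0 + 0.0004 * sqft - 0.004 * age
             + rng.normal(0, 0.2, n))

X = sm.add_constant(np.column_stack([sqft, age, female_agent]))
fit = sm.OLS(log_price, X).fit()
print(fit.summary(xname=["const", "sqft", "age", "female_agent"]))
```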
Abstract:
This letter presents an effective approach for selecting an appropriate terrain modeling method when forming a digital elevation model (DEM). The approach balances modeling accuracy against modeling speed. A terrain complexity index is defined to represent a terrain's complexity. A support vector machine (SVM) classifies terrain surfaces as either complex or moderate based on this index together with the terrain elevation range. The classification result recommends a terrain modeling method for a given data set in accordance with its required modeling accuracy. Sample terrain data from the lunar surface are used to construct an experimental data set. The results show that the terrain complexity index properly reflects terrain complexity, and that an SVM classifier derived from both the terrain complexity index and the terrain elevation range is more effective and generic than one designed from either feature alone. The statistical results show that the average classification accuracy of the SVMs is about 84.3% ± 0.9% across the two terrain types (complex or moderate). For various ratios of complex to moderate terrain in a selected data set, the DEM modeling speed increases by up to 19.5% at a given DEM accuracy.
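The classification step can be sketched compactly: train an SVM on the two features named in the letter, a terrain complexity index and the terrain elevation range, and predict whether a tile is complex or moderate. The sketch below uses synthetic data and labels in place of the lunar DEM samples.

```python
# Sketch: SVM classification of terrain tiles as "complex" (1) or "moderate"
# (0) from two features, a terrain complexity index and the elevation range.
# Synthetic data stands in for the lunar DEM samples used in the letter.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n = 1000
complexity_index = rng.uniform(0, 1, n)
elevation_range = rng.uniform(0, 500, n)           # metres, illustrative
# Synthetic labelling rule plus noise: rough, high-relief tiles are "complex"
score = 2.0 * complexity_index + elevation_range / 250.0
label = (score + rng.normal(0, 0.3, n) > 2.0).astype(int)

X = np.column_stack([complexity_index, elevation_range])
X_train, X_test, y_train, y_test = train_test_split(
    X, label, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print("classification accuracy:", round(clf.score(X_test, y_test), 3))
```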