501 results for Bootstrap
Abstract:
Two new methodologies are introduced to improve inference in the evaluation of mutual fund performance against benchmarks. First, the benchmark models are estimated using panel methods with both fund and time effects. Second, the non-normality of individual mutual fund returns is accounted for by using panel bootstrap methods. We also augment the standard benchmark factors with fund-specific characteristics, such as fund size. Using a dataset of UK equity mutual fund returns, we find that fund size has a negative effect on the average fund manager’s benchmark-adjusted performance. Further, when we allow for time effects and the non-normality of fund returns, we find no evidence that even the best-performing fund managers can significantly outperform the augmented benchmarks after fund management charges are taken into account.
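The bootstrap idea in this abstract — resampling returns instead of assuming normality — can be illustrated with a minimal percentile bootstrap of one fund's mean benchmark-adjusted return. This is a sketch on made-up numbers, not the paper's panel procedure with fund and time effects:

```python
import random
import statistics

def bootstrap_alpha_ci(excess_returns, n_boot=2000, level=0.95, seed=1):
    """Percentile-bootstrap confidence interval for a fund's mean
    benchmark-adjusted return, making no normality assumption."""
    rng = random.Random(seed)
    n = len(excess_returns)
    boot_means = sorted(
        statistics.fmean(rng.choices(excess_returns, k=n))
        for _ in range(n_boot)
    )
    alpha = 1.0 - level
    return (boot_means[int(alpha / 2 * n_boot)],
            boot_means[int((1 - alpha / 2) * n_boot) - 1])

# Hypothetical monthly benchmark-adjusted returns (%):
returns = [0.4, -1.2, 0.9, 0.1, -0.5, 1.3, -0.8, 0.2, 0.6, -0.3]
lo, hi = bootstrap_alpha_ci(returns)
# If 0 lies inside (lo, hi), there is no evidence of out-performance.
```

The percentile interval here is the simplest bootstrap variant; a panel version would resample residuals jointly across funds and time periods.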
Abstract:
We developed orthogonal least-squares techniques for fitting crystalline lens shapes, and used the bootstrap method to determine uncertainties associated with the estimated vertex radii of curvature and asphericities of five different models. Three existing models were investigated, including one that uses two separate conics for the anterior and posterior surfaces, and two whole-lens models based on a modulated hyperbolic cosine function and on a generalized conic function. Two new models were proposed: one that uses two interdependent conics and a polynomial-based whole-lens model. The models were used to describe the in vitro shape of a data set of twenty human lenses aged 7–82 years. The two-conic-surfaces model (7 mm zone diameter) and the interdependent-surfaces model had significantly lower merit functions than the other three models for the data set, indicating that they most likely describe human lens shape over a wide age range better than the other models (although the two-conic-surfaces model cannot describe the lens equatorial region). Considerable differences were found between some models regarding estimates of radii of curvature and surface asphericities. The hyperbolic cosine model and the new polynomial-based whole-lens model had the best precision in determining the radii of curvature and surface asphericities across the five considered models. Most models found a significant increase in anterior, but not posterior, radius of curvature with age. Most models found a wide scatter of asphericities, which were usually positive and not significantly related to age. As the interdependent-surfaces model had a lower merit function than the three whole-lens models, there is further scope to develop an accurate model of the complete shape of human lenses of all ages. The results highlight the continued difficulty of selecting an appropriate model for crystalline lens shape.
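A residual-bootstrap sketch of how uncertainty in a vertex radius of curvature can be obtained from a least-squares surface fit. For simplicity this uses the paraxial two-term conic expansion z ≈ x²/(2R) + (1+Q)x⁴/(8R³) on synthetic profile data, not the paper's orthogonal-distance fits or real lens measurements:

```python
import random

def fit_quartic(xs, zs):
    """Linear least squares for the paraxial conic expansion
    z = a*x**2 + b*x**4, via the 2x2 normal equations."""
    s4 = sum(x**4 for x in xs)
    s6 = sum(x**6 for x in xs)
    s8 = sum(x**8 for x in xs)
    t2 = sum(z * x**2 for x, z in zip(xs, zs))
    t4 = sum(z * x**4 for x, z in zip(xs, zs))
    det = s4 * s8 - s6 * s6
    return (t2 * s8 - t4 * s6) / det, (s4 * t4 - s6 * t2) / det

def vertex_params(a, b):
    R = 1.0 / (2.0 * a)        # vertex radius of curvature
    Q = 8.0 * b * R**3 - 1.0   # asphericity
    return R, Q

# Synthetic anterior profile: R = 10 mm, Q = -0.5, small measurement noise.
rng = random.Random(0)
xs = [0.3 * k for k in range(-10, 11)]
zs = [x**2 / 20.0 + 0.5 * x**4 / 8000.0 + rng.gauss(0.0, 1e-3) for x in xs]

a, b = fit_quartic(xs, zs)
resid = [z - (a * x**2 + b * x**4) for x, z in zip(xs, zs)]

# Residual bootstrap: refit on fitted values plus resampled residuals.
boot_R = sorted(
    vertex_params(*fit_quartic(
        xs, [a * x**2 + b * x**4 + rng.choice(resid) for x in xs]))[0]
    for _ in range(500)
)
R_hat = vertex_params(a, b)[0]
ci = (boot_R[12], boot_R[486])   # ~95% percentile interval for R
```

The same resampling loop yields intervals for the asphericity Q, and precision comparisons between models reduce to comparing the widths of these intervals.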
Abstract:
Biased estimation has the advantage of reducing the mean squared error (MSE) of an estimator. The question of interest is how biased estimation affects model selection. In this paper, we introduce biased estimation to a range of model selection criteria. Specifically, we analyze the performance of the minimum description length (MDL) criterion based on biased and unbiased estimation and compare it against modern model selection criteria such as Kay's conditional model order estimator (CME), the bootstrap, and the more recently proposed hook-and-loop resampling-based model selection. The advantages and limitations of the considered techniques are discussed. The results indicate that, in some cases, biased estimators can slightly improve the selection of the correct model. We also give an example for which the CME with an unbiased estimator fails, but regains its power when a biased estimator is used.
Abstract:
The traditional searching method for model-order selection in linear regression is a nested full-parameter-set search over the desired orders, which we call full-model order selection. Model-selection methods, on the other hand, search for the best sub-model within each order. In this paper, we propose using the model-selection searching method for model-order selection, which we call partial-model order selection. We show by simulations that the proposed searching method gives better accuracy than the traditional one, especially at low signal-to-noise ratios, over a wide range of model-order selection criteria (both information-theoretic-based and bootstrap-based). We also show that for some models the performance of the bootstrap-based criterion improves significantly when the proposed partial-model selection searching method is used.

Index Terms— Model order estimation, model selection, information-theoretic criteria, bootstrap

1. INTRODUCTION

Several model-order selection criteria can be applied to find the optimal order. Some of the more commonly used information-theoretic procedures include Akaike’s information criterion (AIC) [1], corrected Akaike (AICc) [2], minimum description length (MDL) [3], normalized maximum likelihood (NML) [4], the Hannan-Quinn criterion (HQC) [5], conditional model-order estimation (CME) [6], and the efficient detection criterion (EDC) [7]. From a practical point of view, it is difficult to decide which model-order selection criterion to use. Many of them perform reasonably well when the signal-to-noise ratio (SNR) is high. The discrepancies in their performance, however, become more evident when the SNR is low. In those situations, the performance of a given technique is determined not only by the model structure (say, a polynomial trend versus a Fourier series) but, more importantly, by the relative values of the parameters within the model. This makes comparison between model-order selection algorithms difficult, as within the same model at a given order one can find examples for which a given method performs well or fails [6, 8]. Our aim is to improve the performance of model-order selection criteria at low SNR by considering a searching procedure that takes into account not only the full-model order search but also a partial-model search within each given order. Understandably, the improvement in model-order estimation performance comes at the expense of additional computational complexity.
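The traditional nested full-model search can be sketched as follows: fit every candidate order, then score each with an information-theoretic criterion (AIC and MDL shown here; the polynomial data and true order are made up for illustration):

```python
import math
import random

def polyfit_rss(xs, ys, order):
    """Residual sum of squares of a least-squares polynomial fit.
    Normal equations solved by Gaussian elimination with pivoting."""
    p = order + 1
    A = [[sum(x**(i + j) for x in xs) for j in range(p)] for i in range(p)]
    v = [sum(y * x**i for x, y in zip(xs, ys)) for i in range(p)]
    for i in range(p):
        piv = max(range(i, p), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        v[i], v[piv] = v[piv], v[i]
        for r in range(i + 1, p):
            f = A[r][i] / A[i][i]
            for c in range(i, p):
                A[r][c] -= f * A[i][c]
            v[r] -= f * v[i]
    coef = [0.0] * p
    for i in reversed(range(p)):
        coef[i] = (v[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, p))) / A[i][i]
    return sum((y - sum(c * x**k for k, c in enumerate(coef)))**2
               for x, y in zip(xs, ys))

def select_order(xs, ys, max_order, criterion="AIC"):
    """Nested full-model search: score every order 0..max_order."""
    n = len(xs)
    best_k, best_score = 0, float("inf")
    for k in range(max_order + 1):
        rss = polyfit_rss(xs, ys, k)
        p = k + 1
        penalty = 2 * p if criterion == "AIC" else p * math.log(n)  # MDL
        score = n * math.log(rss / n) + penalty
        if score < best_score:
            best_k, best_score = k, score
    return best_k

# Synthetic signal: cubic trend plus noise, so the true order is 3.
rng = random.Random(3)
xs = [k / 10 for k in range(-20, 21)]
ys = [1.0 - 2.0 * x + 0.5 * x**3 + rng.gauss(0.0, 0.2) for x in xs]
aic_k = select_order(xs, ys, 6, "AIC")
mdl_k = select_order(xs, ys, 6, "MDL")  # heavier penalty => order <= AIC's
```

The partial-model idea proposed in the paper would additionally search over sub-models (subsets of the k+1 coefficients) within each order, at correspondingly higher computational cost.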
Abstract:
Corneal-height data are typically measured with videokeratoscopes and modeled using a set of orthogonal Zernike polynomials. We address the estimation of the number of Zernike polynomials, which is formalized as a model-order selection problem in linear regression. Classical information-theoretic criteria tend to overestimate the order of the corneal surface model because of the weakness of their penalty functions, while bootstrap-based techniques tend to underestimate it or require extensive processing. In this paper, we propose using the efficient detection criterion (EDC), which has the same general form as information-theoretic criteria, as an alternative for estimating the optimal number of Zernike polynomials. We first show, via simulations, that the EDC outperforms a large number of information-theoretic criteria and resampling-based techniques. We then illustrate that using the EDC for real corneas results in models that are in closer agreement with clinical expectations and provides a means of distinguishing normal corneal surfaces from astigmatic and keratoconic surfaces.
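The EDC shares the n·log(RSS/n) + p·C_n form of information-theoretic criteria but uses a stronger penalty sequence C_n. A minimal sketch, assuming precomputed residual sums of squares per candidate order and taking C_n = √(n·log log n) as one admissible penalty choice (an assumption here, not necessarily the paper's exact C_n):

```python
import math

def edc_order(rss_by_order, n, penalty=None):
    """Pick the order minimizing an EDC-style score
        n*log(rss/n) + p*C_n,
    where C_n must grow slower than n but faster than log(log n)."""
    if penalty is None:
        penalty = math.sqrt(n * math.log(math.log(n)))  # assumed C_n
    scores = [n * math.log(rss / n) + (k + 1) * penalty
              for k, rss in enumerate(rss_by_order)]
    return min(range(len(scores)), key=scores.__getitem__)

# Hypothetical residual sums of squares for Zernike fits of increasing
# order: the fit improves sharply up to order 2, then plateaus.
rss = [90.0, 25.0, 10.1, 9.9, 9.8, 9.7]
order = edc_order(rss, n=200)   # selects the knee at order 2
```

With the weak AIC penalty (2 per parameter) the tiny RSS improvements past the knee can still be rewarded, which is the over-fitting behaviour the abstract describes.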
Abstract:
Process models in organizational collections are typically created by the same team and using the same conventions. As such, these models share many characteristic features, such as size range and the type and frequency of errors. In most cases only small samples of these collections are available, e.g. due to the sensitive information they contain. Because of their size, these samples may not provide an accurate representation of the characteristics of the originating collection. This paper deals with the problem of constructing collections of process models, in the form of Petri nets, from small samples of a collection for accurate estimation of the characteristics of that collection. Given a small sample of process models drawn from a real-life collection, we mine a set of generation parameters that we use to generate arbitrarily large collections that feature the same characteristics as the original collection. In this way we can estimate the characteristics of the original collection on the generated collections. We extensively evaluate the quality of our technique on various sample datasets drawn from both research and industry.
Abstract:
Although many incidents of fake online consumer reviews have been reported, very few studies have been conducted to date to examine the trustworthiness of online consumer reviews. One of the reasons is the lack of an effective computational method to separate untruthful reviews (i.e., spam) from legitimate ones (i.e., ham), given that prominent spam features are often missing in online reviews. The main contribution of our research work is the development of a novel review spam detection method which is underpinned by an unsupervised inferential language modeling framework. Another contribution of this work is the development of a high-order concept association mining method which provides the essential term association knowledge to bootstrap the performance of untruthful review detection. Our experimental results confirm that the proposed inferential language model equipped with high-order concept association knowledge is effective in untruthful review detection when compared with other baseline methods.
Abstract:
Lower fruit and vegetable intake among socioeconomically disadvantaged groups has been well documented, and may be a consequence of a higher consumption of take-out foods. This study examined whether, and to what extent, take-out food consumption mediated (explained) the association between socioeconomic position and fruit and vegetable intake. A cross-sectional postal survey was conducted among 1500 randomly selected adults aged 25–64 years in Brisbane, Australia in 2009 (response rate = 63.7%, N = 903). A food frequency questionnaire assessed usual daily servings of fruit and vegetables (0 to 6), overall take-out consumption (times/week) and the consumption of 22 specific take-out items (never to ≥once/day). These specific take-out items were grouped into “less healthy” and “healthy” choices, and an index was created for each type of choice (0 to 100). Socioeconomic position was ascertained by education. The analyses were performed using linear regression, and a bootstrap re-sampling approach estimated the statistical significance of the mediated effects. Mean daily serves of fruit and vegetables were 1.89 (SD 1.05) and 2.47 (SD 1.12), respectively. The least educated group consumed fewer serves of fruit (B = –0.39, p < 0.001) and vegetables (B = –0.43, p < 0.001) than the highest educated group. The consumption of “less healthy” take-out food partly explained (mediated) the education differences in fruit and vegetable intake; however, no mediating effects were observed for overall or “healthy” take-out consumption. Regular consumption of “less healthy” take-out items may contribute to socioeconomic differences in fruit and vegetable intake, possibly by displacing these foods.
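The bootstrap mediation test described here can be sketched as a percentile bootstrap of the product-of-coefficients indirect effect (education → take-out → fruit intake). All data below are hypothetical stand-ins for the survey, with effect sizes chosen only for illustration:

```python
import random

def indirect_effect(x, m, y):
    """Product-of-coefficients indirect effect: a = OLS slope of M on X,
    b = partial slope of Y on M adjusting for X; returns a*b."""
    n = len(x)
    mx, mm, my = sum(x) / n, sum(m) / n, sum(y) / n
    sxx = sum((u - mx) ** 2 for u in x)
    smm = sum((v - mm) ** 2 for v in m)
    sxm = sum((u - mx) * (v - mm) for u, v in zip(x, m))
    smy = sum((v - mm) * (w - my) for v, w in zip(m, y))
    sxy = sum((u - mx) * (w - my) for u, w in zip(x, y))
    a = sxm / sxx
    b = (smy * sxx - sxy * sxm) / (smm * sxx - sxm ** 2)
    return a * b

def bootstrap_indirect(x, m, y, n_boot=1000, seed=7):
    """Percentile-bootstrap CI for the mediated (indirect) effect,
    resampling whole respondents with replacement."""
    rng = random.Random(seed)
    n = len(x)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(indirect_effect([x[i] for i in idx],
                                     [m[i] for i in idx],
                                     [y[i] for i in idx]))
    stats.sort()
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot) - 1]

# Hypothetical survey: education (0=low..2=high), a "less healthy"
# take-out index (0-100), and daily fruit serves.
rng = random.Random(0)
educ = [rng.choice([0, 1, 2]) for _ in range(300)]
takeout = [50.0 - 10.0 * e + rng.gauss(0.0, 8.0) for e in educ]
fruit = [4.0 + 0.02 * e - 0.03 * t + rng.gauss(0.0, 0.5)
         for e, t in zip(educ, takeout)]

ie = indirect_effect(educ, takeout, fruit)
lo, hi = bootstrap_indirect(educ, takeout, fruit)
# An interval excluding 0 indicates a statistically significant
# mediated effect of education through take-out consumption.
```

Resampling whole respondents preserves the joint distribution of exposure, mediator, and outcome, which is why this test needs no normality assumption for the product a*b.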
Abstract:
Manually constructing domain-specific sentiment lexicons is extremely time-consuming, and it may not even be feasible for domains where linguistic expertise is not available. Research on the automatic construction of domain-specific sentiment lexicons has therefore become a hot topic in recent years. The main contribution of this paper is the illustration of a novel semi-supervised learning method which exploits both term-to-term and document-to-term relations hidden in a corpus for the construction of domain-specific sentiment lexicons. More specifically, the proposed two-pass pseudo-labeling method combines shallow linguistic parsing and corpus-based statistical learning to make domain-specific sentiment extraction scalable with respect to the sheer volume of opinionated documents archived on the Internet these days. Another novelty of the proposed method is that it can utilize readily available user-contributed labels of opinionated documents (e.g., the user ratings of product reviews) to bootstrap the performance of sentiment lexicon construction. Our experiments show that the proposed method can generate high-quality domain-specific sentiment lexicons, as directly assessed by human experts. Moreover, the system-generated domain-specific sentiment lexicons improve polarity prediction tasks at the document level by 2.18% when compared to other well-known baseline methods. Our research opens the door to the development of practical and scalable methods for domain-specific sentiment analysis.
Abstract:
This paper seeks to explain the lagging productivity in Singapore’s manufacturing noted in the Economic Strategies Committee Report 2010. Two methods are employed: the Malmquist productivity index, to measure total factor productivity change, and Simar and Wilson’s (J Econ, 136:31–64, 2007) bootstrapped truncated regression approach. In the first stage, nonparametric data envelopment analysis is used to measure technical efficiency. To quantify the economic drivers underlying inefficiencies, the second stage employs a bootstrapped truncated regression in which bias-corrected efficiency estimates are regressed against explanatory variables. The findings reveal that growth in total factor productivity was attributable to efficiency change, with no technical progress. Most industries were technically inefficient throughout the period, except for ‘Pharmaceutical Products’. Efficiency was attributed to worker quality and flexible work arrangements, while persistent use of foreign workers lowered efficiency.
Abstract:
Monitoring the natural environment is increasingly important as habitat degradation and climate change reduce the world’s biodiversity. We have developed software tools and applications to assist ecologists with the collection and analysis of acoustic data at large spatial and temporal scales. One of our key objectives is automated animal call recognition, and our approach has three novel attributes. First, we work with raw environmental audio, contaminated by noise and artefacts and containing calls that vary greatly in volume depending on the animal’s proximity to the microphone. Second, initial experimentation suggested that no single recognizer could deal with the enormous variety of calls. Therefore, we developed a toolbox of generic recognizers to extract invariant features for each call type. Third, many species are cryptic and offer little data with which to train a recognizer. Many popular machine learning methods require large volumes of training and validation data and considerable time and expertise to prepare. Consequently, we adopt bootstrap techniques that can be initiated with little data and refined subsequently. In this paper, we describe our recognition tools and present results for real ecological problems.