898 results for Nonparametric Estimators
Abstract:
The paper proves necessary and sufficient conditions under which a Fisher statistical structure admits consistent estimation of its parameters.
Abstract:
Background: When performing the Valsalva maneuver (VM), adults and preadolescents produce the same expiratory resistance values. Objective: To analyze heart rate (HR) in preadolescents performing VM and to propose a new method for selecting expiratory resistance. Method: Maximal expiratory pressure (MEP) was measured in 45 sedentary children aged 9-12 years, who subsequently performed VM for 20 s using an expiratory pressure of 60%, 70%, or 80% of MEP. HR was measured before, during, and after VM. These procedures were repeated 30 days later, and the data collected in the two sessions (E1, E2) were analyzed and compared across the periods before, during (0-10 and 10-20 s), and after VM using nonparametric tests. Results: All 45 participants adequately performed VM in E1 and E2 at 60% of MEP, but only 38 (84.4%) and 25 (55.5%) completed the maneuver at 70% and 80% of MEP, respectively. The HR delta measured during 0-10 s and 10-20 s increased significantly as expiratory effort increased, indicating an effective cardiac autonomic response during VM; however, because many participants could not complete the maneuver at the two higher intensities, our findings suggest VM should not be performed at 70% or 80% of MEP. Conclusion: HR increased at all effort intensities tested during VM, but 60% of MEP was the only level of expiratory resistance at which all participants could perform VM. Therefore, 60% of MEP may be the optimal expiratory resistance for clinical practice.
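As a rough illustration of the paired nonparametric comparison described in the abstract, the sketch below computes an HR delta for the 0-10 s window and applies a Wilcoxon signed-rank test. The arrays and the choice of test are assumptions for illustration only; the abstract does not name the specific tests used.

```python
# A minimal sketch, assuming hypothetical per-participant HR values (bpm);
# the Wilcoxon signed-rank test is one standard nonparametric paired test.
import numpy as np
from scipy.stats import wilcoxon

hr_before = np.array([82.0, 78.0, 90.0, 85.0, 76.0])        # before VM
hr_during_0_10 = np.array([95.0, 88.0, 101.0, 96.0, 84.0])  # 0-10 s of VM

hr_delta = hr_during_0_10 - hr_before  # the HR delta the study reports

stat, p_value = wilcoxon(hr_before, hr_during_0_10)
print(f"median HR delta = {np.median(hr_delta):.1f} bpm, p = {p_value:.3f}")
```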
Abstract:
The Brazilian drosophilid fauna has been surveyed in a variety of ecosystems, but mangrove forests remain overlooked in Brazil and elsewhere. The present study characterises the drosophilid assemblages of this environment based on 28 collections taken in three mangrove areas on Santa Catarina Island, southern Brazil. The three mangroves surveyed differed in their surroundings, which ranged from highly urbanised areas to conservation areas with natural vegetation. Overall, 69 species were collected, and no remarkable difference was detected between sites in species composition and abundance, or in richness, evenness and heterogeneity. The observed species abundance distribution fitted a theoretical lognormal distribution in all three mangroves. The species richness recorded and the performance of the species richness estimators revealed an unexpectedly high diversity, considering the very low floristic diversity and the harsh conditions of the environment. In species composition and abundance, the mangrove drosophilid assemblages were more similar to those found in open environments, with a marked dominance of exotic species. Finally, considering the apparent lack of feeding and breeding sites, we suggest that mangrove forests act as sink habitats for drosophilid populations.
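As one concrete example of the estimator class whose performance the abstract evaluates, the sketch below implements the Chao1 richness estimator; the abundance vector is hypothetical, and the study does not state which estimators it used.

```python
# A minimal sketch of the Chao1 species richness estimator, assuming a
# hypothetical abundance vector rather than the study's data.
import numpy as np

def chao1(abundances):
    """Chao1 lower-bound estimate of total richness from abundance counts."""
    a = np.asarray(abundances)
    s_obs = np.count_nonzero(a)             # observed species
    f1 = np.count_nonzero(a == 1)           # singletons
    f2 = np.count_nonzero(a == 2)           # doubletons
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0  # bias-corrected form
    return s_obs + f1 ** 2 / (2.0 * f2)

print(chao1([120, 45, 30, 7, 3, 2, 1, 1, 1]))  # 9 observed, estimate 13.5
```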
Abstract:
Markowitz's (1952) portfolio theory spurred research into the efficiency of portfolio management. This paper studies existing nonparametric efficiency measurement approaches for single-period portfolio selection from a theoretical perspective and generalises currently used efficiency measures to the full mean-variance space. To this end, we introduce the efficiency improvement possibility function (a variation on the shortage function), study its axiomatic properties in the context of the Markowitz efficient frontier, and establish a link to the indirect mean-variance utility function. This framework allows distinguishing between portfolio efficiency and allocative efficiency, and it permits retrieving information about the revealed risk aversion of investors. The efficiency improvement possibility function thus provides a more general framework for gauging the efficiency of portfolio management using nonparametric frontier envelopment methods based on quadratic optimisation.
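For orientation, the shortage function that the efficiency improvement possibility function varies can be sketched in mean-variance space as follows; this is a generic rendering under standard notation, not necessarily the paper's exact definition. For a portfolio with mean return $\mu$, variance $\sigma^2$, and direction vector $g = (g_\mu, g_\sigma)$,

$$
S(\mu, \sigma^2; g) \;=\; \sup\bigl\{\, \delta \ge 0 \;:\; (\mu + \delta g_\mu,\; \sigma^2 - \delta g_\sigma) \ \text{lies in the attainable mean-variance set} \,\bigr\},
$$

so that $S = 0$ identifies portfolios on the Markowitz efficient frontier, while $S > 0$ measures the simultaneous return expansion and risk contraction still available.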
Abstract:
We explore the determinants of usage of six different types of health care services, using Medical Expenditure Panel Survey data for the years 1996-2000. We apply a number of models for univariate count data, including semiparametric, semi-nonparametric and finite mixture models. We find that the complexity of the model required to fit the data well depends upon the way in which the data are pooled across sexes and over time, and upon the characteristics of the usage measure. Pooling across time and sexes is almost always favored, but when more heterogeneous data are pooled, a more complex statistical model is often required.
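To make the notion of "model complexity required to fit the data" concrete, the sketch below compares Poisson and negative binomial fits on simulated overdispersed counts using information criteria; the data and the use of statsmodels are assumptions standing in for the paper's estimators.

```python
# A minimal sketch: compare count-data models by AIC on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=(n, 1))
X = sm.add_constant(x)

# Overdispersed counts via a gamma-Poisson (negative binomial) mixture
mu = np.exp(0.5 + 0.8 * x[:, 0])
y = rng.poisson(mu * rng.gamma(shape=2.0, scale=0.5, size=n))

poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
negbin_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
print("Poisson AIC:", round(poisson_fit.aic, 1),
      "NegBin AIC:", round(negbin_fit.aic, 1))  # NegBin should fit better
```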
Abstract:
Ever since the appearance of the ARCH model [Engle (1982a)], an impressive array of variance specifications belonging to the same class of models has emerged [e.g. Bollerslev's (1986) GARCH; Nelson's (1990) EGARCH], and this line of research has developed very successfully. Nevertheless, several empirical studies suggest that the performance of such models is not always adequate [Boulier (1992)]. In this paper we propose a new specification: the Quadratic Moving Average Conditional Heteroskedasticity (QMACH) model. Its statistical properties, such as kurtosis and symmetry, as well as two estimators (Method of Moments and Maximum Likelihood), are studied. Two statistical tests are presented: the first tests for homoskedasticity, and the second discriminates between the ARCH and QMACH specifications. A Monte Carlo study illustrates some of the theoretical results, and an empirical study of the DM-US exchange rate is undertaken.
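For context, Engle's (1982) ARCH($q$) model writes the conditional variance as a linear function of squared past innovations. The QMACH specification is the paper's own; a plausible quadratic moving average form, offered here only as an assumed illustration, squares a moving average of past innovations instead:

$$
\text{ARCH}(q):\ \ \sigma_t^2 = \omega + \sum_{i=1}^{q} \alpha_i\, \varepsilon_{t-i}^2,
\qquad
\text{QMACH}(q):\ \ \sigma_t^2 = \Bigl(\theta_0 + \sum_{i=1}^{q} \theta_i\, \varepsilon_{t-i}\Bigr)^{2}.
$$

A quadratic form of this kind keeps $\sigma_t^2 \ge 0$ by construction while letting positive and negative past innovations affect the variance asymmetrically, which is consistent with the symmetry properties the abstract says are studied.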
Abstract:
We review recent likelihood-based approaches to modeling demand for medical care. A semi-nonparametric model along the lines of Cameron and Johansson's Poisson polynomial model, but using a negative binomial baseline model, is introduced. We apply these models, as well as a semiparametric Poisson model, a hurdle semiparametric Poisson model, and finite mixtures of negative binomial models, to six measures of health care usage taken from the Medical Expenditure Panel Survey. We conclude that most of the models lead to statistically similar results, both in terms of information criteria and in terms of conditional and unconditional prediction. This suggests that applied researchers may not need to be overly concerned with the choice among these models when analyzing data on health care demand.
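Cameron and Johansson's series expansion multiplies a baseline count density by a squared polynomial and renormalizes; with the negative binomial baseline $f_{NB}$ introduced here, the semi-nonparametric density takes roughly the form below (standard series-expansion notation, which may differ in detail from the paper's):

$$
g(y \mid \mu, \mathbf{a}) \;=\; \frac{f_{NB}(y \mid \mu)\,\bigl(\sum_{k=0}^{p} a_k\, y^{k}\bigr)^{2}}
{\sum_{m=0}^{\infty} f_{NB}(m \mid \mu)\,\bigl(\sum_{k=0}^{p} a_k\, m^{k}\bigr)^{2}},
$$

where the polynomial coefficients $a_k$ are estimated jointly with the baseline parameters by maximum likelihood, and $p = 0$ recovers the plain negative binomial model.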
Abstract:
The last 20 years have seen a significant evolution in the literature on horizontal inequity (HI) and have generated two major and "rival" methodological strands, namely classical HI and reranking. We propose in this paper a class of ethically flexible tools that integrates these two strands. This is achieved using a measure of inequality that merges the well-known Gini coefficient and Atkinson indices and that allows the total redistributive effect of taxes and transfers to be decomposed into a vertical equity effect and a loss of redistribution due to either classical HI or reranking. An inequality-change approach and a money-metric cost-of-inequality approach are developed; the latter makes aggregate classical HI decomposable across groups. As in recent work, equals are identified through a nonparametric estimation of the joint density of gross and net incomes. An illustration using Canadian data from 1981 to 1994 shows a substantial, increasing, and robust erosion of redistribution attributable both to classical HI and to reranking, but it does not reveal whether reranking or classical HI is more important, since this requires a judgement that is fundamentally normative in nature.
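The decomposition described above follows a familiar schematic pattern in this literature: the total redistributive effect of taxes and transfers splits into a vertical effect net of the losses from classical HI and from reranking. In shorthand (notation assumed for illustration),

$$
RE \;=\; V \;-\; H \;-\; R,
$$

where $RE$ is the fall in inequality from gross to net incomes, $V$ the vertical equity effect, $H$ the loss of redistribution due to classical horizontal inequity among equals, and $R$ the loss due to reranking.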
Abstract:
Locating new wind farms is of crucial importance for the energy policies of the next decade. To select new locations, an accurate picture of the wind fields is necessary. However, characterizing wind fields is a difficult task, since the phenomenon is highly nonlinear and related to complex topographical features. In this paper, we propose both a nonparametric model to estimate wind speed at different time instants and a procedure to discover underrepresented topographic conditions where new measuring stations could be added. Compared to space-filling techniques, the latter approach privileges optimization of the output space, locating new potential measuring sites through the uncertainty of the model itself.
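A minimal sketch of the uncertainty-driven site selection idea follows; here a Gaussian process stands in for the paper's nonparametric wind-speed model, and the coordinates and speeds are synthetic.

```python
# Pick the next measuring site where the model is most uncertain.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
sites = rng.uniform(0, 10, size=(30, 2))                  # existing stations (x, y)
speeds = np.sin(sites[:, 0]) + 0.1 * rng.normal(size=30)  # synthetic wind speeds

gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0), alpha=1e-2)
gp.fit(sites, speeds)

# Evaluate predictive uncertainty on a candidate grid; the most uncertain
# point is the proposed new measuring site
grid = np.array([[i, j] for i in np.linspace(0, 10, 25)
                 for j in np.linspace(0, 10, 25)])
_, std = gp.predict(grid, return_std=True)
print("proposed new site:", grid[np.argmax(std)])
```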
Abstract:
OFDM modulation is used in a variety of broadband applications, in both wired and wireless communications. It offers numerous advantages over single-carrier broadband systems, since it allows high spectral efficiency, simple equalization, and reduced ISI. On the other hand, it presents difficulties inherent to its structure that are vitally important to resolve, among them stringent synchronization requirements. This project presents time and frequency synchronization methods implemented and evaluated on a Matlab®-based software platform that models the complete transmission system, faithfully following the DVB-T standard. After an introduction to the principles of OFDM modulation, this document presents a detailed study of this transmission system and its implementation, which together form a simulation platform for evaluating the implemented estimators.
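A common starting point for OFDM timing synchronization in DVB-T-style systems is correlating the received signal with a copy of itself delayed by the FFT length, exploiting the repetition introduced by the cyclic prefix; the sketch below uses generic parameters and is not the project's Matlab® implementation.

```python
# Cyclic-prefix correlation timing metric for OFDM (illustrative parameters).
import numpy as np

N, CP = 256, 32  # FFT size and cyclic prefix length
rng = np.random.default_rng(2)

# One OFDM symbol with cyclic prefix, embedded in low-power noise
sym = rng.normal(size=N) + 1j * rng.normal(size=N)
tx = np.concatenate([sym[-CP:], sym])
noise = lambda k: 0.1 * (rng.normal(size=k) + 1j * rng.normal(size=k))
rx = np.concatenate([noise(100), tx, noise(100)])

# Correlate rx with itself at lag N over a CP-long window; the metric
# peaks at the symbol start
metric = np.array([
    np.abs(np.sum(rx[d:d + CP] * np.conj(rx[d + N:d + N + CP])))
    for d in range(len(rx) - N - CP)
])
print("estimated symbol start:", int(np.argmax(metric)))  # expect ~100
```

The phase of the same correlation also yields a fractional carrier frequency offset estimate, which is why cyclic-prefix methods serve both time and frequency synchronization.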
Abstract:
Least squares estimators are known to generate sub-optimal exercise decisions when determining the optimal stopping time, with the consequence that the price of the option is underestimated. We show how variance reduction methods can be implemented to obtain more accurate option prices. We also extend the Longstaff and Schwartz (2001) method to price American options under stochastic volatility. These are two important contributions that are particularly relevant for practitioners. Finally, we extend the Glasserman and Yu (2004b) methodology to price Asian options and basket options.
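As a concrete illustration of combining least squares Monte Carlo with a variance reduction method, the sketch below prices an American put with the Longstaff-Schwartz regression and antithetic variates; all parameters, and the quadratic polynomial basis, are illustrative assumptions rather than the paper's setup.

```python
# Longstaff-Schwartz American put with antithetic variates (illustrative).
import numpy as np

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
steps, n_paths = 50, 20_000
dt = T / steps
rng = np.random.default_rng(3)

# Antithetic variates: pair each Gaussian draw with its negation
z = rng.normal(size=(n_paths // 2, steps))
z = np.vstack([z, -z])

# Geometric Brownian motion paths
log_inc = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)), np.cumsum(log_inc, axis=1)]))

# Backward induction with least-squares continuation values
cash = np.maximum(K - S[:, -1], 0.0)          # payoff at maturity
for t in range(steps - 1, 0, -1):
    cash *= np.exp(-r * dt)                   # discount one step back
    itm = (K - S[:, t]) > 0.0                 # regress on in-the-money paths
    if itm.sum() > 3:
        x = S[itm, t]
        coeffs = np.polyfit(x, cash[itm], 2)  # quadratic basis regression
        continuation = np.polyval(coeffs, x)
        exercise = K - x
        cash[itm] = np.where(exercise > continuation, exercise, cash[itm])

price = np.exp(-r * dt) * cash.mean()         # discount from t=1 to t=0
print("American put estimate:", round(price, 3))
```

The antithetic pairing is the variance reduction step: because each path is matched with its mirror image, the pathwise payoffs are negatively correlated and the estimator's variance falls relative to plain sampling.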
Abstract:
This paper does two things. First, it presents alternative approaches to the standard methods of estimating productive efficiency using a production function. It favours a parametric approach (viz. the stochastic production frontier approach) over a nonparametric approach (e.g. data envelopment analysis); and, further, one that provides a statistical explanation of efficiency, as well as an estimate of its magnitude. Second, it illustrates the favoured approach (i.e. the ‘single stage procedure’) with estimates of two models of explained inefficiency, using data from the Thai manufacturing sector, after the crisis of 1997. Technical efficiency is modelled as being dependent on capital investment in three major areas (viz. land, machinery and office appliances) where land is intended to proxy the effects of unproductive, speculative capital investment; and both machinery and office appliances are intended to proxy the effects of productive, non-speculative capital investment. The estimates from these models cast new light on the five-year long, post-1997 crisis period in Thailand, suggesting a structural shift from relatively labour intensive to relatively capital intensive production in manufactures from 1998 to 2002.
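The 'single stage procedure' favoured above is typically of the following form (a schematic rendering in the style of Battese and Coelli, with $z_i$ standing for the three capital investment proxies; the paper's exact specification may differ):

$$
\ln y_i = x_i'\beta + v_i - u_i,
\qquad v_i \sim N(0, \sigma_v^2),
\qquad u_i \sim N^{+}\!\bigl(z_i'\delta, \sigma_u^2\bigr),
$$

where $y_i$ is output, $v_i$ is idiosyncratic noise, and $u_i \ge 0$ is technical inefficiency whose mean depends on its determinants $z_i$. Estimating $\beta$ and $\delta$ jointly by maximum likelihood is what makes the procedure 'single stage', and it is the dependence of $u_i$ on $z_i$ that supplies the statistical explanation of efficiency.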
Abstract:
Pricing American options is an interesting research topic, since there is no analytical solution for valuing these derivatives. Different numerical methods have been proposed in the literature, some, if not all, either limited to a specific payoff or not applicable to multidimensional cases. The application of Monte Carlo methods to pricing American options is a relatively new area that started with Longstaff and Schwartz (2001), and since then a few variations of that methodology have been proposed. The general conclusion is that Monte Carlo estimators tend to underestimate the true option price. The present paper follows Glasserman and Yu (2004b) and proposes a novel Monte Carlo approach, based on designing "optimal martingales" to determine stopping times. We show that our martingale approach can also be used to compute the dual described in Rogers (2002).
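The dual of Rogers (2002) referred to above replaces the supremum over stopping times with an infimum over martingales: writing $h_t$ for the discounted payoff process and $\mathcal{M}_0$ for the set of martingales started at zero,

$$
V_0 \;=\; \inf_{M \in \mathcal{M}_0} \mathbb{E}\Bigl[\max_{0 \le t \le T} \bigl(h_t - M_t\bigr)\Bigr],
$$

and the infimum is attained at an optimal martingale. This is why a method built on designing "optimal martingales" for stopping times can also deliver the dual upper bound: the same object appears on both sides of the duality.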
Abstract:
In recent years there has been increasing concern about the identification of parameters in dynamic stochastic general equilibrium (DSGE) models. Given the structure of DSGE models, it may be difficult to determine whether a parameter is identified. For the researcher using Bayesian methods, a lack of identification may not be evident, since the posterior of a parameter of interest may differ from its prior even if the parameter is unidentified; we show that this can be the case even if the priors assumed on the structural parameters are independent. We suggest two Bayesian identification indicators that do not suffer from this difficulty and are relatively easy to compute. The first applies to DSGE models where the parameters can be partitioned into those that are known to be identified and the rest, for which identification is unknown. In such cases the marginal posterior of an unidentified parameter will equal the posterior expectation of the prior for that parameter conditional on the identified parameters. The second indicator is more generally applicable and considers the rate at which the posterior precision is updated as the sample size (T) increases. For identified parameters the posterior precision rises with T, whilst for an unidentified parameter the posterior precision may be updated, but at a rate slower than T. This result assumes that the identified parameters are $\sqrt{T}$-consistent, but similar differential rates of update for identified and unidentified parameters can be established in the case of super-consistent estimators. These results are illustrated by means of simple DSGE models.
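The first indicator rests on the identity stated above. Partition the parameters as $\theta = (\theta_{id}, \theta_{un})$, with $\theta_{id}$ known to be identified; if $\theta_{un}$ is unidentified given $\theta_{id}$, the data $y$ update $\theta_{un}$ only through $\theta_{id}$, so

$$
p(\theta_{un} \mid y) \;=\; \int p(\theta_{un} \mid \theta_{id})\, p(\theta_{id} \mid y)\, d\theta_{id}
\;=\; \mathbb{E}_{\theta_{id} \mid y}\bigl[\, p(\theta_{un} \mid \theta_{id})\,\bigr],
$$

and comparing the computed marginal posterior of a suspect parameter with this conditional-prior expectation provides the check.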
Abstract:
Spatial heterogeneity, spatial dependence and spatial scale constitute key features of the spatial analysis of housing markets. However, the common practice of modelling spatial dependence as being generated by spatial interactions through a known spatial weights matrix is often not satisfactory. While existing estimators of spatial weights matrices are based on repeat sales or panel data, this paper takes the approach to a cross-section setting. Specifically, based on an a priori definition of housing submarkets and the assumption of a multifactor model, we develop maximum likelihood methodology to estimate hedonic models that facilitate understanding of both spatial heterogeneity and spatial interactions. The methodology, based on statistical orthogonal factor analysis, is applied to the urban housing market of Aveiro, Portugal, at two different spatial scales.
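Schematically, a multifactor hedonic model of the kind described might write the price of dwelling $i$ in submarket $s$ as (notation assumed for illustration, not the paper's):

$$
p_{is} \;=\; x_{is}'\beta \;+\; \lambda_s' f \;+\; \varepsilon_{is},
$$

where $x_{is}$ collects hedonic attributes, $f$ is a vector of common latent factors recovered by orthogonal factor analysis, the submarket-specific loadings $\lambda_s$ generate spatial dependence across submarkets without a prespecified weights matrix, and $\varepsilon_{is}$ is an idiosyncratic error.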