964 results for Semi-parametric estimation
Abstract:
The goal of this paper is to introduce a class of tree-structured models that combines aspects of regression trees and smooth transition regression models. The model is called the Smooth Transition Regression Tree (STR-Tree). The main idea relies on specifying a multiple-regime parametric model through a tree-growing procedure with smooth transitions among different regimes. Decisions about splits are entirely based on a sequence of Lagrange Multiplier (LM) tests of hypotheses.
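The smooth-transition mechanism underlying the STR-Tree can be illustrated with a minimal single-split sketch (the function names and the one-node setup are hypothetical, not the paper's implementation): a logistic function blends two linear regimes instead of switching abruptly as a regression tree would.

```python
import numpy as np

def transition(x, gamma, c):
    # Logistic transition function: near 0 -> regime 1, near 1 -> regime 2.
    # gamma controls smoothness (large gamma approximates a hard tree split),
    # c is the split location.
    return 1.0 / (1.0 + np.exp(-gamma * (x - c)))

def str_node(x, beta1, beta2, gamma, c):
    # Smooth blend of two linear regimes at a single split node.
    g = transition(x, gamma, c)
    return (1.0 - g) * beta1 * x + g * beta2 * x
```

With a very large `gamma` the node behaves like an ordinary regression-tree split; with a small `gamma` the regimes mix gradually.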
Abstract:
In this work we study the asymptotic unbiasedness and the strong and uniform strong consistency of a class of kernel estimators f_n as estimators of a density function f taking values on a k-dimensional sphere.
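For intuition, a kernel density estimator on the unit sphere S^2 can be sketched with a von Mises-Fisher kernel, a common choice for spherical data; this is an illustrative sketch only, and the paper's exact kernel class may differ.

```python
import numpy as np

def vmf_kde(x, samples, kappa):
    # Kernel density estimate at unit vector x on the sphere S^2,
    # using a von Mises-Fisher kernel with concentration kappa.
    # Normalizing constant of the vMF density on S^2:
    c = kappa / (4.0 * np.pi * np.sinh(kappa))
    dots = samples @ x            # cosine similarity of x with each sample
    return c * np.mean(np.exp(kappa * dots))
```

Here `kappa` plays the role of an inverse bandwidth: larger values concentrate the kernel mass around each observed direction.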
Abstract:
The main objective of this work is to find mathematical models, based on linear parametric estimation techniques, for the problem of calculating the gas flow in oil wells. In particular, we focus on obtaining flow models for wells on oil rigs that produce by the plunger-lift technique, in which case high peaks in the flow values hinder direct measurement by instruments. To this end, we developed estimators based on recursive least squares and analyzed statistical measures such as the autocorrelation, cross-correlation, variogram, and cumulative periodogram, which are computed recursively as data are obtained in real time from the plant in operation; the values of these measures indicate how accurate the model in use is and how it can be changed to better fit the measured values. The models were tested in a pilot plant that emulates the gas-production process in oil wells.
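A recursive least squares update of the kind described, with a forgetting factor so the estimate can track slowly varying dynamics, might look like the following generic textbook sketch (not the authors' estimator; all names are illustrative):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    # One recursive least squares step with forgetting factor lam.
    # theta: current parameter estimate, P: inverse-information matrix,
    # phi: regressor vector, y: newly measured output.
    k = P @ phi / (lam + phi @ P @ phi)      # gain vector
    theta = theta + k * (y - phi @ theta)    # correct by the innovation
    P = (P - np.outer(k, phi @ P)) / lam     # update the covariance
    return theta, P
```

Each new sample updates the parameters in O(p^2) time, which is what makes the real-time recursive computation of diagnostics feasible.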
Abstract:
The GPS observables are subject to several errors. Among them, the systematic ones have great impact, because they degrade the accuracy of the positioning obtained. These errors are mainly related to GPS satellite orbits, multipath, and atmospheric effects. Recently, a method has been suggested to mitigate these errors: the semiparametric model with the penalized least squares technique (PLS). In this method, the errors are modeled as functions varying smoothly in time. Incorporating these error functions amounts to changing the stochastic model, and the results obtained are similar to those produced by changing the functional model. As a result, the ambiguities and the station coordinates are estimated with better reliability and accuracy than with the conventional least squares method (CLS). In general, the solution requires a shorter data interval, minimizing costs. The method's performance was analyzed in two experiments using data from single-frequency receivers. The first was carried out with a short baseline, where the main error was multipath. In the second experiment, a baseline of 102 km was used; in this case, the predominant errors were due to ionospheric and tropospheric refraction. In the first experiment, using 5 minutes of data collection, the largest coordinate discrepancies with respect to the ground truth reached 1.6 cm and 3.3 cm in the h coordinate for PLS and CLS, respectively. In the second, also using 5 minutes of data, the discrepancies were 27 cm in h for PLS and 175 cm in h for CLS. These tests also showed a considerable improvement in ambiguity resolution using PLS relative to CLS, with a reduced data-collection time interval. © Springer-Verlag Berlin Heidelberg 2007.
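The penalized least squares idea, with errors modeled as functions varying smoothly in time, can be sketched with a discrete roughness penalty; this is a deliberate simplification of the semiparametric GPS model, shown only to illustrate how the penalty trades data fit against smoothness.

```python
import numpy as np

def penalized_ls(y, lam):
    # Fit a smooth trend g to observations y by minimizing
    #   ||y - g||^2 + lam * ||D g||^2,
    # where D is the second-difference operator (a discrete
    # roughness penalty). Larger lam gives a smoother trend.
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second differences
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
```

Because second differences of a straight line vanish, a linear signal passes through untouched for any `lam`, while rough noise is shrunk toward a smooth curve.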
Abstract:
Includes bibliography
Abstract:
This article analyses the trend of unfair inequality in Brazil (1995-2009) using a nonparametric approach to estimate the income function. The entropy metrics introduced by Li, Maasoumi and Racine (2009) are used to quantify income differences separately for each effort variable. A Gini coefficient of unfair inequality is calculated from the fitted values of the nonparametric estimation, and the robustness of the estimates, including circumstantial variables, is analysed. The trend of the entropies shows a reduction in the income differential attributable to education. The variables “hours worked” and “labour-market status” contribute significantly to explaining wage differences imputed to individual effort, but the migration variable had little explanatory power. Lastly, the robustness analysis demonstrated the plausibility of the results obtained at each stage of the empirical work.
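Computing a Gini coefficient from fitted values, as in the article's second step, reduces to the standard mean-absolute-difference formula; the sketch below is generic and not the authors' code.

```python
import numpy as np

def gini(x):
    # Gini coefficient via the mean-absolute-difference formula:
    #   G = sum_ij |x_i - x_j| / (2 * n^2 * mean(x)).
    # 0 = perfect equality; (n-1)/n = one unit holds everything.
    x = np.asarray(x, dtype=float)
    n = len(x)
    mad = np.abs(x[:, None] - x[None, :]).sum()
    return mad / (2.0 * n * n * x.mean())
```

Applied to nonparametric fitted incomes rather than raw incomes, the same formula isolates the inequality attributable to the modeled variables.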
Abstract:
In the first chapter we develop a theoretical model investigating food consumption and body weight with a novel assumption regarding human caloric expenditure (i.e. metabolism), in order to investigate why individuals can be rationally trapped in an excessive-weight equilibrium and why they struggle to lose weight even when offered incentives for weight loss. This assumption allows the theoretical model to have multiple equilibria and to explain why losing weight is so difficult even in the presence of incentives, without relying on rational addiction, time-inconsistent preferences, or bounded rationality. In addition, we characterize the circumstances under which a temporary incentive can create a persistent weight loss. In the second chapter we investigate the possible contributions of social norms and peer effects to the spread of obesity. In the recent literature, peer effects and social norms have been characterized as important pathways for the biological and behavioral spread of body weight, along with decreased food prices and physical activity. We add to this literature by proposing a novel concept of social norm related to what we define as social distortion in weight perception. The theoretical model shows that, in equilibrium, the effect of an increase in peers' weight on an individual's weight is unrelated to health concerns and is mainly associated with social concerns. Using regional data from England, we show that this social component significantly influences individual weight. In the last chapter we investigate the relationship between body weight and employment probability. Using a semi-parametric regression, we show that men's and women's employment probabilities do not vary linearly with body mass index (BMI) but follow an inverted U shape, peaking at a BMI well above the clinical threshold for overweight.
Abstract:
We analyze three sets of doubly-censored cohort data on incubation times, estimating incubation distributions using semi-parametric methods and assessing the comparability of the estimates. Weibull models appear to be inappropriate for at least one of the cohorts, and the estimates for the different cohorts are substantially different. We use these estimates as inputs for backcalculation, using a nonparametric method based on maximum penalized likelihood. The different incubation distributions all produce fits to the reported AIDS counts that are as good as the fit from a nonstationary incubation distribution that models treatment effects, but the estimated infection curves are very different. We also develop a method for estimating nonstationarity as part of the backcalculation procedure and find that such estimates also depend very heavily on the assumed incubation distribution. We conclude that incubation distributions are so uncertain that meaningful error bounds are difficult to place on backcalculated estimates, and that backcalculation may be too unreliable to be used without being supplemented by other sources of information on HIV prevalence and incidence.
Abstract:
In this paper, we focus on a model for two types of tumors. Tumor development can be described by four death rates and four tumor transition rates. We present a general semi-parametric model to estimate the tumor transition rates from survival/sacrifice experiment data. The model makes a proportionality assumption on the tumor transition rates through a common parametric function, but no assumption about the death rates from any state. We derive the likelihood function of the data observed in such an experiment and present an EM algorithm that simplifies the estimation procedure. This article extends work on semi-parametric models for one type of tumor (see Portier and Dinse, and Dinse) to two types of tumors.
Abstract:
Time series models relating short-term changes in air pollution levels to daily mortality counts typically assume that the effects of air pollution on the log relative rate of mortality do not vary with time. However, these short-term effects might plausibly vary by season. Changes in the sources of air pollution and meteorology can result in changes in characteristics of the air pollution mixture across seasons. The authors develop Bayesian semi-parametric hierarchical models for estimating time-varying effects of pollution on mortality in multi-site time series studies. The methods are applied to the updated National Morbidity and Mortality Air Pollution Study database for the period 1987-2000, which includes data for 100 U.S. cities. At the national level, a 10 microgram/m3 increase in PM10 at lag 1 is associated with a 0.15 (95% posterior interval: -0.08, 0.39), 0.14 (-0.14, 0.42), 0.36 (0.11, 0.61), and 0.14 (-0.06, 0.34) percent increase in mortality for winter, spring, summer, and fall, respectively. An analysis by geographical regions finds a strong seasonal pattern in the northeast (with a peak in summer) and little seasonal variation in the southern regions of the country. These results provide useful information for understanding particle toxicity and guiding future analyses of particle constituent data.
Abstract:
At a time when at least two-thirds of the US states have already mandated some form of seller's property condition disclosure statement, and there is a movement in this direction nationally, this paper examines the impact of seller's property condition disclosure law on residential real estate values, the information asymmetry in housing transactions, and the shift of risk from buyers and brokers to sellers; it also attempts to ascertain the factors that lead to adoption of the disclosure law. The analytical structure employs parametric panel data models, semi-parametric propensity score matching models, and an event study framework, using a unique set of economic and institutional attributes for a quarterly panel of 291 US Metropolitan Statistical Areas (MSAs) and 50 US states spanning 21 years from 1984 to 2004. Exploiting the MSA-level variation in house prices, the study finds that the average seller may be able to fetch a higher price (about three to four percent) for the house if she furnishes a state-mandated seller's property condition disclosure statement to the buyer.
Abstract:
We examine the impact of seller's Property Condition Disclosure Law on residential real estate values. A disclosure law may address the information asymmetry in housing transactions, shifting risk from buyers and brokers to sellers and raising housing prices as a result. We combine propensity score techniques from the treatment-effects literature with a traditional event study approach. We assemble a unique set of economic and institutional attributes for a quarterly panel of 291 US Metropolitan Statistical Areas (MSAs) and 50 US states spanning 21 years from 1984 to 2004, and use it to exploit the MSA-level variation in house prices. The study finds that the average seller may be able to fetch a higher price (about three to four percent) for the house if she furnishes a state-mandated seller's property condition disclosure statement to the buyer. Comparing the results from the parametric and semi-parametric event analyses, we find that the semi-parametric (propensity score) analysis yields moderately larger estimated effects of the law on housing prices.
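The propensity-score step can be sketched as follows; this is a deliberately minimal illustration with hypothetical variable names, not the study's specification. It estimates P(treated | X) with a simple logistic regression and matches each treated unit to its nearest control on the score.

```python
import numpy as np

def propensity_scores(X, d, iters=500, lr=0.1):
    # Estimate propensity scores P(D=1|X) with a logistic regression
    # fitted by plain gradient ascent (no external libraries).
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (d - p) / len(X)   # log-likelihood gradient step
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def att_matching(y, d, ps):
    # Average treatment effect on the treated via 1-nearest-neighbour
    # matching on the propensity score.
    treated = np.where(d == 1)[0]
    control = np.where(d == 0)[0]
    effects = []
    for i in treated:
        j = control[np.argmin(np.abs(ps[control] - ps[i]))]
        effects.append(y[i] - y[j])
    return float(np.mean(effects))
```

In the study's setting the "treatment" is adoption of the disclosure law by an MSA's state, and matching makes treated and untreated MSAs comparable on observed attributes before the event analysis.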
Abstract:
An extension of k-ratio multiple comparison methods to rank-based analyses is described. The new method is analogous to the Duncan-Godbold approximate k-ratio procedure for unequal sample sizes or correlated means. The close parallel of the new methods to the Duncan-Godbold approach is shown by demonstrating that they are based upon different parameterizations as starting points. A semi-parametric basis for the new methods is shown by starting from the Cox proportional hazards model, using Wald statistics. From there, the log-rank and Gehan-Breslow-Wilcoxon methods may be seen as score-statistic-based methods. Simulations and analysis of a published data set are used to show the performance of the new methods.
Abstract:
Anti-P antibodies present in sera from patients with chronic Chagas heart disease (cChHD) recognize peptide R13, EEEDDDMGFGLFD, which encompasses the C-terminal region of the Trypanosoma cruzi ribosomal P1 and P2 proteins. This peptide shares homology with the C-terminal region (peptide H13 EESDDDMGFGLFD) of the human ribosomal P proteins, which is in turn the target of anti-P autoantibodies in systemic lupus erythematosus (SLE), and with the acidic epitope, AESDE, of the second extracellular loop of the β1-adrenergic receptor. Anti-P antibodies from chagasic patients showed a marked preference for recombinant parasite ribosomal P proteins and peptides, whereas anti-P autoantibodies from SLE reacted with human and parasite ribosomal P proteins and peptides to the same extent. A semi-quantitative estimation of the binding of cChHD anti-P antibodies to R13 and H13 using biosensor technology indicated that the average affinity constant was about 5 times higher for R13 than for H13. Competitive enzyme immunoassays demonstrated that cChHD anti-P antibodies bind to the acidic portions of peptide H13, as well as to peptide H26R, encompassing the second extracellular loop of the β1 adrenoreceptor. Anti-P antibodies isolated from cChHD patients exert a positive chronotropic effect in vitro on cardiomyocytes from neonatal rats, which resembles closely that of anti-β1 receptor antibodies isolated from the same patient. In contrast, SLE anti-P autoantibodies have no functional effect. Our results suggest that the adrenergic-stimulating activity of anti-P antibodies may be implicated in the induction of functional myocardial impairments observed in cChHD.
Abstract:
The objective of this study is to analyse the impact that the published announcement of obtaining a quality certificate (ISO 9000) has on a firm's market value and on the volatility of its stock price. The sample includes all firms that, having obtained a quality certificate, were listed on the Spanish secondary securities market between 1993 and 1999. To measure the impact of obtaining a quality certificate on performance, excess returns were analysed, while to measure the change in volatility four tests were performed: two parametric, one nonparametric, and a proposed semiparametric test. The results indicate that the capital market reacts positively to the obtaining of this certificate, and that it also produces an increase in the volatility of stock prices.
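The excess-return computation behind such an event study can be sketched with the standard market model (a generic illustration, not the paper's methodology): estimate alpha and beta over a pre-event window by OLS, then measure abnormal returns in the event window.

```python
import numpy as np

def abnormal_returns(r_firm, r_mkt, est_window, event_window):
    # Market-model event study: fit r_firm = alpha + beta * r_mkt by OLS
    # over the estimation window, then compute abnormal (excess) returns
    # in the event window as actual minus predicted returns.
    x = r_mkt[est_window]
    y = r_firm[est_window]
    beta = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    alpha = y.mean() - beta * x.mean()
    return r_firm[event_window] - (alpha + beta * r_mkt[event_window])
```

Tests on the abnormal returns (parametric, nonparametric, or semiparametric, as in the study) then ask whether the certification announcement shifted their mean or variance.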